Just a brief post to record some notable papers in my fields of interest that appeared on the arXiv recently.
• “A sharp square function estimate for the cone in {\bf R}^3”, by Larry Guth, Hong Wang, and Ruixiang Zhang. This paper establishes an optimal (up to epsilon losses) square function estimate for the three-dimensional light cone that was essentially conjectured by Mockenhaupt, Seeger, and Sogge, which has a number of other consequences including Sogge’s local smoothing conjecture for the wave equation in two spatial dimensions, which in turn implies the (already known) Bochner-Riesz, restriction, and Kakeya conjectures in two dimensions. Interestingly, modern techniques such as polynomial partitioning and decoupling estimates are not used in this argument; instead, the authors mostly rely on an induction on scales argument and Kakeya type estimates. Many previous authors (including myself) were able to get weaker estimates of this type by an induction on scales method, but there were always significant inefficiencies in doing so; in particular knowing the sharp square function estimate at smaller scales did not imply the sharp square function estimate at the given larger scale. The authors here get around this issue by finding an even stronger estimate that implies the square function estimate, but behaves significantly better with respect to induction on scales.
• “On the Chowla and twin primes conjectures over {\mathbb F}_q[T]”, by Will Sawin and Mark Shusterman. This paper resolves a number of well known open conjectures in analytic number theory, such as the Chowla conjecture and the twin prime conjecture (in the strong form conjectured by Hardy and Littlewood), in the case of function fields where the characteristic p of the field is fixed (in contrast to a number of existing results in the “large q” limit) but the cardinality q=p^j has a large exponent j. The techniques here are orthogonal to those used in recent progress towards the Chowla conjecture over the integers (e.g., in this previous paper of mine); the starting point is an algebraic observation that in certain function fields, the Möbius function behaves like a quadratic Dirichlet character along certain arithmetic progressions. In principle, this reduces problems such as Chowla’s conjecture to problems about estimating sums of Dirichlet characters, for which more is known; but the task is still far from trivial.
• “Bounds for sets with no polynomial progressions”, by Sarah Peluse. This paper can be viewed as part of a larger project to obtain quantitative density Ramsey theorems of Szemeredi type. For instance, Gowers famously established a relatively good quantitative bound for Szemeredi’s theorem that all dense subsets of integers contain arbitrarily long arithmetic progressions a, a+r, \dots, a+(k-1)r. The corresponding question for polynomial progressions a+P_1(r), \dots, a+P_k(r) is considered more difficult for a number of reasons. One of them is that dilation invariance is lost; a dilation of an arithmetic progression is again an arithmetic progression, but a dilation of a polynomial progression will in general not be a polynomial progression with the same polynomials P_1,\dots,P_k. Another issue is that the ranges of the two parameters a,r are now at different scales. Peluse gets around these difficulties in the case when all the polynomials P_1,\dots,P_k have distinct degrees, which is in some sense the opposite case to that considered by Gowers (in particular, thanks to a degree lowering argument that is available in this case, she avoids the need to obtain quantitative inverse theorems for high order Gowers norms, which were recently obtained in the integer setting by Manners but with bounds that are probably not strong enough to recover the bounds in Peluse’s results). To resolve the first difficulty one has to make all the estimates rather uniform in the coefficients of the polynomials P_j, so that one can still run a density increment argument efficiently. To resolve the second difficulty one needs to find a quantitative concatenation theorem for Gowers uniformity norms. Many of these ideas were developed in previous papers of Peluse and Peluse-Prendiville in simpler settings.
• “On blow up for the energy super critical defocusing non linear Schrödinger equations”, by Frank Merle, Pierre Raphael, Igor Rodnianski, and Jeremie Szeftel. This paper (when combined with two companion papers) resolves a long-standing problem as to whether finite time blowup occurs for the defocusing supercritical nonlinear Schrödinger equation (at least in certain dimensions and nonlinearities). I had a previous paper establishing a result like this if one “cheated” by replacing the nonlinear Schrödinger equation by a system of such equations, but remarkably they are able to tackle the original equation itself without any such cheating. Given the very analogous situation with Navier-Stokes, where again one can create finite time blowup by “cheating” and modifying the equation, it does raise hope that finite time blowup for the incompressible Navier-Stokes and Euler equations can be established… In fact the connection may not just be at the level of analogy; a surprising key ingredient in the proofs here is the observation that a certain blowup ansatz for the nonlinear Schrödinger equation is governed by solutions to the (compressible) Euler equation, and finite time blowup examples for the latter can be used to construct finite time blowup examples for the former.
Given any finite collection of elements {(f_i)_{i \in I}} in some Banach space {X}, the triangle inequality tells us that
\displaystyle \| \sum_{i \in I} f_i \|_X \leq \sum_{i \in I} \|f_i\|_X.
However, when the {f_i} all “oscillate in different ways”, one expects to improve substantially upon the triangle inequality. For instance, if {X} is a Hilbert space and the {f_i} are mutually orthogonal, we have the Pythagorean theorem
\displaystyle \| \sum_{i \in I} f_i \|_X = (\sum_{i \in I} \|f_i\|_X^2)^{1/2}.
For sake of comparison, from the triangle inequality and Cauchy-Schwarz one has the general inequality
\displaystyle \| \sum_{i \in I} f_i \|_X \leq (\# I)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2} \ \ \ \ \ (1)
for any finite collection {(f_i)_{i \in I}} in any Banach space {X}, where {\# I} denotes the cardinality of {I}. Thus orthogonality in a Hilbert space yields “square root cancellation”, saving a factor of {(\# I)^{1/2}} or so over the trivial bound coming from the triangle inequality.
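For the record, (1) follows by combining the triangle inequality with the Cauchy-Schwarz inequality applied to the sum of norms:
\displaystyle \| \sum_{i \in I} f_i \|_X \leq \sum_{i \in I} 1 \cdot \|f_i\|_X \leq (\sum_{i \in I} 1)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2} = (\# I)^{1/2} (\sum_{i \in I} \|f_i\|_X^2)^{1/2}.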
More generally, let us somewhat informally say that a collection {(f_i)_{i \in I}} exhibits decoupling in {X} if one has the Pythagorean-like inequality
\displaystyle \| \sum_{i \in I} f_i \|_X \ll_\varepsilon (\# I)^\varepsilon (\sum_{i \in I} \|f_i\|_X^2)^{1/2}
for any {\varepsilon>0}, thus one obtains almost the full square root cancellation in the {X} norm. The theory of almost orthogonality can then be viewed as the theory of decoupling in Hilbert spaces such as {L^2({\bf R}^n)}. In {L^p} spaces for {p < 2} one usually does not expect this sort of decoupling; for instance, if the {f_i} are disjointly supported one has
\displaystyle \| \sum_{i \in I} f_i \|_{L^p} = (\sum_{i \in I} \|f_i\|_{L^p}^p)^{1/p}
and the right-hand side can be much larger than {(\sum_{i \in I} \|f_i\|_{L^p}^2)^{1/2}} when {p < 2}. At the opposite extreme, one usually does not expect to get decoupling in {L^\infty}, since one could conceivably align the {f_i} to all attain a maximum magnitude at the same location with the same phase, at which point the triangle inequality in {L^\infty} becomes sharp.
However, in some cases one can get decoupling for certain {2 < p < \infty}. For instance, suppose we are in {L^4}, and that {f_1,\dots,f_N} are bi-orthogonal in the sense that the products {f_i f_j} for {1 \leq i < j \leq N} are pairwise orthogonal in {L^2}. Then we have
\displaystyle \| \sum_{i = 1}^N f_i \|_{L^4}^2 = \| (\sum_{i=1}^N f_i)^2 \|_{L^2}
\displaystyle = \| \sum_{1 \leq i,j \leq N} f_i f_j \|_{L^2}
\displaystyle \ll (\sum_{1 \leq i,j \leq N} \|f_i f_j \|_{L^2}^2)^{1/2}
\displaystyle = \| (\sum_{1 \leq i,j \leq N} |f_i f_j|^2)^{1/2} \|_{L^2}
\displaystyle = \| \sum_{i=1}^N |f_i|^2 \|_{L^2}
\displaystyle \leq \sum_{i=1}^N \| |f_i|^2 \|_{L^2}
\displaystyle = \sum_{i=1}^N \|f_i\|_{L^4}^2
giving decoupling in {L^4}. (Similarly if each of the {f_i f_j} is orthogonal to all but {O_\varepsilon( N^\varepsilon )} of the other {f_{i'} f_{j'}}.) A similar argument also gives {L^6} decoupling when one has tri-orthogonality (with the {f_i f_j f_k} mostly orthogonal to each other), and so forth. As a slight variant, Khintchine’s inequality also indicates that decoupling should occur for any fixed {2 < p < \infty} if one multiplies each of the {f_i} by an independent random sign {\epsilon_i \in \{-1,+1\}}.
In recent years, Bourgain and Demeter have been establishing decoupling theorems in {L^p({\bf R}^n)} spaces for various key exponents of {2 < p < \infty}, in the “restriction theory” setting in which the {f_i} are Fourier transforms of measures supported on different portions of a given surface or curve; this builds upon the earlier decoupling theorems of Wolff. In a recent paper with Guth, they established the following decoupling theorem for the curve {\gamma({\bf R}) \subset {\bf R}^n} parameterised by the polynomial curve
\displaystyle \gamma: t \mapsto (t, t^2, \dots, t^n).
For any ball {B = B(x_0,r)} in {{\bf R}^n}, let {w_B: {\bf R}^n \rightarrow {\bf R}^+} denote the weight
\displaystyle w_B(x) := \frac{1}{(1 + \frac{|x-x_0|}{r})^{100n}},
which should be viewed as a smoothed out version of the indicator function {1_B} of {B}. In particular, the space {L^p(w_B) = L^p({\bf R}^n, w_B(x)\ dx)} can be viewed as a smoothed out version of the space {L^p(B)}. For future reference we observe a fundamental self-similarity of the curve {\gamma({\bf R})}: any arc {\gamma(I)} in this curve, with {I} a compact interval, is affinely equivalent to the standard arc {\gamma([0,1])}.
Theorem 1 (Decoupling theorem) Let {n \geq 1}. Subdivide the unit interval {[0,1]} into {N} equal subintervals {I_i} of length {1/N}, and for each such {I_i}, let {f_i: {\bf R}^n \rightarrow {\bf C}} be the Fourier transform
\displaystyle f_i(x) = \int_{\gamma(I_i)} e(x \cdot \xi)\ d\mu_i(\xi)
of a finite Borel measure {\mu_i} on the arc {\gamma(I_i)}, where {e(\theta) := e^{2\pi i \theta}}. Then the {f_i} exhibit decoupling in {L^{n(n+1)}(w_B)} for any ball {B} of radius {N^n}.
Orthogonality gives the {n=1} case of this theorem. The bi-orthogonality type arguments sketched earlier only give decoupling in {L^p} up to the range {2 \leq p \leq 2n}; the point here is that we can now get a much larger value of {p}. The {n=2} case of this theorem was previously established by Bourgain and Demeter (who obtained in fact an analogous theorem for any curved hypersurface). The exponent {n(n+1)} (and the radius {N^n}) is best possible, as can be seen by the following basic example. If
\displaystyle f_i(x) := \int_{I_i} e(x \cdot \gamma(\xi)) g_i(\xi)\ d\xi
where {g_i} is a bump function adapted to {I_i}, then standard Fourier-analytic computations show that {f_i} will be comparable to {1/N} on a rectangular box of dimensions {N \times N^2 \times \dots \times N^n} (and thus volume {N^{n(n+1)/2}}) centred at the origin, and exhibit decay away from this box, with {\|f_i\|_{L^{n(n+1)}(w_B)}} comparable to
\displaystyle 1/N \times (N^{n(n+1)/2})^{1/(n(n+1))} = 1/\sqrt{N}.
On the other hand, {\sum_{i=1}^N f_i} is comparable to {1} on a ball of radius comparable to {1} centred at the origin, so {\|\sum_{i=1}^N f_i\|_{L^{n(n+1)}(w_B)}} is {\gg 1}, which is just barely consistent with decoupling. This calculation shows that decoupling will fail if {n(n+1)} is replaced by any larger exponent, and also if the radius of the ball {B} is reduced to be significantly smaller than {N^n}.
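Indeed, in this example the right-hand side of the decoupling inequality is
\displaystyle \ll_\varepsilon N^\varepsilon (\sum_{i=1}^N \|f_i\|_{L^{n(n+1)}(w_B)}^2)^{1/2} \sim N^\varepsilon (N \cdot (1/\sqrt{N})^2)^{1/2} = N^\varepsilon,
so the lower bound {\gg 1} for the left-hand side matches the decoupling inequality up to the {N^\varepsilon} factor.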
This theorem has the following consequence of importance in analytic number theory:
Corollary 2 (Vinogradov main conjecture) Let {s, n, N \geq 1} be integers, and let {\varepsilon > 0}. Then
\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{2s}\ dx_1 \dots dx_n
\displaystyle \ll_{\varepsilon,s,n} N^{s+\varepsilon} + N^{2s - \frac{n(n+1)}{2}+\varepsilon}.
Proof: By the Hölder inequality (and the trivial bound of {N} for the exponential sum), it suffices to treat the critical case {s = n(n+1)/2}, that is to say to show that
\displaystyle \int_{[0,1]^n} |\sum_{j=1}^N e( j x_1 + j^2 x_2 + \dots + j^n x_n)|^{n(n+1)}\ dx_1 \dots dx_n \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+\varepsilon}.
We can rescale this as
\displaystyle \int_{[0,N] \times [0,N^2] \times \dots \times [0,N^n]} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{n(n+1)+\varepsilon}.
As the integrand is periodic along the lattice {N{\bf Z} \times N^2 {\bf Z} \times \dots \times N^n {\bf Z}}, this is equivalent to
\displaystyle \int_{[0,N^n]^n} |\sum_{j=1}^N e( x \cdot \gamma(j/N) )|^{n(n+1)}\ dx \ll_{\varepsilon,n} N^{\frac{n(n+1)}{2}+n^2+\varepsilon}.
The left-hand side may be bounded by {\ll \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)}^{n(n+1)}}, where {B := B(0,N^n)} and {f_j(x) := e(x \cdot \gamma(j/N))}. Since
\displaystyle \| f_j \|_{L^{n(n+1)}(w_B)} \ll (N^{n^2})^{\frac{1}{n(n+1)}},
the claim now follows from the decoupling theorem and a brief calculation. \Box
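(For the record, the brief calculation at the end of the proof goes as follows: the decoupling theorem gives
\displaystyle \| \sum_{j=1}^N f_j \|_{L^{n(n+1)}(w_B)} \ll_\varepsilon N^\varepsilon (\sum_{j=1}^N \|f_j\|_{L^{n(n+1)}(w_B)}^2)^{1/2} \ll_\varepsilon N^{\frac{1}{2}+\varepsilon} (N^{n^2})^{\frac{1}{n(n+1)}},
and raising both sides to the power {n(n+1)} gives the desired bound of {N^{\frac{n(n+1)}{2}+n^2+O(\varepsilon)}}.)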
Using the Plancherel formula, one may equivalently (when {s} is an integer) write the Vinogradov main conjecture in terms of solutions {j_1,\dots,j_s,k_1,\dots,k_s \in \{1,\dots,N\}} to the system of equations
\displaystyle j_1^i + \dots + j_s^i = k_1^i + \dots + k_s^i \quad \forall i=1,\dots,n,
but we will not use this formulation here.
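As an aside not drawn from the papers under discussion: this counting formulation is easy to explore numerically for tiny parameters. The sketch below (function and variable names are my own) tallies the power-sum vectors of all {s}-tuples, so that the number of solutions is the sum of squared multiplicities; the brute-force cost grows like {N^s}, so only very small cases are feasible.

```python
from collections import Counter
from itertools import product

def vinogradov_count(N, s, n):
    """Count 2s-tuples (j_1..j_s, k_1..k_s) in {1..N} with
    j_1^i + ... + j_s^i = k_1^i + ... + k_s^i for all i = 1..n."""
    tally = Counter()
    for js in product(range(1, N + 1), repeat=s):
        tally[tuple(sum(j ** i for j in js) for i in range(1, n + 1))] += 1
    # each ordered pair of s-tuples with equal power sums is a solution
    return sum(m * m for m in tally.values())

# critical case s = n(n+1)/2 = 3 for n = 2: expect growth ~ N^{3+eps}
for N in (4, 8, 16, 32):
    print(N, vinogradov_count(N, s=3, n=2))
```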
A history of the Vinogradov main conjecture may be found in this survey of Wooley; prior to the Bourgain-Demeter-Guth theorem, the conjecture was solved completely for {n \leq 3}, or for {n > 3} and {s} either below {n(n+1)/2 - n/3 + O(n^{2/3})} or above {n(n-1)}, with the bulk of recent progress coming from the efficient congruencing technique of Wooley. It has numerous applications to exponential sums, Waring’s problem, and the zeta function; to give just one application, the main conjecture implies the predicted asymptotic for the number of ways to express a large number as the sum of {23} fifth powers (the previous best result required {28} fifth powers). The Bourgain-Demeter-Guth approach to the Vinogradov main conjecture, based on decoupling, is ostensibly very different from the efficient congruencing technique, which relies heavily on the arithmetic structure of the problem, but it appears (as I have been told from second-hand sources) that the two methods are actually closely related, with the former being a sort of “Archimedean” version of the latter (with the intervals {I_i} in the decoupling theorem being analogous to congruence classes in the efficient congruencing method); hopefully there will be some future work making this connection more precise. One advantage of the decoupling approach is that it generalises to non-arithmetic settings in which the set {\{1,\dots,N\}} that {j} is drawn from is replaced by some other similarly separated set of real numbers. (A random thought – could this allow the Vinogradov-Korobov bounds on the zeta function to extend to Beurling zeta functions?)
Below the fold we sketch the Bourgain-Demeter-Guth argument proving Theorem 1.
I thank Jean Bourgain and Andrew Granville for helpful discussions.
One of my favourite unsolved problems in harmonic analysis is the restriction problem. This problem, first posed explicitly by Elias Stein, can take many equivalent forms, but one of them is this: one starts with a smooth compact hypersurface {S} (possibly with boundary) in {{\bf R}^d}, such as the unit sphere {S = S^2} in {{\bf R}^3}, and equips it with surface measure {d\sigma}. One then takes a bounded measurable function {f \in L^\infty(S,d\sigma)} on this surface, and then computes the (inverse) Fourier transform
\displaystyle \widehat{fd\sigma}(x) = \int_S e^{2\pi i x \cdot \omega} f(\omega) d\sigma(\omega)
of the measure {fd\sigma}. As {f} is bounded and {d\sigma} is a finite measure, this is a bounded function on {{\bf R}^d}; from the dominated convergence theorem, it is also continuous. The restriction problem asks whether this Fourier transform also decays in space, and specifically whether {\widehat{fd\sigma}} lies in {L^q({\bf R}^d)} for some {q < \infty}. (This is a natural space to control decay because it is translation invariant, which is compatible on the frequency space side with the modulation invariance of {L^\infty(S,d\sigma)}.) By the closed graph theorem, this is the case if and only if there is an estimate of the form
\displaystyle \| \widehat{f d\sigma} \|_{L^q({\bf R}^d)} \leq C_{q,d,S} \|f\|_{L^\infty(S,d\sigma)} \ \ \ \ \ (1)
for some constant {C_{q,d,S}} that can depend on {q,d,S} but not on {f}. By a limiting argument, to provide such an estimate, it suffices to prove such an estimate under the additional assumption that {f} is smooth.
Strictly speaking, the above problem should be called the extension problem, but it is dual to the original formulation of the restriction problem, which asks to find those exponents {1 \leq q' \leq \infty} for which the Fourier transform of an {L^{q'}({\bf R}^d)} function {g} can be meaningfully restricted to a hypersurface {S}, in the sense that the map {g \mapsto \hat g|_{S}} can be continuously defined from {L^{q'}({\bf R}^d)} to, say, {L^1(S,d\sigma)}. A duality argument shows that the exponents {q'} for which the restriction property holds are the dual exponents to the exponents {q} for which the extension problem holds.
There are several motivations for studying the restriction problem. The problem is connected to the classical question of determining the nature of the convergence of various Fourier summation methods (and specifically, Bochner-Riesz summation); very roughly speaking, if one wishes to perform a partial Fourier transform by restricting the frequencies (possibly using a well-chosen weight) to some region {B} (such as a ball), then one expects this operation to be well behaved if the boundary {\partial B} of this region has good restriction (or extension) properties. More generally, the restriction problem for a surface {S} is connected to the behaviour of Fourier multipliers whose symbols are singular at {S}. The problem is also connected to the analysis of various linear PDE such as the Helmholtz equation, Schrödinger equation, wave equation, and the (linearised) Korteweg-de Vries equation, because solutions to such equations can be expressed via the Fourier transform in the form {\widehat{fd\sigma}} for various surfaces {S} (the sphere, paraboloid, light cone, and cubic curve for the Helmholtz, Schrödinger, wave, and linearised Korteweg-de Vries equation respectively). A particular family of restriction-type theorems for such surfaces, known as Strichartz estimates, play a foundational role in the nonlinear perturbations of these linear equations (e.g. the nonlinear Schrödinger equation, the nonlinear wave equation, and the Korteweg-de Vries equation). Last, but not least, there is a fundamental connection between the restriction problem and the Kakeya problem, which roughly speaking concerns how tubes that point in different directions can overlap. Indeed, by superimposing special functions of the type {\widehat{fd\sigma}}, known as wave packets, and which are concentrated on tubes in various directions, one can “encode” the Kakeya problem inside the restriction problem; in particular, the conjectured solution to the restriction problem implies the conjectured solution to the Kakeya problem. Finally, the restriction problem serves as a simplified toy model for studying discrete exponential sums whose coefficients do not have a well controlled phase; this perspective was, for instance, used by Ben Green when he established Roth’s theorem in the primes by Fourier-analytic methods, which was in turn one of the main inspirations for our later work establishing arbitrarily long progressions in the primes, although we ended up using ergodic-theoretic arguments instead of Fourier-analytic ones and so did not directly use restriction theory in that paper.
The estimate (1) is trivial for {q=\infty} and becomes harder for smaller {q}. The geometry, and more precisely the curvature, of the surface {S}, plays a key role: if {S} contains a portion which is completely flat, then it is not difficult to concoct an {f} for which {\widehat{f d\sigma}} fails to decay in the normal direction to this flat portion, and so there are no restriction estimates for any finite {q}. Conversely, if {S} is not infinitely flat at any point, then from the method of stationary phase, the Fourier transform {\widehat{d\sigma}} can be shown to decay at a power rate at infinity, and this together with a standard method known as the {TT^*} argument can be used to give non-trivial restriction estimates for finite {q}. However, these arguments fall somewhat short of obtaining the best possible exponents {q}. For instance, in the case of the sphere {S = S^{d-1} \subset {\bf R}^d}, the Fourier transform {\widehat{d\sigma}(x)} is known to decay at the rate {O(|x|^{-(d-1)/2})} and no better as {|x| \rightarrow \infty}, which shows that the condition {q > \frac{2d}{d-1}} is necessary in order for (1) to hold for this surface. The restriction conjecture for {S^{d-1}} asserts that this necessary condition is also sufficient. However, the {TT^*}-based argument gives only the Tomas-Stein theorem, which in this context gives (1) in the weaker range {q \geq \frac{2(d+1)}{d-1}}. (On the other hand, by the nature of the {TT^*} method, the Tomas-Stein theorem does allow the {L^\infty(S,d\sigma)} norm on the right-hand side to be relaxed to {L^2(S,d\sigma)}, at which point the Tomas-Stein exponent {\frac{2(d+1)}{d-1}} becomes best possible. The fact that the Tomas-Stein theorem has an {L^2} norm on the right-hand side is particularly valuable for applications to PDE, leading in particular to the Strichartz estimates mentioned earlier.)
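To spell out the necessity of the condition {q > \frac{2d}{d-1}}: a function decaying like {|x|^{-(d-1)/2}} lies in {L^q({\bf R}^d)} near infinity precisely when
\displaystyle \int_{|x| \geq 1} |x|^{-q(d-1)/2}\ dx < \infty \iff \frac{q(d-1)}{2} > d \iff q > \frac{2d}{d-1}.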
Over the last two decades, there was a fair amount of work in pushing past the Tomas-Stein barrier. For sake of concreteness let us work just with the restriction problem for the unit sphere {S^2} in {{\bf R}^3}. Here, the restriction conjecture asserts that (1) holds for all {q > 3}, while the Tomas-Stein theorem gives only {q \geq 4}. By combining a multiscale analysis approach with some new progress on the Kakeya conjecture, Bourgain was able to obtain the first improvement on this range, establishing the restriction conjecture for {q > 4 - \frac{2}{15}}. The methods were steadily refined over the years; until recently, the best result (due to myself) was that the conjecture held for all {q > 3 \frac{1}{3}}, which proceeded by analysing a “bilinear {L^2}” variant of the problem studied previously by Bourgain and by Wolff. This is essentially the limit of that method; the relevant bilinear {L^2} estimate fails for {q < 3 + \frac{1}{3}}. (This estimate was recently established at the endpoint {q=3+\frac{1}{3}} by Jungjin Lee (personal communication), though this does not quite improve the range of exponents in (1) due to a logarithmic inefficiency in converting the bilinear estimate to a linear one.)
On the other hand, the full range {q>3} of exponents in (1) was obtained by Bennett, Carbery, and myself (with an alternate proof later given by Guth), but only under the additional assumption of non-coplanar interactions. In three dimensions, this assumption was enforced by replacing (1) with the weaker trilinear (and localised) variant
\displaystyle \| \widehat{f_1 d\sigma_1} \widehat{f_2 d\sigma_2} \widehat{f_3 d\sigma_3} \|_{L^{q/3}(B(0,R))} \leq C_{q,d,S_1,S_2,S_3,\epsilon} R^\epsilon \|f_1\|_{L^\infty(S_1,d\sigma_1)} \|f_2\|_{L^\infty(S_2,d\sigma_2)} \|f_3\|_{L^\infty(S_3,d\sigma_3)} \ \ \ \ \ (2)
where {\epsilon>0} and {R \geq 1} are arbitrary, {B(0,R)} is the ball of radius {R} in {{\bf R}^3}, and {S_1,S_2,S_3} are compact portions of {S} whose unit normals {n_1(\cdot),n_2(\cdot),n_3(\cdot)} are never coplanar, thus there is a uniform lower bound
\displaystyle |n_1(\omega_1) \wedge n_2(\omega_2) \wedge n_3(\omega_3)| \geq c
for some {c>0} and all {\omega_1 \in S_1, \omega_2 \in S_2, \omega_3 \in S_3}. If it were not for this non-coplanarity restriction, (2) would be equivalent to (1) (by setting {S_1=S_2=S_3} and {f_1=f_2=f_3}, with the converse implication coming from Hölder’s inequality; the {R^\epsilon} loss can be removed by a lemma from a paper of mine). At the time we wrote this paper, we tried fairly hard to remove this non-coplanarity restriction in order to recover progress on the original restriction conjecture, but without much success.
A few weeks ago, though, Bourgain and Guth found a new way to use multiscale analysis to “interpolate” between the result of Bennett, Carbery and myself (that has optimal exponents, but requires non-coplanar interactions), with a more classical square function estimate of Córdoba that handles the coplanar case. A direct application of this interpolation method already ties with the previous best known result in three dimensions (i.e. that (1) holds for {q > 3 \frac{1}{3}}). But it also allows for the insertion of additional input, such as the best Kakeya estimate currently known in three dimensions, due to Wolff. This enlarges the range slightly to {q > 3.3}. The method also can extend to variable-coefficient settings, and in some of these cases (where there is so much “compression” going on that no additional Kakeya estimates are available) the estimates are best possible.
As is often the case in this field, there is a lot of technical book-keeping and juggling of parameters in the formal arguments of Bourgain and Guth, but the main ideas and “numerology” can be expressed fairly readily. (In mathematics, numerology refers to the empirically observed relationships between various key exponents and other numerical parameters; in many cases, one can use shortcuts such as dimensional analysis or informal heuristics to compute these exponents long before the formal argument is completely in place.) Below the fold, I would like to record this numerology for the simplest of the Bourgain-Guth arguments, namely a reproof of (1) for {q > 3 \frac{1}{3}}. This is primarily for my own benefit, but may be of interest to other experts in this particular topic. (See also my 2003 lecture notes on the restriction conjecture.)
In order to focus on the ideas in the paper (rather than on the technical details), I will adopt an informal, heuristic approach, for instance by interpreting the uncertainty principle and the pigeonhole principle rather liberally, and by focusing on main terms in a decomposition and ignoring secondary terms. I will also be somewhat vague with regard to asymptotic notation such as {\ll}. Making the arguments rigorous requires a certain amount of standard but tedious effort (and is one of the main reasons why the Bourgain-Guth paper is as long as it is), which I will not focus on here.
Bohr and the quantum atomic model (Issue: 2012-3, Section: 14-16)
Bohr’s atomic model was introduced in 1913 by Niels Bohr. The Bohr atom is a planetary model, in which the electrons sit in stationary circular orbits and orbit the nucleus at fixed distances; it is based on Planck’s laws and on Einstein’s photoelectric effect.
Bohr’s atomic model was an expansion of the Rutherford model and overcame several of its flaws. Rutherford’s model left many questions open: its orbiting electrons would continuously lose energy and collapse onto the nucleus, and the model was not compatible with Maxwell’s laws. Bohr’s atomic model is important because it describes most of the accepted features of atomic theory and explains the Rydberg formula. The Bohr model also shows that the electrons occupy orbits of differing energy around the nucleus. For Bohr, the energy of an electron is quantized, that is, there is nothing between one orbit and another. The level that an electron normally occupies is called the ground state. When an electron changes orbit it makes a quantum leap. The energy difference between the two orbits (the ground state and the electron’s excited state) is emitted by the atom as a photon. Bohr’s atomic model thus shows that each electron has a set energy, and from its excited state the electron can return to its ground state. Bohr also discovered that the closer an electron is to the nucleus, the less energy it needs; conversely, the further an electron is from the nucleus, the more energy it needs. He also discovered that each energy level may contain varying quantities of electrons. Even if it contains some errors, the Bohr model is very important. The energy of an orbit is related to its size. Using Planck’s constant, Bohr obtained a formula for the energy levels of the hydrogen atom, and he postulated that the angular momentum of the electron is quantized.
Planck’s laws and photoelectric effect
For Bohr, light is electromagnetic radiation of a particular nature. Planck proposed that light is formed of packets of energy, called quanta, each quantum having its own frequency. Planck’s law was formulated to explain the radiation emitted by a blackbody. For a blackbody whose temperature does not exceed a few hundred degrees, most of the radiation emitted lies in the infrared part of the electromagnetic spectrum; at higher temperatures, part of the radiation is radiated as visible light. The value of Planck’s constant is 6.62606957 × 10⁻³⁴ joule·second, with a standard uncertainty of 0.00000029 × 10⁻³⁴ joule·second. Einstein, to explain the photoelectric effect, proposed that light is formed of packets of energy called photons; a bright light emits many photons. The photoelectric effect is based on experiments in which a metal foil is bombarded with electromagnetic radiation: electrons are expelled only if the frequency of the incident radiation exceeds a threshold. Bohr took a hydrogen atom and supplied it with energy; once the threshold was overcome, the electron changed orbit (1 to 2, 2 to 3, …). Bohr identified seven orbits in the hydrogen atom. This phenomenon corresponded with the studies of Balmer, who discovered that the spectrum was formed from various wavelengths, each with its own frequency.
Emission Spectrum
The energy released by electrons occupies the portion of the electromagnetic spectrum that we detect as visible light. Small variations are seen as light of different colors. All colors of the visible spectrum are visible when white light is diffracted with a prism. But when the light emitted by a hydrogen atom is dispersed, not all colors are present. Since Bohr thought that the electrons sit in different energy levels, he identified seven energy levels in the hydrogen atom.
Bohr then hypothesized that when an electron receives energy from outside, it changes orbit; when the electron returns to its original orbit, it emits a photon and an energy transition occurs. In addition to the Balmer series, there are the Lyman series and the Paschen series.
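As a quantitative aside (my own illustration, not part of the original article; the constants are standard reference values), the Bohr energy formula and the Rydberg formula reproduce the visible Balmer lines of hydrogen:

```python
# Bohr-model hydrogen levels and Balmer wavelengths; the function
# names are my own, for illustration only.
R_H = 1.0967758e7   # Rydberg constant for hydrogen, in 1/m
E_1 = -13.6         # ground-state energy of hydrogen, in eV

def energy_ev(n):
    """Bohr energy of level n: E_n = E_1 / n^2 (in eV)."""
    return E_1 / n ** 2

def wavelength_nm(n_up, n_low):
    """Rydberg formula: 1/lambda = R_H * (1/n_low^2 - 1/n_up^2)."""
    return 1e9 / (R_H * (1 / n_low ** 2 - 1 / n_up ** 2))

# Balmer series: transitions ending on n = 2 give visible light,
# e.g. n = 3 -> 2 yields the red H-alpha line near 656 nm
for n in range(3, 8):
    print(f"n={n} -> n=2: {wavelength_nm(n, 2):.1f} nm "
          f"(E_{n} = {energy_ev(n):.2f} eV)")
```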
As in Rutherford’s atomic model, the electron revolves in a circular, stationary orbit. Bohr’s atomic model is thus an improvement of Rutherford’s atom, in which each orbit corresponds to a definite energy value. The energy of the emitted photon corresponds to the energy difference between two orbits. Bohr explained how electrons could jump from one orbit to another only by emitting or absorbing energy in fixed quanta.
Bohr’s atomic model works well for simple atoms (such as hydrogen), but not for more complex ones: it does not explain the Zeeman effect, it violates the Heisenberg uncertainty principle, and it fails for multi-electron atoms.
Within about ten years, contradictions to Bohr’s atom appeared: new hypotheses were needed in order not to miss the wave properties of particles such as the electron, the proton and the atom. Despite all this, Bohr’s atomic model was very important, especially for astronomy, and from Bohr’s atom the quantum atomic model was born.
Quantum mechanical atomic model
The quantum mechanical atomic model is based on quantum theory: matter has characteristics both of particles and of waves. According to the uncertainty principle, it is impossible to know the exact position and momentum of an electron at the same time; one can know one or the other. The quantum mechanical atomic model is therefore based on probability: it uses orbitals, sometimes of complex shape, which are the regions of space in which the electron is likely to be found. To describe the electrons and their orbitals, four quantum numbers were introduced: “n”, “l”, “m” and “ms”.
Schrödinger’s equation
The Schrödinger equation is very important for quantum physics. It determines the wave function of a particle, from which one can extract the probabilities of its position and momentum. The Schrödinger equation describes the behavior of a dynamic system; it is a wave equation that predicts the likelihood of events. The energy is quantized, labelled by a quantum number n, and can never be zero.
Quantum number and orbitals
The quantum numbers describe the state of the electron and the number of electrons that can occupy an orbital.
Quantum number “n”: the principal quantum number describes the distance of the orbital from the nucleus and the size of the orbit. It takes positive integer values: 1, 2, 3, …
The angular momentum quantum number, or quantum number “l”, defines the shape of the orbital. It takes integer values from 0 to n−1; the values l = 0, 1, 2, 3 correspond to the s, p, d and f orbitals. Subshells are sets of orbitals that have the same value of the quantum number “n” but different values of the quantum number “l”.
Quantum number “m”: the magnetic quantum number “m” describes the orientation of the orbital in space. It takes integer values from −l to +l; for example, l = 1 gives m = −1, 0, +1.
Spin quantum number “ms”: the spin quantum number “ms” specifies the value and orientation of the electron’s spin axis. It takes the values +1/2 or −1/2.
The Pauli exclusion principle states that an orbital can hold at most two electrons, with opposite spins.
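As an illustration (a sketch of my own, not from the original article), one can enumerate the allowed combinations of quantum numbers and recover the familiar capacity of 2n² electrons per shell:

```python
def shell_capacities(n_max):
    """Enumerate the allowed (n, l, m) orbitals; with the Pauli
    principle (m_s = +1/2 or -1/2) each orbital holds two electrons,
    giving 2n^2 electrons per shell."""
    for n in range(1, n_max + 1):
        orbitals = [(n, l, m) for l in range(n) for m in range(-l, l + 1)]
        print(f"n={n}: {len(orbitals)} orbitals, "
              f"{2 * len(orbitals)} electrons (2n^2 = {2 * n * n})")

shell_capacities(4)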
The electron configuration is the arrangement of the electrons in the orbitals of an atom; the orbitals are filled according to the Aufbau principle, in order of increasing energy.
The Bohr and quantum atomic models are very important for science: Bohr’s atomic model is an expansion of Rutherford’s, describes most of the accepted features of atomic theory and explains the Rydberg formula; the quantum atomic model introduced the concept of the orbital and explains the quantum numbers. Bohr’s atomic model is also important for the discovery of the emission spectrum, which is very important in astronomy.
Schrödinger potentials solvable in terms of the general Heun functions
A. M. Ishkhanyan
Research output: Contribution to journal › Article › peer-review
30 Citations (Scopus)
We show that there exist 35 choices for the coordinate transformation, each leading to a potential for which the stationary Schrödinger equation is exactly solvable in terms of the general Heun functions. Because of the symmetry of the Heun equation with respect to the transposition of its singularities, only eleven of these potentials are independent. Four of these independent potentials are always explicitly written in terms of elementary functions, one potential is given through the Jacobi elliptic sn-function, and the others are in general defined parametrically. Nine of the independent potentials possess exactly or conditionally integrable hypergeometric sub-potentials for which each of the fundamental solutions of the Schrödinger equation is written through a single hypergeometric function. Many of the potentials possess sub-potentials for which the general solution is written through fundamental solutions each of which is a linear combination of two or more Gauss hypergeometric functions. We present an example of such a potential which is a conditionally integrable generalization of the third exactly solvable Gauss hypergeometric potential.
Original language: English
Pages (from-to): 456-471
Number of pages: 16
Journal: Annals of Physics
Publication status: Published - 1 Jan 2018
• General Heun equation
• Integrable potential
• Stationary Schrödinger equation
ASJC Scopus subject areas
• Physics and Astronomy (all)
Seminars of the year 2018
Séminaire du LPTMS: Nicolas Cherroret
Novel perspectives from Anderson localization of atomic matter waves
Nicolas Cherroret (Laboratoire Kastler-Brossel, Université Pierre et Marie Curie)
In the last decades, the field of atom optics has allowed for accurate experimental investigations of quantum transport with cold atoms. In this context, the physics of Anderson localization (AL) can today be finely studied, using tunable atomic matter waves in well controlled optical random potentials. After briefly introducing the main concepts of atom optics in random optical potentials, I will address the problem of the out-of-equilibrium evolution of a non-interacting matter wave in a random potential. The discussion will be focused on two different dynamical scenarios where unexpected manifestations of AL show up. First, I will discuss the spatial spreading of a narrow wave packet, a situation where AL triggers a « mesoscopic echo » peak in the density distribution. This phenomenon has been observed experimentally with cold atoms only recently. In the second scenario, I will consider the evolution of a plane matter wave in the random potential. In this case, the interesting dynamics takes place in momentum space, where AL manifests itself as a surprising « coherent forward scattering peak », twin of the well-known coherent backscattering effect. I will conclude the talk with open questions on the role of atomic interactions.
Physics-Biology interface seminar: Stéphanie Bonneau
Controled oxidation in living systems
Stéphanie Bonneau (Laboratoire Jean Perrin, Paris)
Living systems produce energy by oxidizing carbon: in aerobic organisms, a major step of this oxidation is carried out by the respiratory chain in mitochondria. Energy production involves oxidation and subsequent ageing of the cellular materials. Controlling their oxidative activity allows cells to remain far enough from thermodynamic equilibrium, and consequently the balance between respiration and ageing is a major parameter regulating a cell's fate. The key role of mitochondria in this phenomenon will be discussed.
Experimentally, the control of cellular oxidation is performed using chosen photosensitizers. Due to their macrocycle, such molecules present very special photo-physical properties: their light irradiation generates, through their triplet state, reactive oxygen species. The lifetime of these molecular species is very short and their action is very localized. Specifically targeting photosensitizers to one or another cell compartment is thus the basis of their potential to modify and control the physiology of cells. For example, the photo-chemical internalization (PCI) of macromolecules into cells is based on the photo-induced alteration of endosomal membranes - before their maturation into lysosomes - allowing the escape of the macromolecules, which are then free to reach their targets within the cell. More extensive photo-induced changes, in particular to the mitochondria, lead to cell death by necrosis or apoptosis. This photo-induced cell death is the basis of an anticancer therapy known as photodynamic therapy (PDT).
First, we focussed on the photo-induced modifications of cellular trafficking. By combining measurements of local cytoplasmic viscosity and active trafficking, we found that the photodynamic effect induced only a slight increase in viscosity but a massive decrease in diffusion. These effects are the signature of a return of the system to thermodynamic equilibrium after photo-activation. Secondly, to better apprehend such complex effects, we turned to model systems. In particular, we focused on the photo-oxidation of membrane lipids, which are important oxidative targets, and extensively studied their modifications under photo-oxidation. Our purpose is to demonstrate that the photo-induced permeabilization of membranes is correlated with a deep physical stress, which can be relaxed through various pathways depending on the lipid composition, which is characteristic of the targeted cellular compartment.
Séminaire du LPTMS: Rémy Dubertrand & Pierre Illien
A semiclassical perspective to study quantum interacting particles
Rémy Dubertrand (Université de Liège, Belgique)
Quantum chaos has had great success in describing various types of one-particle quantum systems in the semiclassical regime (e.g. large quantum numbers). I will describe how these techniques can be used to describe quantum systems of interacting particles. For example, I have contributed to the study of the Bose-Hubbard model, justifying why its spectral statistics agree with RMT in a certain regime of the ratio between on-site interaction and hopping energies. I will discuss in a more general framework how fruitful a semiclassical approach can be for studying such systems of interacting particles.
Effect of crowding and hydrodynamic interactions on the dynamics of fluctuating systems
Pierre Illien (EC2M laboratory, ESPCI Paris)
Describing the interactions of a fluctuating object with its environment is a ubiquitous problem of statistical physics. I will first focus on the dynamics of a driven particle in a host medium which hinders its motion through crowding interactions. Going beyond the usual effective descriptions of the environment of the active tracer, we propose a lattice model which explicitly takes into account the correlations between the dynamics of the tracer and the response of the bath, and for which we determine exact and approximate analytical solutions that reveal intrinsically nonlinear and nonequilibrium properties. I will then present recent results that reveal how the diffusivity of enzymes can be enhanced when they are catalytically active. In order to identify the physical mechanisms at stake in this phenomenon, we perform measurements on the endothermic and relatively slow enzyme aldolase. We propose a new physical paradigm, which reveals that the diffusion coefficient of a model enzyme hydrodynamically coupled to its environment increases significantly when it undergoes changes in conformational fluctuations in a substrate-concentration-dependent manner, independently of the overall turnover rate of the underlying enzymatic reaction.
Séminaire du LPTMS: Andrei Bernevig *** séminaire exceptionnel ***
Topological Quantum Chemistry
Andrei Bernevig (Department of Physics, Princeton University, USA)
The past decade has seen tremendous success in predicting and experimentally discovering distinct classes of topological insulators (TIs) and semimetals. We review the field and we propose an electronic band theory that highlights the link between topology and local chemical bonding, and combines this with the conventional band theory of electrons. Topological Quantum Chemistry is a description of the universal global properties of all possible band structures and materials, comprised of a graph theoretical description of momentum space and a dual group theoretical description in real space. We classify the possible band structures for all 230 crystal symmetry groups that arise from local atomic orbitals, and show which are topologically nontrivial. We show how our topological band theory sheds new light on known TIs, and demonstrate the power of our method to predict a plethora of new TIs.
Séminaire du LPTMS: Pierre-Elie Larré & Clément Tauber
Quantum simulating many-body phenomena with propagating light
Pierre-Elie Larré (Université de Cergy-Pontoise)
We consider the propagation of a quantum light field in a cavityless nonlinear dielectric. In this all-optical platform, the space propagation of the field's envelope may be mapped onto the time evolution of a quantum fluid of interacting photons. The resulting many-body quantum system constitutes a particular class of quantum fluids of light and presently attracts a growing interest as a powerful tool for quantum simulation. I will present recent theoretical and experimental progresses in this rapidly emerging research field, including investigations on superfluidity, elementary excitations, disorder, quantum quenches, prethermalization, thermalization, and Bose-Einstein condensation.
Bulk-edge correspondence for Floquet topological insulators
Clément Tauber (ETH, Zürich)
Floquet topological insulators describe independent electrons on a lattice driven out of equilibrium by a time-periodic Hamiltonian, beyond the usual adiabatic approximation. In dimension two such systems are characterized by integer-valued topological indices associated to the unitary propagator, alternatively in the bulk or at the edge of a sample. In this talk I will give new definitions of the two indices, relying neither on translation invariance nor on averaging, and show that they are equal. In particular, weak disorder and defects are intrinsically taken into account. Finally, indices can be defined when two driven samples are placed next to one another, either in space or in time, and are then shown to be equal. The edge index is interpreted as a quantized pumping occurring at the interface with an effective vacuum.
Physics-Biology interface seminar: Shiladitya Banerjee
Adaptive division control in stressed bacterial cells
Shiladitya Banerjee (UCL, UK)
Control of cell size is a fundamental adaptive trait that underlies the coupling between cell growth and division. Cells possess the unique ability to adapt their size and shapes in response to environmental cues, thereby translating extracellular information into decisions to grow or divide. However, the physical mechanisms mediating the regulation of cell size and division timing remain poorly understood. In this talk, I will discuss our recent discovery of an adaptive model of cell size control in bacteria, where the decision to divide is tightly regulated by the spatial patterning of cell wall growth modes. Using a combination of stochastic mechanical modelling and single-cell experiments, I will elucidate the implications of the size control model for cellular fitness adaptation under stress. In particular, our results show that morphological transformations provide fitness and survival advantages to bacteria under sustained antibiotic treatment.
Séminaire du LPTMS: Christopher Joyner
A random walk approach to linear statistics in random tournament ensembles
Christopher Joyner (Queen Mary University of London, UK)
We investigate how the linear statistics of random matrices with purely imaginary Bernoulli entries exhibit global correlations in terms of row sums. These matrices are related to ensembles of so-called random regular tournaments. Specifically, we construct a random walk within the space of these matrices and show that the induced motion of the first k traces in a Chebyshev basis converges to a suitable Ornstein-Uhlenbeck process. Coupling this with Stein’s method allows us to compute the rate of convergence to a Gaussian distribution in the limit of large matrix dimension.
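As a rough illustrative sketch (my own, and deliberately simplified): one can sample Hermitian matrices of this type, i times an antisymmetric ±1 matrix, and check a simple linear statistic. Note that this unconstrained sampler does not enforce the vanishing row sums that define random regular tournaments.

```python
import numpy as np

rng = np.random.default_rng(1)

def imaginary_bernoulli(n, rng):
    """H = i*A with A antisymmetric and +/-1 entries above the
    diagonal. H is Hermitian, so its spectrum is real."""
    a = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), k=1)
    return 1j * (a - a.T)

n = 500
h = imaginary_bernoulli(n, rng)
eigs = np.linalg.eigvalsh(h)
# second moment of the spectrum: tr(H^2)/n = n - 1 exactly,
# since every off-diagonal entry has unit modulus
print((eigs ** 2).mean(), n - 1)
```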
Séminaire du LPTMS: Beatriz Seoane Bartolomé & Ulisse Ferrari
Phase transitions in computer simulations : the Tethered Monte Carlo method
Beatriz Seoane Bartolomé (LPT-ENS, Paris)
In this talk, I will present a powerful Monte Carlo method that I developed during my PhD [1,2] and extended recently [3], designed to efficiently study phase transitions at equilibrium. The principle is very simple: by imposing external constraints on the system, we are able to avoid the traditional critical (exponential) slowing down associated with second- (first-) order transitions, and thus reach much larger system sizes than with traditional methods. Furthermore, the reconstruction of the constrained free energy is much simpler than in other similar methods, such as the famous Umbrella Sampling, allowing us both to fix multiple constraints at the same time and to extract quantities such as the interfacial free energy with unusually high precision. In particular, I will discuss the Tethered Monte Carlo strategy in the context of a toy model for crystalline porous media [3].
[1] J. Stat. Phys. 144, 554 (2011). [2] Phys. Rev. Lett. 108, 165701 (2012). [3] The Journal of Chemical Physics 147 , 084704 (2017).
Statistical Physics-inspired models of biological network: collective behavior in neuronal ensembles
Ulisse Ferrari (Institut de la Vision, Inserm & UPMC)
In both cortices and sensory systems, information is represented and transmitted through the correlated activity of large neuronal networks. Neurons, in fact, do not work independently: each of them drives the activity of the others, thus working as a collective ensemble. Methods borrowed from Statistical Physics and Machine Learning are powerful tools for characterizing the collective behavior of large systems and hence offer promising approaches to understand the activity of neuronal populations. In this talk I will show how the Maximum Entropy principle, applied to cortical in-vivo recordings, allows for characterizing and comparing the population behavior during wakefulness and deep sleep. Then, I will use hidden-layer models, point processes and “experimental” linear response theory to account for non-linear stimulus processing in sensory networks, such as the retina. These approaches allow for constructing high performing models of the retinal population response to visual stimuli and thus for characterizing how a network of neurons can encode and transmit visual information.
Physics-Biology interface seminar: Pere Roca-Cusachs
Sensing the matrix: transducing mechanical signals from integrins to the nucleus.
Pere Roca-Cusachs (Institute for Bioengineering of Catalonia, Universitat de Barcelona, Spain)
Cell proliferation and differentiation, as well as key processes in development, tumorigenesis, and wound healing, are strongly determined by the properties of the extracellular matrix (ECM), including its mechanical rigidity and the density and distribution of its ligands. In this talk, I will explain how we combine molecular biology, biophysical measurements, and theoretical modelling to understand the mechanisms by which cells sense and respond to matrix properties. I will discuss how the properties under force of integrin-ECM bonds, and of the adaptor protein talin, drive and regulate matrix sensing. I will further discuss how this sensing can be understood through a computational molecular clutch model, which can quantitatively predict the role of integrins, talin, myosin, and ECM receptors, and their effect on cell response. Finally, I will analyze how signals triggered by rigidity at cell-ECM adhesions are transmitted to the nucleus, leading to the activation of the transcriptional regulator YAP.
Séminaire du LPTMS: Nicola Bartolo & Mathieu Hemery
Exact results for non-equilibrium phase transitions
φ-evo: from function to network
Mathieu Hemery (McGill University, Montréal, Canada)
Séminaire du LPTMS: Chikashi Arita
Variational calculation of diffusion coefficients in stochastic lattice gases
Chikashi Arita (Universität des Saarlandes, Saarbrücken)
Deriving macroscopic behaviors from the microscopic dynamics of particles is a fundamental problem. In stochastic lattice gases one tries to demonstrate this hydrodynamic limit. The evolution of a stochastic lattice gas with symmetric hopping rules is described by a diffusion equation with a density-dependent diffusion coefficient. In practice, even when the equilibrium properties of a lattice gas are analytically known, the diffusion coefficient cannot be explicitly computed, except when the lattice gas additionally satisfies the "gradient condition"; e.g. the diffusion coefficients of the simple exclusion process and of non-interacting random walks are exactly identical to their hopping rates. We develop a procedure to obtain systematic analytical approximations for the diffusion coefficient in non-gradient lattice gases with known equilibrium. The method relies on a variational formula found by Varadhan and Spohn. Restricting the test functions to finite-dimensional sub-spaces allows one to perform the minimization and gives upper bounds for the diffusion coefficient. We apply the procedure to the following two models: one-dimensional generalized exclusion processes, where each site can accommodate at most two particles (2-GEPs) [1], and the Kob-Andersen (KA) model on the square lattice, which is classified as a kinetically-constrained gas [2]. The prediction of the diffusion coefficient depends on the domain ("shape") of the test functions. The smallest shapes give approximations which coincide with mean-field theory, but the larger the shapes, the more precise the upper bounds we obtain. For the 2-GEPs, our analytical predictions provide upper bounds which are very close to simulation results throughout the entire density range. For the KA model, we also find improved upper bounds when the density is small. By combining the variational method with a perturbation approach, we discuss the asymptotic behavior of the diffusion coefficient in the high-density limit.
• [1] C. Arita, P. L. Krapivsky and K. Mallick, Variational calculation of transport coefficients in diffusive lattice gases, Phys. Rev. E 95, 032121 (2017)
• [2] C. Arita, P. L. Krapivsky and K. Mallick, Bulk diffusion in a kinetically constrained lattice gas, preprint cond-mat arXiv:1711.10616
Séminaire du LPTMS: Alexandre Lazarescu
On the hydrodynamic behaviour of interacting lattice gases far from equilibrium
Alexandre Lazarescu (Centre de Physique Théorique, École Polytechnique)
Lattice gases are a particularly rich playground to study the large scale emergent behaviour of microscopic models. A few things are known in general for models that are sufficiently close to equilibrium (i.e. with rates close to detailed balance, and where the dynamics is typically diffusive): in particular, the local density of particles behaves autonomously in the macroscopic limit, even at the level of large deviations, and the system can be described through a Langevin equation involving only a few quantities called transport coefficients. As demonstrated in the previous talk, obtaining those coefficients in practice can be quite challenging, but we can usually be confident that they exist. I will be talking about a situation that is quite different at first sight: systems far from equilibrium, where the dynamics is propagative, and where very little is known in general. The question is then whether one can hope to be able to describe those models with a similar hydrodynamic structure, or if that description breaks down (if, for instance, long-range correlations become relevant). I will present recent results showing that, for a broad class of 1D models with hard-core repulsion but also interactions and space-dependent rates, the answer is yes and no: all those models exhibit a dynamical phase transition between a hydrodynamic regime and a highly correlated one, which can be related to the so-called "third order phase transitions". The methods involved are quite general and likely to be applicable to many more families of models.
Séminaire du LPTMS: Sergej Moroz
Séminaire du LPTMS: Laurent de Forges de Parny
Multicomponent Bose-Hubbard Model: Nematic Order in Spinor Condensates
Laurent de Forges de Parny (Albert-Ludwigs University of Freiburg, Germany)
Since the seminal work of D. Jaksch et al. [1], interest in bosonic mixtures has been driven by the promising prospect of observing coexisting quantum phases, spin dynamics, and quantum magnetism. Recently, ultracold bosons with an effective spin degree of freedom have made it possible to engineer magnetic quantum phase transitions and non-trivial magnetic phases, e.g. the nematic order, which breaks the spin-rotation symmetry without magnetic order [2]. I will discuss the magnetic properties of strongly interacting spin-1 bosons in optical lattices. Employing a combined strategy based on exact numerical methods (quantum Monte Carlo simulations and exact diagonalization) and analytical calculations, we have derived the phase diagrams and characterized the phase transitions beyond the mean-field description [3,4]. Furthermore, we have established the low-energy spectrum of the nematic superfluid phase and have confirmed a singlet-to-nematic phase transition inside the Mott insulator phase. References:
1. D. Jaksch, C. Bruder, J. I. Cirac, C. W. Gardiner and P. Zoller, Cold Bosonic Atoms in Optical Lattices, Phys. Rev. Lett. 81, 3108 (1998)
2. T. Zibold, V. Corre, C. Frapolli, A. Invernizzi, J. Dalibard and F. Gerbier, Spin-nematic order in antiferromagnetic spinor condensates, Phys. Rev. A 93, 023614 (2016).
3. L. de Forges de Parny, F. Hébert, V. G. Rousseau, and G. G. Batrouni, Interacting spin-1 bosons in a two-dimensional optical lattice, Phys. Rev. B 88, 104509 (2013).
4. Laurent de Forges de Parny, Hongyu Yang, and Frédéric Mila, Anderson Tower of States and Nematic Order of Spin-1 Bosonic Atoms on a 2D Lattice, Phys. Rev. Lett. 113, 200402 (2014)
Séminaire du LPTMS: Andrea de Luca
Solution of a minimal model for many-body quantum chaos
Andrea de Luca (Rudolf Peierls Centre for Theoretical Physics, Oxford University, UK)
I present a minimal model for quantum chaos in a spatially extended many-body system. It consists of a chain of sites with nearest-neighbour coupling under Floquet time evolution. Quantum states at each site span a q-dimensional Hilbert space and time evolution for a pair of sites is generated by a q²×q² random unitary matrix. The Floquet operator is specified by a quantum circuit, in which each site is coupled to its neighbour on one side during the first half of the evolution period, and to its neighbour on the other side during the second half of the period. I will introduce a diagrammatic formalism useful for averaging the many-body dynamics over realisations of the random matrices. This approach leads to exact expressions in the large-q limit and sheds light on the universality of random matrices in many-body quantum systems and the ubiquitous entanglement growth in out-of-equilibrium dynamics.
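A minimal sketch of such a brickwork circuit (my own construction for illustration, with open boundary conditions and small L and q; the talk's exact results concern the large-q limit):

```python
import numpy as np

def haar_unitary(d, rng):
    """Haar-distributed d x d unitary from the QR decomposition of a
    complex Ginibre matrix, with the standard phase fix."""
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def floquet_operator(L, q, rng):
    """One period of the brickwork circuit on an open chain of L q-level
    sites: even bonds act in the first half period, odd bonds in the second."""
    dim = q ** L
    def layer(start):
        U = np.eye(dim, dtype=complex)
        for i in range(start, L - 1, 2):
            gate = haar_unitary(q * q, rng)   # independent q^2 x q^2 gate
            op = np.kron(np.kron(np.eye(q ** i), gate), np.eye(q ** (L - i - 2)))
            U = op @ U
        return U
    return layer(1) @ layer(0)

rng = np.random.default_rng(1)
W = floquet_operator(L=4, q=2, rng=rng)
print("unitary:", np.allclose(W @ W.conj().T, np.eye(W.shape[0])))
```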
Séminaire du LPTMS: Shuang Wu
Thouless bandwidth formula in the Hofstadter model
Shuang Wu (LPTMS, Université Paris-Sud)
I will present a method due to D. J. Thouless for calculating the bandwidth of the Hofstadter spectrum, which turns out to be related to the Catalan constant, and then show how we generalize Thouless' bandwidth formula to its n-th moment, obtaining a closed expression in terms of polygamma functions, zeta values and Euler numbers.
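As a rough numerical companion (my own sketch, not the speaker's derivation), one can diagonalize the q×q magnetic Bloch Hamiltonian of the Hofstadter model at flux 1/q and watch the total bandwidth, multiplied by q, drift towards Thouless' asymptotic value, which I recall being 32G/π ≈ 9.33 with G the Catalan constant (treat that constant as an assumption to be checked against the talk):

```python
import numpy as np

def total_bandwidth(q, nk=30):
    """Total measure of the Hofstadter spectrum at flux 1/q (odd q, so the
    q bands do not overlap), from the magnetic Bloch Hamiltonian on a k-grid."""
    ks = np.linspace(0.0, 2 * np.pi / q, nk)
    lo, hi = np.full(q, np.inf), np.full(q, -np.inf)
    for kx in ks:
        for ky in ks:
            H = np.diag(2 * np.cos(ky + 2 * np.pi * np.arange(q) / q)).astype(complex)
            H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
            H[0, q - 1] += np.exp(1j * q * kx)    # magnetic boundary term
            H[q - 1, 0] += np.exp(-1j * q * kx)
            e = np.linalg.eigvalsh(H)
            lo, hi = np.minimum(lo, e), np.maximum(hi, e)
    return float(np.sum(hi - lo))

catalan = 0.9159655942
for q in (5, 9, 13, 21):
    print(f"q={q:2d}: q * bandwidth = {q * total_bandwidth(q):.3f}"
          f"  (32G/pi = {32 * catalan / np.pi:.3f})")
```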
Séminaire du LPTMS: Giulio Bertoli
Finite temperature disordered bosons in two dimensions
Giulio Bertoli (LPTMS, Université Paris-Sud)
In this talk, I will present a study of the phase transitions in a two-dimensional weakly interacting Bose gas in a random potential at finite temperature. It is possible to identify superfluid, normal-fluid and insulator phases. The study of the effect of interactions between particles on localization demonstrates that interacting particles can undergo a many-body localization-delocalization transition, that is, a transition from the insulator to the fluid state. I will also discuss the influence of disorder on the BKT transition between the superfluid and the normal fluid, in order to construct the phase diagram. At T=0 one has a tricritical point where the three phases coexist. It is shown that the truncation of the energy distribution function at the trap barrier, which is a generic phenomenon in evaporative cooling of cold atoms, limits the growth of the localization length, so that the insulator phase is present at any temperature.
• Reference: G. Bertoli, V.P. Michal, B.L. Altshuler, G.V. Shlyapnikov, Finite temperature disordered bosons in two dimensions, preprint cond-mat.dis-nn arXiv:1708.03628
Physics-Biology interface seminar: Pierre Sens
Mechano-sensitive adhesion in cell spreading and crawling
Pierre Sens (Institut Curie, Paris)
Crawling cell motility is powered by actin polymerization and acto-myosin contraction. When moving over a flat and rigid substrate, cells usually develop thin and broad protrusions at their front, called lamellipodia, where actin polymerisation generates a protrusive force pushing the front edge of the cell forward. The lamellipodium displays interesting dynamics, including normal and lateral waves, possibly relevant to cell polarisation and the initiation of motion. I will discuss a stochastic model of mechano-sensitive cell adhesion, and discuss its relevance for symmetry breaking, cell polarisation, and motility. I will then discuss a generic model of micro-crawlers, built as an extension of low Reynolds number micro-swimmers, that highlights the crucial role of mechano-sensitive adhesion for the active crawling of cells and biomimetic objects.
Séminaire du LPTMS: Alexandre P. dos Santos *** special seminar ***
**** NOTE: unusual time !!! ****
Pressure between charged polarizable surfaces
Alexandre Pereira Dos Santos (Instituto de Fisica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil)
We obtain phase diagrams and binodal curves for the repulsion/attraction between charged low-dielectric or metallic surfaces in a salt-free environment. Point-like counterions are confined between the surfaces, but are not allowed to approach the surfaces closer than a characteristic length which models ionic hydration. The polarization of the surfaces is treated with a recently introduced method based on periodic Green functions. We show that the density profiles depend strongly on the dielectric contrast, especially at high electrostatic couplings. However, the pressure curves, and consequently the binodal curves and critical points, change only slightly for different polarizable surfaces and hydration lengths.
Séminaire du LPTMS: Alexandre Nicolas
Bottleneck flows of pedestrians and grains
Alexandre Nicolas (LPTMS, Université Paris-Sud)
• [1] Alexandre Nicolas, Ángel Garcimartín and Iker Zuriguel, "A trap model for clogging and unclogging in granular hopper flows" (2017), preprint arxiv:1711.04455
Séminaire du LPTMS: Izaak Neri
Thermodynamic bounds on the statistics of first-passage times and extreme values of stochastic processes
Izaak Neri (Max Planck Institute for the Physics of Complex Systems, Dresden, Germany)
Stochastic thermodynamics generalizes concepts from thermodynamics and makes them useful for studying mesoscopic systems driven far from thermal equilibrium, such as optically driven colloidal particles, noisy processes in cell biology, or microelectronic devices. In mesoscopic systems, thermodynamic observables -- such as entropy production, heat and mesoscopic currents -- are fluctuating quantities, and stochastic thermodynamics characterizes universal properties of these fluctuations. Established results are the fluctuation relations and the thermodynamic uncertainty relations, which express universal properties of fluctuations of stochastic currents at a fixed time. In this talk I will present thermodynamic bounds for the statistics of first-passage times and extreme values of stochastic currents, which are fluctuating properties of whole trajectories of stochastic currents. Some interesting results are: a bound for the mean first-passage time of current variables in terms of the dissipation rate, a fluctuation theorem for first-passage times of entropy production, and a universal bound on the supremum statistics of the heat absorbed by a nonequilibrium system. These results will be illustrated on examples of physical processes, such as the dynamics of a molecular motor and charge transport in microelectronic devices.
Séminaire du LPTMS: Samuel Belliard *** special seminar ***
Modified Bethe Ansatz for models without U(1) symmetry
Samuel Belliard (IPhT, CEA, Saclay)
I will present a modified version of the algebraic Bethe ansatz (MABA) that allows one to characterize the eigenvalues and the eigenstates of spin chains without U(1) symmetry. In the case of the XXX Heisenberg spin chain on the circle with a twisted boundary condition, the Bethe vectors and associated eigenvalues will be constructed and the correlation-function problem will be discussed. This method also works for the XXX and XXZ Heisenberg spin chains on the segment.
Séminaire du LPTMS: Bertrand Lacroix-à-chez-Toine *** special seminar ***
Extreme value statistics in a gas of 2D charged particles
Bertrand Lacroix-à-chez-Toine (LPTMS, Université Paris-Sud)
We study a system of N charged particles in two dimensions with logarithmic Coulomb repulsion, confined by an external symmetric potential. At the inverse temperature of interest, β = 2, the positions of the charges form a 2D determinantal point process. In the case of a quadratic potential, there is a mapping between the positions of these charges and the eigenvalues of complex Ginibre matrices (a small sampling experiment illustrating this mapping follows the references below). We focus on the extreme-value statistics of the positions of the charges, and in particular we highlight a new universal regime (with respect to a large class of confining potentials) which had been overlooked before [1]. It allows one to resolve a puzzle of matching between the typical regime of fluctuations [2] and the large-deviation regime [3]. Finally, we also consider potentials that deviate from this universality class and compute the extreme-value statistics in these cases.
• [1] B. Lacroix-A-Chez-Toine, A. Grabsch, S. N. Majumdar, G. Schehr, Extremes of 2d Coulomb gas: universal intermediate deviation regime, J. Stat. Mech. P013203, (2018).
• [2] B. Rider, J. Phys. A 36(12), 3401 (2003).
• [3] F. D. Cunden, F. Mezzadri, P. Vivo, J. Stat. Phys. 164(5), 1062-1081 (2016).
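For readers who want to see the mapping in action, here is a small sampling sketch (mine, not from the papers above): with entries of variance 1/N, the complex Ginibre spectrum fills the unit disk, and the largest modulus, i.e. the position of the farthest charge, concentrates near 1, with the fluctuations studied in [1-3]:

```python
import numpy as np

rng = np.random.default_rng(2)
N, samples = 200, 200
rmax = np.empty(samples)
for s in range(samples):
    G = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
    rmax[s] = np.abs(np.linalg.eigvals(G)).max()

# The edge of the circular-law support is at radius 1; rmax sits slightly
# above it, by an amount vanishing as N grows.
print(f"<r_max> = {rmax.mean():.4f} +/- {rmax.std():.4f}")
```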
Séminaire du LPTMS: Lev Truskinovsky
Rigidity generation by nonthermal fluctuations and muscle contraction
Lev Truskinovsky (ESPCI, Paris)
Active stabilization in systems with zero or negative stiffness is an essential element of a wide variety of technological processes. We discuss a prototypical example of this phenomenon in a biological setting and show how active rigidity, interpreted as the formation of a pseudo-well in the effective energy landscape, can be generated in an over-damped stochastic system. We link the transition from negative to positive rigidity to time correlations in the additive noise, and show that subtle differences in the out-of-equilibrium driving may compromise the emergence of a pseudo-well. We apply our results to the description of the power-stroke machinery in skeletal muscles, which is behind their remarkable ability to take up an applied slack on a millisecond time scale. Along the way we draw some interesting parallels between muscle physiology and the theory of spin glasses.
Physics-Biology interface seminar: Ulisse Ferrari
Non-linear stimulus processing by the retina
Ulisse Ferrari (Institut de la Vision, Paris)
Understanding how sensory systems process information is an open challenge. This is mostly because these systems are non-linear, making it extremely difficult to model the relation between the stimulus and the sensory response. In this talk I will discuss two strategies to tackle this problem and apply them to the retina.
First, we use ex-vivo multi-electrode array experiments to record the retinal activity and directly model the ganglion-cell response to complex stimuli, such as videos of moving objects. Here I will show that standard, nearly-linear models are not enough and that highly non-linear models are required. Then I will present the results of a closed-loop experiment where we adapted the stimulus on-line to investigate how the response changes when the visual stimulation is perturbed.
With this approach we could estimate the optimal performance of a neural decoder and show that the non-linear sensitivity of the retina is consistent with an efficient encoding of stimulus information.
• U. Ferrari, C. Gardella, T. Mora, O. Marre, eNeuro 4(6) (2017)
• S. Deny, U. Ferrari, P. Yger, R. Caplette, S. Picaud, G. Tkacik, O. Marre, Nature Commun. 8(1) (2017)
Full counting statistics out of equilibrium: melting of antiferromagnetic order
Mario Collura (Oxford University, UK)
Séminaire du LPTMS: Simon Pigeon *** special seminar ***
Vibrational assisted conduction in a molecular wire
Simon Pigeon (LKB, UPMC, Paris)
I will present a detailed study of the conduction properties of a molecular wire in which hopping processes between electronic sites are coupled to a vibrational mode of the molecule. This description is inspired by the idea that, physically, the vibrational mode need not change the energetic structure of the electronic part but may simply perturb the exchange processes taking place in this subsystem. It is shown that the presence of the vibrational system can give rise to a strong enhancement of the wire conductivity. Moreover, through the control of the vibrational properties (temperature and position), one can accurately control the electronic flux crossing the device. An increase of the temperature enhances the conduction, while the control of the equilibrium position of the oscillator can switch the conduction on and off.
This work establishes how vibrationally coupled hopping affects the electronic properties of a molecular wire. These results pave the way to a better understanding and a more complete description of the electronic properties of these promising devices.
• S. Pigeon, L. Fusco, G. De Chiara & M. Paternostro, Vibrational assisted conduction in a molecular wire, Quantum Science and Technology 2, 025006 (2017) ; arXiv:1612.01809
Quantum Physics Journal Club: Giovanni Martone
Supersolids: a short overview
Giovanni Italo Martone (LPTMS)
Séminaire du LPTMS: Michele Filippone *** special seminar ***
Controlled parity switch of persistent currents and topological charge-pumping effects induced by bulk magnetic fluxes
Michele Filippone (Université de Genève, Switzerland)
We investigate persistent currents for a fixed number of fermions in periodic quantum ladders threaded by an Aharonov-Bohm flux Φ and a transverse magnetic flux χ. We show that the coupling between the ladder legs provides a way to effectively change the ground-state fermion-number parity by varying χ. We demonstrate that varying χ by 2π (one flux quantum) leads to an apparent fermion-number parity switch. We find that persistent currents exhibit a robust 2π periodicity as a function of χ, despite the fact that χ→χ+2π leads to modifications of order 1/N of the energy spectrum, where N is the number of sites in each ladder leg. We connect the parity-switching effect to the quantum Hall regime in two-dimensional systems. We show that the parity-switching effect is related to the parity of the number of filled Landau levels and that it inherits a strong robustness against disorder in the Harper-Hofstadter quantum Hall regime. Indeed, we show that the periodicity is a mesoscopic manifestation of a novel type of fermionic pumping in topological systems, complementary to Thouless' pump. Focusing on the low-energy edge physics in the general framework of Chern-Simons theory, we discuss this alternative type of pumping in the context of integer and fractional quantum Hall systems. Our construction provides an intuitive setting to understand known effects and explore new ones. In particular, we show that adding superconductivity to the picture allows us to recover the 4π Josephson effect of Majorana fermions and its generalizations to parafermions. The parity-switching and periodicity effects are robust with respect to temperature and disorder, and we outline potential physical realizations using Corbino-disk geometries in solid-state systems, quantum ladders with cold atomic gases and, for bosonic analogs of the effects, photonic lattices. Reference:
• Michele Filippone, Charles-Edouard Bardyn, Thierry Giamarchi, Controlled parity switch of persistent currents in quantum ladders, preprint cond-mat.mes-hall arXiv:1710.02152
Séminaire du LPTMS: Alexandru Petrescu *** special seminar ***
Fluxon-based quantum simulation in circuit QED
Alexandru Petrescu (Department of electrical engineering, Princeton University, USA)
Long-lived fluxon excitations can be trapped inside a superinductor ring, which can be realized with a long array of Josephson junctions, one of which offers the input/output path for the magnetic flux [1]. The superinductor ring can be separated into smaller loops by a periodic sequence of Josephson junctions in the quantum regime, thereby allowing fluxons to tunnel between neighboring loops [2]. This model is dual to that of bosons on a two-leg ladder, which has a rich phase diagram depending on flux and density [3–6]. By tuning the Josephson coupling, and thereby the tunneling amplitude of the fluxons, a wide class of 1D tight-binding lattice models may be implemented and populated with a stable number of fluxons. In this context, fluxons are lattice bosons with repulsive interactions. We illustrate this quantum-simulation platform by discussing the Su-Schrieffer-Heeger model in the one-fluxon subspace, which hosts a symmetry-protected topological phase with fractionally charged bound states at the edges [7,8] (a minimal numerical illustration of these SSH edge modes follows the references below). This pair of localized edge states could be used to implement a superconducting qubit increasingly decoupled from decoherence mechanisms.
• [1] N. A. Masluk, I. M. Pop, A. Kamal, Z. K. Minev, and M. H. Devoret, Phys. Rev. Lett. 109, 137002 (2012).
• [2] A. Petrescu, H. E. Türeci, A. V. Ustinov, and I. M. Pop, preprint arXiv:1712.08630 [cond-mat.mes-hall] (2017).
• [3] E. Orignac and T. Giamarchi, Phys. Rev. B 64, 144515 (2001).
• [4] A. Petrescu and K. Le Hur, Phys. Rev. Lett. 111, 150601 (2013).
• [5] M. Piraud, F. Heidrich-Meisner, I. P. McCulloch, S. Greschner, T. Vekua, and U. Schollwöck, Phys. Rev. B 91, 140406 (2015).
• [6] A. Petrescu, M. Piraud, G. Roux, I. P. McCulloch, and K. Le Hur, Phys. Rev. B 96, 014524 (2017).
• [7] R. Jackiw and C. Rebbi, Phys. Rev. D 13, 3398 (1976).
• [8] W. P. Su, J. R. Schrieffer, and A. J. Heeger, Phys. Rev. Lett. 42, 1698 (1979).
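A minimal single-particle illustration of the SSH edge physics mentioned above (my own sketch with hypothetical hopping values; in the fluxon platform the role of the two alternating hoppings would be played by tunable Josephson couplings): in the topological dimerization the open chain hosts a pair of exponentially localized near-zero-energy edge modes, absent in the trivial one.

```python
import numpy as np

def ssh_hamiltonian(ncells, v, w):
    """Open SSH chain: intracell hopping v, intercell hopping w."""
    n = 2 * ncells
    H = np.zeros((n, n))
    for i in range(ncells):
        H[2 * i, 2 * i + 1] = H[2 * i + 1, 2 * i] = v
        if i < ncells - 1:
            H[2 * i + 1, 2 * i + 2] = H[2 * i + 2, 2 * i + 1] = w
    return H

for v, w in [(1.0, 0.3), (0.3, 1.0)]:     # trivial vs topological dimerization
    e = np.linalg.eigvalsh(ssh_hamiltonian(40, v, w))
    n_zero = int(np.sum(np.abs(e) < 1e-8))
    print(f"v={v}, w={w}: {n_zero} near-zero edge modes, bulk gap ~ {abs(v - w):.1f}")
```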
Séminaire du LPTMS: Raffaela Cabriolu
Creep response of a soft glass
Raffaela Cabriolu (Department of Chemistry, Norwegian University of Science and Technology, Trondheim, Norway)
In this work we discuss finite-size effects in the fluidization process of dense amorphous materials subjected to an external load. By means of molecular dynamics simulations, we study the mechanical response of a densely packed 3D particle system to a suddenly applied shear stress. In order to disentangle possible boundary effects from finite-size effects, we use an unusual setup, implementing a geometry-constraint protocol with periodic boundary conditions in all directions. We show that this protocol is well controlled and that the long-time fluidization process is to a great extent independent of the details of the protocol parameters. This procedure allows for a robust study of finite-size effects regarding the creep exponents and the fluidization process. The slow dynamics shows a power-law creep with exponents that do not depend on the system size, whereas the fluidization time shows strong finite-size effects that we can rationalize within a finite-size scaling relation.
Séminaire du LPTMS: Mehdi Bouzid
Athermal analogue of sheared colloidal suspensions
Mehdi Bouzid (LPTMS, Université Paris-Sud)
Sand piles, window glass and tomato ketchup are three materials that would not necessarily strike the general public as similar. However, they are part of one of the most lingering enigmas of condensed matter physics, as they are examples of fluids undergoing dynamical arrest and becoming solid in a way essentially different from a thermodynamic phase transition. Such complex fluids, developing a yield stress and becoming very hard solids (metallic or oxide glasses) or soft glassy materials (colloidal pastes, granular packings, polymer melts, ...), are of central importance in statistical physics, material science and chemical engineering. In this talk I will highlight an analogy between the rheology of Brownian and non-Brownian suspensions: we show that these systems can be described by a Herschel-Bulkley law as soon as the shear rate and shear stress are respectively normalized by an energy scale and a microscopic reorganization time, both of which are functions of the normal confinement stress. The pressure-controlled approach, originally developed for granular flows, reveals a striking physical analogy between the colloidal glass transition and the granular jamming transition.
Physics-Biology interface seminar: Emmanuel de Langre
Plant vibration, from wind flutter to phenotyping
Emmanuel de Langre (École polytechnique, Palaiseau, France)
Plants are often very flexible objects. This results in motion under stimuli such as wind or currents, but also hosts such as insects. Motion is known to influence plant development through thigmomorphogenesis. I will review methodologies and results from the past ten years aimed at quantifying and understanding the vibration of plants, or parts of plants, from Arabidopsis thaliana to large trees. I will focus on experimental techniques indoors and outdoors, on simple models of motion, and on the role of plant architecture. The recent application to high-throughput plant phenotyping by vibrations will also be presented.
Séminaire du LPTMS: Jacopo de Nardis
Jacopo de Nardis (LPT-ENS, Paris)
Séminaire du LPTMS: Leonardo Mazza
Majorana zero modes in one-dimensional Ytterbium quantum gases
Leonardo Mazza (Département de Physique, ENS, Paris)
Séminaire du LPTMS: Corrado Rainone *** special seminar ***
Mechanical Failure in Amorphous Solids: Scale Free Spinodal Criticality
Corrado Rainone (Department of Chemical and Biological Physics, Weizmann Institute of Science, Israel)
The mechanical failure of amorphous media is a ubiquitous phenomenon from material engineering to geology. It has been noticed for a long time that the phenomenon is "scale-free", indicating some type of criticality. In spite of attempts to invoke "Self-Organized Criticality", the physical origin of this criticality, as well as its universal nature (being quite insensitive to the details of the microscopic interactions), remained elusive. Recently we proposed that the precise nature of this critical behavior is manifested by a spinodal point of a thermodynamic phase transition. Moreover, at the spinodal point there exists a divergent correlation length which is associated with the system-spanning instabilities (known also as shear bands) which are typical of mechanical yield. Demonstrating this requires the introduction of an "order parameter" that is suitable for distinguishing between disordered amorphous systems, and an associated correlation function, suitable for picking up the growing correlation length. The theory, the order parameter, and the correlation functions used are universal in nature and can be applied to any amorphous solid that undergoes mechanical yield. Critical exponents for the correlation length and the system size dependence are estimated. We conclude with some perspectives and modelling ideas on the subject. Reference:
• Itamar Procaccia, Corrado Rainone, and Murari Singh, Mechanical failure in amorphous solids: Scale-free spinodal criticality, Phys. Rev. E 96, 032907 (2017)
Séminaire du LPTMS: Grégoire Ithier *** special seminar ***
Typicality and unconventional equilibrium states of an embedded quantum system.
Grégoire Ithier (Royal Holloway, London, UK)
In recent years, progress in quantum engineering has provided new tools for simulating the dynamics of truly isolated quantum systems. These systems, made of trapped ions or cold atoms [1], can be prepared in a global pure state, and their level of isolation is such that they evolve unitarily according to the Schrödinger equation. Surprisingly, despite being at all times in a pure quantum state, they display signatures of a local equilibration which can be in strong disagreement with the predictions of statistical physics [2]. These experimental facts clearly raise the question of what kind of statistical description is relevant for isolated many-body quantum systems.
In this talk, I will present our recent results on a theoretical model dedicated to this problem. This model considers a quantum system coupled to a large quantum environment, and introduces some randomness at the level of the interaction Hamiltonian. We then demonstrate that the system has a typical dynamics for several classes of random interactions and, most importantly, for an arbitrary system, environment, and global initial state [3]. In other words, the microscopic structure of the interaction Hamiltonian does not matter, and reduced density matrices have a self-averaging property.
These results have two important consequences: first, they can explain the absence of sensitivity to microscopic details of processes such as thermalization. Second, they provide rigorous grounds for an averaging procedure over random interactions, which can be used for analytical non-perturbative calculations performed in full generality, i.e. for an arbitrary system, environment, and initial state.
We apply this technique to calculate analytically the long-time stationary state of the system, and find a new thermodynamic ensemble more general than the microcanonical one [4].
Séminaire du LPTMS: Alfredo Ozorio de Almeida
!!! NOTE: unusual day !!!
Translations and reflections on the torus: Identities for discrete Wigner functions and transforms
Alfredo Miguel Ozorio de Almeida (Centro Brasileiro de Pesquisas Fisicas, Rio de Janeiro, Brazil)
A finite Hilbert space can be associated to a periodic phase space, that is, a torus. A finite subgroup of operators corresponding to reflections and translations on the torus forms the basis for the discrete Weyl representation, including the Wigner function, and for its Fourier conjugate, the chord representation, respectively. They are invariant under Clifford transformations and obey product rules analogous to those of the continuous representations, thus allowing for the calculation of expectations and correlations of observables. We here import new identities from the continuum for products of pure-state Wigner functions and chord functions, involving, for instance, the inverse phase-space participation ratio and correlations of a state with its translation. Connections between products of Wigner functions and mixed (transition) Wigner functions also arise. Finally, generalizations of translations and reflections to a doubled phase space connect the Weyl representation of the evolution operator to the propagator of Wigner functions.
Séminaire du LPTMS: José Lebreuilly
Stabilizing zero temperature quantum phases and incompressible states of light via non-Markovian reservoir engineering
José Lebreuilly (Laboratoire Pierre Aigrain, ENS, Paris)
We study the possibility of stabilizing strongly correlated quantum fluids of light in driven-dissipative devices through novel non-Markovian reservoir-engineering techniques. This approach allows one to compensate losses and selectively refill the photonic population so as to sustain a desired steady state. It relies in particular on the use of a frequency-dependent incoherent pump, which can be implemented, e.g., via embedded two-level systems maintained at a strong population inversion. As a specific application of these methods, we discuss the generation of a photonic Mott insulator (MI). As a first step, we present the case of a narrow-band emission spectrum and show how this allows for the stabilization of MI states, under the condition that the photonic states are relatively flat in energy. As soon as the photonic bandwidth becomes comparable to the emission linewidth, important non-equilibrium signatures and entropy generation appear, and a novel dissipative phase transition from a Mott-insulating state toward a superfluid (SF) phase is unveiled. As a second step, we present a more advanced configuration based on reservoirs with a broadband frequency distribution, and we highlight the potential of this configuration for the quantum simulation of equilibrium quantum phases at zero temperature with a tunable chemical potential. As a proof of principle, we establish the applicability of our scheme to the Bose-Hubbard model by confirming a perfect agreement with the ground-state predictions both in the MI and SF regions, and more generally in all parts of the parameter space.
Physics-Biology interface seminar: Quan Li
Nanodiamond based quantum sensors for biological applications
Quan Li (Chinese University of Hong Kong, China)
Special location: Laboratoire Aimé Cotton, Orsay
Nanodiamonds (NDs) with nitrogen-vacancy (NV) centers serve as promising bio-sensors due to their excellent bio-compatibility, high photo-stability, and long spin coherence time at room temperature. However, the complicated biological environment, e.g. in a single cell, imposes stringent requirements on the sensor probes to be internalized. In this talk, I will discuss the requirements on nanodiamond as an intra-cellular sensor, and possible strategies that enable various bio-sensing measurements. I will start with the understanding of nanodiamond-cell interfaces, from the anchoring of NDs on the plasma membrane to their internalization, and eventually to their intracellular trafficking. Beyond the conventional tracking of three-dimensional trajectories of the NDs, it is also possible to track their orientations (rotation), providing additional information on the intracellular environment. One problem with NV-based bio-sensing is that the NV center is less sensitive to certain parameters such as temperature and pressure, and not at all responsive to many other important biochemical parameters such as pH and non-magnetic biomolecules. I will also discuss possible schemes for constructing nanodiamond-based hybrid sensors, which lead to significantly enhanced sensitivity and/or potentially enable the measurement of various biochemical parameters using NV-based quantum sensing.
Séminaire du LPTMS: Jacopo Rocchi
Self-sustained clusters in spin glass models
Jacopo Rocchi (LPTMS, Université Paris-Sud)
While the macroscopic properties of spin glasses have been thoroughly investigated, their manifestation in the corresponding microscopic configurations is much less understood. To identify the microscopic structures that emerge with the macroscopic phases at different temperatures, we introduce the concept of self-sustained clusters (SSC). SSC are regions of space where the in-cluster induced fields dominate over the fields induced by out-cluster spins. We study their properties in the Ising p-spin model with p=3 using replicas. The intuition gained from fully connected models is then used in the study of models defined on random graphs. A message-passing algorithm is developed to determine the probability of individual spins to belong to an SSC. Results for specific instances, which compare the predicted SSC associations with the dynamical properties of the spins, are obtained from numerical simulations. This insight gives rise to a way to predict individual spin dynamics from a single snapshot of the spin configuration.
Physics-Biology interface seminar: Gervaise Mosser
Collagen and gelatin from sol to gel states for the synthesis of biomaterials
Gervaise Mosser (Laboratoire de Chimie de la Matière Condensée, Université Pierre et Marie Curie)
Collagen type I, the most abundant protein of connective tissues (bones, dermis, tendons, etc.), is a macromolecular mesogen that can form lyotropic liquid-crystal phases. Exploiting this property, our team works on elaborating several biomimetic biomaterials. However, the use of collagen can be hindered by its price and by its propensity to denature into gelatin, notably during sterilization processes. In this context, we wanted to determine whether collagen could be partially replaced by gelatin without modifying the overall hierarchical structure of the biomaterial.
Séminaire du LPTMS: Marcel Filoche
Localization landscape and localization potential in disordered or complex structures
Marcel Filoche (Laboratoire de Physique de la Matière Condensée, Ecole Polytechnique)
Standing waves in disordered or complex systems can be subject to a strange and intriguing phenomenon which has puzzled the physics and mathematics communities for more than 60 years, namely wave localization. This phenomenon consists of a concentration (or focusing) of the wave energy in a very restricted sub-region of the entire domain. It has been evidenced experimentally in mechanics, acoustics and quantum physics. Determining the conditions for the onset of localization, depending on the disorder amplitude, the energy, or the wave type, is the aim of many theoretical studies. We will present a theory that unifies different types of localization within a single mathematical framework [1]. To that end, we will introduce the notion of a "localization landscape", solution to an associated Dirichlet problem. Going further, this will enable us to define an "effective localization potential", providing new insight into the confinement of waves in disordered media. This potential allows us to predict the localization region, the energies of the localized modes, the density of states, and the long-range decay of the wave functions. We will present experimental and numerical examples of this theory in mechanics, in semiconductor physics, and in molecular systems, as well as theoretical perspectives with cold-atom systems. [1] M. Filoche and S. Mayboroda, Universal mechanism for Anderson and weak localization, PNAS 109, 14761–14766 (2012).
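A one-dimensional toy version of the construction (my own sketch, following my understanding of ref. [1]: the landscape u solves the Dirichlet problem Hu = 1, and W = 1/u acts as the effective confining potential whose wells host the localized modes; lattice size and disorder strength are arbitrary choices):

```python
import numpy as np

# H = -Laplacian + V on a 1D lattice with Dirichlet walls and a random potential.
rng = np.random.default_rng(3)
n = 500
V = 4.0 * rng.random(n)
H = np.diag(2.0 + V) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)

u = np.linalg.solve(H, np.ones(n))   # localization landscape: H u = 1
W = 1.0 / u                          # effective localization potential

vals, vecs = np.linalg.eigh(H)
print("ground state peaks at site", int(np.argmax(np.abs(vecs[:, 0]))),
      "| deepest well of W = 1/u at site", int(np.argmin(W)))
```

The two printed sites typically coincide or sit within a few lattice spacings of each other, which is the essence of the landscape's predictive power.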
Séminaire du LPTMS: Shamashis Sengupta
Gate-tunable superconductivity in oxide heterostructures
Shamashis Sengupta (CSNSM, Université Paris-Sud)
The realization of two-dimensional electron gases (2DEGs) in oxide-based heterostructures (e.g. LaAlO3/SrTiO3) has led to important discoveries about superconductivity in low dimensions. There have been reports of the observation of pairing interactions without superconductivity (Cheng et al., Nature 521, 196 (2015)) and of density-of-states features resembling the pseudogap in cuprates (Richter et al., Nature 502, 528 (2013)). Consequently, this 2DEG has emerged as a model system to study the physics of Cooper-pair formation in two dimensions and to gain useful insights into complex problems, e.g. the phase diagram of high-temperature superconductors. In this talk, we will discuss a new method developed in our group for realizing such superconducting systems in oxide heterostructures, and the results of experiments to characterize their properties. Owing to the low carrier density, it is possible to tune it with a gate voltage, following the principle of a field-effect transistor. The superconducting critical parameters (temperature and field) are tunable as functions of the gate voltage, leading to a 'superconducting dome' in the phase diagram. The possibility of continuously varying the carrier density allows us to study different equilibrium and non-equilibrium features characterizing the electronic phases. Results of some recent experiments will be presented.
Physics-Biology interface seminar: David Bensimon
Quantitative analysis of the somitogenetic wavefront
David Bensimon (LPS-ENS, Paris, France)
Somitogenesis is the process by which the antero-posterior axis is segmented in all vertebrates, thus defining the coordinate system that will serve for the positioning of appendages and organs. This segmentation is due to the interaction between a posteriorly moving wavefront of morphogens and a posteriorly located clock, generating somites (segments) at regular times and places. The existence and characterization of the clock have been amply demonstrated. In this talk I will focus on the molecular network behind the wavefront. I will discuss the wavefront response to various perturbations and compare our observations with a model of this network.
Quantum Physics Journal Club: Raoul Santachiara
Quantum mechanics in multi-connected space and the origin of new statistics in low dimensional system
Raoul Santachiara (LPTMS, Université Paris-Sud)
We recall how to define the problem of N indistinguishable quantum particles and argue that the topology of the configuration space plays a crucial role. This observation, which was put on solid ground by Leinaas and Myrheim in 1977, has provided the theoretical framework for the existence of anyonic statistics in two dimensions. Moreover, it inspired the connection between conformal field theory and topological phases in two dimensions: via this connection, the occurrence of non-Abelian anyons in the fractional quantum Hall effect has been suggested.
Séminaire du LPTMS: Serguey Andreev
Effective interactions in a quantum Bose-Bose mixture
Serguey Andreev (ITMO University, St. Petersburg, Russia)
The application of the methods of quantum electrodynamics (QED) to a system of bosons at absolute zero temperature, put forward by Spartak Beliaev in 1958, has been one of the most powerful analytical approaches in studies of Bose-Einstein condensates. The Beliaev theory provides a prescription for replacing the actual microscopic interaction by an effective potential which can be used for a perturbative expansion of the many-body Hamiltonian. Originally designed for one-component systems, the method has recently been applied to binary Bose mixtures in the context of supersolidity and of the stabilization of collapsing Bose-Einstein condensates by quantum fluctuations. The present work aims at investigating the legitimacy of extrapolating the Beliaev prescription to two-component systems. We show that quantum scatterings of different components, which until now have been assumed independent, can interfere due to the Andreev-Bashkin entrainment effect. The effect manifests itself in a renormalization of the elementary excitations of the system. This result has escaped earlier considerations based on the Fourier expansion of small-amplitude oscillations of the order parameter. We explain how one can account for the effect by using a properly generalized Bogoliubov approach. In 3D the effect appears at second order in perturbation theory, which makes it possible to use the concept of an effective potential in this case. The entrainment arises due to the "dressing" of magnons by Bogoliubov phonon modes, in analogy with the physics of the Bose polaron. We exploit this fruitful analogy to speculate on the possible formation of a magnon crystal in the strongly interacting regime.
Séminaire du LPTMS: Eoin Quinn *** special seminar ***
Splitting of electrons and violation of the Luttinger sum rule
Eoin Quinn (University of Amsterdam, The Netherlands)
We present a framework for organising the correlations of interacting electrons, which allows us to describe a regime of strongly correlated behaviour. We highlight two ways to characterise the electronic degree of freedom: either by the canonical fermion algebra or by the graded Lie algebra su(2|2). The first underlies the Fermi-liquid description of correlated matter, and we identify a novel regime governed by the latter. We derive a systematic expansion of the electronic correlations, and compute the electronic spectral function at leading order. This reveals a splitting of the electronic band in two, a violation of the Luttinger sum rule, and a Mott metal-insulator transition.
Séminaire du LPTMS: Senthil Todadri
*** NOTE: unusual time ***
Dualities in condensed matter physics
Senthil Todadri (Massachusetts Institute of Technology, Cambridge, USA)
Quantum Physics Journal Club: Maurizio Fagotti
Lieb-Robinson Bounds
Maurizio Fagotti (LPTMS, Université Paris-Sud)
Séminaire du LPTMS: Aurélien Decelle
Spectral learning of Restricted Boltzmann Machines
Aurélien Decelle (Laboratoire de Recherche en Informatique, Université Paris Sud)
In this presentation I will report our recent results on the restricted Boltzmann machine (RBM). The RBM is a generative model very similar to the Ising model: it is composed of both visible and hidden binary variables, and is traditionally used in the context of machine learning. In this context, the goal is to infer the parameters of the RBM such that it correctly reproduces a dataset's distribution. Although RBMs have been widely used in computer science, the phase diagram of this model is not known precisely in the context of learning. In particular, it is not known how the parameters influence the learning, nor what exactly is learned within the parameters of the model. After an introduction to some aspects of machine learning, I will present our work, showing how the SVD of the data governs the first phase of the learning and how this decomposition helps to understand the dynamics and the equilibrium properties of the model (a toy numerical illustration follows the references below). References:
• Aurélien Decelle, Giancarlo Fissore and Cyril Furtlehner, Spectral dynamics of learning in restricted Boltzmann machines, Europhys. Lett. 119, 60001 (2017)
• Aurélien Decelle, Giancarlo Fissore and Cyril Furtlehner, Thermodynamics of Restricted Boltzmann Machines and related learning dynamics, preprint cond-mat arXiv:1803.01960 (2018).
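A toy numerical illustration of the spectral picture (entirely my own construction, not the authors' analysis): a tiny binary RBM is trained with one-step contrastive divergence on synthetic data that has two dominant SVD modes, and the overlap between the top singular vectors of the weight matrix and the top data modes grows during the first phase of learning.

```python
import numpy as np

rng = np.random.default_rng(4)
nv, nh, nsamp = 30, 10, 2000

# Synthetic binary dataset dominated by two planted directions.
modes = rng.standard_normal((2, nv))
data = ((rng.standard_normal((nsamp, 2)) @ modes
         + 0.1 * rng.standard_normal((nsamp, nv))) > 0).astype(float)
Vd = np.linalg.svd(data - data.mean(0), full_matrices=False)[2][:2]

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
W = 0.01 * rng.standard_normal((nv, nh))
for epoch in range(201):                       # CD-1 updates (biases omitted)
    v0 = data[rng.integers(0, nsamp, 100)]
    h0 = sigmoid(v0 @ W)
    v1 = (sigmoid(h0 @ W.T) > rng.random((100, nv))).astype(float)
    h1 = sigmoid(v1 @ W)
    W += 0.05 * (v0.T @ h0 - v1.T @ h1) / 100
    if epoch % 50 == 0:
        Uw = np.linalg.svd(W, full_matrices=False)[0][:, :2]
        print(f"epoch {epoch:3d}: overlap with top data modes = "
              f"{np.linalg.norm(Vd @ Uw):.2f}")
```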
Séminaire du LPTMS: Urna Basu
Active Brownian Motion in Two Dimensions
Urna Basu (LPTMS, Université Paris-Sud)
We study the dynamics of a single active Brownian particle in a two-dimensional harmonic trap. The active particle has an intrinsic time scale set by the rotational diffusion. The harmonic trap also induces a relaxational time scale. We show that the competition between these two time scales leads to a nontrivial time evolution for the active Brownian particle. At short times, a strongly anisotropic motion emerges, leading to anomalous persistence properties. At long times, the stationary position distribution in the trap exhibits two different behaviours: a Gaussian peak at the origin in the strongly passive limit, and a delocalised ring away from the origin in the opposite, strongly active limit. The predicted stationary behaviours in these limits are in agreement with recent experimental observations.
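A minimal Langevin sketch of the two stationary behaviours (my own illustration; parameter values are arbitrary): with trap force -mu_k x, self-propulsion speed v0 and small rotational diffusion Dr, the stationary density forms a ring near r = v0/mu_k, while for v0 -> 0 it collapses to a Gaussian at the origin.

```python
import numpy as np

rng = np.random.default_rng(5)
mu_k, Dr, Dt, dt = 1.0, 0.1, 0.1, 1e-3
nsteps, npart = 50000, 400

for v0 in (0.1, 3.0):                      # nearly passive vs strongly active
    x = np.zeros((npart, 2))
    theta = 2 * np.pi * rng.random(npart)
    for _ in range(nsteps):
        n = np.stack([np.cos(theta), np.sin(theta)], axis=1)
        x += (-mu_k * x + v0 * n) * dt \
             + np.sqrt(2 * Dt * dt) * rng.standard_normal((npart, 2))
        theta += np.sqrt(2 * Dr * dt) * rng.standard_normal(npart)
    r = np.linalg.norm(x, axis=1)
    print(f"v0={v0}: mean radius {r.mean():.2f}"
          f"  (ring estimate v0/mu_k = {v0 / mu_k:.1f})")
```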
Physics-Biology interface seminar: Knut Drescher
Bacterial collective behaviours
Knut Drescher (Max Planck Institute for Terrestrial Microbiology, Marburg, Germany)
In nature, bacteria often engage in a range of collective behaviors. In this presentation, I will demonstrate how two bacterial behaviors, swarming and biofilm formation, are related by physical interactions, chemical signaling, and dynamical transitions. I will show how these collective behaviors arise from cell-cell interactions, and the physiological state of individual cells. Furthermore, I will introduce new experimental methods for investigating bacterial collective behaviors.
Thesis defense: Aurélien Grabsch
Random matrix theory in statistical physics: quantum scattering and disordered systems
Aurélien Grabsch
Random matrix theory has applications in various fields: mathematics, physics, finance, ... In physics, the concept of random matrices has been used to study electronic transport in mesoscopic structures, disordered systems, quantum entanglement, interface models in statistical physics, cold atoms, ... In this thesis, we study coherent AC transport in a quantum dot, properties of fluctuating 1D interfaces on a substrate, and topological properties of multichannel quantum wires. The first part gives a general introduction to random matrices and to the main method used in this thesis: the Coulomb gas. This technique allows one to study the distribution of observables which take the form of linear statistics of the eigenvalues. Such linear statistics represent many relevant physical observables, in different contexts. This method is then applied to concrete examples in coherent transport and fluctuating interfaces in statistical physics. The second part focuses on a model of disordered wires: the multichannel Dirac equation with a random mass. We present an extension of the powerful methods used for one-dimensional systems to this quasi-1D situation, and establish a link with a random matrix model. From this result, we extract the density of states and the localisation properties of the system. Finally, we show that this system exhibits a series of topological phase transitions (changes of a quantum number of topological nature, without any change of symmetry), driven by the disorder.
Quantum Physics Journal Club: Bradraj Pandey
Out-of-time-order correlators in quantum mechanics
Séminaire du LPTMS: Thorsten Emig
A Minimal Physiological Model for Human Running Performance
Thorsten Emig (LPTMS, Université Paris-Sud)
Measurements of physiological variables during exercise, and performance evaluations and predictions, are important for a fundamental understanding of physiological processes, for the training and assessment of athletes, and, beyond sport, for the study of the complex physiological response related to aging, muscular structure and cardiovascular health. Models for human running performance of various complexities and underlying principles have been proposed, often employing a combination of data for world-record performances and concepts that are not always based on simple principles of human physiology. We present a novel, minimal model for human running performance that follows from a self-consistency relation for the time-dependent power output during racing events. The model has a total of four parameters that are not fixed a priori and characterize the individual physiological profile of a runner. The analytic approach presented here is the first to derive the observed logarithmic scaling between world (and other) record running speeds and times from basic principles. Various female and male record performances (world, national) and also personal-best performances of individual runners, for distances from 800 m to the marathon, are excellently described by our model, with mean absolute errors of (often much) less than 1%. The physiological parameters of our model, as obtained from records and individual runners, are consistent with existing laboratory measurements. The computed maximal power output that can be sustained for a given time describes well existing experimental data for the time-to-exhaustion dependence of supramaximal oxygen consumption in the anaerobic regime. Our model is used to define and estimate endurance for both aerobically and anaerobically dominated performances. As an application of our model, we derive personalized training speeds for prescribed duration and intensity. Our findings could be a basis for a plethora of further studies, including the assessment of performance dependence on age, altitude, muscular structure, specialization of the athlete, racing strategies, and the optimal dosing of recreational exercise.
Séminaire du LPTMS: Erik Aurell
Continuous-time dynamic cavity for equilibrium and non-equilibrium processes
Erik Aurell (KTH-Royal Institute of Technology, Sweden)
Dynamics on locally tree-like graphs can be described by marginals which satisfy equations known as the dynamic cavity equations. These equations are for probabilities of whole histories of single variables, and therefore need further approximations or a closure. I will present a closure for continuous-time processes, and show how it behaves for some standard models of disordered systems which are either in equilibrium or relaxing towards equilibrium. I will also discuss local search algorithms for K-satisfiability of the WalkSAT type, processes which do not satisfy detailed balance. This is joint work over the last few years with Gino Del Ferraro, Eduardo Dominguez, David Machado and Roberto Mulet.
Thesis defense: Kirill Plekhanov
Topological Floquet states, artificial gauge fields in strongly correlated quantum fluids
Kirill Plekhanov
Jury:
• Mark Oliver Goerbig (Université Paris-Sud - LPS)
• Nathan Goldman (Université libre de Bruxelles)
• Walter Hofstetter (Goethe-Universität Frankfurt)
• Karyn Le Hur (École Polytechnique - CPHT), thesis advisor
• Titus Neupert (University of Zurich)
• Guido Pupillo (Université de Strasbourg)
• Nicolas Regnault (ENS - LPA)
• Guillaume Roux (Université Paris-Sud - LPTMS), thesis advisor
In this thesis we study the topological aspects of condensed matter physics, that received a revolutionary development in the last decades. Topological states of matter are protected against perturbations and disorder, making them very promising in the context of quantum information. The interplay between topology and interactions in such systems is however far from being well understood, while the experimental realization is challenging. Thus, in this work we investigate such strongly correlated states of matter and explore new protocols to probe experimentally their properties. In order to do this, we use various both analytical and numerical techniques.
First, we analyze the properties of an interacting bosonic version of the celebrated Haldane model – the model for the quantum anomalous Hall effect. We propose its quantum-circuit implementation based on the application of periodic time-dependent perturbations – Floquet engineering. Continuing these ideas, we study the interacting bosonic version of the Kane-Mele model – the first model of a topological insulator. This model has a very rich phase diagram, with the emergence of an effective frustrated magnetic model and a variety of symmetry-broken spin states in the strongly interacting regime. Ultra-cold-atom or quantum-circuit implementations of both the Haldane and Kane-Mele bosonic models would allow for experimental probes of the exotic states we observed.
Second, in order to deepen the perspectives of quantum circuit simulations of topological phases we analyze the strong coupling limit of the Su-Schrieffer-Heeger model and we test new experimental probes of its topology associated with the Zak phase. We also work on the out-of-equilibrium protocols to study bulk spectral properties of quantum systems and quantum phase transitions using a purification scheme which could be implemented both numerically and experimentally.
13:30-14:00 Walter Hofstetter -- Johann Wolfgang Goethe-Universität, Frankfurt / Main, Germany http://www.goethe-university-frankfurt.de/66535594/AG-Hofstetter
14:00-14:30 Titus Neupert -- University of Zürich, Switzerland http://www.physik.uzh.ch/en/groups/neupert/team/neupert.html
14:30-15:00 Nathan Goldman -- Université libre de Bruxelles, Belgium https://www.nathan-goldman-physics.com
15:00-15:30 Break
15:30-16:00 Guido Pupillo -- ISIS, Université de Strasbourg https://isis.unistra.fr/laboratoire-de-physique-quantique-guido-pupillo/
16:00-16:30 Nicolas Regnault -- LPA, ENS http://www.lpa.ens.fr/spip.php?page=annuaire&Chercheurs=1&lang=fr
16:30-17:00 Mark Oliver Goerbig -- LPS, Université Paris-Sud https://www.equipes.lps.u-psud.fr/GOERBIG/
Séminaire du LPTMS: Mihail Poplavskyi
Pfaffian Point Processes and corresponding gap probabilities
Mihail Poplavskyi (King’s College London, Department of Mathematics)
Stochastic point processes are natural models of complex particle systems. In this talk we discuss point processes with a special structure of correlation functions, called Pfaffian point processes (PPP). We then give several examples of PPP, such as the Glauber dynamics of the Ising spin model, ensembles of random matrices with orthogonal symmetry, random Kac series, etc. In the second part of the talk we present recent results on gap probabilities for PPP and their applications to persistence probabilities for a class of stochastic models.
Séminaire du LPTMS: Guillaume Roux
Quantum purification spectroscopy
Guillaume Roux (LPTMS, Université Paris-Sud)
We discuss a protocol based on quenching a purified quantum system that allows one to capture bulk spectral features. It uses an infinite-temperature initial state and an interferometric strategy to access the Loschmidt amplitude, from which the spectral features are retrieved via a Fourier transform, providing a coarse-grained approximation at finite times. It involves techniques available in current experimental setups for quantum simulation, at least for small systems. We illustrate possible applications in testing the eigenstate thermalization hypothesis and the physics of many-body localization (a toy numerical illustration of the post-processing step follows the reference below). Reference:
• Bradraj Pandey, Kirill Plekhanov and Guillaume Roux, Quantum purification spectroscopy, preprint quant-ph arXiv:1802.04638.
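A toy version of the post-processing step (my own sketch: the Hamiltonian here is a random GOE matrix standing in for a many-body system, and in a real experiment the Loschmidt amplitude G(t) would come from the interferometric measurement rather than from exact diagonalization):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
A = rng.standard_normal((n, n))
H = (A + A.T) / np.sqrt(8 * n)          # GOE, semicircle spectrum on [-1, 1]
E = np.linalg.eigvalsh(H)

# Infinite-temperature Loschmidt amplitude G(t) = Tr[exp(-iHt)] / dim.
t = np.linspace(0.0, 200.0, 4096)
G = np.exp(-1j * np.outer(t, E)).mean(axis=1)

# Finite-time Fourier transform: coarse-grained density of states,
# with energy resolution ~ 1/t_max.
w = np.linspace(-1.5, 1.5, 301)
dos = np.trapz(np.real(np.exp(1j * np.outer(w, t)) * G), t, axis=1) / np.pi
print(f"rho(0) ~ {dos[150]:.3f}  (semicircle value 2/pi = {2 / np.pi:.3f})")
```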
Séminaire du LPTMS: Arthur Goetschy
Optimizing energy transfer and dwell times in disordered systems: a statistical approach
Arthur Goetschy (Institut Langevin, ESPCI)
When a wave such as light propagates through a disordered system, it is scattered many times in various directions before escaping. At first sight, this process is well described by diffusion. However, diffusion neglects interferences, making us believe that the information content of a wave is progressively lost through spreading. This picture is incorrect. In fact, multiple scattering is a linear process that redistributes information among many degrees of freedom which can nowadays be resolved and manipulated. In this talk, we will give an overview of different strategies to achieve original light-transport properties in open disordered systems that deviate both from the diffusive picture and from the Gaussian field model. First, we will characterize the statistical properties of the transmission matrix to demonstrate large energy transfer or focusing through nominally opaque media [1, 2] (a toy sampling of transmission eigenvalues follows the references below). Then, we will discuss how to achieve similar performance in transmission by means of the reflection matrix only [3, 4]. Finally, we will characterize the distribution of scattering times, in order to generate excitations with particularly short or long dwell times. References:
• [1] A. Goetschy and A. D. Stone, Filtering Random Matrices: The Effect of Incomplete Channel Control in Multiple Scattering, Phys. Rev. Lett. 111, 063901 (2013)
• [2] C. W. Hsu, S. F. Liew, A. Goetschy, H. Cao, and A. D. Stone, Correlation-enhanced control of wave focusing in disordered media, Nature Phys. 13, 497 (2017)
• [3] N. Fayard, A. Goetschy, R. Pierrat and R. Carminati, Mutual Information between Reflected and Transmitted Speckle Images, Phys. Rev. Lett. 120, 073901 (2018)
• [4] I. Starshynov, A. M. Paniagua-Diaz, N. Fayard, A. Goetschy, R. Pierrat, R. Carminati and J. Bertolotti, Non-Gaussian Correlations between Reflected and Transmitted Intensity Patterns Emerging from Opaque Disordered Media, Phys. Rev. X 8, 021041 (2018)
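As a minimal illustration of non-Gaussian transmission statistics (my own sketch of the textbook chaotic-cavity case, simpler than the filtered ensembles of ref. [1]): for a Haar-random scattering matrix with N incoming and N outgoing channels, the eigenvalues of t†t pile up near 0 and 1 (the well-known bimodal distribution) instead of clustering around a single diffusive value.

```python
import numpy as np
from scipy.stats import unitary_group

rng = np.random.default_rng(7)
N, samples = 20, 500
taus = []
for _ in range(samples):
    S = unitary_group.rvs(2 * N, random_state=rng)   # Haar-random S-matrix
    t = S[N:, :N]                                    # transmission block
    taus.extend(np.linalg.eigvalsh(t.conj().T @ t).real)

hist, _ = np.histogram(taus, bins=10, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))   # largest weights in the first and last bins
```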
Thesis defense: Inès Rodriguez-Arias
Collective behaviors in interacting spin systems
Inès Rodriguez-Arias
• Cécile Monthus, IPhT, CEA-Saclay, president
• Juan Garrahan, University of Nottingham, referee
• Nicolas Laflorencie, LPT, Université Paul Sabatier, referee
• Cristiano Ciuti, MPQ, Université Paris Diderot, examiner
• Geoffrey Bodenhausen, Département de Chimie, ENS, examiner
• Andrea De Luca, University of Oxford, invited member
• Alberto Rosso, LPTMS, Université Paris-Sud, thesis advisor
Abstract: Dynamic nuclear polarization (DNP) is one of the most promising techniques towards a new generation of magnetic resonance imaging (MRI). The idea is to use nuclear magnetic resonance (NMR) on nuclei other than the traditional hydrogen, such as carbon. For the carbon signal to be detected, one needs to enhance its spin polarization. In thermal equilibrium — at low temperature and high magnetic field — electron spins are far more polarized than any system of nuclear spins, owing to their smaller mass. With the DNP technique we bring the system out of equilibrium by irradiating it with microwaves. This triggers a polarization transfer from the electron spins to the nuclear ones. During my Ph.D., I studied, both analytically and numerically, the competition between the dipolar interactions among electron spins (which can be tuned experimentally) and the disorder naturally present in the sample. I proposed two models to study DNP: a Heisenberg spin chain and a system of free fermions in the Anderson model. Two different regimes were found: (i) for strongly interacting electron spins, the out-of-equilibrium steady state displays an effective thermodynamic behavior characterised by a very low spin temperature; (ii) in the weakly interacting regime, it is not possible to define a spin temperature, and the steady state is associated with a many-body localized phase (or an Anderson-localized phase). My research focused on the properties of the two phases with respect to the performance of DNP, which I found to be optimal at the transition between the two. This is a very important result that has been verified by recent experiments carried out at the École Normale Supérieure de Paris.
Physics-Biology interface seminar: Alexis Lomakin
How do cells measure their boundaries to tailor physiological responses?
Alexis Lomakin (King's College London, UK)
Much like modern-day engineered devices, cells in the human body are able to make precise measurements: intestinal epithelial cells monitor local cell densities to prevent hyperplasia, neutrophils sample their microenvironment to compute the fastest migratory route toward infection sites, and epidermal stem cells use extracellular-matrix occupancy to make cell-fate decisions. What these examples illustrate is the sensitivity of complex cell behaviors to spatial and mechanical constraints, known in the quantitative sciences as boundary conditions. Although the importance of boundary conditions in cell and tissue physiology is increasingly recognized, it remains unclear how cells sample their boundaries to tailor specific behaviors to boundary conditions. Here, using biophysical tools to manipulate cell boundaries in a highly controlled, quantitative manner, we found that cells estimate externally imposed confinement using their largest and stiffest intracellular component, the nucleus. Cell confinement below a certain threshold deforms the nucleus and expands its envelope area. Unbuffered against area expansion, due to the slow turnover of its constituents, the nuclear envelope becomes stretched. This in turn engages signaling via nuclear-membrane stretch-sensitive proteins to the actomyosin cortex, activating contractility. The latter provides a motive force for the cell to squeeze through tight pores and constrictions in the extracellular matrix. Interestingly, no increase in cell contractility is observed when cells move through environmental confines that do not significantly deform the nucleus. Thus, the nucleus acts as an internal ruler for the size of the environmental confinement, allowing cells to deploy energetically costly contractility on demand, only when the surrounding space becomes restrictive. The advantage of the proposed mechanism is that, in contrast to the plasma membrane, nuclear membranes do not participate in constitutive membrane trafficking; their surface area thus fluctuates less. This intrinsic quiescence should make them well suited to function as low-noise detectors, readily discriminating local environmental conditions from internal traffic-induced fluctuations of cell area and tension.
Séminaire du LPTMS: Pierre Ronceray
Learning force fields from stochastic trajectories
Pierre Ronceray (Princeton Center for Theoretical Science, USA)
From nanometer-scale proteins to micron-scale colloidal particles, particles in biological and soft matter systems undergo Brownian dynamics: their deterministic motion due to external forces and interactions competes with the random diffusion due to thermal noise. In the absence of forces, all trajectories look alike: the key information characterizing the system's dynamics thus lies in its force field. However, reconstructing the force field by inspecting microscopy observations of the system's trajectory is a hard problem, for two reasons. First, there needs to be enough information about the force available in the trajectory: the effect of the force field becomes apparent only after a long enough observation time. Second, one needs a practical method to extract that information and reconstruct the force field, which is challenging for force fields with a spatial structure, in particular in the presence of measurement noise. Here we address these two problems for steady-state Brownian trajectories. We first give a quantitative meaning to the information contained in a trajectory, and show how it limits force inference. We then propose a practical procedure to optimally use this information to reconstruct the force field by decomposing it into moments. Using simple model stochastic processes, we demonstrate that our method permits a quantitative evaluation of phase space forces and currents, circulation, and entropy production with a minimal amount of data.
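As a rough illustration of the inference problem described above (not the speaker's actual estimator), the following Python sketch simulates an overdamped Langevin particle in a double-well potential and recovers the force field by bin-averaging trajectory increments; the potential, time step, and binning are all illustrative choices.

```python
import numpy as np

# Minimal illustration: infer the drift (force) field of an overdamped
# Langevin process from a single trajectory by bin-averaging increments,
# in units where the mobility is 1.

rng = np.random.default_rng(0)
dt, n_steps, D = 1e-3, 200_000, 1.0
force = lambda x: x - x**3          # double-well force, F = -dU/dx

# Simulate: dx = F(x) dt + sqrt(2 D dt) * xi
x = np.empty(n_steps)
x[0] = 0.0
for i in range(n_steps - 1):
    x[i + 1] = x[i] + force(x[i]) * dt + np.sqrt(2 * D * dt) * rng.normal()

# Estimate the drift in spatial bins: F_hat(bin) = <dx> / dt over visits
bins = np.linspace(-2, 2, 21)
idx = np.digitize(x[:-1], bins)
dx = np.diff(x)
for b in range(1, len(bins)):
    mask = idx == b
    if mask.sum() > 100:            # require enough statistics per bin
        center = 0.5 * (bins[b - 1] + bins[b])
        print(f"x = {center:+.2f}  F_hat = {dx[mask].mean() / dt:+.3f}"
              f"  F_true = {force(center):+.3f}")
```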
Quantum Journal Club: Leonardo Mazza
Topological Transition in a Non-Hermitian Quantum Walk
Physics-Biology interface seminar: Pierre Ronceray
Cell contraction induces long-ranged stress stiffening in the extracellular matrix
Pierre Ronceray (Princeton University, USA)
Animal cells in tissues are supported by biopolymer matrices, which typically exhibit highly nonlinear mechanical properties. While the linear elasticity of the matrix can significantly impact cell mechanics and functionality, it remains largely unknown how cells, in turn, affect the nonlinear mechanics of their surrounding matrix. Here we show that living contractile cells are able to generate a massive stiffness gradient in three distinct 3D extracellular matrix model systems: collagen, fibrin, and Matrigel. We decipher this remarkable behavior by introducing Nonlinear Stress Inference Microscopy (NSIM), a novel technique to infer stress fields in a 3D matrix from nonlinear microrheology measurements with optical tweezers. Using NSIM and simulations, we reveal a long-ranged propagation of cell-generated stresses resulting from local filament buckling. This slow decay of stress gives rise to the large spatial extent of the observed cell-induced matrix stiffness gradient, which could form a mechanism for mechanical communication between cells.
Séminaire du LPTMS: Valentina Ros
Arrangement of local minima and phase transitions in the energy landscape of simple glassy models
Valentina Ros (IPhT, CEA-Saclay)
Understanding the statistical properties of the stationary points of high-dimensional, random energy landscapes is a central problem in the physics of glassy systems, as well as in interdisciplinary applications to computer science, ecology and biology. In this talk, I will discuss a framework to perform the computation of the quenched complexity of stationary points, making use of a replicated version of the Kac-Rice formula. I will discuss its application to simple models (the spiked tensor model and its generalizations) which capture the competition between a deterministic signal and stochastic noise, and correspond to a spherical p-spin Hamiltonian endowed with ferromagnetic multi-body interaction terms. I will describe the phase transitions that occur in the structure of the landscape when changing the signal-to-noise ratio, and highlight the implications for the evolution of local dynamics within the landscape. Reference:
• Valentina Ros, Gerard Ben Arous, Giulio Biroli and Chiara Cammarota, Complex energy landscapes in spiked-tensor and simple glassy models: ruggedness, arrangements of local minima and phase transitions, preprint cond-mat arXiv:1804.02686
Quantum Journal Club: Eoin Quinn
Organising strong correlations: Schwinger-Shastry formalism
Séminaire du LPTMS: Alberto Biella
Efficient stochastic unraveling of disordered open quantum systems
Alberto Biella (Laboratoire Matériaux et Phénomènes Quantiques, Université Paris Diderot)
The interplay of interaction, dissipation and driving in open quantum systems can trigger transitions between nonequilibrium phases. Such behaviour can emerge in extended lattices, in more than one spatial dimension, when homogeneous systems are considered. However, in any realistic experimental realization, disorder cannot be neglected. In this work we develop a method to efficiently unravel the density matrix of a generic disordered open quantum system by exploiting stochastic trajectories. We use it to study the effect of on-site disorder in the paradigmatic driven-dissipative Bose-Hubbard lattice in two dimensions. In particular, we focus on the role of the disorder when the system is driven across a first-order transition from the low- to the high-density phase. We find that the disorder induces the formation of density domains, which progressively smears the sharp transition, leading to crossover behaviour in the thermodynamic limit. We characterize this mechanism in terms of photon density and spatial correlation functions, and we discuss how inhomogeneities affect the bistable dynamics of the system at the transition. Our results are relevant for state-of-the-art experiments in extended photonic lattices based on semiconductor microcavities and superconducting circuits.
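For readers unfamiliar with stochastic unraveling, the following minimal Python sketch illustrates the quantum-jump (Monte Carlo wavefunction) idea on the simplest possible case, a single driven lossy qubit; the talk's method concerns disordered many-site lattices, so this shows only the bare trajectory mechanism, with all parameters chosen for illustration.

```python
import numpy as np

# Minimal quantum-jump unraveling for one driven, lossy qubit.
# Averaging over trajectories reproduces the Lindblad master equation.

rng = np.random.default_rng(1)
sm = np.array([[0, 1], [0, 0]], dtype=complex)      # sigma_minus
sp = sm.conj().T
H = 0.5 * (sm + sp)                                  # resonant drive, Omega = 1
gamma, dt, n_steps, n_traj = 0.5, 0.01, 4000, 200
H_eff = H - 0.5j * gamma * sp @ sm                   # non-Hermitian evolution

excited = np.zeros(n_steps)
for _ in range(n_traj):
    psi = np.array([1, 0], dtype=complex)            # start in the ground state
    for t in range(n_steps):
        excited[t] += abs(psi[1]) ** 2 / n_traj
        p_jump = gamma * dt * abs(psi[1]) ** 2       # emission probability
        if rng.random() < p_jump:
            psi = sm @ psi                           # photon emitted: collapse
        else:
            psi = psi - 1j * H_eff @ psi * dt        # first-order no-jump step
        psi /= np.linalg.norm(psi)

print("steady-state excited population ~", excited[-500:].mean())
```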
Physics-Biology interface seminar: Willy Supatto
Live imaging of motile cilia to investigate left-right symmetry breaking in zebrafish embryos
Willy Supatto (LOB, École polytechnique)
In vertebrate embryos, cilia-driven fluid flows are guiding left-right body symmetry breaking within the left-right organizer (LRO). To investigate the generation and sensing of flows, it is required to quantify cilia biophysical features in 3D and in vivo [1]. In the zebrafish embryo, the LRO is called the Kupffer’s vesicle (KV) and is a spheroid shape cavity, which is covered with motile cilia distributed at its surface and oriented in all directions of space. This transient structure varies in size and shape during development and from one embryo to the other. As a consequence, the experimental investigation of cilia properties is challenging. It requires quantifying cilia features in vivo and in 3D and combining the data from different embryos to compare one embryo to the other and perform statistical analyses.To reach this goal, we devised an experimental workflow combining live 3D imaging using multiphoton microscopy, image processing, and data registration to quantify cilia biophysical features, such as cilia density, motility, 3D orientation, or length. We integrated such experimental features obtained in vivo into a fluid dynamics model and a multiscale physical study of flow generation and detection. This strategy enabled us to demonstrate how cilia orientation pattern generates the asymmetric flow within the KV [2]. In addition, we could investigate the physical limits of flow detection to clarify which mechanisms could be reliably used for body axis symmetry breaking [2]. Finally, we discovered the distribution of cilia orientation is asymmetric within the KV [3]. Importantly, these results suggested that the asymmetric force detection could result from the cilium being sensitive to its own motion. Together, this work sheds light on the complexity of left-right symmetry breaking and chirality genesis in developing tissues.
[1] From cilia hydrodynamics to zebrafish embryonic development. Supatto & Vermot, Current Topics in Developmental Biology 2011
[2] Physical limits of flow sensing in the left-right organizer. Ferreira et al, eLife 2017
[3] Chiral cilia orientation in the left-right organizer. Ferreira et al, Cell Reports, in press
Séminaire du LPTMS: Paola Ruggiero
*** Attention: unusual day ***
Conformal field theory on top of a breathing Tonks-Girardeau gas
Paola Ruggiero (SISSA, Trieste, Italie)
Conformal field theory (CFT) has been extremely successful in describing universal effects in critical one-dimensional (1D) systems, in situations in which the bulk is uniform. However, in many experimental contexts, such as quantum gases in trapping potentials and in several out-of-equilibrium situations, systems are strongly inhomogeneous. Recently it was shown that CFT methods can be extended to deal with such 1D situations [1,2]: the system’s inhomogeneity gets reabsorbed into the parameters of the theory, such as the metric, resulting in a CFT in curved space. Here in particular we make use of CFT in curved spacetime to deal with the out-of-equilibrium situation generated by a frequency quench in a Tonks-Girardeau gas in a harmonic trap [3]. We show compatibility with known exact results and use this new method to compute new quantities, not explicitly known by means of other methods, such as the dynamical fermionic propagator and the one-particle density matrix at different times. Refs:
• [1] J. Dubail, JM. Stéphan, J. Viti & P. Calabrese, Conformal field theory for inhomogeneous one-dimensional quantum systems: the example of non-interacting Fermi gases, SciPost Phys. 2, 002 (2017).
• [2] S. Murciano, P. Ruggiero & P. Calabrese, Entanglement and relative entropies for low-lying excited states in inhomogeneous one-dimensional quantum systems, arXiv:1810.02287
• [3] P. Ruggiero, Y. Brun & J. Dubail, To appear.
Séminaire du LPTMS: Herbert Spohn
Nonlinear fluctuating hydrodynamics for one-dimensional fluids
Professor Herbert Spohn (Zentrum Mathematik, München)
For one-dimensional fluids the conventional Landau-Lifshitz fluctuating hydrodynamics breaks down. I discuss its nonlinear extension in the approximation of an anharmonic chain, in particular the dynamical phase diagram and self-similar shape functions. A recent novel development concerns the application of the theory to nonintegrable classical spin chains.
Séminaire du LPTMS: Alexios Polychronakos *** special seminar ***
!!!! ATTENTION: UNUSUAL LOCATION (salle des conseils de l'IPN) !!!!
100 Years of Feynman and 30 without him: reminiscences from his last year
Alexios P. Polychronakos (The City College and Graduate Center of the CUNY, New York)
2018 marks the 100th anniversary of the birth and 30 years since the passing of Richard Feynman, a brilliantly creative physicist and a legendary personality in science and society at large. During the last year of his life at Caltech Feynman became fascinated by integrable models, his involvement and enthusiasm inspiring and motivating both experts and novices in the field. I will attempt to give a glimpse into Feynman's thinking and personality through the lens of personal memories and mementos from that last year.
Séminaire du LPTMS: Emmanuel Trizac
When random walkers help solving intriguing integrals
Emmanuel Trizac (LPTMS, Université Paris-Sud)
We will discuss the properties of a family of integrals involving the cardinal sine function, first studied by Borwein & Borwein. The aim is to provide a physicist's perspective on a curious change of behaviour occurring within this family, noticed when benchmarking computer algebra packages and initially attributed to a bug. A number of non-trivial generalizations will be obtained.
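Assuming the family in question is the classical Borwein sinc integrals (a plausible reading of the abstract), the curiosity can be checked numerically: the integrals $\int_0^\infty \prod_{k=0}^{n} \mathrm{sinc}(x/(2k+1))\,dx$ equal $\pi/2$ exactly up to the factor $\mathrm{sinc}(x/13)$, then fall short of $\pi/2$ by roughly $2\times 10^{-11}$ once $\mathrm{sinc}(x/15)$ enters — small enough to look like a software bug. A sketch with mpmath:

```python
from mpmath import mp, mpf, sin, pi, quadosc

# Borwein integrals I_n = int_0^inf prod_{k=0}^{n} sinc(x/(2k+1)) dx.
# I_n = pi/2 exactly while 1/3 + 1/5 + ... + 1/(2n+1) < 1 (i.e. n <= 6);
# at n = 7 the value drops below pi/2 around the 11th decimal place.
# High precision is needed to see the drop; this may take a little while.

mp.dps = 30

def borwein(n):
    def f(x):
        p = mpf(1)
        for k in range(n + 1):
            a = mpf(2 * k + 1)
            p *= sin(x / a) / (x / a)
        return p
    # split the oscillatory integral at the zeros of the fastest factor, sin(x)
    return quadosc(f, [0, mp.inf], zeros=lambda m: m * pi)

for n in range(8):
    print(f"n = {n}:  I_n = {borwein(n)}   (pi/2 = {pi / 2})")
```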
Physics-Biology interface seminar: Jan Brugues
How to set the proper size and shape of metaphase spindles
Jan Brugues (MPI Dresden, Germany)
Regulation of size and growth is a fundamental problem in biology. A prominent example is the formation of the mitotic spindle, where protein concentration gradients around chromosomes are thought to regulate spindle growth by controlling microtubule nucleation. Previous evidence suggests that microtubules nucleate throughout the spindle structure. However, the mechanisms underlying microtubule nucleation and its spatial regulation are still unclear. In the first part of the talk I will present an assay based on laser ablation to directly probe microtubule nucleation events in Xenopus laevis egg extracts. Combining this method with theory and quantitative microscopy, we show that the size of a spindle is controlled by autocatalytic growth of microtubules, driven by microtubule-stimulated microtubule nucleation. The autocatalytic activity of this nucleation system is spatially regulated by the limiting amounts of active microtubule nucleators, which decrease with distance from the chromosomes. This mechanism provides an upper limit to spindle size even when resources are not limiting. Once the necessary amounts of microtubules are created, the activities of motors lead to the proper shape and architecture of spindles. In the second part of the talk I will discuss the origin of motor-mediated stress in spindles.
Soutenance de thèse: Shuang Wu
Algebraic area distribution of two dimensional random walks and the Hofstadter model
Shuang Wu
• Rapporteur: Sergei Matveenko (Landau Institute for Theoretical Physics, Moscow, Russia)
• Rapporteur: Alexios Polychronakos (The City College of New York, USA)
• Examinateur: Angel Alastuey (Laboratoire de Physique, ENS Lyon)
• Examinateur: Vincent Pasquier (IPhT, CEA Saclay)
• Examinatrice: Didina Serban (IPhT, CEA Saclay)
• Invité: Olivier Giraud (LPTMS, Université Paris-Sud)
• Directeur de thèse: Stéphane Ouvry (LPTMS, Université Paris-Sud)
Abstract: This thesis is about the Hofstadter model, i.e., a single electron moving on a two-dimensional lattice coupled to a perpendicular homogeneous magnetic field. Its spectrum is one of the famous fractals in quantum mechanics, known as Hofstadter's butterfly. There are two main subjects in this thesis: the first is the study of the deep connection between the Hofstadter model and the distribution of the algebraic area enclosed by two-dimensional random walks. The second focuses on the distinctive features of Hofstadter's butterfly and the study of the bandwidth of the spectrum. We found an exact expression for the trace of the Hofstadter Hamiltonian in terms of the Kreft coefficients, and for the higher moments of the bandwidth.
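For readers who want to see the spectrum, the following minimal Python sketch (an illustration, not the thesis's method) diagonalizes the q×q Bloch (Harper) matrix at rational flux α = p/q and estimates the total bandwidth mentioned in the abstract:

```python
import numpy as np
from fractions import Fraction

# Spectrum of the Hofstadter (Harper) Hamiltonian at rational flux
# alpha = p/q per plaquette, from the q x q magnetic Bloch matrix.
# The total bandwidth is estimated by sampling the Brillouin zone.

def harper_matrix(p, q, kx, ky):
    m = np.arange(q)
    H = np.diag(2 * np.cos(2 * np.pi * p / q * m + ky)).astype(complex)
    H += np.diag(np.ones(q - 1), 1) + np.diag(np.ones(q - 1), -1)
    H[0, -1] += np.exp(1j * q * kx)      # magnetic Bloch boundary phase
    H[-1, 0] += np.exp(-1j * q * kx)
    return H

def band_edges(p, q, nk=20):
    ks = np.linspace(0, 2 * np.pi / q, nk)
    evs = np.array([np.linalg.eigvalsh(harper_matrix(p, q, kx, ky))
                    for kx in ks for ky in ks])   # nk^2 samples of q bands
    return evs.min(axis=0), evs.max(axis=0)       # rough edges of each band

for q in (3, 5, 8):
    for p in range(1, q):
        if Fraction(p, q).denominator != q:
            continue                              # keep p/q in lowest terms
        lo, hi = band_edges(p, q)
        print(f"alpha = {p}/{q}: total bandwidth ~ {np.sum(hi - lo):.3f}")
```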
Quantum Journal Club: Christophe Texier
Correlations of occupation numbers in the canonical ensemble
Christophe Texier (LPTMS, Université Paris-Sud)
The connection between the statistical physics of non-interacting indistinguishable particles in quantum mechanics and the theory of symmetric functions will be reviewed. Then, I will study the $p$-point correlation function $\overline{n_1 \cdots n_p}$ of occupation numbers in the canonical ensemble; in the grand canonical ensemble, they are trivially obtained from the independence of individual quantum states, but the constraint on the number of particles makes the problem non-trivial in the canonical ensemble. I will show several representations of these correlation functions. I will illustrate the main formulae by revisiting the problem of Bose-Einstein condensation in a 1D harmonic trap in the canonical ensemble, for which we have obtained several analytical results. In particular, in the temperature regime dominated by quantum correlations, the distribution of the ground state occupancy is shown to be a truncated Gumbel law. Ref: Olivier Giraud, Aurélien Grabsch & Christophe Texier, Correlations of occupation numbers in the canonical ensemble and application to BEC in a 1D harmonic trap, Phys. Rev. A 97, 053615 (2018).
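As a concrete illustration of how the canonical constraint can be handled for ideal bosons (a standard recursion, not the speaker's symmetric-function formalism), the following Python sketch computes the canonical partition functions of N bosons in a 1D harmonic trap and the mean ground-state occupancy:

```python
import numpy as np

# Canonical partition functions Z_N for N ideal bosons in a 1D harmonic
# trap (levels eps_j = j, ground-state energy set to 0), via the standard
# recursion  Z_N = (1/N) * sum_{k=1}^{N} z(k*beta) * Z_{N-k},
# then the mean ground-state occupancy  <n_0> = sum_k Z_{N-k} / Z_N
# (from the identity P(n_0 >= k) = Z_{N-k} / Z_N when eps_0 = 0).

def z1(beta):
    return 1.0 / (1.0 - np.exp(-beta))   # single-particle partition function

def canonical_Z(N, beta):
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = sum(z1(k * beta) * Z[n - k] for k in range(1, n + 1)) / n
    return Z

N, T = 100, 20.0                          # temperature in units of the level spacing
Z = canonical_Z(N, 1.0 / T)
n0 = sum(Z[N - k] / Z[N] for k in range(1, N + 1))
print(f"N = {N}, T = {T}: condensate fraction <n_0>/N = {n0 / N:.3f}")
```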
Séminaire du LPTMS: Alexandre Krajenbrink
Linear statistics and pushed Coulomb gas at the soft edge of random matrices: four paths to large deviations
Alexandre Krajenbrink (LPT-ENS, Paris)
In this talk, I will consider the classical problem of linear statistics in random matrix theory. This amounts to studying the distribution of the sum of a certain function of the matrix eigenvalues. By varying this function, one can describe fluctuations of conductance, shot noise, Rényi entropy, center of mass of interfaces, particle number… This problem has been extensively studied for the bulk of the eigenvalues (macroscopic linear statistics), where interesting phase transitions have been unveiled, but much less at the edge of the spectrum (microscopic linear statistics), on which I will focus. In particular, I will introduce four methods to solve this problem, show their equivalence, and discuss the physical applications of these results (large deviations of the solution of the Kardar-Parisi-Zhang equation, existence of phase transitions with continuously varying exponent, and a possible experimental realization of this setup with non-intersecting Brownian interfaces). A toy sampling illustration is sketched after the reference below.
Reference:
• Alexandre Krajenbrink & Pierre Le Doussal, Linear statistics and pushed Coulomb gas at the edge of beta random matrices: four paths to large deviations, preprint arXiv:1811.00509
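The toy Python illustration promised above: sampling a linear statistic over GUE eigenvalues. Plain Monte Carlo like this probes only typical O(1) fluctuations, not the large-deviation tails the talk is about; the normalization and test function are illustrative choices.

```python
import numpy as np

# Sample the linear statistic L = sum_i f(lambda_i) over GUE eigenvalues.

rng = np.random.default_rng(7)

def gue_eigs(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    H = (A + A.conj().T) / 2
    return np.linalg.eigvalsh(H) / np.sqrt(n)  # semicircle supported on [-2, 2]

n, samples = 100, 500
f = lambda x: x**2                             # test function; L/n -> int x^2 rho(x) dx
L = np.array([np.sum(f(gue_eigs(n))) for _ in range(samples)])
print(f"mean L/n = {L.mean()/n:.3f}  (semicircle prediction: 1)")
print(f"std of L = {L.std():.3f}  (O(1), not growing with n)")
```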
Séminaire du LPTMS: Lucile Julien
The revision of the International System of Units
Lucile Julien (Laboratoire Kastler-Brossel, UPMC, Paris)
Presentation (pdf)
Article by Pierre Cladé and Lucile Julien, "Les mesures atomiques de haute précision. Un outil privilégié pour tester l’électrodynamique quantique", Reflets de la Physique 59, Sept. 2018
The SI, the international system of units, born in 1960, is the heir of the metric system and of the MKSA system. It is based on 7 base units, whose definitions can change when the needs of users make it necessary. Thus, the metre was redefined in 1983 by fixing the numerical value of the speed of light in vacuum. The 26th General Conference on Weights and Measures, which met from 13 to 16 November, decided in the same way to redefine the kilogram, the ampere, the mole and the kelvin by fixing the values of four constants of physics. After a historical overview of the SI, I will present the motivations for its current revision, the work that made it possible, and the way in which it is being carried out.
!!!! ATTENTION: UNUSUAL LOCATION (Auditorium Irène Joliot Curie) !!!!
**** the talk will be given in French ****
Physics-Biology interface seminar: Ana-Jesus Garcia-Saez
Single molecule analysis of mitochondrial permeabilization in apoptosis
Ana-Jesus Garcia-Saez (MPI Tübingen, Germany)
Bax and Bak are key regulators of apoptosis and mediate the permeabilization of the outer mitochondrial membrane that leads to cytochrome c and Smac release. Although it is widely accepted that the functions and molecular mechanisms of Bax and Bak largely overlap, there is limited evidence of how Bak works. In previous studies, we used single molecule microscopy to characterize the oligomerization of Bax in the membrane and its organization at the nanoscale in the mitochondria of apoptotic cells. We have now extended these approaches to Bak and identified key structural differences between the two proteins that may have functional implications.
Séminaire du LPTMS: Christophe Mora
Christophe Mora (Laboratoire Pierre Aigrain, ENS, Paris)
• Leonardo Mazza, Fernando Iemini, Marcello Dalmonte & Christophe Mora, Poor man's parafermions in a lattice model with even multiplet pairing, preprint cond-mat arXiv:1801.08548.
Séminaire du LPTMS: Christophe Texier
Counting the equilibria of a directed polymer in a random medium and Anderson localisation
Christophe Texier (LPTMS, Université Paris-Sud)
I will discuss a new connection between two different problems: the counting of equilibria of a directed polymer in a random medium (DPRM) and the problem of Anderson localisation for the 1D Schrödinger equation. Using the Kac-Rice formula, it is possible to express the mean number of equilibria of a DPRM in terms of functional determinants. In the one-dimensional situation, these functional determinants can be calculated thanks to the Gelfand-Yaglom method, showing that the mean number of equilibria of the DPRM grows exponentially with the length of the polymer, with a rate controlled by the generalized Lyapunov exponent (GLE) of the localisation problem (the cumulant generating function of the log of the wave function). The GLE is the solution of a spectral problem, studied by combining numerical approaches and WKB-like approximations. Furthermore, the formalism can be extended in order to obtain the number of equilibria at fixed energy, providing the (annealed) distribution of the energy density of the line over the equilibria. A minimal numerical illustration of the underlying localisation problem is sketched after the reference below. Reference:
• Yan V. Fyodorov, Pierre Le Doussal, Alberto Rosso and Christophe Texier, Exponential number of equilibria and depinning threshold for a directed polymer in a random potential, Annals of Physics 397, 1-64 (2018)
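The minimal illustration promised above: the ordinary Lyapunov exponent of the 1D Anderson model computed from transfer-matrix products. The talk's generalized Lyapunov exponent refines this to the full cumulant generating function; disorder strength and system size here are illustrative.

```python
import numpy as np

# Lyapunov exponent of the 1D Anderson model
#   psi_{n+1} + psi_{n-1} + V_n psi_n = E psi_n,  V_n uniform on [-W/2, W/2],
# from the growth rate of transfer-matrix products.

rng = np.random.default_rng(42)

def lyapunov(E, W, n=200_000):
    v = np.array([1.0, 0.0])
    log_growth = 0.0
    for V in rng.uniform(-W / 2, W / 2, size=n):
        v = np.array([(E - V) * v[0] - v[1], v[0]])   # one transfer-matrix step
        norm = np.hypot(v[0], v[1])
        log_growth += np.log(norm)
        v /= norm                                      # renormalize to avoid overflow
    return log_growth / n

for W in (1.0, 2.0, 4.0):
    print(f"W = {W}: gamma(E=0) ~ {lyapunov(0.0, W):.4f}  "
          f"(weak-disorder estimate W^2/96 = {W**2 / 96:.4f})")
```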
Physics-Biology interface seminar: Karim Benzerara
Seminar cancelled: rescheduled for March 13th
Karim Benzerara (Sorbonne Universités, Paris)
Abstracts for Thursday Philosophy of Physics seminars
October 15th
Kian Salimkhani, University of Cologne
The Dynamical Approach to Spin-2 Gravity
Abstract: In this presentation I study how the spin-2 approach to gravity helps to strengthen Brown and Pooley’s dynamical approach to general relativity. In particular, I investigate the ontological status of the metric field and the status of the equivalence principle.
October 22nd
Jeremy Steeger, University of Washington
One World Is (Probably) Just as Good as Many
Abstract: One of our most sophisticated accounts of objective chance in quantum theories involves the Deutsch-Wallace theorem, which uses symmetries of the quantum state space to justify agents’ use of the Born rule when the quantum state is known. But Wallace (2003, 2012) argues that this theorem requires an Everettian approach to measurement. I find this argument to be unsound, and I demonstrate a counter-example by applying the Deutsch-Wallace theorem to Bohmian mechanics.
October 29th
Porter Williams, University of Southern California
Identifying Causal Directions in Quantum Theories via Entanglement
November 5th
Sarita Rosenstock, Australian National University
A Category Theoretic Framework for Physical Representation
Abstract: It is increasingly popular for philosophers of physics to use category theory, the mathematical theory of structure, to adjudicate debates about the (in)equivalence of formal physical theories. In this talk, I discuss the theoretical foundations of this strategy. I introduce the concept of a “representation diagram” as a way to scaffold narrative accounts of how mathematical gadgets represent target systems, and demonstrate how their content can be effectively summarised by what I call a “structure category”. I argue that the narrative accounts contain the real content of an act of physical representation, and the category theoretic methodology serves only to make that content precise and conducive to further analysis. In particular, one can use tools from category theory to assess whether one physical formalism thus presented has more “properties”, “structure”, or “stuff” than another according to a given narrative about how they both purport to represent the same physical systems.
November 12th
Trevor Teitel, University of Toronto
How to be a Spacetime Substantivalist
Abstract: The consensus among spacetime substantivalists is to respond to Leibniz’s classic shift arguments, and their contemporary incarnation in the form of the hole argument, by pruning the allegedly problematic surplus metaphysical possibilities. Some substantivalists do so by directly appealing to a modal doctrine akin to anti-haecceitism; others do so by appealing to an underlying hyperintensional doctrine that implies some such modal doctrine. My first aim in this talk is to undermine all extant forms of this consensus position. My second aim is to show what form substantivalism must take in order to uphold the consensus while addressing my challenge from part one. In so doing, I’ll discuss some related issues about the interaction of modality and vagueness. I’ll then argue against the resulting substantivalist metaphysic on independent grounds. I’ll conclude by discussing the way forward for substantivalists once we reject the consensus position.
November 19th
Nicolas Menicucci, RMIT University
Sonic Relativity and the Sound Postulate
Abstract: Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is: do devices exist that will experience the relativity in these systems? We describe a thought experiment in which ‘acoustic observers’ possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ, with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite their universe having a preferred frame, that of the laboratory. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
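A trivial worked example of the sonic Lorentz factor described above (all numbers purely illustrative):

```python
import numpy as np

# Sonic time dilation: a chain of sound clocks moving at speed v through the
# medium ticks slower by gamma = 1 / sqrt(1 - v^2 / c_s^2), c_s = speed of sound.

c_s = 343.0                        # speed of sound in air, m/s (illustrative)
for v in (0.1 * c_s, 0.5 * c_s, 0.9 * c_s):
    gamma = 1.0 / np.sqrt(1.0 - (v / c_s) ** 2)
    print(f"v/c_s = {v / c_s:.1f}: gamma = {gamma:.3f}, "
          f"moving clock ticks {1 / gamma:.3f} x as fast")
```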
Nov 19th (BBLOC Seminar)
Emily Adlam
Spooky Action at a Temporal Distance
Abstract: Since the discovery of Bell’s theorem, the physics community has come to take seriously the possibility that the universe might contain physical processes which are spatially nonlocal, but there has been no such revolution with regard to the possibility of temporally nonlocal processes. In this talk, I argue that the assumption of temporal locality is actively limiting progress in the field of quantum foundations. I investigate the origins of the assumption, arguing that it has arisen for historical and pragmatic reasons rather than good scientific ones, then explain why temporal locality is in tension with relativity and review some recent results which cast doubt on its validity.
November 26th
Martin Lipman, Leiden University
Realism About Relative Facts and Special Relativity
Abstract: In this talk I will set out a non-standard metaphysical framework, according to which there can be genuine facts regarding matters that are normally said to obtain only relative to something. The framework is in the spirit of Kit Fine’s fragmentalism, though different in formulation (also from the version of fragmentalism that I defended in earlier work). After sketching the metaphysical principles and the bit of logic needed to make sense of the view, I will discuss its application to the special theory of relativity. According to the proposed interpretation of the special theory of relativity, there are genuine facts regarding absolute simultaneity, temporal duration and length. If time permits, I will discuss one or two objections.
December 3rd
Baptiste Le Bihan, University of Geneva
What Does the World Look Like According to Superdeterminism?
Week 1 (30 April): Sir Roger Penrose (Maths, Oxford): CCC and Hawking Points in the Microwave Sky.
Abstract: The theory of conformal cyclic cosmology (CCC), which I originally put forward in 2005, proposes that our Big Bang was the conformal continuation of the exponentially expanding remote future of a previous cosmic “aeon”. Conformal space-time geometry is the geometry defined by the light cones, but where the metric loses its fundamental role. It is the geometry respected by massless particles and fields, most particularly Maxwell’s electromagnetism. The huge, cold, and rarefied future of the previous aeon thereby identifies with our tiny, hot, and dense Big Bang. Moreover, with CCC, the cycle of aeons continues indefinitely.
CCC has taken a long time to be taken seriously by the cosmological community, despite its being the only scheme I know of that properly explains the source of the 2nd law of thermodynamics in the form that we find it, in addition to certain observed features in the cosmic microwave background (CMB) predicted by CCC. More specifically, recent analysis of the CMB, from both the WMAP and Planck satellites’ data, has revealed numerous previously unobserved, remarkably energetic anomalous spots in the CMB, such spots being implications of CCC. They would be the effects of the ultimate Hawking evaporation of supermassive black holes in the previous aeon, whose entire mass-energy would come through the crossover into our aeon at points referred to as “Hawking points”, which, following the first 380,000 years of our own aeon’s expansion, would produce spots like those actually observed in our CMB sky.
Week 2 (7 May): David Wallace (Philosophy, Pittsburgh): Isolated systems and their symmetries
Abstract: I defend the view that metaphysical and epistemic implications of a theory’s symmetries for that theory can be understood via a formal conception of those symmetries, understood as transformations which preserve the form of the equations of motion (contra recent work by, e.g., Belot, Dasgupta, and Moller-Nielsen). A key concept here is extendibility of a symmetry: whether or not a symmetry of a system remains a symmetry when that system is coupled to other systems (most notably measurement devices). This in turn requires us to interpret (most) physical theories as idealised isolated subsystems of a larger universe, not (as is common in philosophy of physics) under the fiction that they describe an entire Universe. I provide a detailed framework for doing so and for extracting consequences for symmetry extendibility: the core concept is subsystem-recursivity, whereby interpretative conclusions about a sector of a theory can be deduced from considering subsystems of other models of the same theory.
Background reading (not assumed):
“Observability, redundancy and modality for dynamical symmetry transformations” http://philsci-archive.pitt.edu/16622/
“Isolated Systems and their Symmetries, Part I: General Framework and Particle-Mechanics” http://philsci-archive.pitt.edu/16623/
“Isolated systems and their symmetries, part II: local and global symmetries of field theories” http://philsci-archive.pitt.edu/16624/
Week 3 (14 May): Erik Curiel (MCMP): On the Cogency of Quantum Field Theory on Curved Spacetime and Semi-Classical Gravity
Abstract: Quantum field theory on curved spacetime (QFT-CST), and semi-classical gravity (SCG) more generally, is the framework within which our current theories about quantum effects around black holes are formulated. The results of their study, including most famously the Hawking effect and its infamous spawn the information-loss paradox, have revealed several surprises that threaten to overturn the views of space, time, and matter that general relativity and quantum field theory each on their own suggests. In particular, they appear to point to a deep and hitherto unsuspected connection among our three most fundamental theories, general relativity, quantum field theory and thermodynamics. As such, work in SCG provides today some of the most important, central, and fruitful fields of study in theoretical physics, bringing together workers from a variety of fields such as cosmology, general relativity, quantum field theory, particle physics, fluid dynamics, condensed matter, and quantum gravity, providing bridges that now closely connect disciplines once seen as largely independent. The framework, however, has serious mathematical, physical and conceptual problems, which I survey. One might think that treating SCG as merely an effective theory would ameliorate these problems. I argue that the issue is not straightforward. Thus, SCG presents us with problems that are foundational in a serious sense: they must be addressed in order to make sense of contemporary theoretical physics.
Week 4 (21 May): James Read (Philosophy, Oxford): Newtonian Equivalence Principles
Abstract: I present a unified framework for understanding equivalence principles in spacetime theories, applicable to both relativistic and Newtonian contexts. This builds on prior work by Knox (2014) and Lehmkuhl (forthcoming).
Week 5 (28 May): David Baker (Philosophy, Michigan): What Are Symmetries?
Abstract: I advance a stipulational account of symmetries, according to which symmetries are part of the content of theories. For a theory to have a certain symmetry is for the theory to stipulate that models related by the symmetry represent the same possibility. I show that the stipulational account compares positively with alternatives, including Dasgupta’s epistemic account of symmetry, Moller-Nielsen’s motivational account, and so-called formal and ontic accounts. In particular, the stipulational account avoids the problems Belot and Dasgupta have raised against formal and ontic accounts of symmetry while retaining many of the advantages of these otherwise-attractive frameworks. It also fits naturally into an appealing account of how we ought to interpret effective theories as opposed to fundamental ones.
Week 6 (4 June): Marij van Strien (Wuppertal): Bohm’s theory of quantum mechanics and the notion of classicality
Abstract: When David Bohm published his alternative theory of quantum mechanics in 1952, it was not received well; a recurring criticism was that it formed a reactionary attempt to return to classical physics. In response, Bohm emphasized the progressiveness of his approach, and even turned the accusation of classicality around by arguing that he wanted to move beyond classical elements still inherent in orthodox quantum mechanics. In later years, he moved more and more towards speculative and mystical directions.
In this talk I will aim to explain this discrepancy between the ways in which Bohm’s work on quantum mechanics has been received and the way in which Bohm himself presented it. I reject the idea that Bohm’s early work can be described as mechanist, determinist, and realist, in contrast to his later writings, and argue that there is in fact a strong continuity between his work on quantum mechanics from the early 1950s and his later, more speculative writings. In particular, I argue that Bohm was never strongly committed to determinism and was a realist in some ways but not in others. A closer look at Bohm’s philosophical commitments highlights the ways in which his theory of quantum mechanics is non-classical and does not offer a way to avoid all ‘quantum weirdness’.
Week 1 (23 January): Anders Sandberg (Oxford): Physical eschatology: how much can we say about the far future of the universe, and how much does it matter?
Abstract: Historically science has been reluctant to make long-term predictions about the future. One interesting exception is astronomy, where the combination of relatively low complexity, low noise environments and large timespans as well as strong theories have allowed the field of physical eschatology to emerge. This talk will outline the development of physical eschatology, discuss the current main models, and try to analyse the methodological challenges of such extreme long-range predictions, especially in the light of longtermist ethics increasingly being interested in some of the results as being potentially relevant for deciding near-term strategies.
Week 2 (30 January). Mauro Dorato (University of Rome 3): Overcoming dynamical explanations with structural explanations
Abstract: By briefly reviewing three well-known scientific revolutions in spacetime physics (the discovery of inertia, of special relativity and of general relativity), I claim that problems that were supposed to be crying out for a dynamical explanation in the old paradigm ended up receiving a structural explanation in the new one. This claim is meant to give more substance to Kuhn’s claim that revolutions are accompanied by a shift in what needs to be explained, while suggesting at the same time the existence of a pattern that is common to all three of the above case studies and that involves the overcoming of central assumptions of the manifest image. In the last part I discuss the question of whether entanglement, too, can be given a purely structural, non-dynamical explanation.
Week 3 (6–7 February) The first Oxford Philosophy of Physics Graduate Conference.
Week 4 (13 February). John Dougherty (Munich): Why ghosts are real and “surplus structure” isn’t
Abstract: Gauge theories are often thought to pose an interpretive puzzle. On the one hand, it seems that some of the mathematical structure of a gauge theory is surplus—that is, it does not reflect any structure in the world. Interpreting this structure as surplus is meant to be especially important to the process of quantization. On the other hand, it has proven difficult to eliminate this putatively surplus structure without losing important features of the theory like empirical adequacy. In this talk I argue that this puzzle is ill-posed, because there is no notion of “surplus structure” on which gauge theories have it. The standard conception of surplus structure presumes an account of mathematical structure that excludes the mathematics of gauge theory, so gauge theories neither have nor lack surplus structure on this conception. And on analyses of “surplus structure” that do apply to gauge theories it’s easy to see that they don’t have it.
Week 5 (20 February) J. Brian Pitts (Cambridge): Constraints, Gauge, Change and Observables in Hamiltonian General Relativity
Abstract: Since the mid-1950s it has seemed that change is somehow missing in Hamiltonian General Relativity. How did this problem arise? How compelling are the axioms on which it rests? What of the 1980s+ reforming literature that has aimed to recover the mathematical Hamiltonian-Lagrangian equivalence that was given up in the mid-1950s, a reforming literature that is visible in journals but scarce in books? What should one mean by Hamiltonian gauge transformations and observables, and how can one decide? The absence of change in observables can be traced to (1) a pragmatic conjecture (initially by Peter Bergmann and his student Schiller and later by Dirac) that gauge transformations come not merely from a tuned sum of “first-class constraints” (the Rosenfeld-Anderson-Bergmann gauge generator), but also from each first-class constraint separately, and (2) an assumption that the internal gauge symmetry of electromagnetism is an adequate precedent for the external/space-time coordinate symmetry of General Relativity. Requiring that gauge transformations preserve Hamilton’s equations or that equivalent theory formulations yield equivalent observables shows that change is right where it should be in Hamiltonian General Relativity including observables, namely, essential time dependence (e.g., lack of a time-like Killing vector field) in coordinate-covariant quantities. A genuine problem of missing change might exist at the quantum level, however.
Week 6 (27 February)-Simon Saunders (Oxford): Particle trajectories, indistinguishable particles, and the discovery of the photon.
Abstract: It is widely thought that particles with trajectories cannot be indistinguishable, in the quantum mechanical sense – wrongly. In this talk I shall explain how this doctrine first arose in Dirac’s 1926 treatment, and why it has proved so enduring. As a result, historians and philosophers of physics have neglected the obvious precursor of the indistinguishability concept in Gibbs’ concept of generic phase, applicable to classical particles. It was neglected by the discoverers of quantum mechanics as well, not least by Einstein, whose 1905 argument for the light quantum was based on the concept of the ‘mutual independence’ of non-interacting particles. Yet indistinguishable particles in Gibbs’ sense are not mutually independent, in Einstein’s: the fluctuation he considered, for thermal radiation in the Wien regime, does not discriminate between them.
Trajectories are not just compatible with the indistinguishability concept: in an important sense, they are needed for it. This makes clearer the difference between diffeomorphism invariance and permutation invariance, and highlights an important thread to the history of quantum physics, and specifically Bose’s contribution: for Bose showed how to derive the Planck black-body distribution precisely by endowing the light quantum with a state space of its own – in effect, allowing that light quanta may have trajectories. This was the breakthrough to the concept of the photon, as opposed to the light quantum concept.
A final ingredient, needed to redress this history, is the recognition that the kind of entanglement introduced by symmetrisation is essentially trivial – and cannot, for example, lead to the violation of any Bell inequality, an observation recently made by Adam Caulton. I conclude that the failure of statistical independence, as it arises in Bose-Einstein statistics away from the Wien regime, is unrelated to quantum non-locality, and to entanglement.
Week 7 (5 March) – no seminar
Week 8 (12 March). Emily Adlam, BLOC Seminar at King’s College, London: TBC.
Week 1 (17th October): Patricia Palacios (Philosophy, University of Salzburg).
Title: Re-defining equilibrium for long-range interacting systems
Abstract: Long-range interacting systems are systems in which the interaction potential decays slowly for large inter-particle distance. Typical examples of long-range interactions are the gravitational and Coulomb forces. The philosophical interest in studying these kinds of systems lies in the fact that they exhibit properties that escape traditional definitions of equilibrium based on ensemble averages. Among those properties are ensemble inequivalence, negative specific heat, negative susceptibility and ergodicity breaking. Focusing on long-range interacting systems thus has the potential of leading one to an entirely different conception of equilibrium or, at least, to a revision of traditional definitions of it. But how should we define equilibrium for long-range interacting systems?
In this talk, I address this question and argue that the problem of defining equilibrium in terms of ensemble averages is due to the lack of a time-scale in the statistical mechanical treatment. In consequence, I argue that adding a specific time-scale to the statistical treatment can give us a satisfactory definition of equilibrium in terms of metastable states. I point out that such a time-scale depends on the number of particles in the system, as happens when phase transitions occur, including in the more usual context of short-range interacting systems such as condensed matter. I then discuss the analogies and dissimilarities between the case of long-range systems and that of phase transitions, and argue that these analogies, which should be interpreted as liberal formal analogies, can play an important heuristic role in the development of statistical mechanics for long-range interacting systems.
Week 2 (24th October): Francesca Chadha-Day (Physics, University of Cambridge).
Title: Dark Matter: Understanding the gravity of the situation
Abstract: The existence of Dark Matter – matter that is unaccounted for by the Standard Model of particle physics – is supported by a staggering quantity and variety of astrophysical observations. A plethora of Dark Matter candidates have been proposed. Dark matter may be cold, warm or fuzzy. It may be composed of right-handed neutrinos, supersymmetric particles, axions or primordial black holes. I will give an overview of Dark Matter candidates and how we can understand the phenomenological differences between them in the framework of quantum theory. I will discuss the difficulties faced by modified gravity theories in explaining our observations, and their relation to Dark Matter.
Week 3 (31st October): NO SEMINAR
Week 4 (7th November): Jamee Elder (Philosophy, University of Notre Dame/University of Bonn).
Title: The epistemology of LIGO
Abstract: In this talk, I examine the methodology and epistemology of LIGO, with a focus on the role of models and simulations in the experimental process. This includes post-Newtonian approximations, models generated through the effective one-body formalism, and numerical relativity simulations, as well as hybrid models that incorporate aspects of all three approaches. I then present an apparent puzzle concerning the validation of these models: how can we successfully validate these models and simulations through our observations of black holes, given that our observations rely on our having valid models of the systems being observed? I argue that there is a problematic circularity here in how we make inferences about the properties of compact binaries. The problem is particularly acute when we consider these experiments as empirical tests of general relativity. I then consider strategies for responding to this challenge.
Week 5 (14th November): Adam Caulton (Philosophy, University of Oxford).
Title: Is a particle an irreducible representation of the Poincaré group?
Abstract: Ever since investigations into the group representation theory of spacetime symmetries, chiefly due to Wigner and Bargmann in the 1930s and ‘40s, it has become something of a mantra in particle physics that a particle is an irreducible representation of the Poincaré group (the symmetry group of Minkowski spacetime). Call this ‘Wigner’s identification’. One may ask, in a philosophical spirit, whether Wigner’s identification could serve as something like a real definition (as opposed to a nominal definition) of ‘particle’—at least for the purposes of relativistic quantum field theory. In this talk, I aim to show that, while Wigner’s identification is materially adequate for many purposes—principally scattering theory—it does not provide a serviceable definition. The main problem, or so I shall argue, is that the regime of legitimate particle talk surpasses the constraints put on it by Wigner’s identification. I aim further to show that, at least in the case of particles with mass, a promising rival definition is available. This promising rival emerges from investigations due to Foldy in the 1950s, which I will outline. The broad upshot is that the definition of ‘particle’ may well be the same in both the relativistic and non-relativistic contexts, and draws upon not the Poincaré group (or any other spacetime symmetry group) but rather the familiar Heisenberg relations.
Week 6 (21st November): Radin Dardashti (Philosophy, University of Wuppertal).
Title: Understanding Problems in Physics
Abstract: In current fundamental physics empirical data is scarce, and it may take several decades before the hypothesised solution to a scientific problem can be tested. So scientists need to be careful in assessing what constitutes a scientific problem in the first place, for there is the danger of providing a solution to a non-existent problem. Relying on and extending previous work by Larry Laudan and Thomas Nickles, I apply the philosophical discussion of scientific problems to modern particle physics.
Week 7 (28th November): NO SEMINAR
Week 8 (5th December): Karim Thébault (Philosophy, University of Bristol).
Title: Time and Background Independence
Abstract: We showcase a new framework for the analysis of the symmetries and spatiotemporal structure of a physical theory via the application to the problem of differentiating intuitively background dependent theories from intuitively background independent theories. This problem has been rendered a particularly pressing one by the magisterial analysis of Pooley (2017), who convincingly demonstrates that diffeomorphism invariance cannot be equated with background independence via reference to the comparison between diffeomorphism invariant special relativity and general relativity.
Our framework is built upon the analysis of the transformation behaviour of nomic and temporal structures under kinematical transformations (defined as endomorphisms on the space of kinematically possible models). We define the sub-regions within the space of kinematical transformations on which a structure is absolute (does not vary) and relative (does vary), and then classify temporal structures via the intersection of their absolute and relative regions with those of nomic structures. Of particular relevance is the case where there is non-trivial overlap between the relative region of some temporal structure and both the absolute and relative regions of the nomic structure. We classify such structure as dynamical surplus structure.
Finally, based upon the analysis of temporal foliation structure, we provide a new account of background independence. On our account background independence manifestly fails for diffeomorphism invariant special relativity (since the temporal foliation structure is non-dynamical surplus structure) and obtains for general relativity (since the temporal foliation structure is dynamical surplus structure). This formalises the intuitive idea of the contingent independence of the dynamical models of general relativity from a spatiotemporal background.
Week 9 (12th December): Barry Loewer (Philosophy, Rutgers).
Title: The package deal account of fundamental laws
Abstract: In my talk I will describe an account of the metaphysics of fundamental laws I call “the Package Deal Account” (PDA), a descendant of Lewis’s BSA that differs from it in a number of ways. First, it does not require the truth of a thesis Lewis calls “Humean Supervenience” (HS) and so can accommodate relations and structures found in contemporary physics that conflict with HS. Second, it is not committed to Humeanism, since it is compatible with there being fundamental necessary connections in nature. Third, it greatly develops the criteria for what counts in favor of a candidate system to determine laws. Fourth and most significantly, unlike the BSA, the PDA does not presuppose metaphysically primitive elite properties/quantities that Lewis calls “perfectly natural properties/quantities”.
Week 1 (Thursday May 2) Olivier Darrigol (Paris): Ludwig Boltzmann: Atoms, mechanics, and probability
Statistical mechanics owes much more to Ludwig Boltzmann than is usually believed. In his attempts to derive thermodynamic and transport phenomena from deeper microphysical assumptions, he explored at least five different approaches: one based on mechanical analogies (with periodic mechanical systems or with statistical ensembles), one based on Maxwell’s collision formula, one based on the ergodic hypothesis, one based on combinatorial probabilities, and one based on the existence of thermodynamic equilibrium. I will sketch these various approaches and show how Boltzmann judged them and interconnected them. I will also argue that in general Boltzmann was more concerned with constructive efficiency than with precise conceptual foundations. Basic questions on the reality of atoms or on the nature of probabilities played only a secondary role in his theoretical enterprise.
Week 2 (Thursday May 9) No seminar
Week 3 (Thursday May 16) Jeremy Butterfield (Cambridge): On realism and functionalism about space and time
(Joint work with Henrique Gomes.) In this talk I will set the recent literature on spacetime functionalism in context, by discussing two traditions that form its background. First: functionalism in general, as a species of inter-theoretic reduction. Second: relationism about space and time.
Week 4 (Thursday May 23): No seminar
Week 5 (Thursday May 30) Henrique Gomes (Perimeter and Cambridge): Gauge, boundaries, and the connection form
Forces such as electromagnetism and gravity reach across the Universe; they are the long-ranged forces in current physics. And yet, in many applications—theoretical and otherwise—we only have access to finite domains of the world. For instance, in computations of entanglement entropy, e.g. for black holes or cosmic horizons, we raise boundaries to separate the known from the unknown. In this talk, I will argue we do not understand gauge theory as well as we think we do, when boundaries are present.
For example: it is agreed by all that we should aim to construct variables that have a one-to-one relationship to the theory’s physical content within bounded regions. But puzzles arise if we try to combine definitions of strictly physical variables in different parts of the world. This is most clearly gleaned by first employing the simplest tool for unique physical representation—gauge fixings—and then proceeding to stumble on its shortcomings. Whereas fixing the gauge can often shave off unwanted redundancies, the coupling of different bounded regions requires the use of gauge-variant elements. Therefore, the coupling of regional observables is inimical to gauge-fixing, as usually understood. This resistance to gauge-fixing has led some to declare the coupling of subsystems to be the raison d’être of gauge [Rov14].
Here I will explicate the problems mentioned above and illustrate a possible resolution. The resolution was introduced in a recent series of papers [Gomes & Riello JHEP ’17, Gomes & Riello PRD ’18, Gomes, Hopfmüller & Riello NPB ’19]. It requires the notion of a connection-form in the field-space of gauge theories. Using this tool, a modified version of symplectic geometry—here called ‘horizontal’—is possible. Independently of boundary conditions, this formalism bestows on each region a physically salient, relational notion of charge: the horizontal Noether charge. It is relational in the sense that it only uses the different fields already at play and relationships between them; no new “edge-mode” degrees of freedom are required.
The guiding requirement for the construction of the relational connection-form is simply a harmonious melding of regional and global observables. I show that the ensuing notions of regional relationalism are different from other attempts at resolving the problem posed by gauge symmetries for bounded regions. The distinguishing criterion is what I consider to be the ‘acid test’ of local gauge theories in bounded regions: does the theory license only those regional charges which depend solely on the original field content? In a satisfactory theory, the answer should be “yes”. Lastly, I will introduce explicit examples of relational connection-forms, and show that the ensuing horizontal symplectic geometry passes this ‘acid test’.
Week 7 (Thursday June 13) Harvey Brown (Oxford): Aspects of probabilistic reasoning in physics
Week 8 (Thursday June 20) Martin Lesourd (Oxford): The epistemic constraints that observers face in General Relativistic spacetimes
What can observers know about their future and their own spacetime on the basis of their past lightcones? Important contributions to this question were made by Earman, Geroch, Glymour, Malament and more recently Manchak. Building on the work of Malament, Manchak (2009/10/11/14) has been able to prove what seem to be general and far reaching results to the effect that observers in general relativistic spacetimes face severe epistemic constraints. Here, after reviewing these results, I shall present a number of new results which grant observers a more positive epistemic status. So in short: if Malament and Manchak’s results were cause for a form of epistemic pessimism, then the ones presented here will strive for a more optimistic outlook.
Week 1 (17 Jan) Laurenz Hudetz (LSE): The conceptual-schemas account of interpretation
This talk addresses the question of what it is to interpret a formalism. It aims to provide a general framework for talking about interpretations (of various kinds) in a rigorous way. First, I clarify what I mean by a formalism. Second, I give an account of what it is to establish a link between a formalism and data. For this purpose, I draw on the theory of relational databases to explicate what data schemas and collections of data are. I introduce the notion of an interpretive link using tools from mathematical logic. Third, I address the question of how a formalism can be interpreted in a way that goes beyond a connection to data. The basic idea is that one extends a data schema to an ontological conceptual schema and links the formalism to this extended schema. I illustrate this account of interpretation by means of simple examples and highlight how it can be fruitfully applied to address conceptual problems in philosophy of science.
Week 2 (24 Jan) Patrick Dürr (Oxford): Philosophy of the Dead: Nordström Gravity
The talk revisits Nordström Gravity (NoG) – arguably the most plausible relativistic scalar theory of gravity before the advent of GR. In Nordström’s original formulation (1913), NoG1, it appears to describe a scalar gravitational field on Minkowski spacetime. In 1914, Fokker and Einstein showed that NoG is mathematically equivalent to a purely metric theory, NoG2 – strikingly similar to the Einstein Equations. Like GR, NoG2 is plausibly construed as a geometrised theory of gravity: In NoG2, gravitational effects are reduced to manifestations of non-Minkowskian spacetime structure. Both variants of NoG, and their claimed physical equivalence, give rise to three conundrums that we will explore.
(P1) The (Weak) Equivalence Principle appears to be violated in NoG1 – but holds in NoG2. (P2) In NoG1, it appears unproblematic to ascribe an energy-momentum tensor to the gravitational scalar. In trying to define gravitational energy in NoG2, by contrast, one faces problems akin to those in GR. (P3) In NoG1, total (i.e. gravitational plus non-gravitational) energy-momentum appears to be conserved, whereas in NoG2, no obvious candidate for gravitational energy is available, and it is furthermore unclear whether non-gravitational energy is conserved.
Insofar as NoG1 and NoG2 are equivalent formulations of the same theory, (P1)–(P3) appear paradoxical. For a resolution, I will proffer a metaphysically perspicuous articulation of NoG’s ontology that explicates the equivalence, and propose an instructive reformulation.
Week 3 (31 Jan) Yang-Hui He (City, London): Exceptional and Sporadic
I give an overview of a host of seemingly unrelated classification problems in mathematics which turn out to be intimately connected through some deep Correspondences.
Some of these relations were uncovered by focusing on so-called exceptional structures which abound: in geometry, there are the Platonic solids; in algebra, there are the exceptional Lie algebras; in group theory, there are the sporadic groups, to name but a few. A champion for such Correspondences is Prof. John McKay.
I also present how these correspondences have subsequently been harnessed by theoretical physicists. My goal is to take a casual promenade in this land of ‘exceptionology’, reviewing some classic results and presenting some new ones based on joint work with Prof. McKay.
Week 4 (7 Feb) Casey McCoy (Stockholm): Why is h-bar a universal constant?
Some constants are relevant for all physical phenomena: the speed of light pertains to the causal structure of spacetime and hence to all physical processes. Others are relevant only to particular interactions, for example the fine structure constant. Why is Planck’s constant one of the former? I motivate the possibility that there could have been multiple, interaction-specific ‘Planck constants’. Although there are indeed good reasons to eschew this possibility, it suggests a further question: what is the actual conceptual significance of Planck’s constant in quantum physics? I argue that it lies principally in relating classical and quantum physics, and I draw out two main perspectives on this relation, represented by the views of Landsman and Lévy-Leblond.
Week 5 (14 Feb) Joanna Luc (Cambridge): Generalised manifolds as basic objects of General Relativity
Week 6 (21 Feb) Katie Robertson (Birmingham): Reducing the second law of thermodynamics: the demons and difficulties.
In this talk I consider how to reduce the second law of thermodynamics. I first discuss what I mean by ‘reduction’, and emphasise how functionalism can be helpful in securing reductions. Then I articulate the second law, and discuss the ramifications of Maxwell’s demon for the status of the second law. Should we take Maxwell’s means-relative approach? I argue no: the second law is not a relic of our inability to manipulate individual molecules in the manner of the nimble-fingered demon. When articulating the second law, I take care to distinguish it from the minus first law (Brown and Uffink 2001); the latter concerns the spontaneous approach to equilibrium, whereas the former concerns the thermodynamic entropy change between equilibrium states, especially in quasi-static processes. Distinguishing these laws alters the reductive project (Luczak 2018): locating what Callender (1999) calls the Holy Grail – a non-decreasing statistical mechanical quantity to call entropy – is neither necessary nor sufficient. Instead, we must find a quantity that plays the right role, viz. being constant in adiabatic quasi-static processes and increasing in non-quasi-static processes, and I argue that the Gibbs entropy plays this role.
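For reference, the Gibbs entropy of a phase-space distribution $\rho$ is the textbook quantity (stated here for orientation, not a detail of the talk)
\[
S_{G}[\rho] \;=\; -\,k_{B} \int_{\Gamma} \rho(x)\,\ln \rho(x)\, dx ,
\]
which, by Liouville's theorem, is exactly constant under Hamiltonian evolution; the interpretive work in a reduction of the kind described is then to show that it increases in the right way in non-quasi-static processes.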
Week 7 (28 Feb) Alex Franklin (KCL): On the Effectiveness of Effective Field Theories
Effective Quantum Field Theories (EFTs) are effective insofar as they apply within a prescribed range of length-scales, but within that range they predict and describe with extremely high accuracy and precision. I will argue that the effectiveness of EFTs is best explained in terms of the scaling behaviour of the parameters. The explanation relies on distinguishing autonomy with respect to changes in microstates (autonomy_ms), from autonomy with respect to changes in microlaws (autonomy_ml), and relating these, respectively, to renormalisability and naturalness. It is claimed, pace Williams (2016), that the effectiveness of EFTs is a consequence of each theory’s renormalisability rather than its naturalness. This serves to undermine an important argument in favour of the view that only natural theories are kosher. It has been claimed in a number of recent papers that low-energy EFTs are emergent from their high-energy counterparts, see e.g. Bain (2013) and Butterfield (2014). Building on the foregoing analysis, I will argue that the emergence of EFTs may be understood in terms of the framework developed in Franklin and Knox (2018).
Week 8 (7 Mar) Karen Crowther (Geneva): As Below, So Before: Synchronic and Diachronic Conceptions of Emergence in Quantum Gravity
The emergence of spacetime from quantum gravity appears to be a striking case-study of emergent phenomena in physics (albeit one that is speculative at present). There are, in fact, two different cases of emergent spacetime in quantum gravity: a “synchronic” conception, applying between different levels of description, and a “diachronic” conception, applying between the universe “before” and after the “Big Bang” in quantum cosmology. The purpose of this paper is to explore these two different senses of spacetime emergence, and to see whether, and how, they can be understood in the context of specific extant accounts of emergence in physics.
Week 1 (11 Oct): David Wallace (USC): Spontaneous symmetry breaking in finite quantum systems: a decoherent-histories approach.
Abstract: Spontaneous symmetry breaking (SSB) in quantum systems, such as ferromagnets, is normally described as (or as arising from) degeneracy of the ground state; however, it is well established that this degeneracy only occurs in spatially infinite systems, and even better established that ferromagnets are not spatially infinite. I review this well-known paradox, and consider a popular solution where the symmetry is explicitly broken by some external field which goes to zero in the infinite-volume limit; although this is formally satisfactory, I argue that it must be rejected as a physical explanation of SSB since it fails to reproduce some important features of the phenomenology. Motivated by considerations from the analogous classical system, I argue that SSB in finite systems should be understood in terms of the approximate decoupling of the system’s state space into dynamically-isolated sectors, related by a symmetry transformation; I use the formalism of decoherent histories to make this more precise and to quantify the effect, showing that it is more than sufficient to explain SSB in realistic systems and that it goes over in a smooth and natural way to the infinite limit.
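For readers unfamiliar with the decoherent-histories formalism Wallace invokes, its central object is the decoherence functional (standard definitions, quoted only for orientation):
\[
D(\alpha,\alpha') \;=\; \mathrm{Tr}\!\left[ C_{\alpha}\, \rho\, C_{\alpha'}^{\dagger} \right],
\qquad
C_{\alpha} \;=\; P_{\alpha_{n}}(t_{n}) \cdots P_{\alpha_{1}}(t_{1}),
\]
where the $P_{\alpha_k}(t_k)$ are Heisenberg-picture projectors onto the properties making up the history $\alpha$. When $D(\alpha,\alpha') \approx 0$ for $\alpha \neq \alpha'$, the histories decohere and $p(\alpha) = D(\alpha,\alpha)$ can consistently be treated as probabilities; the claim above is that histories confined to different symmetry-related sectors decohere in this sense.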
Week 2 (18 Oct): Simon Saunders (Oxford): Understanding indistinguishability.
Abstract: Indistinguishable entities are usually thought to be exactly alike, but not so in quantum mechanics — nor need the concept be restricted to the quantum domain. The concept, properly understood, can be applied in any context in which the only dynamically-salient state-independent properties are the same (so a fortiori in classical statistical mechanics).
The connection with the Gibbs paradox, and the reasons why the concept of classical indistinguishable particles has been so long resisted, are also discussed. The latter involves some background in the early history of quantum mechanics. This work builds on a recent publication, ‘The Gibbs Paradox’, Entropy (2018) 20(8), 552.
Week 3 (25 Oct): NO SEMINAR
Week 4 (1 Nov): NO SEMINAR
Week 5 (8 Nov): Tushar Menon (Oxford): Rotating spacetimes and the relativistic null hypothesis
Abstract: Recent work in the physics literature demonstrates that, in particular classes of rotating spacetimes, physical light rays do not, in general, traverse null geodesics. In this talk, I discuss its philosophical significance, both for the clock hypothesis (in particular, for Sam Fletcher’s recent purported proof thereof for light clocks), and for the operational meaning of the metric field in GR. (This talk is based on joint work with James Read and Niels Linnemann)
Week 6 (15 Nov): Jonathan Barrett (Oxford): Quantum causal models
Abstract: From a discussion of how to generalise Reichenbach’s Principle of the Common Cause to the case of quantum systems, I will develop a formalism to describe any set of quantum systems that have specified causal relationships between them. This formalism is the nearest quantum analogue to the classical causal models of Judea Pearl and others. I will illustrate the formalism with some simple examples, and, if time permits, describe the quantum analogue of a well-known classical theorem that relates the causal relationships between random variables to conditional independences in their joint probability distribution. I will end with some more speculative remarks concerning the significance of the work for the foundations of quantum theory.
Week 7 (22 Nov): James Nguyen (IoP/UCL): Interpreting Models: A Suggestion and its Payoffs
Abstract: I suggest that the representational content of a scientific model is determined by a `key’ associated with it. A key allows the model’s users to draw inferences about its target system. Crucially, these inferences need not be a matter of proposed similarity (structural or otherwise) to its target but can allow for much more conventional associations between model features and features to be exported. Although this is a simple suggestion, it has broad ramifications. I point out that it allows us to re-conceptualise what we mean by `idealisation’: just because a model is a distortion of its target (in the relevant respects, and even essentially so), this does not entail that it is a misrepresentation. I show how, once we think about idealisation in this way, various puzzles in the philosophy of science dissolve (the role of fictional models in science; the non-factivity of understanding; the problem of inconsistent models; and others).
Week 1 (26 Apr) Doreen Fraser (University of Waterloo): Renormalization and scaling transformations in quantum field theory
Abstract: Renormalization is a mathematical operation that needs to be carried out to make empirical sense of quantum field theories (QFTs). How should renormalized QFTs be physically interpreted? A prominent non-perturbative strategy for renormalizing QFTs is to draw on formal analogies with classical statistical mechanical models for critical phenomena. This strategy is implemented in both the Wilsonian renormalization group approach and the Euclidean approach to constructing models of the Wightman axioms. Each approach features a scaling transformation, but the scaling transformations are given different interpretations. I will analyze the two interpretations and argue that the approaches offer compatible and complementary perspectives on renormalization.
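As background to the scaling transformations at issue, the schematic Wilsonian picture may be useful (a standard sketch, not specific to the talk). Coarse-graining by a scale factor $b$ maps couplings and correlation lengths as
\[
g_{i} \;\longmapsto\; b^{\,y_{i}}\, g_{i} \quad \text{(near a fixed point)},
\qquad
\xi \;\longmapsto\; \xi / b ,
\]
so that critical points, where $\xi$ diverges, sit at fixed points of the flow. It is this shared structure that underwrites the formal analogy between QFT renormalization and classical statistical mechanical models of critical phenomena.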
Week 2 (3 May) Jeremy Butterfield (Cambridge): On Dualities and Equivalences Between Physical Theories
Abstract: The main aim of this paper is to make a remark about the relation between (i) dualities between theories, as `duality’ is understood in physics and (ii) equivalence of theories, as `equivalence’ is understood in logic and philosophy. The remark is that in physics, two theories can be dual, and accordingly get called `the same theory’, though we interpret them as disagreeing—so that they are certainly not equivalent, as `equivalent’ is normally understood. So the remark is simple: but, I shall argue, worth stressing—since often neglected. My argument for this is based on the account of duality by De Haro: which is illustrated here with several examples, from both elementary physics and string theory. Thus I argue that in some examples, including in string theory, two dual theories disagree in their claims about the world. I also spell out how this remark implies a limitation of proposals (both traditional and recent) to understand theoretical equivalence as either logical equivalence or a weakening of it.
Week 3 (10 May) Matt Farr (Cambridge): The C Theory of Time
Abstract: Does time have a direction? Intuitively, it does. After all, our experiences, our thoughts, even our scientific explanations of phenomena are time-directed; things evolve from earlier to later, and it would seem unnecessary and indeed odd to try to expunge such talk from our philosophical lexicon. Nevertheless, in this talk I will make the case for what I call the C theory of time: in short, the thesis that time does not have a direction. I will do so by making the theory as palatable as possible, and this will involve giving an account of why it is permissible and indeed useful to talk in time-directed terms, what role time-directed explanations play in science, and why neither of these should commit us to the claim that reality is fundamentally directed in time. On the positive side, I will make the case that the C theory’s deflationism about the direction of time offers a superior account of time asymmetries in physics than rival time-direction-realist accounts.
Week 4 (17 May) Seth Lloyd (MIT): The future of Quantum Computing.
Abstract: Technologies for performing quantum computation have progressed rapidly over the past few years. This talk reviews recent advances in constructing quantum computers, and discusses applications for the kinds of quantum computers that are likely to be available in the near future. While full-blown error-corrected quantum computers capable of factoring large numbers are some way away, quantum computers with 100–1000 qubits should be available soon. Such devices should be able to solve problems in quantum simulation and quantum machine learning that are beyond the reach of the most powerful classical computers. The talk will also discuss social aspects of quantum information, including the proliferation of start-ups and the integration of quantum technologies in industry.
Week 5 (24 May) Emily Thomas (Durham): John Locke: Newtonian Absolutist about Time?
Abstract: John Locke’s metaphysics of time is relatively neglected, but he discussed time throughout his career, from his unpublished 1670s writings to his 1690 Essay Concerning Human Understanding, and beyond. The vast majority of scholars who have written on Locke’s metaphysics of time argue that his views underwent an evolution: from relationism, the view that time and space are relations holding between bodies; to Newtonian absolutism, on which time and space are real, substance-like entities that are associated with God’s eternal duration and infinite immensity. Against this majority reading, I argue that Locke remained a relationist in the Essay, and throughout his subsequent career.
Week 6 (31 May) Minhyong Kim (Oxford): Three Dualities
Abstract: This talk will present a few contemporary points of view on geometry, with particular emphasis on dualities. Most of the talk will be concerned with mathematical practice, but will be interspersed with brief and superficial allusions to physics.
Week 7 (7 Jun) Owen Maroney (Oxford): TBC
Week 8 (14 Jun) Tushar Menon (Oxford): TBC
Abstract: TBC
Week 2 (25 Jan) Giulio Chiribella (Oxford): The Purification Principle
Abstract: Over the past decades there has been intense work aimed at the reconstruction of quantum theory from principles that can be formulated without the mathematical framework of Hilbert spaces and operator algebras. The motivation is that such principles could provide a new angle from which to understand the counterintuitive quantum laws, that they could reveal connections between different quantum features, and that they could provide guidance for constructing new quantum algorithms and for extending quantum theory to new physical scenarios.
In this talk I will discuss one such principle, called the Purification Principle. Informally, the idea of the Purification Principle is that it is always possible to combine the incomplete information gathered by an observer with a maximally informative picture of the physical world. This idea resonates with Schrödinger’s famous remark that “[in quantum theory] the best possible knowledge of a whole does not necessarily imply the best possible knowledge of its parts”, a property that he called “not one, but rather the characteristic trait of quantum mechanics, the one that enforces its entire departure from classical lines of thought.”
References for this talk:
GC, GM D’Ariano, and P Perinotti, Probabilistic Theories with Purification, Phys. Rev. A 81, 062348 (2010)
GC, GM D’Ariano, and P Perinotti, Informational Derivation of Quantum Theory, Phys. Rev. A 84, 012311 (2011)
GM D’Ariano, GC, and P Perinotti, Quantum Theory From First Principles, Cambridge University Press (2017).
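In standard quantum theory the Purification Principle is realised by the textbook purification of mixed states; stated here only for orientation, not as the formulation used in these papers: for any mixed state $\rho_{A} = \sum_{i} p_{i} |i\rangle\langle i|_{A}$ there is a pure state on an enlarged system,
\[
|\Psi\rangle_{AB} = \sum_{i} \sqrt{p_{i}}\; |i\rangle_{A} |i\rangle_{B},
\qquad
\mathrm{Tr}_{B}\, |\Psi\rangle\langle\Psi|_{AB} = \rho_{A},
\]
unique up to reversible transformations on the purifying system $B$. The reconstruction programme takes this existence-and-uniqueness property, rather than the Hilbert-space formula, as the primitive axiom.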
Week 3 (1 Feb) Christopher Timpson (Oxford): Concepts of fundamentality: the case of information
Abstract: A familiar – perhaps traditional – conception of fundamental physics is as follows: physics presents the world as being populated at the basic level (or the pro tem basic level) by various fields and/or particles, and it furnishes equations describing how these items evolve and interact with one another over time, equations couched primarily in terms of such properties as energy, mass, and various species of charge. This evolution may be conceived to take place against a fixed background spatiotemporal arena of some kind, or, alternatively, the arena may be conceived to have a metrical structure which should itself be treated as a particular kind of field, subject to dynamical equations (as in General Relativity). But in recent years, stemming primarily from developments in quantum information theory and related thinking in quantum theory itself, an alternative conception has been gaining momentum, one which sees the concept of information playing a much more fundamental role in physics than this traditional picture would allow. This alternative conception urges that information must be recognised as a fundamental physical quantity, a quantity which in some sense should be conceived as being on a par with energy, mass, or charge. Perhaps even, according to strong versions of the conception, information should be seen as the new subject-matter for physics, displacing the traditional conception of material particles and fields as being the fundamental subject matter.
These are bold and interesting claims on the part of information, and regarding what is said to follow from the successes of quantum information theory. Are they well-motivated? Are they true? I will explore these issues by attempting, first of all, to delineate various ways in which something (object, structure, property, or concept) might be thought to be physically fundamental. On at least one prima facie plausible carving of the notion of fundamentality, one should distinguish between the logically independent notions of ontological fundamentality, nomological fundamentality, and explanatory fundamentality. Something is ontologically fundamental if it is posited by the most fundamental description of the world; it is nomologically fundamental if reference to it is necessary to state the physical laws in some domain; and it is explanatorily fundamental if positing it is necessary in explanation and understanding. It is straightforward to show that the concept of information is not ontologically fundamental (or more cagily put: that there is nothing at all about the successes of quantum information theory that would warrant thinking it to be ontologically fundamental), but the questions of nomological and explanatory fundamentality of information are harder to settle so succinctly.
Week 4 (8 Feb) David Wallace (USC): Why black hole information loss is paradoxical
Abstract: I distinguish between two versions of the black hole information-loss paradox. The first arises from apparent failure of unitarity on the spacetime of a completely evaporating black hole, which appears to be non-globally-hyperbolic; this is the most commonly discussed version of the paradox in the foundational and semipopular literature, and the case for calling it “paradoxical” is less than compelling. But the second arises from a clash between a fully-statistical-mechanical interpretation of black hole evaporation and the quantum-field-theoretic description used in derivations of the Hawking effect. This version of the paradox arises long before a black hole completely evaporates, seems to be the version that has played a central role in quantum gravity, and is genuinely paradoxical. After explicating the paradox, I discuss the implications of more recent work on AdS/CFT duality and on the ‘Firewall paradox’, and conclude that the paradox is if anything now sharper.
Week 5 (15 Feb) Dennis Lehmkuhl (Caltech): The History and Interpretation of Black Hole Solutions
Abstract: The history and philosophy of physics community has spent decades grappling with the interpretation of the Einstein field equations and their central mathematical object, the metric tensor. However, the community has not undertaken a detailed study of the solutions to these equations. This is all the more surprising as this is where the meat is in terms of the physics: the confirmation of general relativity through the 1919 observation of light being bent by the Sun, as well as the derivation of Mercury’s perihelion precession, both depend much more on the use of the Schwarzschild solution than on the actual field equations. Indeed, Einstein had not yet found the final version of the field equations when he predicted the perihelion precession of Mercury. The same is true of the recently discovered black holes and gravitational waves: they are, arguably, tests of particular solutions to the Einstein equations and of how these solutions are applied to certain observations. Indeed, what is particularly striking is that all the solutions just mentioned are solutions to the vacuum Einstein equations rather than to the full Einstein equations. This is surprising given that black holes are the most massive objects in the universe, and yet they are adequately represented by solutions to the vacuum field equations.
In this talk, I shall discuss the history and the diverse interpretations and applications of three of the most important (classes of) black hole solutions: I will address especially how the free parameters in these solutions were identified as representing the mass, charge and angular momentum of isolated objects, and what kind of coordinate conditions made it possible to apply the solutions in order to represent point particles, stars, and black holes.
Week 6 (22 Feb) No seminar
Week 7 (1 Mar) Carina Prunkl (Oxford): Black Hole Entropy is Entropy and not (necessarily) Information
Abstract: The comparison of geometrical properties of black holes with classical thermodynamic variables reveals surprising parallels between the laws of black hole mechanics and the laws of thermodynamics. Since Hawking’s discovery that black holes, when coupled to quantum matter fields, emit radiation at a temperature proportional to their surface gravity, the idea that black holes are genuine thermodynamic objects with a well-defined thermodynamic entropy has become more and more popular. Surprisingly, arguments that justify this assumption are both sparse and rarely convincing. Most of them rely on an information-theoretic interpretation of entropy, which is itself a highly debated topic in the philosophy of physics. Given the amount of disagreement about the nature of entropy and the second law on the one hand, and the growing importance of black hole thermodynamics for the foundations of physics on the other, it is desirable to achieve a deeper understanding of the notion of entropy in the context of black hole mechanics. I discuss some of the pertinent arguments that aim at establishing the identity of black hole surface area (times a constant) and thermodynamic entropy, and show why these arguments are not satisfactory. I then present a simple model of a black hole Carnot cycle to establish that black hole entropy is genuine thermodynamic entropy, which does not require an information-theoretic interpretation.
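For reference, the identification under discussion is the Bekenstein–Hawking entropy together with the Hawking temperature (standard formulas, quoted for orientation):
\[
S_{BH} \;=\; \frac{k_{B}\, c^{3} A}{4 G \hbar},
\qquad
T_{H} \;=\; \frac{\hbar\, \kappa}{2\pi\, k_{B}\, c},
\]
where $A$ is the horizon area and $\kappa$ the surface gravity. The question raised above is whether $S_{BH}$ earns the title of thermodynamic entropy through its role in cycles such as the Carnot cycle, independently of any information-theoretic gloss.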
Week 8 (8 Mar) Nicolas Teh (Notre Dame): TBC
Week 2 (October 19) Henrique Gomes, Perimeter Institute, Waterloo.
“New vistas from the many-instant landscape”
Abstract: Quantum gravity has many conceptual problems. Amongst the best known is the “Problem of Time”: gravitational observables are global in time, while we would really like to obtain probabilities for processes taking us from an observable at one time to another, later one. Tackling these questions using relationalism will be the preferred strategy during this talk. The ‘relationalist’ approach leads us to shed much redundant information and enables us to identify a reduced configuration space as the arena on which physics unfolds, a goal still beyond our reach in general relativity. Moreover, basing our ontology on this space has far-reaching consequences. One is that it suggests a natural interpretation of quantum mechanics: a form of ‘Many-Worlds’ which I have called Many-Instant Bayesianism. Another is that the gravitational reduced configuration space has a rich, highly asymmetric structure, as yet largely unexplored, which singles out preferred, non-singular and homogeneous initial conditions for a wave-function of the universe.
Week 3 (October 26) Jonathan Halliwell, Imperial College, London
“Comparing conditions for macrorealism: Leggett-Garg inequalities vs no-signalling in time”
Abstract: Macrorealism is the view that a macroscopic system evolving in time possesses definite properties which can be determined without disturbing the future or past state.
I discuss two different types of conditions which were proposed to test macrorealism in the context of a system described by a single dichotomic variable Q. The Leggett-Garg (LG) inequalities, the most commonly-studied test, are only necessary conditions for macrorealism, but I show that when the four three-time LG inequalities are augmented with a certain set of two-time inequalities also of the LG form, Fine’s theorem applies and these augmented conditions are then both necessary and sufficient. A comparison is carried out with a very different set of necessary and sufficient conditions for macrorealism, namely the no-signaling in time (NSIT) conditions proposed by Brukner, Clemente, Kofler and others, which ensure that all probabilities for Q at one and two times are independent of whether earlier or intermediate measurements are made in a given run, and do not involve (but imply) the LG inequalities. I argue that tests based on the LG inequalities have the form of very weak classicality conditions and can be satisfied, in quantum mechanics, in the face of moderate interference effects, but those based on NSIT conditions have the form of much stronger coherence witness conditions, satisfied only for zero interference. The two tests differ in their implementation of non-invasive measurability so are testing different notions of macrorealism. The augmented LG tests are indirect, entailing a combination of the results of different experiments with only compatible quantities measured in each experimental run, in close analogy with Bell tests, and are primarily tests for macrorealism per se. By contrast the NSIT tests entail sequential measurements of incompatible quantities and are primarily tests for non-invasiveness.
Based on the two papers J. J. Halliwell, Phys. Rev. A 93, 022123 (2016) and Phys. Rev. A 96, 012121 (2017).
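For concreteness, the conditions compared above can be written down explicitly (standard forms, quoted for orientation). For a dichotomic variable $Q(t) = \pm 1$ measured at times $t_1 < t_2 < t_3$, with $C_{ij} = \langle Q(t_i) Q(t_j) \rangle$, the four three-time LG inequalities are
\[
1 + s_{1} C_{12} + s_{2} C_{23} + s_{3} C_{13} \;\ge\; 0,
\qquad s_{i} = \pm 1, \;\; s_{1} s_{2} s_{3} = 1,
\]
while a typical NSIT condition demands that the distribution of $Q$ at a later time be unchanged by an earlier measurement:
\[
p_{2}(q_{2}) \;=\; \sum_{q_{1}} p_{12}(q_{1}, q_{2}).
\]
The LG inequalities can hold in the presence of moderate interference; the NSIT condition fails for any non-zero interference, which is the asymmetry the abstract describes.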
Week 4 (November 2) Sam Fletcher, Dept of Philosophy, University of Minnesota.
“Emergence and scale’s labyrinth”
Abstract: I give precise formal definitions of a hierarchy of emergence concepts for properties described in models of physical theories, showing how some of these concepts are compatible with reductive (but not strictly deductive) relationships between these theories. Besides applying fruitfully to a variety of physical examples, these concepts do not in general track autonomy or novelty along a single simple dimensional scale such as energy, length, or time, but can instead involve labyrinthine balancing relationships between these scales. This complicates the usual view of emergence as relating linearly (or even partially) ordered levels.
Week 5 (November 9) James Ladyman, Dept of Philosophy, University of Bristol
“Why interpret quantum mechanics?”
Abstract: I discuss recent arguments that QM needs no interpretation, and that it should be understood as non-representational. I consider how the interpretation of quantum mechanics relates to various kinds of realism, and the fact that the theory is known not to be a complete theory of the world. I tentatively suggest a position that is sceptical about the way the interpretation of quantum mechanics is often undertaken, in particular about the idea of the ontology of the wavefunction, but that stops short of regarding quantum states as non-representational.
Week 6 (November 16) Hasok Chang, Department of History and Philosophy of Science, University of Cambridge.
“Beyond truth-as-correspondence: Realism for realistic people”
Abstract: In this paper I present arguments against the epistemological ideal of “correspondence”, namely the deeply entrenched notion that empirical truth consists in the match between our theories and the world. The correspondence ideal of knowledge is not something we can actually pursue, for two reasons: it is difficult to discern a coherent sense in which statements correspond to language-independent facts, and we do not have the kind of independent access to the “external world” that would allow us to check the alleged statement–world correspondence. The widespread intuition that correspondence is a pursuable ideal is based on an indefensible kind of externalist referential semantics. The idea that a scientific theory “represents” or “corresponds to” the external world is a metaphor grounded in other human epistemic activities that are actually representational. This metaphor constitutes a serious and well-entrenched obstacle in our attempt to understand scientific practices, and overcoming it will require some disciplined thinking and hard work. On the one hand, we need to continue with real practices of representation in which correspondence can actually be judged; on the other hand, we should stop the illegitimate transfer of intuitions from those practices over to realms in which there are no representations being made and no correspondence to check.
Week 7 (November 23) Alison Fernandes, Department of Philosophy, University of Warwick.
“The temporal asymmetry of chance”
Abstract: The Second Law of Thermodynamics can be derived from the fact that an isolated system at non-maximal entropy is overwhelmingly likely to increase in entropy over time. Such derivations seem to make ineliminable use of objective worldly probabilities (chances). But some have argued that if the fundamental laws are deterministic, there can be no non-trivial chances (Popper, Lewis, Schaffer). Statistical-mechanical probabilities are merely epistemic, or otherwise less real than ‘dynamical’ chances. Many have also thought that chance is intrinsically temporally asymmetric. It is part of the nature of chance that the past is ‘fixed’, and that all non-trivial chances must concern future events. I’ll argue that it is no coincidence that many have held both views: the rejection of deterministic chance is driven by an asymmetric picture of chance in which the past produces the future. I’ll articulate a more deflationary view, according to which more limited temporal asymmetries of chance reflect contingent asymmetries of precisely the kind reflected in the Second Law. The past can be chancy after all.
Week 8 (November 30) Nancy Cartwright, Department of Philosophy, University of Durham and University of California, San Diego
“What are pragmatic trials in medicine good for?”
Abstract: There is widespread call for increasing use of pragmatic trials in both medicine and social science nowadays. These are randomised controlled trials (RCTs) that are administered in ‘more realistic’ circumstances than standard, i.e. with more realistic treatment/programme delivery (e.g. busier, less well-trained doctors/social workers) and a wider range of recipients (e.g. ones that self select into treatment or have ‘co-morbidities’ or are already subject to a number of other interventions that might interfere with the treatment). Pragmatic trial results are supposed to be more readily ‘generalisable’ than results from those with more rigid protocols.
We argue that this is a mistake. Trials, pragmatic or otherwise, can only provide results about those individuals enrolled in the trial. Anything else requires assumptions from elsewhere, and generally strong ones. Based on a common understanding of what causal principles look like in these domains, this talk explains what results can be well warranted by an RCT and warns against the common advice to take the criteria for admission to a trial to be indicative of where else its results may be expected to hold.
Joint work with Sarah Wieten.
15th June 2017 Harvey Brown (Oxford), “QBism: the ineffable reality behind ‘participatory realism’”
Abstract: The recent philosophy of Quantum Bayesianism, or QBism, represents an attempt to solve the traditional puzzles in the foundations of quantum theory by denying the objective reality of the quantum state. Einstein had hoped to remove the spectre of nonlocality in the theory by also assigning an epistemic status to the quantum state, but his version of this doctrine was recently proved to be inconsistent with the predictions of quantum mechanics. In this talk, I present plausibility arguments, old and new, for the reality of the quantum state, and expose what I think are weaknesses in QBism as a philosophy of science.
8th June 2017 David Jackson (Independent), “How to build a unified field theory from one dimension”
Abstract: Motivated in part by Kant’s work on the a priori nature of space and time, and in part by the conceptual basis of general relativity, a physical theory deriving from a single temporal dimension will be presented. We describe how the basic arithmetic composition of the real line, representing the one dimension of time, itself incorporates structures that can be interpreted as underpinning both the geometrical form of space and the physical form of matter. This unification scheme has a number of features in common with a range of physical theories based on ‘extra dimensions’ of space, while being heavily constrained in deriving from a single dimension of time. A proposal for combining general relativity with quantum theory in the context of this approach will be summarised, along with the connections made with empirical observations. In addition to extracts from Kant, further references to sources in the philosophical literature will be cited, in particular with regard to the relation between mathematical objects and physical structures.
1st June 2017 Jo E. Wolff (KCL), “Quantities – Metaphysical Choicepoints”
Abstract: Beginning from the assumption that quantities are (rich) relational structures, I ask what kind of ontology arises from attributing this sort of structure to physical attributes. There are three natural questions to ask about relational structures: what are the relations, what are the relata, and what is the relationship between relata and relations? I argue that for quantities, the choicepoints available in response to these questions are:
1) intrinsicalism vs. structuralism
2) substantivalism vs. anti-substantivalism
3) absolutism vs. comparativism
In the remainder of the talk I sketch which of these choices make for coherent candidate ontologies for quantities.
18th May 2017 Paul Tappenden (Independent), “Quantum fission”.
Abstract: Sixty years on there is still deep division about Everett’s proposal. Some very well informed critics take the whole idea to be unintelligible whilst there are important disagreements amongst supporters. I argue that Everett’s fundamental and radical idea is to do with metaphysics rather than physics: it is to abolish the physically possible/actual dichotomy. I show that the idea is intelligible via a thought experiment involving a novel version of the mind-body relation which I have already used in the defence of semantic internalism.
The argument leads to a fission interpretation of branching rather than a “divergence” interpretation of the sort first suggested by David Deutsch in 1985 and more recently developed in different ways by Simon Saunders, David Wallace and Alastair Wilson. I discuss the two metaphysical problems which fission faces: transtemporal identity and the identification of probability with relative branch measure. And I claim that the Born rule applies transparently if the alternative mind-body relation is accepted. The upshot is that what Wallace calls the Radical View replaces his preferred Conservative View, with the result that there are some disturbing consequences such as inevitable personal survival in quantum Russian roulette scenarios and David Lewis’s suggestion that Everettians should “shake in their shoes”.
11th May 2017 Michela Massimi (Edinburgh), “Perspectival models in contemporary high-energy physics”.
Abstract: In recent times perspectivism has come under attack. Critics have argued that when it comes to modelling, perspectivism is either redundant, or, worse, it leads to a plurality of incompatible or even inconsistent models about the same target system. In this paper, I attend to two tasks. First, I try to get clear about the charge of metaphysical inconsistency that has been levelled against perspectivism and identify some key assumptions behind it. Second, I propose a more positive role for perspectivism in some modelling practices by identifying a class of models, which I call “perspectival models”. I illustrate this class of models with examples from contemporary LHC physics.
4th May 2017 Tushar Menon (Oxford), “Affine Balance: Algebraic functionalism and the ontology of spacetime”.
Abstract: Our two most empirically successful theories, quantum mechanics and general relativity, are at odds with each other when it comes to several foundational issues. The deepest of these issues is also, perhaps, the easiest to grasp intuitively: what is spacetime? Most attempts at theories of quantum gravity do not make it obvious which degrees of freedom are spatiotemporal. In non-general-relativistic theories, the matter/spacetime distinction is adequately tracked by the dynamical/non-dynamical object distinction. General relativity is different, because spacetime, if taken to be represented jointly (but with some redundancy) by a smooth manifold and a metric tensor field, is not an immutable, inert, external spectator. The dynamical/non-dynamical distinction appears no longer to do the work for us; we appear to need something else. In the first part of this talk, I push back against the idea that the dynamical/non-dynamical distinction is doomed. I motivate a more general algebraic characterisation of spacetime based on Eleanor Knox’s spacetime functionalism and the Helmholtzian notion of free mobility. I argue that spacetime is most usefully characterised by its (local) affine structure.
In the second part of this talk, I consider the debate between Brown and Pooley on the one hand and Janssen and Balashov on the other, about the direction of the arrow of explanation in special relativity. Characterising spacetime using algebraic functionalism, I demonstrate that only Brown’s position is neutral on the substantivalism–relationalism debate. This neutrality may prove to be highly desirable in an interpretation of spacetime that one hopes will generalise to theories of quantum gravity—it seems like poor practice to impose restrictions on an acceptable quantum theory of spacetime based on metaphysical prejudices or approximately true effective field theories. The flexibility of Brown’s approach affords us a theory-dependent a posteriori identification of spacetime, and arguably counts in its favour. I conclude by gesturing towards how this construction might be useful in extending Brown’s view to theories of quantum gravity.
27th April 2017 Peter Hylton (UIC) “Analyticity, yet again”.
Abstract: Although Quine became famous for having rejected the analytic-synthetic distinction, he actually accepted it for the last quarter century of his philosophical career. Yet his doing so makes no difference to his other views. In this talk, I press the question ‘Why not?’, in the hope of gaining insight into Quine’s views, and especially his differences with Carnap. I contrast Quine’s position not only with Carnap’s but also with those of Putnam, as represented in his paper ‘The Analytic and the Synthetic’. Putnam there puts forward an answer to the ‘Why not?’ question which is, I think, fairly widely accepted, and perhaps taken to be Quine’s answer as well—wrongly so taken, I claim.
9th March 2017 Michael Hicks (Physics, Oxford), “Explanatory (a)symmetries and Humean laws”.
Abstract: Recently, Lange (2009) has argued that some physical principles are explanatorily prior to others. Lange’s main examples are symmetry principles, which he argues explain both conservation laws–through Noether’s Theorem–and features of dynamic laws–for example, the Lorentz invariance of QFT. Lange calls these “meta-laws” and claims that his account of laws, which is built around the counterfactual stability of groups of statements, can capture the fact that they govern or constrain first-order laws, whereas other views, principally Humean views, cannot. After reviewing the problem Lange presents, I’ll show how the explanatory asymmetry between laws that he describes follows naturally on a Humean understanding of what laws are: particularly informative summaries. The Humean should agree with Lange that symmetry principles are explanatorily prior to both conservation laws and dynamic theories like QFT; however, I’ll argue that Lange is wrong to consider these principles “meta-laws” which in some way govern first-order laws, and I’ll show that on the Humean view, the explanation of these two sorts of laws from symmetry principles is importantly different.
2nd March 2017 Ronnie Hermens (Philosophy, Groningen), “How ψ-ontic are ψ-ontic models?”.
Abstract: Ψ-ontology theorems show that in any ontic model that is able to reproduce the predictions of quantum mechanics, the quantum state must be encoded by the ontic state. Since the ontic state determines what is real, and it determines the quantum state, the quantum state must be real. But how does this precisely work in detail, and what does the result imply for the status of the quantum state in ψ-ontic models? As a test case scenario I will look at the ontic models of Meyer, Kent and Clifton. Since these models are able to reproduce the predictions of quantum mechanics, they must be ψ-ontic. On the other hand, quantum states play no role whatsoever in the construction of these models. Thus finding out which ontic state belongs to which quantum state is a non-trivial task. But once that is done, we can ask: does the quantum state play any explanatory role in these models, or is the fact that they are ψ-ontic a mere mathematical nicety?
23rd February 2017 Simon Saunders (Philosophy, Oxford), “Quantum monads”.
Abstract: The notion of object (and with it ontology) in the foundations of quantum mechanics has been made both too easy and too hard: too easy, because particle distinguishability, and with it the use of proper names, is routinely assumed; too hard, because a number of metaphysical demands have been made of it (for example, in the notion of ‘primitive ontology’ in the writings of Shelly Goldstein and his collaborators). The measurement problem is also wrapped up with it. I shall first give an account of quantum objects adequate to the thin sense required of quantification theory (in the tradition of Frege and Quine); I then consider an alternative, much thicker notion that is strongly reminiscent of Leibniz’s monadology. Both apply to the Everett interpretation and to dynamical collapse theories (sans primitive ontology).
16th February 2017 Steven Balbus (Physics, Oxford), “An anthropic explanation for the nearly equal angular diameters of the Sun and Moon”.
Abstract: The very similar angular sizes of the Sun and Moon as subtended at the Earth are generally portrayed as coincidental. In fact, close angular size agreement is a direct and inevitable mathematical consequence of even roughly comparable lunar and solar tidal amplitudes. I will argue that the latter was a biological imperative for the evolution of land vertebrates and can be understood on the basis of anthropic arguments. Comparable tidal amplitudes from two astronomical sources, with close but distinct frequencies, lead to strongly modulated forcing: in essence, spring and neap tides. The appearance of this surely very rare tidal pattern must be understood in the context of the paleogeography and biology of the Late Devonian period. Two great land masses were separated by a broad opening tapering to a very narrow, shallow-sea strait. The combination of this geography and modulated tidal forces would have been conducive to forming a rich inland network of shallow but transient (and therefore isolating) tidal pools at an epoch when fishy tetrapods were evolving and acquiring land navigational skills. I will discuss the recent fossil evidence showing that important transitional species lived in habitats strongly influenced by intermittent tides. It may be that any planet capable of harbouring a contemplative species displays a moon in its sky very close in angular diameter to that of its sun.
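The arithmetic behind the "direct and inevitable" claim is worth spelling out (a back-of-the-envelope reconstruction, not taken verbatim from the talk). A body of mass $M$, density $\rho$, radius $R$ and distance $d$ raises a tidal amplitude scaling as
\[
\frac{M}{d^{3}} \;\propto\; \frac{\rho R^{3}}{d^{3}} \;=\; \rho\, \theta^{3},
\qquad\text{so}\qquad
\frac{\theta_{\mathrm{Moon}}}{\theta_{\mathrm{Sun}}} \;=\; \left( \frac{T_{\mathrm{Moon}}}{T_{\mathrm{Sun}}} \cdot \frac{\rho_{\mathrm{Sun}}}{\rho_{\mathrm{Moon}}} \right)^{1/3},
\]
where $\theta = R/d$ is the angular radius and $T$ the tidal amplitude. The cube root compresses everything: tidal amplitudes and mean densities each differing by factors of two or three still yield angular diameters agreeing to within tens of percent.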
9th February 2017 Alastair Wilson (Philosophy, Birmingham), “How multiverses might undercut the fine-tuning argument”.
Abstract: In the context of the probabilistic fine-tuning argument that moves from the fragility of cosmological parameters with respect to life to the existence of a divine designer, appealing to the existence of a multiverse has in general seemed problematically ad hoc. The situation looks rather different, though, if there is independent evidence from physics for a multiverse. I will argue that independently-motivated multiverses can be undercutting defeaters for the fine-tuning argument; but whether the argument is indeed undercut still depends on open questions in fundamental physics and cosmology. I will also argue that Everettian quantum mechanics opens up new routes to undercutting the fine-tuning argument, although by itself it is insufficient to do so.
26th January 2017 Antony Eagle (Philosophy, Adelaide), “Quantum location”.
Abstract: Many metaphysicians are committed to the existence of a location relation between material objects and spacetime, useful in characterising debates in the metaphysics of persistence and time, particularly in the context of trying to map ordinary objects into models of relativity theory. Relatively little attention has been paid to location in quantum mechanics, despite the existence of a position observable in QM being one of the few things metaphysicians know about it. I want to explore how the location relation(s) postulated by metaphysicians might be mapped onto the framework of QM, with particular reference to the idea that there might be such a thing as being indeterminately located.
19th January 2017 Emily Adlam (DAMPT, Cambridge), “Quantum mechanics and global determinism”.
Abstract: We propose that the information-theoretic features of quantum mechanics are perspectival effects which arise because experiments on local variables can only uncover a certain subset of the correlations exhibited by an underlying deterministic theory. We show that the no-signalling principle, information causality, and strong subadditivity can be derived in this way; we then use our approach to propose a new resolution of the black hole information paradox.
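For reference, two of the information-theoretic features mentioned have compact standard statements (textbook facts, not the derivations of the talk). No-signalling says that one party's marginal statistics are independent of the other party's choice of measurement, and strong subadditivity constrains the von Neumann entropy $S$:
\[
\sum_{b} p(a, b \mid x, y) \;=\; p(a \mid x) \quad \text{for all } y,
\qquad
S(\rho_{ABC}) + S(\rho_{B}) \;\le\; S(\rho_{AB}) + S(\rho_{BC}).
\]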
24 Nov 2016 David Glick (Philosophy, Oxford), “Swapping Something Real: Entanglement Swapping and Entanglement Realism”.
Abstract: Experiments demonstrating entanglement swapping have been alleged to challenge realism about entanglement. Seevinck (2006) claims that entanglement “cannot be considered ontologically robust” while Healey (2012) claims that entanglement swapping “undermines the idea that ascribing an entangled state to quantum systems is a way of representing some new, non-classical, physical relation between them.” My aim in this paper is to show that realism is not threatened by the possibility of entanglement swapping, but rather, should be informed by the phenomenon. I argue—expanding the argument of Timpson and Brown (2010)—that ordinary entanglement swapping cases present no new challenges for the realist. With respect to the delayed-choice variant discussed by Healey, I claim that there are two options available to the realist: (a) deny these are cases of genuine swapping (following Egg (2013)) or (b) allow for the existence of entanglement relations between timelike separated regions. This latter option, while radical, is not incoherent and has been suggested in quite different contexts. While I stop short of claiming that the realist must take this option, doing so allows one to avoid certain costs associated with Egg’s account. I conclude by noting several important implications of entanglement swapping for how one thinks of entanglement relations more generally.
17 Nov 2016 Jim Weatherall (UC Irvine), “On Stuff: The Field Concept in Classical Physics”.
Abstract: Discussions of physical ontology often come down to two basic options. Either the basic physical entities are particles, or else they are fields. I will argue that, in fact, it is not at all clear what it would mean to say that the world consists of fields. Speaking classically (i.e., non-quantum-ly), there are many different sorts of thing that go by the name “field”, each with different representational roles. Even among those that have some claim to being “fundamental” in the appropriate sense, it does not seem that a single interpretational strategy could apply in all cases. I will end by suggesting that standard strategies for constructing quantum theories of fields are not sensitive to the different roles that “fields” can play in classical physics, which adds a further difficulty to interpreting quantum field theory. Along the way, I will say something about an old debate in the foundations of relativity theory, concerning whether the spacetime metric is a “geometrical” or “physical” field. The view I will defend is that the metric is much like the electromagnetic field: geometrical!
10 Nov 2016 Lina Jansson (Nottingham), ‘Newton’s Methodology Meets Humean Supervenience about Laws of Nature’.
Abstract: Earman and Roberts [2005a,b] have argued for Humean supervenience about laws of nature based on an argument from epistemic access. In rough outline, their argument relies on the claim that if Humean supervenience is false, then we cannot have any empirical evidence in favour of taking a proposition to be a law of nature as opposed to merely accidentally true. I argue that Newton’s methodology in the Principia provides a counterexample to their claim. In particular, I argue that the success or failure of chains of subjunctive reasoning is empirically accessible, and that this provides a way of gaining empirical evidence for or against a proposition being a law of nature (even under the assumption that Humean supervenience fails).
27 Oct 2016 Ryan Samaroo (Bristol), “The Principle of Equivalence is a Criterion of Identity”.
Abstract: In 1907 Einstein had an insight into gravitation that he would later refer to as ‘the happiest thought of my life’. This is the hypothesis, roughly speaking, that bodies in free fall do not ‘feel’ their own weight. This is what is formalized in ‘the equivalence principle’. The principle motivated a critical analysis of the Newtonian and 1905 inertial frame concepts, and it was indispensable to Einstein’s argument for a new concept of inertial motion. A great deal has been written about the equivalence principle. Nearly all of this work has focused on the content of the principle, but its methodological role has been largely neglected. A methodological analysis asks the following questions: what kind of principle is the equivalence principle? What is its role in the conceptual framework of gravitation theory? I maintain that the existing answers are unsatisfactory and I offer new answers.
20 Oct 2016 Niels Martens (Oxford, Philosophy), “Comparativism about Mass in Newtonian Gravity”.
Abstract: Absolutism about mass asserts that facts about mass ratios are true in virtue of intrinsic masses. Comparativism about mass denies this. I present and dismiss Dasgupta’s (2013) recent empirical adequacy argument in favour of comparativism, in the context of Newtonian Gravity. I develop and criticise two new versions of comparativism. Regularity Comparativism is a liberalisation of Huggett’s Regularity Relationalism (2006), which uses the Mill-Ramsey-Lewis Best Systems Account to respond to Newton’s bucket argument in the analogous relationalism-substantivalism debate. To the extent that this approach works at all, I argue that it works too well: it throws away the massive baby with the bathwater. A Machian-flavoured version of comparativism is more promising: although it faces no knock-down objection, it is not without its own problems.
13 Oct 2016 David Wallace (USC, Philosophy), “Fundamental and emergent geometry in Newtonian gravity”.
Abstract: Using as a starting point recent and apparently incompatible conclusions by Simon Saunders (Philosophy of Science 80 (2013) pp.22-48) and Eleanor Knox (British Journal for the Philosophy of Science 65 (2014) pp.863-880), I revisit the question of the correct spacetime setting for Newtonian physics. I argue that understood correctly, these two theories make the same claims both about the background geometry required to define the theory, and about the inertial structure of the theory. In doing so I illustrate and explore in detail the view — espoused by Knox, and also by Harvey Brown (Physical Relativity, OUP 2005) — that inertial structure is defined by the dynamics governing subsystems of a larger system. This clarifies some interesting features of Newtonian physics, notably (i) the distinction between using the theory to model subsystems of a larger whole and using it to model complete Universes, and (ii) the scale-relativity of spacetime structure.
19 May 2016 Eleanor Knox (KCL, Philosophy), “Novel Explanation and the Emergence of Phonons”.
Abstract: Discussions of emergence in the philosophy of physics literature often emphasise the role of asymptotic limits in understanding the novelty of emergent phenomena while leaving the nature of the novelty in question unexplored. I’ll put forward an account of explanatory novelty that can accommodate examples involving asymptotic limits, but also applies in other cases. The emergence of phonons in a crystal lattice will provide an example of a description with novel explanatory power that does not depend on asymptotic limits for its novelty. The talk is based on joint work with Alex Franklin.
12th May 2016 Yvonne Geyer (Oxford, Maths), “Rethinking Quantum Field Theory: Traces of String Theory in Yang-Mills and Gravity”.
Abstract: A multitude of recent developments point towards the need for a different understanding of Quantum Field Theories. After a general introduction, I will focus on one specific example involving one of the most natural and fundamental observables: the scattering amplitude. In Yang-Mills theory and Einstein gravity, scattering amplitudes exhibit a simplicity that is completely obscured by the traditional approach to Quantum Field Theories, and that is remarkably reminiscent of the worldsheet models describing string theory. In particular, this implies that – without additional input – the theories describing our universe, Yang-Mills theory and gravity, exhibit traces of string theory.
28th April 2016 Roman Frigg (LSE, Philosophy), “Further Rethinking Equilibrium”.
Abstract: In a recent paper we proposed a new definition of Boltzmannian equilibrium and showed that in the case of deterministic dynamical systems the new definition implies the standard characterisation but without suffering from its well-known problems and limitations. We now generalise this result to stochastic systems and show that the same implication holds. We then discuss an existence theorem for equilibrium states and illustrate with a number of examples how the theorem works. Finally, first steps towards understanding the relation between Boltzmannian and Gibbsian equilibrium are made.
25 Feb 2016 Stephen J. Blundell (Oxford, Physics), ‘Emergence, causation and storytelling: condensed matter physics and the limitations of the human mind’
Abstract: The physics of matter in the condensed state is concerned with problems in which the number of constituent particles is vastly greater than can be comprehended by the human mind. The physical limitations of the human mind are fundamental and restrict the way in which we can interact with and learn about the universe. This presents challenges for developing scientific explanations that are met by emergent narratives, concepts and arguments that have a non-trivial relationship to the underlying microphysics. By examining examples within condensed matter physics, and also from cellular automata, I show how such emergent narratives efficiently describe elements of reality.
18 Feb 2016 Jean-Pierre Llored (University of Clermont-Ferrand), ‘From quantum physics to quantum chemistry’.
Abstract: The first part, which is mainly anthropological, summarizes the results of a survey that we carried out in several research laboratories in 2010. Our aims were to understand what quantum chemists currently do, what kind of questions they ask, and what kind of problems they have to face when creating new theoretical tools both for understanding chemical reactivity and predicting chemical transformations.
The second part, which is mainly historical, highlights the philosophical underpinnings that structure the development of quantum chemistry from the 1920s to the present day. In so doing, we will discuss chemical modeling in quantum chemistry, and the different strategies used in order to define molecular features using atomic ones and the molecular surroundings at the same time. We will show how computers and new laboratories emerged simultaneously, and reshaped the culture of quantum chemistry. This part goes on to describe how the debate between ab initio and semi-empirical methods turned out to be highly controversial because of underlying scientific and metaphysical assumptions about, for instance, the nature of the relationships between science and the possibility for human knowledge to reach a complete description of the world.
The third and last part is about the philosophical implications for the study of quantum chemistry and that of ‘quantum sciences’ at large. It insists on the fact that the history of quantum chemistry is also a history of the attempts of chemists to establish the autonomy of their theories and methods with respect to physical, mathematical, and biological theories. According to this line of argument, chemists gradually proposed new concepts in order to circumvent the impossibility of performing full analytical calculations and to make the language of classical structural chemistry and that of quantum chemistry compatible. Among different topics, we will query the meaning of a chemical bond, the impossibility of deducing a molecular shape from the Schrödinger equation, the way quantum chemistry is invoked to explain the periodic table, and the possibility of going beyond the Born-Oppenheimer approximation. We would like to show that quantum chemistry is neither physics nor chemistry nor applied mathematics, and that philosophical debates which turned out to be relevant in quantum physics are not necessarily so in quantum chemistry, whereas other philosophical questions arise…
11th Feb 2016 David Wallace (Oxford, Philosophy), ‘Who’s afraid of coordinate systems?’.
Abstract: Coordinate-based approaches to physical theories remain standard in mainstream physics but are largely eschewed in foundational discussion in favour of coordinate-free differential-geometric approaches. I defend the conceptual and mathematical legitimacy of the coordinate-based approach for foundational work. In doing so, I provide an account of the Kleinian conception of geometry as a theory of invariance under symmetry groups; I argue that this conception continues to play a very substantial role in contemporary mathematical physics and indeed that supposedly ‘coordinate-free’ differential geometry relies centrally on this conception of geometry. I discuss some foundational and pedagogical advantages of the coordinate-based formulation and briefly connect it to some remarks of Norton on the historical development of geometry in physics during the establishment of the general theory of relativity.
21 Jan 2016 Philipp Roser (Clemson), ‘Time and York time in quantum theory’.
Abstract: Classical general relativity has no notion of a physically meaningful time parameter and one is free to choose one’s coordinates at will. However, when attempting to quantise the theory this freedom leads to difficulties, the notorious `problem of time’ of canonical quantum gravity. One way to overcome this obstacle is the identification of a physically fundamental time parameter. Interestingly, although purely aesthetic at the classical level, different choices of time parameter may in principle lead to different quantum phenomenologies, as I will illustrate with a simple model. This means that an underlying physically fundamental notion of time may (to some extent) be detectable via quantum effects.
For various theoretical reasons one promising candidate for a physical time parameter is `York time’, named after James York and his work on the initial-value problem of general relativity, where its importance first became apparent. I will derive the classical and quantum dynamics with respect to York time for certain cosmological models and discuss some of the unconventional structural features of the resulting quantum theory.
3 Dec 2015 Thomas Moller-Nielsen (Oxford), “Symmetry and the Interpretation of Physical Theories”
Abstract: In this talk I examine two (putative) ways in which symmetries can be used as tools for physical theory interpretation. First, I examine the extent to which symmetries can be used as a guide to a theory’s ideology: that is, as a means of determining which quantities are real, according to the theory. Second, I examine the extent to which symmetries can be used as a guide to a theory’s ontology: that is, as a means of determining which objects are real, according to the theory. I argue that symmetries can only legitimately be used in the first, but not the second, sense.
26 Nov 2015 Ellen Clarke (All Souls), “Biological Ontology”.
Abstract: All sciences invent kind concepts: names for categories that gather particulars together according to their possession of some scientifically interesting properties. But kind concepts must be well-motivated: they need to do some sort of work for us. I show how to define one sort of scientific concept – that of the biological individual, or organism – so that it does plenty of work for biology. My view understands biological individuals as defined by the process of evolution by natural selection. I will engage in some speculation about how the situation compares in regard to other items of scientific ontology.
19 November 2015 Dan Bedingham (Oxford) “Dynamical Collapse of the Wavefunction and Relativity”.
Abstract: When a collapse of the wave function takes place it has an instantaneous effect over all space. One might then assume that a covariant description is not possible since a collapse whose effects are simultaneous in one frame of reference would not have simultaneous effects in a boosted frame. I will show, however, that in fact a consistent covariant picture emerges in which the collapsing wave function depends on the choice of foliation of space time, but that suitably defined local properties are unaffected by this choice. The formulation of a covariant description is important for models attempting to describe the collapse of wave function as a dynamical process. This is a very direct approach to solving the quantum measurement problem. It involves simply giving the wave function the stochastic dynamics that it has in practice. We present some proposals for relativistic versions of dynamical collapse models.
12 November 2015 Karim Thébault (Bristol) “Regarding the ‘Hole Argument’ and the ‘Problem of Time’”
Abstract: The canonical formalism of general relativity affords a particularly interesting characterisation of the infamous hole argument. It also provides a natural formalism in which to relate the hole argument to the problem of time in classical and quantum gravity. In this paper I will examine the connection between these two much discussed problems in the foundations of spacetime theory along two interrelated lines. First, from a formal perspective, I will consider the extent to which the two problems can and cannot be precisely and distinctly characterised. Second, from a philosophical perspective, I will consider the implications of various responses to the problems, with a particular focus upon the viability of a ‘deflationary’ attitude to the relationalist/substantivalist debate regarding the ontology of space-time. Conceptual and formal inadequacies within the representative language of canonical gravity will be shown to be at the heart of both the canonical hole argument and the problem of time. Interesting and fruitful work at the interface of physics and philosophy relates to the challenge of resolving such inadequacies.
5 November 2015 Joseph Melia (Oxford) “Haecceitism, Identity and Indiscernibility: (Mis-)Uses of Modality in the Philosophy of Physics”
Abstract: I examine a number of arguments involving modality and identity in the Philosophy of Physics. In particular, (a) Wilson’s use of Leibniz’ law to argue for emergent entities; (b) the implications of anti-haecceitism for the Hole argument in GR and QM; (c) the proposal to “define” or “ground” or “account” for identity via some version of Principle of the Identity of Indiscernibles or the Hilbert-Bernays formula.
Against (a) I argue that familiar problems with applications of Leibniz’ law in modal contexts block the argument for the existence of emergent entities;
On (b), I argue that (i) there are multiple and incompatible definitions of haecceitism at play in the literature; (ii) that, properly understood, haecceitism *is* a plausible position; indeed, even supposedly mysterious haecceities do not warrant the criticism of obscurity they have received; (iii) we do better to solve the Hole argument by other means than a thesis about the range and variety of possibilities.
On (c), I argue that recent attempts to formulate a principle of PII fit to serve as a definition of identity are either trivially true, or must draw distinctions between different kinds of properties that are problematic: better to accept identity as primitive.
Some relevant papers/helpful reading (I will not, of course, assume familiarity with these papers)
J. Ladyman: `On the Identity and Diversity of Objects in a Structure.’ Proc. Aristotelian Supp Soc. (2007).
D. Lewis: `On the Plurality of Worlds’, Chp.4. (1986)
O. Pooley: `Points, Particles and Structural Realism’, in Rickles, French and Saatsi, `The Structural Foundations of Quantum Gravity.’ (2006)
S. Saunders: `Are Quantum Particles Objects?’ Analysis (2006)
J. Wilson: `Non-Reductive Physicalism and Degrees of Freedom’, BJPS (2010)
29 October 2015 Chiara Marletto (Oxford, Materials), “Constructor theory of information (and its implications for our understanding of quantum theory)”.
Abstract: Constructor Theory is a radically new mode of explanation in fundamental physics. It demands a local, deterministic description of physical reality – expressed exclusively in terms of statements about what tasks are possible, what are impossible, and why. This mode of explanation has recently been applied to provide physical foundations for the theory of information – expressing, as conjectured physical principles, the regularities of the laws of physics necessary for there to be what has been so far informally called ‘information’. In constructor theory, one also expresses exactly the relation between classical information and the so-called ‘quantum information’ – showing how properties of the latter arise from a single, constructor-theoretic constraint. This provides a unified conceptual basis for the quantum theory of information (which was previously lacking one qua theory of information). Moreover, the arising of quantum-information like properties in a deterministic, local framework also has implications for the understanding of quantum theory, and of its successors.
22 October 2015 Bryan Roberts (LSE) “The future of the weakly interacting arrow of time”.
Abstract: This talk discusses the evidence for time asymmetry in fundamental physics. The main aim is to propose some general templates characterising how time asymmetry can be detected among weakly interacting particles. We will then step back and evaluate how this evidence bears on time asymmetry in future physical theories beyond the standard model.
15 October 2015 Oscar Dahlsten (Oxford Physics) “The role of information in work extraction”.
Abstract: Since Maxwell’s daemon it has been known that extra information can give more work. I will discuss how this can be made concrete and quantified. I will focus on so-called single-shot statistical mechanics. There one can derive expressions for the maximum work one can extract from a system given one’s information. Only one property of the state one assigns to the system matters: the entropy. There are subtleties, including which entropy to use. I will also discuss the relation to fluctuation theorems, and our recent paper on realising a photonic Maxwell’s daemon.
Some references, I will certainly not assume you have looked at them:
arXiv:0908.0424 The work value of information, Dahlsten, Renner, Rieper and Vedral
arXiv:1009.1630 The thermodynamic meaning of negative entropy, del Rio, Aaberg, Renner, Dahlsten and Vedral
arXiv:1207.0434 A measure of majorisation emerging from single-shot statistical mechanics, Egloff, Dahlsten, Renner, Vedral
arXiv:1409.3878 Introducing one-shot work into fluctuation relations, Yunger Halpern, Garner, Dahlsten, Vedral
arXiv:1504.05152 Equality for worst-case work at any protocol speed, Dahlsten, Choi, Braun, Garner, Yunger Halpern, Vedral
arXiv:1510.02164 Photonic Maxwell’s demon, Vidrighin, Dahlsten, Barbieri, Kim, Vedral and Walmsley
11 June 2015 Tim Pashby (University of Southern California), ‘Schroedinger’s Cat: It’s About Time (Not Measurement)’.
Abstract: I argue for a novel resolution of Schroedinger’s cat paradox by paying particular attention to the role of time and tense in setting up the problem. The quantum system at the heart of the paradoxical situation is an unstable atom, primed for indeterministic decay at some unknown time. The conventional account gives probabilities for the result of instantaneous measurements and leads to the unacceptable conclusion that the cat can neither be considered alive nor dead until the moment the box is opened (at a time of the experimenter’s choosing). To resolve the paradox I reject the status of the instantaneous quantum state as `truthmaker’ and show how a quantum description of the situation can be given instead in terms of time-dependent chance propositions concerning the time of decay, without reference to measurement.
The conclusions reached in the case of Schroedinger’s cat may be generalized throughout quantum mechanics with the means of event time observables (interpreted as conditional probabilities), which play the role of the time of decay for an arbitrary system. Conventional quantum logic restricts its attention to the lattice of projections, taken to represent possible properties of the system. I argue that event time observables provide a compelling reason to look beyond the lattice of projections to the algebra of effects, and suggest an interpretation in which propositions are made true by events rather than properties. This provides the means to resolve the Wigner’s friend paradox along similar lines.
4th June 2015 Neil Dewar (Oxford), ‘Symmetry and Interpretation: or, Translations and Translations’.
Abstract: There has been much discussion of whether we should take (exact) symmetries of a physical theory to relate physically equivalent states of affairs, and – if so – what it is that justifies us in so doing. I argue that we can understand the propriety of this move in essentially semantic terms: namely, by thinking of a symmetry transformation as a means of translating a physical theory into itself. To explain why symmetry transformations have this character, I’ll first look at how notions of translation and definition are dealt with in model theory. Then, I’ll set up some analogies between the model-theoretic formalism and the formalism of differential equations, and show how the relevant analogue of self-translation is a symmetry transformation. I conclude with some remarks on how this argument bears on debates over theoretical equivalence.
28th May 2015 George Ellis (Cape Town), ‘On the crucial role of top-down causation in complex systems’.
Abstract: It will be suggested that causal influences in the real world occurring on evolutionary, developmental, and functional timescales are characterized by a combination of bottom up and top down effects. Digital computers give very clear exemplars of how this happens. There are five different distinct classes of top down effects, the key one leading to the existence of complex systems being adaptive selection. The issue of how there can be causal openness at the bottom allowing this to occur will be discussed. The case will be made that while bottom-up self-assembly can attain a certain degree of complexity, truly complex systems such as life can only come into being if top-down processes come into play in addition to bottom up processes. They allow genuine emergence to occur, based in multiple realisability at lower levels of higher level structures and functions.
21 May 2015 Francesca Vidotto (Radboud University, Nijmegen), “Relational ontology from General Relativity and Quantum Mechanics”.
Abstract: Our current most reliable physical theories, General Relativity and Quantum Mechanics, both point towards a relational description of reality. General Relativity builds up the spacetime structure from the notion of contiguity between dynamical objects. Quantum Mechanics describes how physical systems affect one another in the course of interactions. Only local interactions define what exists, and there is no meaning in talking about entities except in terms of local interactions.
14 May 2015 Harvey Brown (Philosophy, Oxford) and Chris Timpson (Philosophy, Oxford) “Bell on Bell’s theorem: the changing face of nonlocality”.
Abstract: Between 1964 and 1990, the notion of nonlocality in Bell’s papers underwent a profound change as his nonlocality theorem gradually became detached from quantum mechanics, and referred to wider probabilistic theories involving correlations between separated beables. The proposition that standard quantum mechanics is itself nonlocal (more precisely, that it violates ‘local causality’) became divorced from the Bell theorem per se from 1976 on, although this important point is widely overlooked in the literature. In 1990, the year of his death, Bell would express serious misgivings about the mathematical form of the local causality condition, and leave ill-defined the issue of the consistency between special relativity and violation of the Bell-type inequality. In our view, the significance of the Bell theorem, both in its deterministic and stochastic forms, can only be fully understood by taking into account the fact that a fully Lorentz-covariant version of quantum theory, free of action-at-a-distance, can be articulated in the Everett interpretation.
7 May 2015 Mauro Dorato (Rome) “The passage of time between physics and psychology”.
Abstract: The three main aims of my paper are: to defend a minimalistic theory of objective becoming that takes STR and GTR at face value; to bring to bear relevant neuro-psychological data in support of the first aim; and to combine the two to try to explain, with as little metaphysics as possible, three key features of our experience of passage, namely:
1. Our untutored belief in a cosmic extension of the now (leading to the postulation of privileged frames and presentism);
2. The becoming more past of the past (leading to Skow’s 2009 moving spotlight and branching spacetimes);
3. The fact that our actions clearly seem to bring new events into being (Broad 1923, Tooley 1997, Ellis 2014).
26 February 2015 James Ladyman (Bristol), ”Do local symmetries have ‘direct empirical consequences’?”
Abstract: Hilary Greaves and David Wallace argue that, contrary to the widespread view of philosophers of physics, local symmetries have direct empirical consequences. They do this by showing that there are `Galileo’s Ship Scenarios’ in theories with local symmetries. In this paper I will argue that the notion of `direct empirical consequences’ is ambiguous and admits of two kinds of precisification. Greaves and Wallace do not purport to show that local symmetries have empirical consequences in the stronger of the two senses, but I will argue that it is the salient one. I will then argue that they are right to focus on Galileo’s Ship Scenarios, and I will offer a characterisation of the form of such arguments from symmetries to empirical consequences. I will then discuss how various examples relate to this template. Finally, I will offer a new argument in defence of the orthodoxy that direct empirical consequences do not depend on local symmetries.
19 February 2015 David Wallace (Oxford): “Fields as Bodies: a unified treatment of spacetime and gauge symmetry”.
Abstract: Using the parametrised representation of field theory (in which the location in spacetime of a part of a field is itself represented by a map from the base manifold to Minkowski spacetime) I demonstrate that in both local and global cases, internal (Yang-Mills-type) and spacetime (Poincare) symmetries can be treated precisely on a par, so that gravitational theories may be regarded as gauge theories in a completely standard sense.
12 February 2015 Erik Curiel (Munich), “Problems with the interpretation of energy conditions in general relativity”.
Abstract: An energy condition, in the context of a wide class of spacetime theories (including general relativity), is, crudely speaking, a relation one demands the stress-energy tensor of matter satisfy in order to try to capture the idea that “energy should be positive”. The remarkable fact I will discuss is that such simple, general, almost trivial seeming propositions have profound and far-reaching import for our understanding of the structure of relativistic spacetimes. It is therefore especially surprising when one also learns that we have no clear understanding of the nature of these conditions, what theoretical status they have with respect to fundamental physics, what epistemic status they may have, when we should and should not expect them to be satisfied, and even in many cases how they and their consequences should be interpreted physically. Or so I shall argue, by a detailed analysis of the technical and conceptual character of all the standard conditions used in physics today, including examination of their consequences and the circumstances in which they are believed to be violated in the actual universe.
22nd January 2015 Jonathan Halliwell (Imperial College London): ”Negative Probabilities, Fine’s Theorem and Quantum Histories”.
Abstract: Many situations in quantum theory and other areas of physics lead to quasi-probabilities which seem to be physically useful but can be negative. The interpretation of such objects is not at all clear. I argue that quasi-probabilities naturally fall into two qualitatively different types, according to whether their non-negative marginals can or cannot be matched to a non-negative probability. The former type, which we call viable, are qualitatively similar to true probabilities, but the latter type, which we call non-viable, may not have a sensible interpretation. Determining the existence of a probability matching given marginals is a non-trivial question in general. In simple examples, Fine’s theorem indicates that inequalities of the Bell and CHSH type provide criteria for its existence. A simple proof of Fine’s theorem is given. The results have consequences for the linear positivity condition of Goldstein and Page in the context of the histories approach to quantum theory. Although it is a very weak condition for the assignment of probabilities it fails in some important cases where our results indicate that probabilities clearly exist. Some implications for the histories approach to quantum theory are discussed.
4 December 2014: Tony Sudbery (Maths, York), “The logic of the future in the Everett-Wheeler understanding of quantum theory”
Abstract: I discuss the problems of probability and the future in the Everett-Wheeler understanding of quantum theory. To resolve these, I propose an understanding of probability arising from a form of temporal logic: the probability of a future-tense proposition is identified with its truth value in a many-valued and context-dependent logic. I construct a lattice of tensed propositions, with truth values in the interval [0, 1], and derive logical properties of the truth values given by the usual quantum-mechanical formula for the probability of histories. I argue that with this understanding, Everett-Wheeler quantum mechanics is the only form of scientific theory that truly incorporates the perception that the future is open.
27 November 2014 : Owen Maroney (Philosophy, Oxford), “How epistemic can a quantum state be?”
Abstract: The “psi-epistemic” view is that the quantum state does not represent a state of the world, but a state of knowledge about the world. It draws its motivation, in part, from the observation of qualitative similarities between the characteristic properties of non-orthogonal quantum wavefunctions and those of overlapping classical probability distributions. It might be suggested that it gives a natural explanation for these properties, which seem puzzling for the alternative “psi-ontic” view. However, for two key similarities, quantum state overlap and quantum state discrimination, it turns out that the psi-epistemic view cannot account for the values shown by quantum theory, and for a wide range of quantum states must rely on the same supposedly puzzling explanations as the “psi-ontic” view.
20 November 2014 : Boris Zilber (Maths, Oxford), “The semantics of the canonical commutation relations”
Abstract: I will argue that the canonical commutation relations, and the way of calculating with them discovered in the 1920s, are in essence a syntactic reflection of a world the semantics of which is still to be reconstructed. The same can be said about the calculus of Feynman integrals. Similar developments have been taking place in pure mathematics since the 1950s in the form of Grothendieck’s schemes and the formalism of non-commutative geometry. I will report on some progress in reconstructing the missing semantics. In particular, for the canonical commutation relations it leads to a theory of representation in finite-dimensional “algebraic Hilbert spaces” which in the limit look rather similar to, although not the same as, conventional Hilbert spaces.
13 November 2014 1st BLOC Seminar, KCL, London : Huw Price (Philosophy, Cambridge), “Two Paths to the Paris Interpretation”
Abstract: In 1953 de Broglie’s student, Olivier Costa de Beauregard, raised what he took to be an objection to the EPR argument. He pointed out that the EPR assumption of Locality might fail, without action-at-a-distance, so long as the influence in question is allowed to take a zigzag path, via the past lightcones of the particles concerned. (He argued that considerations of time-symmetry counted in favour of this proposal.) As later writers pointed out, the same idea provides a loophole in Bell’s Theorem, allowing a hidden variable theory to account for the Bell correlations, without irreducible spacelike influence. (The trick depends on the fact that retrocausal models reject an independence assumption on which Bell’s Theorem depends, thereby blocking the derivation of Bell’s Inequality.) Until recently, however, it seems to have gone unnoticed that there is a simple argument that shows that the quantum world must be retrocausal, if we accept three assumptions (one of them time-symmetry) that would have all seemed independently plausible to many physicists in the years following Einstein’s 1905 discovery of the quantisation of light. While it is true that later developments in quantum theory provide ways of challenging these assumptions – different ways of challenging them, for different views of the ontology of the quantum world – it is interesting to ask whether this new argument provides a reason to re-examine Costa de Beauregard’s ‘Paris interpretation’.
6 November 2014 : Vlatko Vedral (Physics, Oxford), “Macroscopicity”
Abstract: We have a good framework for how to quantify entanglement based, broadly speaking, on two different ideas. One is the fact that local operations and classical communications (LOCCs) do not increase entanglement and hence introduce a natural ordering on the set of entangled states. The other one is inspired by the mean-field theory and quantifies entanglement of a state by how difficult it is to approximate it with disentangled states (the two, while not identical, lead frequently to the same measures). Interestingly, neither of these captures the notion of “macroscopicity”, which asks which states are very quantum and macroscopic at the same time. Here the GHZ states win as the ones with the highest macroscopicity; however, they are not highly entangled from either the LOCC or the mean-field-theory point of view. I discuss different ways of quantifying macroscopicity and exemplify them with a range of quantum experiments producing different many-body states (GHZ and general GHZ states, cluster states, topological states). And the winner for producing the highest degree of macroscopicity is…
30 October 2014 : David Wallace (Philosophy, Oxford), “How not to do the metaphysics of quantum mechanics”
Abstract: Recent years have seen an increasing interest in the metaphysics of quantum theory. While welcome, this trend has an unwelcome side effect: an inappropriate (and often unknowing) identification of quantum theory in general with one particular brand of quantum theory, namely the nonrelativistic mechanics of finitely many point particles. In this talk I’ll explain just why this is problematic, partly by analogy with questions about the metaphysics of classical mechanics.
23 October 2014 : Daniel Bedingham (Philosophy, Oxford), “Time reversal symmetry and collapse models”
Abstract: Collapse models are modifications of quantum theory where the wave function is treated as physically real and collapse of the wave function is a physical process. This introduces a time reversal asymmetry into the dynamics of the wave function since the collapses affect only the future state. However, it is shown that if the physically real part of the model is reduced to the set of points in space and time about which the collapses occur then a collapsing wave function picture can be given both forward and backward in time, in each case satisfying the Born rule (under certain conditions). This implies that if the collapse locations can serve as an ontology then these models can in fact have time reversal symmetry.
16 October 2014 : Dennis Lehmkuhl, “Einstein, Cartan, Weyl, Jordan: The neighborhood of General Relativity in the space of spacetime theories”.
Abstract: Recent years have seen a renewed interest in Newton-Cartan theory (NCT), i.e. Newtonian gravitation theory reformulated in the language of differential geometry. The comparison of this theory with the general theory of relativity (GR) has been particularly interesting, among other reasons, because it allows us to ask how `special’ GR really is, as compared to other theories of gravity. Indeed, the literature so far has focused on the similarities between the two theories, for example on the fact that both theories describe gravity in terms of curvature, and the paths of free particles as geodesics. However, the question of how `special’ GR is can only be properly answered if we highlight differences as much as similarities, and there are plenty of differences between NCT and GR. Furthermore, I will argue that it is not enough to compare GR to simpler theories like NCT; we also have to compare it to more complicated theories, more complicated in terms of geometrical structure and gravitational degrees of freedom. While NCT is the most natural degenerate limit of GR, gravitational theory defined on a Weyl geometry (to be distinguished from a unified field theory based on Weyl geometry) and gravitational scalar-tensor theories (like Jordan-Brans-Dicke theory) are two of the most natural generalisations of GR. Thus, in this talk I will compare Newton-Cartan, GR, Weyl and Jordan-Brans-Dicke theory, to see how special GR really is as compared to its immediate neighborhood in the `space of spacetime theories’.
19 June 2014 : Antony Valentini (Physics, Clemson), “Hidden variables in the early universe II: towards an explanation for large-scale cosmic anomalies”
Abstract: Following on from Part I, we discuss the large-scale anomalies that have been reported in measurements of the cosmic microwave background (CMB) by the Planck satellite. We consider how the anomalies might be explained as the result of incomplete relaxation to quantum equilibrium at long wavelengths on expanding space (during a ‘pre-inflationary phase’) in the de Broglie-Bohm formulation of quantum theory. The first anomaly we consider is the reported large-scale power deficit. This could arise from incomplete relaxation for the amplitudes of the primordial perturbations. It is shown, by numerical simulations, that if the pre-inflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (a specific dependence on wavelength). It is also shown that our scenario is able to produce a power deficit in the observed region and of the observed magnitude, for an appropriate choice of cosmological parameters. The second anomaly we consider is the reported large-scale anisotropy. This could arise from incomplete relaxation for the phases of the primordial perturbations. We report on recent numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. While difficult questions remain concerning the extent to which the data might support our scenario, we argue that we have an (at least) viable model that is able to explain two apparently independent cosmological anomalies at a single stroke.
12 June 2014 : Antony Valentini (Physics, Clemson), “Hidden variables in the early universe I: quantum nonequilibrium and the cosmic microwave background”.
Abstract: Assuming inflationary cosmology to be broadly correct, we discuss recent work showing that the Born probability rule for primordial quantum fluctuations can be tested (and indeed is being tested) by measurements of the cosmic microwave background (CMB). We consider in particular the hypothesis of ‘quantum nonequilibrium’ — the idea that the universe began with an anomalous distribution of hidden variables that violates the Born rule — in the context of the de Broglie-Bohm pilot-wave formulation of quantum field theory. An analysis of the de Broglie-Bohm field dynamics on expanding space shows that relaxation to quantum equilibrium is generally retarded (and can be suppressed) for long-wavelength field modes. If the initial probability distribution is assumed to have a less-than-quantum variance, we may expect a large-scale power deficit in the CMB — as appears to be observed by the Planck satellite. Particular attention is paid to conceptual questions concerning the use of probabilities ‘for the universe’ in modern theoretical and observational cosmology.
[Key references: A. Valentini, ‘Inflationary Cosmology as a Probe of Primordial Quantum Mechanics’, Phys. Rev. D 82, 063513 (2010) [arXiv:0805.0163]; S. Colin and A. Valentini, ‘Mechanism for the suppression of quantum noise at large scales on expanding space’, Phys. Rev. D 88, 103515 (2013) [arXiv:1306.1579].]
5 June 2014 : Mike Cuffaro, “Reconsidering quantum no-go theorems from a computational perspective”
Abstract: Bell’s and related inequalities are misleadingly thought of as “no-go” theorems, except in a highly qualified sense. More properly, they should be understood as imposing constraints on locally causal models which aim to recover quantum mechanical predictions. Thinking of them as no-go theorems is nevertheless mostly harmless in most circumstances; i.e., the necessary qualifications are, in typical discussions of the foundations of quantum mechanics, understood as holding unproblematically. But the situation can change once we leave the traditional context. In the context of a discussion of quantum computation and information, for example, our judgements regarding which locally causal models are to be ruled out as implausible will be different than our similar judgements in the traditional context. In particular, the “all-or-nothing” GHZ inequality, which is traditionally considered to be a more powerful refutation of local causality than statistical inequalities like Bell’s, has very little force in the context of a discussion of quantum computation and information. In this context it is only the statistical inequalities which can legitimately be thought of as no-go theorems. Considering this situation serves to emphasise, I argue, that there is a difference in aim between practical sciences like quantum computation and information, and the foundations of quantum mechanics traditionally construed: describing physical systems as they exist and interact with one another in the natural world is different from describing what one can do with physical systems.
22 May 2014 Elise Crull, “Whence Physical Significance in Bimetric Theories?”
Abstract: Recently there has been lively discussion regarding a certain class of alternative theories to general relativity called bimetric theories. Such theories are meant to resolve certain physical problems (e.g. the existence of ghost fields and dark matter) as well as philosophical problems (e.g. the apparent experimental violation of relativistic causality and assigning physical significance to metrics).
In this talk, I suggest that a new type of bimetric theory wherein matter couples to both metrics may yield further insights regarding those same philosophical questions, while at the same time addressing (perhaps to greater satisfaction!) the physical worries motivating standard bimetric theories.
15 May 2014: Julian Barbour (Independent), “A Gravitational Arrow of Time”.
Abstract: My talk (based on arXiv:1310.5167 [gr-qc]) will draw attention to a hitherto unnoticed way in which scale-invariant notions of complexity and information can be defined in the problem of N point particles interacting through Newtonian gravity. In accordance with these definitions, all typical solutions of the problem with nonnegative energy divide at a uniquely defined point into two halves that are effectively separate histories. They have a common ‘past’ at the point of division but separate ‘futures’. In each half, the arrow from past to future is defined by growth of the complexity and information. All previous attempts to explain how time-symmetric laws can give rise to the various arrows of time have invoked special boundary conditions. In contrast, the complexity and information arrows are inevitable consequences of the form of the gravitational law and nothing else. General relativity shares key structural features with Newtonian gravity, so it may be possible to obtain similar results for Einsteinian gravity.
8 May 2014 : Simon Saunders (Philosophy, Oxford), “Reference to indistinguishables, and other paradoxes”.
Abstract: There is a seeming paradox about indistinguishables: if described only by totally symmetric properties and relations, or by totally (anti)symmetrized states, then how is reference to them possible? And we surely do refer to subsets of indistinguishable particles, and sometimes to individual elementary particles (as in: the electrons, protons, and neutrons of which your computer screen is composed). Call it the paradox of composition.
The paradox can be framed in the predicate calculus as well, in application to everyday things: indistinguishability goes over to weak discernibility. It connects with two other paradoxes: the Gibbs paradox and Putnam’s paradox. It also connects with the h…
Do all quantum trails inevitably lead to Everett?
I’ve been thinking lately about quantum physics, a topic that seems to attract all sorts of crazy speculation and intense controversy, which seems inevitable. Quantum mechanics challenges our deepest-held, most cherished beliefs about how reality works. If you study the quantum world and you don’t come away deeply unsettled, then you simply haven’t properly engaged with it. (I originally wrote “understood” in the previous sentence instead of “engaged”, but the ghost of Richard Feynman reminded me that if you think you understand quantum mechanics, you don’t understand quantum mechanics.)
At the heart of the issue are facts such as that quantum particles operate as waves until someone “looks” at them, or more precisely, “measures” them; then they instantly begin behaving like particles with definite positions. There are other quantum properties, such as spin, which show similar dualities. Quantum objects in their pre-measurement states are referred to as being in a superposition. That superposition appears to instantly disappear when the measurement happens, with the object “choosing” a particular path, position, or state.
How do we know that the quantum objects are in this superposition before we look at them? Because in their superposition states, the spread-out parts interfere with each other. This is evident in the famous double slit experiment, where single particles, shot through the slits one at a time, interfere with themselves to produce the interference pattern that waves normally produce. If you’re not familiar with this experiment and its crazy implications, check out this video:
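To see where that pattern comes from, here's a minimal numerical sketch (all the numbers are made up for illustration, not taken from any real experiment). The key point is that the detection probability is the squared magnitude of the *sum* of the two slit amplitudes; squaring the sum produces a cross term, the interference, that's absent if you just add the two single-slit patterns.

```python
import numpy as np

wavelength = 1.0                 # arbitrary units; all numbers illustrative
k = 2 * np.pi / wavelength
slit_separation = 5.0
screen_distance = 100.0
x = np.linspace(-30, 30, 601)    # detector positions on the screen

# Path length from each slit to each screen position
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

psi1 = np.exp(1j * k * r1) / r1  # amplitude for "went through slit 1"
psi2 = np.exp(1j * k * r2) / r2  # amplitude for "went through slit 2"

# Superposition: square the SUM of amplitudes -> cross term -> fringes
fringes = np.abs(psi1 + psi2) ** 2
# "Which-path known": sum the squares separately -> no fringes
no_fringes = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print("with interference, min/max:", fringes.min(), fringes.max())
print("without,           min/max:", no_fringes.min(), no_fringes.max())
```

With interference, the minimum dips to nearly zero (dark fringes); without it, the pattern stays smooth.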
So, what’s going on here? What happens when the superposition disappears? The mathematics of quantum theory is reportedly rock solid. From a straight calculation standpoint, physicists know what to do. Which leads many of them to decry any attempt to further explain what’s happening. The phrase, “shut up and calculate,” is often exclaimed to pesky students who want to understand what is happening. This seems to be the oldest and most widely accepted attitude toward quantum mechanics in physics.
From what I understand, the original Copenhagen Interpretation was very much an instrumental view of quantum physics. It decried any attempt to explore beyond the observations and mathematics as hopeless speculation. (I say “original” because there are a plethora of views under the Copenhagen label, and many of them make ontological assertions that the original formulation seemed to avoid, such as insisting that there is no other reality than what is described.)
Under this view, the wave of the quantum object evolves under the wave function, a mathematical construct. When a measurement is attempted, the wave function “collapses”, which is just a fancy way of saying it disappears. The superposition becomes a definite state.
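To make the recipe concrete, here's a toy sketch of measurement-as-collapse in code. To be clear, this is just the standard calculational rule (the Born rule), not anyone's proposed mechanism, and the amplitudes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)

# A two-outcome superposition with amplitudes 0.6 and 0.8i
state = np.array([3.0, 4.0j]) / 5.0

def measure(psi):
    """Born rule: pick an outcome with probability |amplitude|^2,
    then replace the state with the matching basis state."""
    probs = np.abs(psi) ** 2
    outcome = rng.choice(len(psi), p=probs)
    collapsed = np.zeros_like(psi)
    collapsed[outcome] = 1.0
    return outcome, collapsed

counts = [0, 0]
for _ in range(10_000):
    outcome, _ = measure(state)
    counts[outcome] += 1

print(counts)  # roughly [3600, 6400], matching the 0.36 / 0.64 Born weights
```

The interpretations disagree about what, if anything, physically happens in that `collapsed[outcome] = 1.0` line; the statistics are all any of them can promise.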
What exactly causes the collapse? What does “measurement” or “observation” mean in this context? It isn’t interaction with just another quantum object. Molecules have been held in quantum superposition, including, as a recent experiment demonstrates, ones with thousands of atoms. For a molecule to hold together, chemical bonds have to form, and for the individual atoms to hold together, the components have to exchange bosons (photons, gluons, etc) with each other. All this happens and apparently fails to cause a collapse in otherwise isolated systems.
One proposal thrown out decades ago, which has long been a favorite of New Age spiritualists and similarly minded people, is that maybe consciousness causes the collapse. In other words, maybe it doesn’t happen until we look at it. However, most physicists don’t give this notion much weight. And the difficulties of engineering quantum computers, which require that a superposition be maintained to get their processing benefits, seem to show (to the great annoyance of engineers) that systems with no interaction with consciousness still experience collapse.
What appears to cause the collapse is interaction with the environment. But what exactly is “the environment”? For an atom in a molecule, the environment would be the rest of the molecule, but an isolated molecule seems capable of maintaining its superposition. How complex or vast does the interacting system need to be to cause the collapse? The Copenhagen Interpretation merely says a macroscopic object, such as a measuring apparatus, but that’s an imprecise term. At what point do we leave the microscopic realm and enter the classical macroscopic realm? Experiments that succeed at isolating ever larger macromolecules seem able to preserve the quantum superposition.
If we move beyond the Copenhagen Interpretation, we encounter propositions that maybe the collapse doesn’t really happen. The oldest of these is the de Broglie-Bohm Interpretation. In it, there is always a particle that is guided by a pilot wave. The pilot wave appears to disappear on measurement, but what’s really happening is that the wave decoheres, losing its coherence into the environment, causing the particle to behave like a freestanding particle.
The problem is that this interpretation is explicitly non-local: interfering with any part of the wave, however distant, instantly changes its effect on the particle. Non-locality, essentially action at a distance, is considered anathema in physics. (Although it’s often asserted that quantum entanglement makes it unavoidable.)
The most controversial proposition is that maybe the collapse never happens and that the superposition continues, spreading to other systems. The elegance of this interpretation is that it essentially allows the system to continue evolving according to the Schrödinger equation, the central equation in the mathematics of quantum mechanics. From an Occam’s razor standpoint, this looks promising.
Well, except for a pesky detail. We don’t observe the surrounding environment going into a superposition. After a measurement, the measuring apparatus and lab setup seem just as singular as they always have. But this is sloppy thinking. Under this proposition, the measuring apparatus and lab have gone into superposition. We don’t observe it because we ourselves have gone into superposition.
In other words, there’s a version of the measuring apparatus that measures the particle going one way, and a version that measures it going the other way. There’s a version of the scientist that sees the measurement one way, and another version of the scientist that sees it the other way. When they call their colleague to tell them about the results, the colleague goes into superposition. When they publish their results, the journal goes into superposition. When we read the paper, we go into superposition. The superposition spreads ever farther out into spacetime.
We don’t see interference between the branches of superpositions because the waves have decohered, lost their phase relationship with each other. Brian Greene in The Hidden Reality points out that it may be possible in principle to measure some remnant interference from the decohered waves, but it would be extremely difficult. Another physicist compared it to trying to measure the effects of Jupiter’s gravity on a satellite orbiting the Earth: possible in principle but beyond the precision of our current instruments.
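Here's a deliberately oversimplified sketch of why the interference fades: suppose each of N environment particles picks up even a faint record of which branch occurred. The interference term between the branches gets multiplied by the overlap of the two environment states, and that overlap shrinks exponentially with N. The angle below is an arbitrary illustrative number, not from any specific model.

```python
import numpy as np

# Each environment particle's state in branch A is (1, 0); in branch B
# it is rotated by a small angle theta, so it carries a faint record.
theta = 0.1
single_overlap = np.cos(theta)            # <e_A|e_B> for one particle, ~0.995

for n in (1, 10, 100, 1000):
    total_overlap = single_overlap ** n   # product over n record-keepers
    print(f"{n:5d} particles: interference suppressed to {total_overlap:.4f}")
# 1 -> 0.9950, 10 -> 0.9512, 100 -> 0.6060, 1000 -> 0.0067
```

This also shows why decoherence is smooth rather than abrupt: add record-keepers one at a time and the fringes fade gradually, which is the sense in which measuring remnant interference is possible in principle but hopeless in practice.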
Until that becomes possible, we have to consider each path as its own separate causal framework. Each quantum event expands the overall wave function of the universe, with each outcome becoming its own separate branch of causality, in essence its own separate universe or world, which is why this proposition is generally known as the Many Worlds Interpretation.
Which interpretation is reality? Obviously there are a lot more of them than I mentioned here, so this post is unavoidably narrow in its consideration. To me, the (instrumental) Copenhagen Interpretation has the benefit of being epistemically humble. Years ago, I was attracted to the de Broglie-Bohm Interpretation, but it has a lot of problems and is not well regarded by most physicists.
The Many Worlds Interpretation seems absurd, but we need to remember that the interpretation itself isn’t so much absurd, but its implications. Criticizing the interpretation because of those implications, as this Quanta Magazine piece does, seems unproductive, akin to criticizing general relativity because we don’t like the relativity of simultaneity, or evolution because we don’t like what it says about humanity’s place in nature.
With every experiment that increases the maximum observed size of quantum objects, it seems more likely to me that the whole universe is essentially quantum, and this interpretation seems more inevitable.
Now, it may be possible that Hugh Everett III, the originator of this interpretation, was right that the wave function never collapses, but that some other factor prevents the unseen parts of the post-measurement wave from actually being real. Referred to as the unreal version of the interpretation, this seems to be the position of a lot of physicists. Since we have no present way of testing the proposition, as Brian Greene suggested, we can’t know.
From a scientific perspective then, it seems like the most responsible position is agnosticism. But from an emotional perspective, I have to admit that the elegance of spreading superpositions is appealing to me, even if I’m very aware that there’s no way to test the implications.
What do you think? Am I missing anything? Are there actual physics problems with the Many Worlds Interpretation that should disqualify it? Or other interpretations that we should be considering?
55 thoughts on “Do all quantum trails inevitably lead to Everett?”
1. There is no such thing as objectivity. Not all physicists dismiss some interaction with consciousness. Some very prominent physicists at least think it’s a plausible possibility. I love how folks want to dismiss any quantum connections to consciousness as mysticism. One has to wonder what would ever be enough evidence to at least get those who don’t want to give credence to say it’s plausible. They so easily accept Hawking’s many worlds and parallel universe hypotheses even though thus far no real method to test them exists either. But suggest that perhaps consciousness is more than the body and you’re a mystic. I like Penrose’s Biocentrism ideas. But I like others too, including perhaps that it’s an illusion or that we live in a real matrix. But I certainly don’t consider myself a mystic.
1. Well, I’m not as learned as you, so I would say: what is the evidence for it not causing the collapse? Testable, verifiable evidence? Just like for several hypotheses in quantum physics, testable, repeatable, and verifiable evidence is often out of reach. I’m not saying anything is absolutely true, but to dismiss a hypothesis out of hand when your preferred solution is just as untestable and unverifiable is not objective and not fair. In the end all of it might be wrong. If consciousness is merely something that fades after the death of the organism, so be it; nothing we can do, it’s natural law in that case. But until we know, if we can know, we should at least acknowledge that some very smart physicists do indeed think consciousness may play a role. Roger Penrose is no crazy person, nor is Stuart Hameroff an uneducated loon. Those are just two people who have a wide variety of hypotheses that say it’s plausible that consciousness interacts with exotic quantum particles. Many point to the double slit experiment among other things as an example of what might be. Nobody knows what is as of yet, but to call legitimate scientists mystics for saying maybe is just unfair. In the end you and scientists like you might indeed be right, but until it’s proven please give all legitimate scientists the same respect you gave Hawking when he proposed string theory and all the craziness (parallel worlds, many copies of me on parallel worlds, and all the other things I watched on the SciFi Channel) that comes with it. Now some scientists are saying consciousness is an illusion, and that’s funny really. When you can’t solve it, saying it doesn’t exist solves the problem; only it doesn’t, because it does exist, only its nature is a mystery.
2. To the very limited extent that I understand the mathematics of decoherence, it does seem to make Everett the most natural interpretation. Why should orthogonal states just vanish when their effect on us diminishes? “Us” meaning the states of observers whose device registered a particle going through the left slit, for example, and “orthogonal” meaning approximately orthogonal, to within some rounding error.
The fact that decoherence is in principle a smooth process, albeit a fast one, takes a lot of the sting out of the Many Worlds label. It’s kind of a misnomer. It would be equally fair to say there’s one world in Everett, but many superposed states that have extremely weak interactions.
A good resource is the wiki article on decoherence. Another is David Wallace, The Emergent Multiverse.
1. Thanks for the references. I agree on the wiki article. I’ll check out the Wallace one.
Good point about the label. The main reason I described MWI the way I did was to downplay the new universes thing. DeWitt reportedly used it as a selling tool, but I think it makes too many people dismiss it as outlandish without understanding what’s actually being proposed.
3. Nobody knows the source or nature of consciousness. There is evidence you remain conscious after the heart stops and blood flow to the brain ceases. For how long is still being examined. Previously this was not thought possible. Now some adjust their position, saying activity continues till clinical brain death. No one as of yet can provide evidence consciousness is not affecting quantum particles or the double slit experiment, because nobody knows the nature, origin, components, or makeup of consciousness. Hell, some just give up altogether and say it’s not real anyway, it’s an illusion. So everything human beings are, and what they have accomplished over millions of years of evolution, is an illusion. Anyone who matter-of-factly claims they can prove consciousness is not affecting the quantum realm, or vice versa, knows they’re wrong. Nobody even knows what consciousness is composed of, let alone its origins, so they can’t say for sure one way or another. They can dismiss it as woo or mysticism, they can belittle those who at least say maybe, but, just like those who subjectively hope consciousness doesn’t die, they can’t prove anything one way or the other. I wouldn’t be so harsh if people didn’t disparage brilliant scientists like Penrose and others by calling it mysticism. No better way to disparage a scientist than to call his or her hypothesis mysticism. Nobody called Hawking a mystic when he hypothesized string theory, which is a parallel worlds theory with absolutely no direct evidence of it being true. Honestly, parallel universes with my double in them sound pretty darn mystical to me.
4. Don't confuse the scientific method with the actual scientists. Scientists are people, human beings, and like all human beings they are almost incapable of objectivity on their own. If you can pick it up, put it in a beaker, and test it using the scientific method, that's objective. Supposedly, if the math works, that is a good sign it could be true, but even if the math works it still can be wrong. If you can't pick it up and test it, it could be wrong. Quantum physics reaches out into a largely untestable area of science. In fact, many well-known scientists ponder aloud whether we have reached, or soon will reach, all we are capable of knowing, leaving infinite amounts of questions unanswered and unknowable.
1. Hi Matthew,
"…I would say: what is the evidence for it not causing the collapse? Testable, verifiable evidence?"
I alluded to some in the post: the difficulty in constructing a quantum computer. Quantum computing's unique value is being able to process possible paths in parallel, which requires maintaining a superposition as long as possible. However, long before any conscious entity becomes aware of what's happening, the superposition decoheres. This is a serious challenge for QC. If it could be overcome simply by keeping conscious systems from seeing it, it likely would have been solved decades ago. As it is, many QC processors have to operate near absolute zero to minimize interaction with the environment, and even that only keeps the qubit circuits in superposition for a very brief time.
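To make the decoherence point concrete, here's a minimal sketch in Python (my own toy dephasing model; the coherence time is an assumed, illustrative number, not any particular machine's spec):

```python
# Toy model of decoherence: a qubit in an equal superposition has a
# density matrix whose off-diagonal terms carry the interference
# information. Coupling to the environment damps those terms on a
# coherence timescale T2, leaving an ordinary classical mixture --
# no conscious observer required.
import numpy as np

T2 = 100e-6  # assumed coherence time: 100 microseconds (illustrative)

def qubit_density_matrix(t):
    """Equal superposition after pure dephasing for time t."""
    coherence = 0.5 * np.exp(-t / T2)
    return np.array([[0.5, coherence],
                     [coherence, 0.5]])

for t in [0.0, T2, 10 * T2]:
    rho = qubit_density_matrix(t)
    print(f"t = {t:.1e} s: off-diagonal term = {rho[0, 1]:.4f}")
```

The off-diagonal term falls from 0.5 toward zero within a few multiples of T2, which is why the superposition is effectively gone long before any observer could "look".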
“Nobody knows the source or nature of consciousness.”
I think neuroscience is making steady progress in understanding it. (See the posts in my Mind and AI category for why.) Of course, many people don’t like what’s being found, so the assertion that science is utterly helpless in this area remains a popular one.
“Don’t confuse the scientific method with the actual scientists.”
A crucial part of scientific methods (there isn’t just one) is guarding against human bias. It’s why results must be repeatable, transparent, and subject to peer review. In my experience, the ones that pass this test don’t affirm expansive conceptions of consciousness.
But as you note, there is no unique evidence for any one interpretation of quantum physics. It’s why I said that the responsible position is agnosticism on them. For now.
5. Maybe a little beside the point … Please forgive me.
As someone who could not even be bothered with elementary school, and who after several years has still not been able to master English … he claims that scientists do not understand the basic processes of the universe.
Well, it can be said: it's just a stupid Pole.
But I will not give hundreds of examples of scientific indolence. Only one.
What should one think of the state of the scientific mind when one of its most prominent minds carries out such a thought experiment … whether it was just a joke or a word of despair:
Throw a book into the black hole. The book carries information. Perhaps that information is about physics, perhaps that information is the plot of a romance novel – it could be any kind of information. But as far as anyone knows, the outgoing Hawking radiation is the same no matter what went into the black hole. The information is apparently lost – where did it go?
Do we see one of the greatest idiocies of quantum physics?
Do we see how beautiful minds can be stupid?
Maybe this stupid Pole is dumber than he seems?
1. Stan,
From what I understand, information lost to a black hole remains a problem that hasn’t been solved. I’ve read some speculation that maybe it’s smeared across the event horizon as a sort of hologram, which sounds like it could conceivably affect Hawking radiation, but it all sounds highly speculative.
One of the problems with physics today is that too much of the theoretical work happens far outside of testable conditions. On the one hand, this should be fine since we never know when such exploration might turn up something testable. But until it does, we have to be stringent in remembering that it’s informed speculation.
6. Mike.
Only this is not a problem about the information carried by an object that falls into a black hole.
It concerns the information that the object carries about itself.
It is known that information is the basis of the quantum universe.
1. Throw two stones into a black hole. On one we paint the US flag, and on the other the flag of Poland.
Does such information mean something?
2. Now we will fire two cannonballs towards the black hole.
A stone ball from Poland and a ball of uranium from the US.
Is this what information means for quantum physics?
7. Mike.
It's not that I don't believe in your wonderful reasoning … after all, I read your wise statements.
If something is to blame, it is my tragic English.
Besides, the scientists themselves, although they are so wonderful at quantum physics, admit that they have absolutely no idea why it works.
So I disappear … but not on Twitter.
8. My problem with MWI is the same one many have: where do all those new realities come from? What does it suggest about matter and energy?
Tegmarkians can talk about how x² = 4 is solved by both +2 and -2, and no one worries about where the extra answer came from. But I don't believe we live in a Tegmarkian universe.
There is also, to me, an issue of reality explosion: Wear a pair of polarizing sunglasses, and each photon that hits them has a chance of passing through or not. So each photon seems to be creating new realities. Billions and billions of new realities. Every instant.
MWI fans have said this doesn’t happen, but I’m not clear on why not.
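To put rough numbers on the sunglasses example, here's a toy sketch (the angle, photon count, and flux estimate are my own assumptions):

```python
# Per-photon branching in the polarizer example: quantum mechanics gives
# a photon polarized at angle theta to the filter axis a pass probability
# of cos^2(theta) -- Malus's law applied photon by photon. Each outcome
# is an independent quantum event.
import numpy as np

rng = np.random.default_rng(1)
theta = np.radians(30)        # assumed angle; pass probability ~0.75
p_pass = np.cos(theta) ** 2

for i in range(10):           # ten photons, ten independent events
    result = "passed" if rng.random() < p_pass else "absorbed"
    print(f"photon {i}: {result}")
```

Sunlight delivers very roughly 10^18 visible photons per second to a lens (a back-of-the-envelope figure), so a literal branch-per-event reading multiplies worlds at a staggering rate.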
I have played with the idea that what happens is that the standing wave of the universe becomes more complex with each possible branch such that all possible paths that could have been taken are part of that wave. But there’s only one actual reality that emerges from that wave.
I've never found the waveform collapse all that mysterious. A particle in flight is a vibration in the relevant particle field; the energy of that quantum is spread out in the wave. But for that energy to interact with, say, an electron in the wall it hits, that single spread-out quantum "drains" into the contact point.
The mystery, if I understand it, has to do with what “selects” that contact point, and how does the energy of the wave “drain” into that point? We have no maths for that.
I suspect the contact point gets selected by the same mechanism that "selects" which atom of a radioactive sample decays next. Or how the first bird of a flock decides to take to the air. Maybe it is literally random (which it seems to be).
I sure wish someone would discover something new. QFT and GR have been at loggerheads far too long.
1. I have to admit that I wonder about the energy aspect of this as well. If every part of the wave becomes a full particle in its own branch of the superposition, then how is the energy of that wave, and every other wave, not effectively magnified? My understanding is that we still don’t understand at a fundamental level how mass is generated. (The Higgs supposedly only explains a subset of it.) If the non-visible parts of the post-measurement wave aren’t real, then maybe that has something to do with it.
What's interesting about the explosion of superpositions is that virtually all quantum events average out until the macroscopic deterministic world emerges. To me, that implies that most of the "universes" being generated are virtually identical. (There would have been far more divergence in the early instants of the big bang, when quantum events generated patterns that later grew into voids and galactic superclusters.) Today, it seems like it would only be the rare case of quantum indeterminacy "bleeding" through that would lead to divergences. It might be that most of the exploding superpositions end up converging back to one reality, or only a few of them. (I have no idea if the mathematics lend any credence whatsoever to this conjecture.) And I've read some variants of the interpretation where, instead of proliferating universes, it's really just interacting ones.
That actually isn't my understanding of what happens. As I understand it, the entire wave instantly disappears, replaced by the particle, even if the wave has been spread around and fragmented over vast distances; there's no timeline for it to drain. (Which admittedly also makes "collapse" a questionable word for the phenomenon.) That said, decoherence isn't supposed to be instantaneous either, just very fast, so who knows.
Totally agreed that it would be good to see progress somewhere. I remember many physicists hoping the LHC would provide something, anything, unexpected so they’d have something to work with, but other than failing to confirm supersymmetry, most of what they’ve gotten just seemed to reaffirm the Standard Model.
Yeah, the mass of protons and neutrons, for example, comes mainly from the energy of the quark and gluon interactions, which means most of the mass from matter isn’t due to the Higgs.
Which is why I find it easier to think about in terms of energy, although I usually see mass and energy as two faces of the same thing.
“To me, that implies that most of the “universes” being generated are virtually identical.”
Which I think is how MWI fans respond to the question about sunglasses and photons. My question in return is how identical is “virtually” identical?
Remember Bradbury's famous short story, A Sound of Thunder? Do worldlines converge and merge, or do even quantum differences ultimately diverge and result in separate realities?
A lot of MWI fans think Occam and parsimony support their position, but I (so far) see it the opposite. MWI doesn’t sound like the simple explanation, and the explosion problem defies parsimony.
But then I’m not sure I truly understand MWI, and I’ve gotten the impression a lot of its fans don’t really understand it, either. Plus, there seem to be multiple versions of the theory since Everett.
Greg Egan has a short story, The Infinite Assassin, in his collection, Axiomatic. It’s about an illegal drug that allows users to interact with parallel universes, which turns out to be a Very Bad Thing. What I really liked about the story was the sense of continuum Egan gives to parallel worlds.
One can’t help but wonder what makes them distinct.
Sean Carroll gave a talk about MWI (which I found unconvincing), and he had an experiment set up remotely that did a photon-through-a-half-silvered-mirror thing with two detectors. Through a phone app he was able to trigger the experiment and get a (random) result, which he used to determine if he should jump to the left or to the right. (The right, in this case, IIRC.)
The claim was that this generated two realities accommodating his jumping both ways. Which generated two different audiences (and sets of video viewers) who remember him jumping both ways. Which led to this comment where I recall him jumping right. Presumably the alternate me remembers it differently.
But I keep wondering about those sunglasses and all the quantum interactions happening all the time. I’ve just never heard anything from MWI that gets me past this key objection.
Yes, agreed. (That’s why I quoted “drains” — best word I could think of but hardly adequate.) I think we’re on the same page here, I’m just trying to imagine an ontology that makes sense of “waveform collapse.”
I've been thinking about this a bit as I try to wrap my head around some of the strange variations of the two-slit thing. (Have you seen the three-slit experiment? Mind-blowing!)
In a single-photon event, the laser emits a "photon" with no location but a wave (with momentum) that expands from the laser into the surrounding environment. It's a single quantum of energy causing a vibration in the EM field.
Now that energy has to go somewhere, and what we see happening is that waveform somehow interacting with some electron in some atom such that the electron is raised to a new energy level. At that point, the photon does have a location (and presumably we can no longer talk about its momentum).
That interaction requires the full energy of the quantum, so the energy in the field "goes" (or "drains", or some better word) into that interaction.
But this is just me pondering the “waveform collapse” issue and WAG-ing at an ontology.
“I remember many physicists hoping the LHC would provide something, anything,”
Yeah, and now it’s shut down for two years for an upgrade. You’d think not finding SUSY at all would take the wind out of certain sails, but they just keep redefining the target. Part of the problem is that String Theory seems to need it, so no SUSY threatens ST.
There’s also that chart you’ve probably seen showing how the three forces unify at very high energies? Those curves intersect at the same point only if SUSY is true. Without SUSY, they don’t.
So it’s a dream that’s hard to kill.
There was some hope of seeing something new in very esoteric sectors involving (IIRC) weak decay. I can't recall what it was exactly, and no one is jumping up and down, so whatever they saw may not have survived more analysis. They were seeing bumps in both CMS and ATLAS, I think, and combining the two bumps gave them a nice sigma, but the data weren't compatible, so combining them didn't really say anything.
Or something like that.
Merry Christmas!
1. “My question in return is how identical is “virtually” identical?”
My conception is that normal events, such as all the deterministic events we see in nature where the quantum events average out, don't create deviations. It's only when we tie a macroscopic event to a specific quantum outcome that a notable divergence happens. As you note, even a minor "meaningless" macroscopic event (such as which way Carroll jumped) might eventually butterfly into major changes.
Of course, we can't rule out the possibility that quantum indeterminacy "bleeds" into the macroscopic world below the precision of our instruments and butterflies all on its own, so the idea of similar universes may not be tenable.
There are definitely lots of versions in the Everettian family of interpretations. One I recently heard about on the Rationally Speaking podcast was relational quantum mechanics, which posits that whether a wave has decohered is relative to an observer. In other words, like the relativity of simultaneity in Einstein’s theories, this holds that where you are in the sequence of events determines when you see the collapse. Schrodinger’s cat sees the collapse as soon as the detection device is triggered, but Schrodinger himself doesn’t see it until he opens the box. However, the relational interpretation is reportedly agnostic about the reality of the other outcomes. (It doesn’t seem agnostic to me, but I probably don’t grasp the full idea.)
I need to look up that Egan story. It sounds interesting.
Ah, ok, I missed the quotes on “drain.” Thanks for the description of the photon. Part of what I find interesting about this is that the electrons are presumably constantly exchanging photons with each other and the nucleus, but despite that exhibit quantum waveness to those of us outside the relationship, which makes me think of the relational interpretation again.
I don't think I knew that uniting all three forces required SUSY. Interesting. I know the weak and electromagnetic ones were already shown to be the same. (Which strikes me as an odd pair.)
All in all, I think I’m happy I’m not a physicist right now.
Merry Christmas!
1. “It’s only when we tie a macroscopic event to a specific quantum outcome, that a notable divergence happens.”
That matches what I’ve heard from MWI fans, but it seems to suffer the same micro/macro issues as many quantum things do. What is a “notable divergence” and what happens? Reality doesn’t diverge at all (why not?), or the diverged lines merge into one (again, why?).
That Egan story is good at pointing out how, if we take MWI at face value, our own reality is a fuzzy continuum of indistinguishable nearby realities. At what point am “I” no longer really me?
Chaos theory suggests (to me) that even minute differences may result in large changes down the road. What if, butterfly fashion, a photon that did pass through my sunglasses accounts for some minute change that ultimately destroys Saturn?
I've long wanted to sit down with a working theoretical physicist who's really into, has really studied, MWI, because I'd like to understand how people like Sean Carroll identify MWI as their preferred interpretation. Some even say it's the most glaringly obvious interpretation!
Doesn’t part of that thinking also come up in Copenhagen? The idea that the cat isn’t superposed to itself, but is to the scientist who hasn’t opened the box. Likewise, the science writer standing outside the lab is superposed until the scientist informs them of the result. And millions of readers are superposed until they read the writer’s article. (And everyone in Andromeda remains superposed probably forever.)
I’m not sure I believe in the idea of macro objects being superposed. What does it mean to suggest I’m superposed? Can experiments demonstrate it? Or is it just that I lack knowledge?
Ugh. We really need some advances in HE physics. We’re just grasping in the dark here.
I think at least some of that is accounted for in the difference between virtual photons and actual photons. I’ve seen some physics videos recently emphasizing the difference between them and how you can’t treat virtual photons as real — they’re almost an accounting device, although obviously something physical is going on. Lamb shift and so forth.
Same here! Electro-weak theory. (And the weak force is the one many books hand-wave on that “has something to do with radioactive decay” … yeah, and making the sun work, too!)
It sure made it seem like unification was a thing though, didn't it? If two things as seemingly different as EM and the weak force are unified, why not the strong force?
Again, we need more information! We don’t even really know if gravity is a force!
2. “At what point am “I” no longer really me?”
Michael and I discussed this as well somewhere else on this thread. It seems like reality likes ruining our clean little categories, such as what is life or non-life (see prions or viroids), what is the border between species (some members of species A can mate with species B, but others can't), what is computation, or what is a planet. It won't surprise me too much if it scrambles our ideas of the self.
I told you to stop playing with those glasses Wyrd! Now look at what you’ve done. Who’s going to clean up this mess? We’ve got Saturn all over everything! 🙂
I recently went back and read Sean Carroll’s blog post on the MWI. I’m not sure his instincts on explaining it are the best. He tends to emphasize the multiple universes thing, which I think is a mistake.
Paul Torek above recommended David Wallace’s ‘The Emergent Multiverse’, which I’m thinking about picking up. It looks pretty good in the preview. My only pause is it’s pricey. Of course I’ve often spent more on neuroscience books. I just have to decide if I’m interested enough and willing to invest the work it would require.
I can see why people say the MWI is the most straightforward interpretation though. It does explain a lot. I see it as a candidate for reality. The only question is whether the implications of it in any way falsify it. But as I commented on Carroll’s post, that’s the problem with these interpretations. None of them are uniquely testable.
“— they’re almost an accounting device, although obviously something physical is going on. ”
Didn't quantum physics start with Max Planck introducing quanta purely as an accounting device? There was a similar disclaimer on Copernicus' book. It seems like a lot of physics starts with someone saying, "Don't worry, this is only for calculating convenience. It's not like it's real or anything."
“Again, we need more information! We don’t even really know if gravity is a force!”
Totally agreed on needing more information. Although wouldn’t you say we know gravity is a force? Or did you mean if it’s a force like the others in the Standard Model, with bosons (gravitons) and the like?
3. “It won’t surprise me too much if it scrambles our ideas of the self.”
Yeah. The more I learn and think about “the self” the more complex and puzzling it seems.
“[MWI] does explain a lot.”
That I do realize. I'm confounded by the whole multiple-universes thing; that's pretty much the entire stick in my craw.
I vaguely remember reading that Sean Carroll post. Think I’ll go back and re-read it this evening.
The Wallace book sounds kinda interesting… once I read about it. The title put me off, because while I’m open-minded-but-skeptical on MWI, I’m disbelieving (and disinterested) in multiverse theories. I found an online review of the Wallace book that sounds like another read for this evening.
“Didn’t quantum physics start with Max Planck introducing a quanta purely as an accounting device?”
Ha, yes, good point!
“Or did you mean if [gravity is] a force like the others in the Standard Model, with bosons (gravitons) and the like?”
Exactly. I want GR to be essentially correct with some minor correction to accommodate quantum, and I want QFT to turn out to be essentially epicycles: a theory that matches our instruments but is seriously wrong in some key regard.
We know matter/energy is quantized, but the jury is out on time/space. I want them to be smooth (providing yet another duality to reality). And that gravity is due to warped spacetime and there is no such thing as a graviton.
My spacetime wishlist. 😀
4. Wow, that review is 19 pages long. I thought I might sneak a quick read before responding, but I think I’ll just add it to my queue too. Thanks for linking to it!
On GR and QM, I don’t really have preferences on which one wins (assuming they both don’t eventually have to be heavily modified). If spacetime does appear to be smooth, I wonder if we could ever be sure it wasn’t quantized at a size below the level of precision of whatever we were using to measure it.
And an infinitely divisible spacetime seems like it would come with its own potential multiverses. If the space between elementary particles is infinitely divisible, it allows patterns to exist there below our notice, such as entire micro-universes. And entire other universes could have been born, existed, and died in the Planck time at the beginning of the big bang. For that matter, an infinity of universes might have existed during the time you read this reply. (Don’t hit me.)
5. I gave up (for now) on that review once I got to the discussion section. They were a little too glowing in their assessment for me to trust, and there was already a bit of a “yelling at the screen” thing going on here on the material they mentioned to that point.
The book does sound interesting, though. I found myself wondering if Wallace explains some of the stuff that was making me yell.
Continuous spacetime does seem to have the same weird issues the real numbers have. Maybe matter/energy being quantized saves the day?
While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes. Quantum limits on energy might also affect the minimum time it takes anything to happen (like c limits causality).
The question might be whether we can trust scale. Atoms have sizes due to their properties, so maybe certain things can only happen on certain scales. (And we use atomic vibrations to define the second.)
Or maybe they’ll find a graviton (or a chronon), and that will end the matter. But until then… well, just say that I look at GR and think, yes, that makes sense, but look at QFT and think, wait, what?!
Obviously the universe is under no obligation to fulfill my sense of how it ought to behave (oh, if only). 🙂
6. “While space might be infinitely small, matter isn’t, so no micro-galaxies hiding in the dust motes.”
I actually wasn't thinking the micro-universe patterns would be made of any matter/energy as we understand it, but something else, something we never see because it exists too far below the scales we can detect. Call it Mini-Me matter, which could have its own smaller Mini-Me quanta sizes. Of course, between Mini-Me matter might be Mini-Mini-Me matter, and so forth and so on. Turtles all the way down.
Or if in fact there is only the matter/energy we’re familiar with, that means an infinite emptiness between every occurrence of it, which would itself be profound.
7. Yes, as profound as the next real number after zero!
Talk about macro objects in superposition… I'm totally superposed on the real numbers being, in fact, real or, as it sure seems sometimes, a fabrication of our imagination.
The thing is: how real is a circle, its diameter, and their ratio? If they are real, so is pi.
9. I don't get the whole "measuring changes quantum particles' behavior" thing. And by 'not get' I mean it seems like it doesn't work, or is a simplification that lost important details along the way. For example, if 'measuring' changes the quantum particles, then at what distance can you measure them? Any distance? If so, wow, you've invented an instantaneous communication device that's… faster than light. Nice. Or if the distance actually matters, then 'measure' is a heuristic term that lacks the actual details, like what distances are involved and where the effect runs out.
1. You’re totally right not to get it. “Measurement” or “observation” is a maddeningly vague aspect of this. It reflects the lived experiences of scientists running experiments on quantum phenomena. Niels Bohr reportedly insisted that the description of this be limited to “ordinary” language, presumably because any attempt at a more precise description would imply knowledge we don’t really have.
It’s called “the measurement problem,” and it’s at the heart of the absurd nature of quantum mechanics. Attempts to solve it have led people down all kinds of bizarre paths.
I sometimes think QM represents the limits of our reality, where that reality emerges from some other underlying meta-reality. It might be that any “interpretation” is simply a vain attempt to map that meta-reality back into our little parochial reality. As patterns in and of the parochial reality, we simply may not be equipped to understand the wider meta-reality.
1. FWIW, I see “measurement” as anything that resolves superposition. For me, the cat was always (obviously) either alive or dead, because the detector monitoring the radioactive sample is the measurement. There is no superposition; there is only a lack of knowledge about the cat.
10. Excellent post, Mike. I enjoy mulling these quantum conundrums around. I am left feeling like an extremely poor sommelier of ideas–I get hints of different flavors but… really I have no idea what I’m tasting. It’s just really, really complex and intriguing. My own opinion is that we just don’t really know what we’re studying, and that at some point there will be a breakthrough in our conception of what reality actually is that will assist us in fitting the pieces of the puzzle we’ve found so far into a more insightful framework. As an example, I think our notions of physical and non-physical have pretty much broken down, and we have only vague ideas as to what consciousness might be, most of them extremely myopic, so that we’re in the position of using pretty poor tools for the job.
Just as one example, in that Quanta article to which you linked, Brian Greene suggests that each copy of you in the MWI is really you, and that the true you is the sum total of these you's. Something like that. When a scientist says that a "self" might be a superposition of conscious selves occupying subtly related windows of reality, it's an interesting idea to some folks and frowned upon by others, while when the classic New Age book Seth Speaks posits the same notion, it is deemed woo woo foo foo by that crowd, but accepted by the other. This is, in a sense, what I mean about once-clear concepts and divisions breaking down. So my own feeling is everyone's a little bit right, and the answer is somehow a superposition of a great many ideas out there… 🙂
I don't suspect a ton of physicists are lining up to endorse Brian Greene's idea of the self. I have no idea, actually. But it's always interesting to me when these parallels emerge. I think it's safe to say whatever "models" or "conceptual frameworks" we use to try and organize our phenomenal observations are all wanting right now. What I dislike about the Copenhagen Interpretation is that it seems like a consequential moment in defining the purpose of science: one which accepts setting aside questions about what the universe really is, and accepts descriptions of what it does as complete. For me, science is much less interesting when only one of the two questions remains in play…
Happy Holidays, Mike!
1. Thanks Michael, and great hearing from you! Your comments are always thought provoking.
On Brian Greene’s notion of the self spanning multiple copies, I think, much like the notion of additional selves that originate from the idea of mind uploading, it’s a matter of philosophy, in other words, not a fact of the matter, but a personal choice. In both cases, the issue gets blurred as the copies get farther and farther away from the original.
For example, is someone born with my exact genetics, but due to an early quantum branching, lived a radically different life, still me? What about someone who branched away from me before I became a skeptic? Or even before I became interested in science? Or someone who branched away before I broke up with one of my old girlfriends, but instead married her and proceeded to have a large family?
My attitude is that these would all be a sort of sibling, albeit, in the case of recent copies, far closer to me than any brother or sister. The only way I might be tempted to ever consider them to be me is if we could somehow share memories, but even then I'd expect differences to arise based on the order in which the various copies received the different memories.
On the Copenhagen Interpretation, I can understand not liking its inherent instrumentalism. I totally agree it’s a lot more inspirational to think of science as the pursuit of truth. The pursuit of models that accurately predict future observations…just doesn’t have the same inspirational resonance.
On the other hand, maybe the idea that the pursuit of truth is anything other than the pursuit of predictive models is an illusion. The real dividing line is whether we want to get into models that make predictions we can’t test. The Copenhagen Interpretation (apparently heavily influenced by the logical positivism in vogue during its formulation), labels that as undesirable.
I think by calling these models that go beyond the mathematics of quantum mechanics “interpretations”, physics has found a way to have its cake and eat it too. It allows us to label the predictive aspects of QM as settled science, but keep trying to figure out what it means.
Although as I've noted to you before, and as I did to Callan above, I sometimes wonder if quantum phenomena aren't right at the edge of the reality that we, as a subset of that reality, have any ability to make sense of. It might be a hole we can navigate around mathematically, but can never enter. (Although I hope we never stop trying.)
Happy Holidays to you too Michael!
11. I have some strong opinions about this issue, and have been meaning to bring them up with Sabine Hossenfelder over at her blog. So far I've been too shy, however. This is a woman who I absolutely love! She'd like to help "fix" a physics community that seems to have gotten "lost in the math". Similarly, I'd like to help a science community that attempts to function without generally accepted principles of metaphysics, epistemology, and axiology (or the three elements of "philosophy"). Perhaps if I feel that I'm able to develop my QM ideas here well enough, then I'll become confident enough to speak with her about this over there some time? Well, maybe.
Rather than get caught up in all sorts of higher speculation initially, I like to begin with QM basics. We humans perceive matter in terms of "particles" and in terms of "waves". Are such perceptions good enough? Apparently they are not. When we try to pin down the exact state of a particle, we're confounded with wave-like characteristics. Then when we try to pin down the exact state of a wave, we're confounded by particle-like characteristics. So it should instead be better to consider matter to function as both. But apparently we can't measure matter as some kind of hybrid of the two. Therefore it makes sense to me that we'd witness fundamental uncertainty, as expressed by Heisenberg's uncertainty principle: an inequality that references Planck's constant.
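For concreteness, that inequality is the standard position-momentum uncertainty relation (with \hbar = h/2\pi, the reduced Planck constant):

\Delta x \, \Delta p \ge \frac{\hbar}{2}

Sharpening a particle-style measurement (small \Delta x) necessarily blurs the wave-style one (large \Delta p), and vice versa.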
So to me there isn’t too much to worry about here. If we must measure particles in one way and waves in another way, though matter ultimately functions as neither but both, then we should expect to be confounded by more exacting measurements in either regard. Given the circumstances, is this not logical?
For example, let’s say that we find a material that’s similar to both rock and wood. So if we assess it as a kind of rock then the harder we look at it from this perspective, the more confounding this stuff should seem to us. Or the same could be said if we assess it as a kind of wood. So that’s essentially what I’m saying is happening with our assessments of matter. If it’s effectively “particle-wave”, though we can only provide measurements in one way or the other, then we should naturally fail as our measurements become more precise. Thus I’m good with quantum mechanics as I understand it. Apparently we’re too stupid or whatever to understand what’s going on.
The controversy however seems to be that most physicists (unlike Einstein) haven’t been content settling for such human epistemic failure. So apparently they’ve decided that no, it’s not that we’re trying to measure something as particle or wave that’s neither. Instead it must be that the uncertainty associated with either variety of measurement reflects an ontological uncertainty which exists in nature itself! So the argument is not that we’re stupid, but rather that nature itself functions outside the bounds of causality, or thus nature functions “stupidly”.
It could be that this view is entirely correct, but what irks me here is that these physicists also refuse to admit that they thus forfeit their naturalism. Apparently they want to call themselves naturalists, but interpreting QM such that nature functions without causality, well, that ain't natural!
It's the borderlands of science, such as here, brain study, and so on, that seem most in need of effective principles of philosophy. For this issue I offer my single principle of metaphysics. It reads:
To the extent that causality fails, there’s nothing to figure out anyway.
Unless I'm missing something, this "Many Worlds" interpretation appears to be in violation. I interpret it as physicists deciding that reality functions without causality (or "magically"), and then attempting to make sense of this anyway by theorizing "many worlds". The more we leave the bounds of causality behind, and thus introduce magical function, the more obsolete explanations should become. From there, reality should just be what it is. So I consider these sorts of interpretations of quantum mechanics to illustrate category error.
1. A lot of your criticism seems aimed at the more ontological versions of the Copenhagen Interpretation, the ones that say that not only are we faced with an epistemic limit, but that there’s nothing else there, that reality isn’t set until the measurement. That’s usually the version of the CI that critics inveigh against, and I agree with that criticism. The ontological versions of the CI seem excessively pessimistic.
I think Niels Bohr's version of the CI was closer to your sentiment. Here are the observations, and here is the mathematics that can make predictions about those observations, with limitations, but within those limitations the predictions are accurate enough to build technologies on top of them, so, "shut up and calculate!" I've grown to respect this view more as I've continued to learn about quantum physics. It's not satisfying, but it's at least epistemically humble.
But I think an MWI enthusiast would respond to you that their interpretation does restore determinism. Unfortunately, it’s determinism for reality overall, not a determinism we can observe. Which of course raises the question, if something is deterministic but not deterministic from any observer’s perspective, is that really deterministic? Who is it deterministic for?
One question I'd have for you is, how do you define naturalism? Is that definition mutable on new evidence? Myself, if I encountered phenomena that didn't meet my understanding of naturalism, I would still want to understand those phenomena as much as I could. But naturalism for me is just a set of working assumptions, ones subject to being adjusted as I learn more.
2. If I may interrupt, two quick thoughts:
Firstly, I’m also a big fan of Sabine’s blog, been reading it for years. I highly recommend it. (Peter Woit also has a good blog.)
Secondly, just as (and I very much agree) physicists benefit from philosophy, philosophers can benefit from looking into some of the math involved. Quantum physics is highly mathematical, and the wave-particle duality confusion is, at least in part, a failure of language. At the math level, the confusion essentially goes away.
The way it’s usually put is that matter (as in particles) is something outside our direct experience that has wave-like properties and particle-like properties depending on what aspect of the particle one tests.
3. Wyrd,
I was hoping to hear from you most of all! Perhaps on some level I mentioned Sabine because I recall you mentioning her another time? Anyway it was late 2015 that I became interested in her. Massimo Pigliucci had blogged about her position from a Munich physics conference that he attended.
On philosophers benefiting from math and physics, I certainly agree. I was initially most interested in philosophy as a university student, but didn't want to become acclimated to a field with no generally accepted agreements. And beyond questions, what could it teach me without generally accepted positions? Mental and behavioral sciences were next, though I found them far too speculative for comfort. So I looked for a field that could teach me how to learn. Yes, physics! But alas, my own mind would not get me through upper-division courses. I eventually earned a degree in economics, which I chose somewhat because it corresponded with my own amoral theory of value.
I didn't mean to imply that modern physicists would improve if they were to become versed in modern philosophy. I actually believe that field has tremendous problems, and needs improvement in order to better found science.
Regarding language, that's one of my own main themes. So QM interpretations work pretty well mathematically? But I suppose that natural-language explanations are needed most. Mathematics is many orders less descriptive than English. Notice that there's nothing in mathematics which can't be described in English, and yet much in English can't be described in mathematics. Still, the English rendering of the mathematical QM picture that you've provided seems pretty close to mine.
It’s good to hear that you oppose the ontological version of the Copenhagen Interpretation. Actually I was under the impression that Bohr’s interpretation was more ontological, though perhaps not. Did he ever support Einstein’s “I, at any rate, am convinced that He [God] does not throw dice.”? (Though in practice I support Einstein about that, my own metaphysics is a bit more pragmatic. It’s more like “To the extent that God throws dice, nothing exists to figure out anyway!”)
If Many Worlds enthusiasts are truly causal determinists, then tell me this: do you think their position holds that all of these worlds actually exist? As in, ontologically exist? As a solipsist I can stomach all sorts of crazy notions from a supernatural premise, but in a causal sense that position seems utterly ridiculous. Conversely, if these many-worlders are simply going epistemological with their position, as in "It can be helpful for us to think about QM this way…", then I could give their position some reasonable consideration.
Yep Mike, it’s deterministic. Who for? All that exists. Once again, I’m a solipsist. Reality is reality regardless of the human’s various idiotic notions.
I define naturalism as a belief that reality functions causally in the end. This definition is a definition, and therefore isn’t mutable to new evidence. Even if I ultimately decide that reality does not function causally, I should still consider this to be a useful definition. Here I’d either be a supernaturalist, or a hypocrite that changes my definition in order to call myself a naturalist.
I understand the desire to understand. This seems quite human and adaptive. Even the most faithful god fearing person should need to use reason in his or her life in order to get along. But to the extent that causality fails, as in ontological interpretations of the uncertainty associated with Heisenberg’s principle, things should not exist to figure out anyway.
1. Eric,
Bohr very much did not support Einstein in his statement about God not playing dice. His response was along the lines of, “Einstein, don’t tell God what to do.” Honestly, while I think his and Heisenberg’s initial strategy was more epistemic, more instrumental, I do get the impression that they crossed the line in later debates. But it’s the instrumental version that I think remains useful.
“Do you think their position holds that all of these worlds actually exist? As in ontologically exist?”
It depends on which ones you talk to. Some are agnostic about whether the other wave function branches continue to exist. Others feel they don’t. But the most vocal proponents tend to think they do exist.
As I mentioned to Wyrd, it’s an old trick in physics to introduce something but then say, “Don’t panic, this is just a useful accounting gimmick. It’s not like this crazy thing is real or anything.” This has been particularly true for quantum mechanics. Max Planck originally introduced quanta purely to make his calculations work. I suspect some Everettians take this tack to side step the ontological debates. The thing is, many things that are mathematically convenient go on to become ontological necessity.
“Reality is reality regardless of the human’s various idiotic notions.”
That may be true, but how do we know whether we know reality? I think the only answer is whether our predictions are accurate. Of course, QM can't predict a single quantum event, only the probabilities of certain outcomes. But as the number of events climbs, those probabilities average out to solid predictions.
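As a toy illustration of that averaging (the 70% figure is an arbitrary stand-in for a Born-rule probability, not any specific experiment):

```python
# QM assigns each measurement only a probability, yet observed
# frequencies converge on that probability as trials accumulate.
import numpy as np

rng = np.random.default_rng(42)
p = 0.7  # assumed Born-rule probability of one outcome (illustrative)

for n in [10, 1_000, 100_000, 1_000_000]:
    freq = (rng.random(n) < p).mean()
    print(f"n = {n:>9,}: observed frequency = {freq:.4f}")
```

No single event is predictable, but the frequency error shrinks roughly as 1/sqrt(n), which is the sense in which QM's statistical predictions become solid.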
Given the above, whatever QM is, it has to be isomorphic with reality in some way, otherwise those predictions would fail. As Wyrd mentioned, this may only be in the sense that epicycles were useful in Ptolemaic cosmology. (Interestingly, epicycles remain useful today as an observational perspective, despite the fact that we know they're an illusion.)
4. Mike,
If it's the case that Bohr and Heisenberg began with a responsible epistemological position for their Copenhagen Interpretation, then why would they escalate it to ontology? Might I suggest a bit of jealousy? Even then, Einstein was "the great one". How wonderful it would feel to one-up him! But perhaps Einstein should mainly be blamed for selfishly not realizing that a responsible epistemological position had actually been presented, and so he chose to interpret their interpretation ontologically? Notice that "God doesn't play dice" is an ontological claim. If he used this to counter the CI, then he effectively goaded them into an irresponsible ontological position. And apparently they not only accepted, but used it to kick his ass! Today in popular media, and even among physicists, it's thought that Einstein really blew it regarding QM.
I account for this incident through a far larger structural problem. Notice that we're asking physicists to do physics without providing them with any effective rules of metaphysics or epistemology to work from. Thus we need a community of professionals armed with generally accepted rules from which to guide the function of science. Notice that the field of philosophy today has the flavor of "art and culture" rather than "science" to it. I'm not saying that this needs to change, however. I'm saying that a new community of professionals must emerge that has a single mission: to straighten out science by means of its own accepted principles of metaphysics, epistemology, and axiology.
And what specifically do I propose to fix this particular mess? I'd mandate that the authors of any given position clearly state whether their proposal is theorized to just be "useful" (epistemology), or to also be "real" (ontology). Then, as for those ambitious theorists who insist upon proposing an ontology regarding QM, there would be my single principle of metaphysics to contend with. Theorizing that any given bit of reality is not causally determined to occur exactly as it does occur takes the theorist beyond the bounds of naturalism. Here there can be nothing to explain, because without causal dynamics no explanation will exist. This is the realm of magic. And I'm not saying that this doesn't effectively occur. I'm saying that the position of Einstein and me, conversely, happens to be "natural".
Well yes today, though once we have a community of professionals that’s able to effectively regulate the function of science through proven principles, there should only be “epistemic necessity”.
The only reality that I "know" exists is that I exist in some form or other. If you're conscious then you could say the same about yourself. And I consider it quite special to be able to truly know even that. Conversely, my computer shouldn't know that it exists (if it does exist), let alone anything else.
I consider quantum mechanics to mark an incredible human achievement, though epistemologically rather than ontologically. And I do believe that it’s isomorphic with reality. But if any associated dynamic is not causally determined to occur exactly as it does occur, or “ontological uncertainty”, then the theory should effectively describe the function of magic.
But wait a minute, as I define it no explanation can exist to describe non-causal function, or magic. Right… So the effectiveness of QM theory suggests that all associated dynamics must be causally determined to occur exactly as they do occur. You’re not going to like that bit of circularity! I’ll remind you however that we’re measuring particles and waves here, though apparently matter functions as something associated but different.
1. Eric,
I don’t know if you remember, but I actually think the distinction between instrumentalism and scientific-realism is a false dichotomy. We never have access to reality. We only ever have theories, predictive models about that reality. The “real” is only another more primally felt model. In the end, all we have are the models.
(This actually includes our model of self, as counter-intuitive as that sounds. Psychology has shown that access to our own mind is subject to just as many limitations as the information we get from the outside world.)
The only real distinction is between predictions that are testable and those that aren’t. The ones that are testable, and which have been demonstrated to have some level of accuracy, are “right” to whatever level they meet. But predictions that haven’t or can’t be tested should be regarded as speculative to varying degrees.
An untested or untestable prediction which is tightly bound to a tested prediction has a higher chance of eventually being shown to be accurate. But the more steps beyond observation to get to the prediction, the shakier the ground it rests on.
Under this guideline, the successfully tested predictions we have are the evolution of the wave function according to the Schrodinger equation, until information about it leaks into the environment, then we have the more definite state (position of the particle), etc. This is the instrumental Copenhagen Interpretation.
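For reference, the tested evolution in question is the time-dependent Schrodinger equation,

i\hbar \frac{\partial}{\partial t}\Psi = \hat{H}\Psi

where \hat{H} is the Hamiltonian (total energy) operator. It's this smooth, deterministic evolution that holds right up until information leaks into the environment.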
Everything else: assertions that the Copenhagen Interpretation is the only reality, pilot waves, spreading superpositions continuing under the Schrodinger equation, etc, have to be viewed as speculation, at least until someone can figure out some way to test them.
Still, speculation is fun, and should be fine as long as we acknowledge what we’re doing.
5. Mike,
Well, it sounds like we're generally on the same page with that, though I wouldn't refer to the distinction between instrumentalism and scientific realism as a false dichotomy. Even if science only ever has models, we of course need words such as "real" which reference what actually exists beyond our models. And if some of these MWI'ers have decided that the lack of certainty in our measurements mandates "many worlds" in truth, rather than simply as an accounting heuristic, then this would seem to be a wonderful example of "scientific realism". This also strikes me as "the tail wagging the dog".
Furthermore, I don't mind going ontological myself in some ways. I happen to believe that "God doesn't throw dice", which is to say I believe in absolute causality regardless of what we humans are able to figure out. Perhaps a reasonable name for this position would be "extreme naturalist"? So then what shall we call a person who makes the ontological claim that some things under a QM framework aren't causally determined to occur exactly as they do occur? "Super-naturalist" seems over the top, and even "quasi-naturalist" does. So I'll just go with straight "naturalist", but in addition note that from this distinction "spooky stuff" does ontologically occur in some capacity.
Then there is my logical proposition from last time. My metaphysics holds that if something functions without causality, then nothing exists here to even theoretically figure out. Why? Because it’s the causality that would found any ontological explanation for any given event. The causality would be the vital element regardless of any potential understanding — nothing would otherwise exist to even look for.
I’m fine with how the QM probability distribution produces a macroscopic world which seems to function causally. But how can it be possible for something that is not perfectly caused to do whatever it does, to in the end become a causal constituent for a causal realm? I see that as a contradiction. Non-causal function, where by definition nothing exists to potentially figure out, should have no potential to produce causal function. (I suspect that there’s a simple way for this to be illustrated mathematically.) Thus if we notice that quantum function does produce causal function, then from here it must only be possible that all elements of quantum function occur causally in the end, and even if things continue to seem random to us humans.
Yes, speculation is fun! Furthermore, once science has better rules from which to work, it should also become more productive than it is today. (I see you've now put up a post on Sean Carroll. Sweet!)
1. Interesting observation Steve! I’ve noticed a couple of interpretations for the Law of Large Numbers. One is that with enough trials, all sorts of implausible things eventually occur. The other seems more relevant however. It’s that the more times that you run a given experiment, the more statistically verified a given result will be. It’s essentially that all of these “random” results end up building a stronger and stronger case for a given figure. Is that what you meant?
I can see how it seems appropriate to apply this principle to quantum mechanics, given that we're discussing probability distributions for matter rather than exact states of being. But then again, my sense is that the LLN was set up to address everyday causal events rather than quantum events that are theorized not to function causally. Does it address quantum strangeness as well? Have you found an infinitely better challenge to Einstein than the utterly pathetic "Don't tell God what to do"? Is this a true answer, as in "God's dice create order"? This deserves some academic consideration!
I’d be surprised if something fully beyond causality in an ontological sense is able to then go on to construct the causal function observed in nature. Causality is kind of my thing. But I’d love for this theory to get out there as a challenge to us causalists.
1. While it's true that a large number of random events will yield some rare outliers as part of the ensemble, taken as a whole it leads to highly predictable results. It's the basis of statistical mechanics. Even in classical statistical mechanics, individual particles are assumed to behave randomly, but when the ensemble contains 10^23 particles, the values of pressure, temperature, etc. are entirely deterministic. My statistical mechanics lecturer at university joked that when very large numbers are involved, "it is better to gamble than to count."
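A back-of-the-envelope sketch of that scaling (assuming independent particles; the per-particle mean and spread are arbitrary units):

```python
# If N independent particles each contribute a random amount of energy
# with mean mu and standard deviation sigma, the total has mean N*mu and
# standard deviation sqrt(N)*sigma, so the *relative* fluctuation falls
# off as 1/sqrt(N).
import math

mu, sigma = 1.0, 1.0  # assumed per-particle values (arbitrary units)

for N in [100, 10**6, 10**23]:
    rel = (math.sqrt(N) * sigma) / (N * mu)
    print(f"N = {N:.0e}: relative fluctuation ~ {rel:.1e}")
```

At N ~ 10^23 the fluctuations are about 3e-12 of the mean, far below any instrument's resolution, which is why pressure and temperature look perfectly deterministic.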
Causality may be an illusion rather than an ontological fact.
2. I agree entirely with your former professor's observations, Steve, and indeed with the Law of Large Numbers as I believe it's traditionally been used. This is to say that if you do a single experiment a large number of times, it will continue to validate the same point in the end. And I also agree with that other interpretation. Even though a psychic may get a given prediction right, the LLN shall demonstrate the truth or falsity of this person's powers over time.
And why does the LLN remain solid? Because of causality itself. Without an ordered world where cause leads to associated effect, and conversely, it might be that the exact same experiment would not generally continue to provide the same sort of result. Or it might be that a human could indeed gain psychic powers and all sorts of "spooky" stuff. Causal order is required for the LLN to remain valid. Otherwise we'd need to count rather than gamble.
I suppose that this is why advocates of ontological voids in causality haven't yet tried to use the LLN to argue their case. Thus we instead get pedigreed snake-oil carnival hawkers like Sean Carroll. Apparently people love hearing this sort of thing.
(I haven’t yet found a mathematical proof that causality can’t emerge from non-causality, but perhaps I will.)
I prefer “emergent” to “illusion”, but it’s the same concept.
If it’s true that there is a fundamental uncertainty to QM function, then yes, the causality that we observe must emerge from non-causality. Or it could be that there is a causality which we don’t grasp here given that we erroneously perceive existence in terms of particles and waves.
Causality may not be the fundamental thing we take it to be.
Right. But a better way to say this might be that causality may or may not be absolute. Somehow your statement implies to me that we'd still call something "causal" even if it isn't. Or perhaps I'm being pedantic? You wouldn't term something "causal" if it weren't causally mandated to occur in the exact manner that it does, would you?
1. Eric,
If causality is emergent, that is, real but a composite process made up of lower-level processes which are not themselves causal, then I would use it in the same manner I use "temperature", "weather", or "molecule". Each of these things objectively exists, but is composed of things which are not that thing; in other words, they are composite phenomena.
The idea that causality is a composite phenomenon is very counter-intuitive, but then so are many things in science.
3. All true Mike, so apparently I was being pedantic there. If causality emerges from non-causality then it isn’t the fundamental thing that we take it for, similar to “molecule” and all the rest. But given our flawed perspectives I do still suspect that it’s fundamental in the end.
12. Great post, and a clear summary of the position. I (like most people) have problems with all the proposed solutions, and that is as it should be, since none of them are entirely persuasive. The most unconvincing commentators are those who argue passionately for one particular interpretation.
My gut feeling is that we are still missing a fundamental insight, and I hope this will emerge either through some new observation, or else a new theory. My instinct is that entanglement holds the key to unlocking the answer. Disclaimer – it may be that this is wrong, and that it is just me who is lacking the fundamental insight 🙂
1. Thanks Steve!
In recent decades, decoherence has become the preferred description of what happens when the wave appears to become a particle. Under that description, what actually happens is the wave becomes “entangled” with the environment. So your gut may be on to something!
It feels like all physicists can keep doing is testing the boundaries of this stuff until something unexpected comes up. After all, it was the necessity of dealing with bizarre observations that initially forced them to their current understanding of QM, such as it is. The answer probably lies in continuing to pile up those observations until something new emerges from the data, but that might take decades or centuries.
|
3e8f4cf07fb563a0 |
How to Film an Electron: the Chemistry of the Improbable
20 November 2018
This is an improbable story. Improbable because until less than a decade ago it seemed impossible to see how electrons move in a molecule, breaking and forming their bonds, in other words, pulling the threads of chemistry. In the subatomic world everything happens incredibly fast, on timescales measured in attoseconds, a billionth of a billionth of a second (that is, 10⁻¹⁸ seconds). And on that scale, a second is a virtually infinite amount of time.
Improbable also because to see and record the movement of something so small and so fast you need huge installations and supercomputers calculating for years. Improbable, in short, because a discovery rarely comes along that can change the way chemistry is practiced. This is, therefore, a story that requires a bold imagination.
In 2001, a technological breakthrough occurred that altered this improbability. Researchers at the Max Planck Institute of Quantum Optics, in Garching near Munich, used super-fast lasers to generate the first pulses of light that lasted just attoseconds. For humans this is an irrelevant interval of time, but those very brief instants are when electrons display their natural rhythm. For the first time, the necessary light source was available to “see” them and, perhaps, to record them.
The first attosecond camera
Eight years later, a team led by Fernando Martín, professor of chemistry and physics at the Autonomous University of Madrid, Marc Vrakking, director of the Max Born Institute in Berlin, and Mauro Nisoli, professor of physics at the Polytechnic of Milan, designed the first attosecond camera capable of seeing the movement of electrons in molecules. The first film gave us a real-time, intimate view of hydrogen, the simplest molecule in the universe.
The experiment was inspired by the camera that the Egyptian Nobel Prize winner Ahmed Zewail had designed to see the movement of the atomic nuclei, but with greater resolution. The camera works by sending a pulse of light lasting some attoseconds that irradiates a molecule and induces the movement of electrons. At intervals, also measured in attoseconds, another pulse of the same ultra-fast nature takes photographs of the movement, which are then projected in a concatenated way, creating the illusion of movement—like the train arriving at a station, which so amazed audiences of the first films of the Lumière brothers in 1896.
A look inside the molecules. Credit: UAM
“The difference with a normal movie is that to film something that moves in times as short as attoseconds, you have to take pictures with exposure times that are of the same order. Otherwise they would come out blurry,” explains Martín.
Overcoming the technical complexity (these lasers occupy the entire floor of a building and have thousands of pieces and optical devices) was made possible by combining the talents of the three scientists: Nisoli was a pioneer in the development of one of the first attosecond light pulses, Vrakking is an expert in molecular spectroscopy, and Martín leads one of the only two groups in the world capable of developing the visualisation tools needed, because the images that emerge from these cameras are not understandable on their own; they are only blurry spots.
“It’s a bit more complicated but the basic idea is the same as in 3D movies: if you don’t wear the special glasses you get at the cinema, the image looks blurred. We have to develop the equivalent of glasses to transform the images into something we understand,” Martín continues. These tools are obtained by solving the Schrödinger equation, which governs the atomic and subatomic world in the same way that Newton’s laws govern the macroscopic one. However, it is much more difficult to solve, especially in the case of molecules, and supercomputers are needed. Martín’s team used the MareNostrum supercomputer at the National Supercomputing Centre in Barcelona. The calculations took a year.
Controlling chemical reactions
Four years later, in 2014, Martín and Nisoli obtained the first film of a molecule with biological interest: phenylalanine, an essential amino acid. During the experiment, another unlikely effect appeared—in addition to seeing the movement of electrons in a more complex molecule, the scientists found that with these pulses of light they could, one might say, modify it at will. And now is when this story takes a turn that we can only imagine. Because the bonds between different atoms are broken or formed according to what the electrons say, if they moved in a different way they could break or form other bonds; in other words, the resulting chemistry could be completely different from what we know.
“The objective is to try to control chemical reactions at will, for example, forcing something to react because of a pulse of light that is going to change the movement of its electrons, or the opposite, that molecules that normally react spontaneously stop doing so,” explains Fernando Martín.
These discoveries are giving rise to a new way of doing chemistry based on the use of attosecond lasers and supercomputing, for which a term has already been coined: attochemistry. There is still much to be done before these techniques can reach the laboratories. Meanwhile, several groups are now investigating its application in condensed matter such as graphene, the new material that some say is going to change the world.
Eugenia Angulo
|
0807a9f7fc70011c |
Sometimes (often?) a structure depending on several parameters turns out to be symmetric w.r.t. interchanging two of the parameters, even though the definition gives a priori no clue of that symmetry.
As an example, I'm thinking of the Littlewood–Richardson coefficients: If defined by the skew Schur function $s_{\lambda/\mu}=\sum_\nu c^\lambda_{\mu\nu}s_\nu$, where the sum is over all partitions $\nu$ such that $|\mu|+|\nu|=|\lambda|$ and $s_{\lambda/\mu}$ itself is defined e.g. by $s_{\lambda/\mu}= \det(h_{\lambda_i-\mu_j-i+j})_{1\le i,j\le n}$, it is not at all straightforward to see from that definition that $c^\lambda_{\mu\nu} = c^\lambda_{\nu\mu}$.
Granted, this way of looking at it may seem a bit artificial, as I guess that in many such cases it is possible to come up with a "higher level" definition that shows the symmetry right away (e.g. in the above example, the usual (?) definition of $c_{\lambda\mu}^\nu$ via $s_\lambda s_\mu =\sum c_{\lambda\mu}^\nu s_\nu$), but showing the equivalence of both definitions may be more or less involved. So I am aware that it might just be a matter of "choosing the right definition". Therefore, maybe it would be better to think of the question as asking especially for cases where, historically, the symmetry of a certain structure was only established later, after defining or obtaining it in a different way first.
Another example that would fit here: the Perfect graph theorem, featuring a 'conceptual' symmetry between a graph and its complement.
What are other examples of "unexpected" or at least surprising symmetries?
(NB. The 'combinatorics' tag seemed the most obvious to me, but I won't be surprised if there are upcoming examples far away from combinatorics.)
Quadratic reciprocity. – Terry Tao Dec 13 '13 at 22:55
The relation between $\zeta(1-x)$ and $\zeta(x)$ for the Riemann $\zeta$ function. – Lev Borisov Dec 14 '13 at 2:26
Number of partitions of $n$ into no more than $k$ terms that are each no larger than $l$. The symmetry between $l$ and $k$ might not be immediately obvious to novices. – Yoav Kallus Dec 14 '13 at 2:46
The Peano definition of addition, even. – Joe Z. Dec 14 '13 at 2:56
I saw the title and my first thought was "Littlewood-Richardson coefficients". :) – darij grinberg Dec 14 '13 at 20:55
33 Answers
If $a$ and $b$ are positive integers, and you make the definition $$ a \cdot b = \underbrace{a + \cdots + a}_{b \text{ times} }$$ then it's a slightly surprising fact that $a \cdot b$ is actually equal to $b \cdot a$.
Indeed, this fails in general when $a,b$ are ordinals. – Terry Tao Dec 15 '13 at 4:51
It's even more surprising if you start with the inductive definitions of plus and times. The proof that $ab=ba$ comes as Proposition 72 in the first development of this theory, by Grassmann in 1861. – John Stillwell Jan 13 at 9:12
A nice example from classical mechanics is this: there is a hidden $SO(4)$ symmetry in the elliptical orbits of a particle under an inverse-square force, i.e. the Kepler problem.
The system has an obvious $SO(3)$ symmetry because the inverse square law is invariant under rotations. But there's no a priori clue that an $SO(4)$ symmetry exists in this system.
You can read about it here: http://math.ucr.edu/home/baez/classical/runge_pro.pdf
This carries over to the quantum mechanical case when you solve the Schrödinger equation for the hydrogen atom.
You can read about that here: http://hep.uchicago.edu/~rosner/p342/projs/weinberg.pdf
The result is that the hidden $SO(4)$ symmetry explains the "coincidence" that many hydrogen atom states have the same energy.
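A minimal numerical sketch of the conserved quantity behind that hidden symmetry (units $m = k = 1$, the initial condition, and the step size are arbitrary choices of mine): integrate a Kepler orbit and watch the Runge-Lenz vector $A = p \times L - mk\,\hat{r}$ stay fixed.

```python
import numpy as np

# Check that the Runge-Lenz vector is (nearly) conserved along a Kepler orbit;
# its conservation is what generates the hidden SO(4) symmetry. m = k = 1.

def deriv(state):
    x, y, px, py = state
    r3 = (x * x + y * y) ** 1.5
    return np.array([px, py, -x / r3, -y / r3])   # H = p^2/2 - 1/r

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def runge_lenz(state):
    x, y, px, py = state
    L = x * py - y * px                            # angular momentum (z-component)
    r = np.hypot(x, y)
    return np.array([py * L - x / r, -px * L - y / r])

state = np.array([1.0, 0.0, 0.0, 0.8])             # an elliptical initial condition
A0 = runge_lenz(state)
for _ in range(20000):                             # several full orbits
    state = rk4_step(state, 1e-3)
print("initial A:", A0)
print("final   A:", runge_lenz(state))             # agrees to integration error
```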
1. I think that if you put yourself back in the position of someone discovering this for the first time, the equality (under suitable hypotheses) $${\partial^2f\over\partial x\partial y}={\partial^2 f\over\partial y\partial x}\quad (1)$$ should count.
2. Here's a surprising application of that suprising equality. Suppose you're a profit-maximizing competitive firm, hiring both labor ($L$) (at a wage rate of $W$) and capital ($K$) (at a rental rate of $R$). Then an increase in $W$ will, in general, lead you to reduce your output and so employ less capital, but at the same time lead you to substitute capital for labor and so employ more capital. On balance, the derivative $dK/dW$ could be either positive or negative. Likewise for the derivative $dL/dR$. It does not seem to me to be at all intuitively obvious that these derivatives even have the same sign, much less that they are equal. But if one takes $f$ in (1) to be profit as a function of $x$ (labor) and $y$ (capital) then one discovers that in fact
$${dK\over dW}={dL\over dR}$$
(Of course this looks more symmetric if you write $X_1$ and $X_2$ for labor and capital, and $P_1$ and $P_2$ for the wage rate and the rental rate.)
Higher homotopy groups $\pi_n(X)$ are abelian. This is quite surprising if you are seeing the definition for the first time and have previously encountered the classical fundamental group, which is not abelian in general.
In fact, when they were introduced, higher homotopy groups were meant to generalize the fundamental group, in contrast to the abelian homology groups; but once it was recognized that they too are abelian, they no longer seemed such a satisfying generalization.
Rolling one surface on another without slipping binds the velocity of the rolling surface and its angular velocity, giving a rank 2 subbundle in the tangent bundle of the 5-dimensional space of tangential positionings of the 2 surfaces in space. This subbundle, when you roll one sphere on another, has an 8 dimensional symmetry group, unless one sphere has exactly one third the radius of the other sphere, in which case the subbundle is preserved by a 14 dimensional group of diffeomorphisms of the 5-dimensional manifold: the split real form of the simple Lie group $G_2$.
This subbundle is my favorite example of a non-integrable distribution (if the surfaces are "generic", at least) - you can physically see that rolling a sphere in an "infinitesimal square" on a plane makes the sphere rotate. – Peter Samuelson Dec 14 '13 at 15:29
Consider the Desargues configuration. It consists of (1) two triangles, say $ABC$ and $A'B'C'$ such that the lines $AA'$, $BB'$, and $CC'$ all meet at a point $P$, and (2) the three points of intersection of corresponding sides $X=(BC)\cap(B'C')$, $Y=(AC)\cap(A'C')$, and $Z=(AB)\cap(A'B')$. Desargues's theorem says that then $X$, $Y$, and $Z$ are collinear. The Desargues configuration consists of the 10 points mentioned above ($A,B,C,A',B',C',P,X,Y,Z$) and the 10 lines mentioned (the three sides of both triangles, the three lines through $P$, and the line $XYZ$). The surprising (to me) symmetry is an action of the cyclic group of order 5. In fact, the graph whose vertices are the 10 points of the Desargues configuration and whose edges join any two points that are not together on any of the configuration's 10 lines is the Petersen graph, which is usually drawn in a way that makes the cyclic 5-fold symmetry visible.
Have used Desargues for easily a hundred times in my schooldays and never realized this. I actually wasn't aware that the Petersen graph had any deeper meaning than that of a counterexample to some conjectures of days gone by. Nice!! – darij grinberg Dec 14 '13 at 21:01
Hermite's reciprocity: as representations of $GL_2$, we have $$ S^k(S^l\mathbb{C}^2)\simeq S^l(S^k\mathbb{C}^2). $$
The joint distribution of IID normal random variables is spherically symmetric.
Although invariance under permutations of the coordinates is obvious for any IID variables, spherical symmetry is rare. In fact, this characterizes the normal distribution.
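A quick empirical sketch (sample size and bin count are arbitrary choices of mine): the angle of an IID normal pair is uniform, while for an IID uniform pair, which is just as permutation-symmetric, it is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, bins = 200_000, 12

# IID standard normals: the angle is uniform, so all bins fill evenly.
xy = rng.standard_normal((n, 2))
theta = np.arctan2(xy[:, 1], xy[:, 0])
print(np.histogram(theta, bins=bins, range=(-np.pi, np.pi))[0])

# IID uniforms on [-1, 1]: permutation-symmetric but not rotation-symmetric;
# bins near the square's diagonals collect visibly more samples.
xy = rng.uniform(-1, 1, (n, 2))
theta = np.arctan2(xy[:, 1], xy[:, 0])
print(np.histogram(theta, bins=bins, range=(-np.pi, np.pi))[0])
```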
In fact, the "correct" definition of Littlewood-Richardson coefficients shows a surprising $S_3$-symmetry among all the indices $\lambda,\mu,\nu$. See http://arxiv.org/abs/0704.0817.
A further example related to symmetric functions is the symmetry between the area and bounce statistics of Dyck paths. See for instance Chapter 3 of http://www.math.upenn.edu/~jhaglund/books/qtcat.pdf. No combinatorial proof of symmetry is known.
There are many enumeration problems with "hidden symmetry." For instance, what is the probability that 1 and 2 are in the same cycle of a (uniform) random permutation of $1,2,\dots,n$? More interesting, suppose that I shuffle an ordinary deck of 26 red cards and 26 black cards. I turn the cards face up one at a time. At any point before the last card is dealt, you can guess that the next card is red. What strategy maximizes the probability of guessing correctly? The surprising answer is that all strategies have a probability of 1/2 of success! There is a very elegant way to see this.
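For the card problem, a Monte Carlo sketch (the two strategies and all names are illustrative choices of mine) shows every guessing rule landing at the same 1/2:

```python
import random

def play(deck, strategy):
    reds, blacks = 26, 26                 # counts remaining, next card included
    for i, card in enumerate(deck):
        if i == len(deck) - 1 or strategy(reds, blacks):
            return card == 'R'            # our one guess: "the next card is red"
        if card == 'R':
            reds -= 1
        else:
            blacks -= 1

strategies = {
    "guess immediately": lambda r, b: True,
    "wait for red surplus": lambda r, b: r > b,
}
trials = 100_000
for name, strat in strategies.items():
    deck = list('R' * 26 + 'B' * 26)
    wins = 0
    for _ in range(trials):
        random.shuffle(deck)
        wins += play(deck, strat)
    print(f"{name}: {wins / trials:.4f}")  # both land near 0.5000
```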
@StevenLandsburg: imagine the dealer turns over the bottom card of the deck when you guess, instead of the top one. Clearly this situation is symmetric to the one described above, but also clearly every strategy gives 50/50 odds as the outcome is determined before the game even starts. – Sam Hopkins Dec 14 '13 at 1:00
Can you fix the first link to point to the abstract rather than directly to the PDF? Thank you! – Harry Altman Dec 14 '13 at 18:11
From school days... Take positive reals x,y,z,w. The following statement is actually symmetric in x,y,z,w:
"there exists an equilateral triangle of side length w, and a point whose distances from the three vertices are x,y,z"
A quick proof: Let $ABC$ be equilateral and $P$ arbitrary. Construct $BPQ$ equilateral. Let $AB=AC=BC=w$, $AP=x$, $BP=y$ and $CP=z$. Then $BP=PQ=BQ=y$ by construction, $CP=z$ and $CB=w$ obviously, so it remains to check that $CQ=x$. Now note that triangle $CBQ$ is the $60^\circ$ rotation of $ABP$ around $B$.
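One concrete, manifestly symmetric face of this statement is the classical identity $3(x^4+y^4+z^4+w^4) = (x^2+y^2+z^2+w^2)^2$ relating the three distances and the side length; the identity is not stated above, so treat this as a supplementary numerical check:

```python
import numpy as np

rng = np.random.default_rng(1)
w = 2.0                                    # side of the equilateral triangle
A = np.array([0.0, 0.0])
B = np.array([w, 0.0])
C = np.array([w / 2.0, w * np.sqrt(3) / 2.0])
P = rng.uniform(-1.0, 3.0, 2)              # an arbitrary point in the plane

x, y, z = (np.linalg.norm(P - V) for V in (A, B, C))
lhs = 3 * (x**4 + y**4 + z**4 + w**4)
rhs = (x**2 + y**2 + z**2 + w**2) ** 2
print(lhs, rhs)                            # equal up to floating-point error
```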
A pedestrian definition of the rank of a matrix: the maximum number of linearly independent columns equals the maximum number of linearly independent rows.
The combinatorial definition of the Schur functions is $$ s_\lambda(x) = \sum_{T \in SSYT(\lambda)} x^{cont(T)} $$ where $SSYT(\lambda)$ is the set of semi-standard Young tableaux of shape $\lambda$ and $x^{cont(T)}$ is the product over all $i$ of $x_i^{\# i\text{'s in }T}$. This is not manifestly a symmetric function. The Bender-Knuth involution proves that $s_\lambda(x)$ is invariant after swapping $x_i$ with $x_{i+1}$, and thus $s_\lambda(x)$ is, indeed, symmetric.
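A brute-force sketch (the shape $(2,1)$ and three variables are arbitrary small choices of mine): enumerate SSYT fillings and check that the resulting polynomial is invariant under swapping $x_1 \leftrightarrow x_2$, as the Bender-Knuth involution guarantees.

```python
from itertools import product
from collections import Counter

shape = (2, 1)   # the partition lambda
n_vars = 3       # variables x1, x2, x3

def is_ssyt(rows):
    # rows weakly increase left to right; columns strictly increase downward
    for row in rows:
        if any(a > b for a, b in zip(row, row[1:])):
            return False
    for upper, lower in zip(rows, rows[1:]):
        if any(upper[j] >= lower[j] for j in range(len(lower))):
            return False
    return True

def schur_monomials(shape, n):
    poly = Counter()   # maps exponent vector -> coefficient
    for filling in product(range(1, n + 1), repeat=sum(shape)):
        rows, k = [], 0
        for length in shape:
            rows.append(filling[k:k + length])
            k += length
        if is_ssyt(rows):
            weight = tuple(filling.count(i) for i in range(1, n + 1))
            poly[weight] += 1
    return poly

p = schur_monomials(shape, n_vars)
swapped = Counter({(w[1], w[0], w[2]): c for w, c in p.items()})
print(p == swapped)   # True: invariant under x1 <-> x2
```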
And more startlingly (or at least far less obviously), the Stanley symmetric functions and their generalizations. – darij grinberg Jan 22 at 17:43
The outer automorphism of $S_6$.
This is a rather specialized example, but dear to my heart.
Consider the set of "Richardson subvarieties" of the flag manifold $GL_n/B$, intersections of Schubert and opposite Schubert varieties. The only part of the Weyl group that preserves this set is $\{1,w_0\}$ where the $w_0$ exchanges Schubert and opposite Schubert varieties.
Now project these varieties to a $k$-Grassmannian, obtaining "positroid varieties". This includes the Richardson varieties in the Grassmannian, and many new varieties.
Now the part of the Weyl group that preserves this collection is the dihedral group $D_n$! The symmetry has gotten bigger by a factor of $n$.
I always found $\mathrm{Tor}_R\left(M,N\right) \cong \mathrm{Tor}_R\left(N,M\right)$ for a commutative ring $R$ and two $R$-modules $M$ and $N$ to be mysterious. Then again I have no idea about homology and thus wouldn't be surprised if this is a triviality from an appropriate viewpoint.
Volker Strehl's generalized cyclotomic identity (Corollary 6 in Volker Strehl, Cycle counting for isomorphism types of endofunctions) states that $\prod\limits_{k\geq 1} \left(\dfrac{1}{1-az^k}\right)^{M_k\left(b\right)} = \prod\limits_{k\geq 1}\left(\dfrac{1}{1-bz^k}\right)^{M_k\left(a\right)}$ in the formal power series ring $\mathbb Q\left[\left[z,a,b\right]\right]$, where $M_k\left(t\right)$ denotes the $k$-th necklace polynomial $\dfrac{1}{k}\sum\limits_{d\mid k} \mu\left(d\right) t^{k/d}$. I recall this being not particularly difficult, but quite useful.
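A numeric sanity check of this identity by truncated power series (the order and the values $a = 2$, $b = 3$ are arbitrary choices of mine, as are the helper names):

```python
from math import comb

def mobius(n):
    # Moebius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def necklace(k, t):
    # M_k(t) = (1/k) * sum_{d | k} mu(d) * t^(k/d); an integer for integer t
    return sum(mobius(d) * t ** (k // d)
               for d in range(1, k + 1) if k % d == 0) // k

def series(a, b, N):
    # coefficients of prod_{k>=1} (1 - a z^k)^(-M_k(b)), truncated at z^N
    coeffs = [1] + [0] * N
    for k in range(1, N + 1):
        M = necklace(k, b)
        factor = [0] * (N + 1)
        factor[0] = 1
        for m in range(1, N // k + 1):
            factor[k * m] = comb(M + m - 1, m) * a ** m
        coeffs = [sum(coeffs[i] * factor[j - i] for i in range(j + 1))
                  for j in range(N + 1)]
    return coeffs

print(series(2, 3, 8) == series(3, 2, 8))  # True
```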
Every nontrivial commutativity of some family of operators probably qualifies as an unexpected symmetry. Here are three examples:
1. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $Y_i \in \mathbb Z\left[S_n\right]$ by $Y_i = \left(1,i\right) + \left(2,i\right) + ... + \left(i-1,i\right)$ (a sum of $i-1$ transpositions). Then, $Y_i Y_j = Y_j Y_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is a simple exercise, and the $Y_i$ are called the Young-Jucys-Murphy elements. (A brute-force check appears after this list.)
2. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{0,1,...,n\right\}$, define an element $\mathrm{Sch}_i \in \mathbb Z\left[S_n\right]$ as the sum of all permutations $\sigma \in S_n$ satisfying $\sigma\left(1\right) < \sigma\left(2\right) < ... < \sigma\left(i\right)$. (Note that $\mathrm{Sch}_0 = \mathrm{Sch}_1$ when $n\geq 1$.) Then, $\mathrm{Sch}_i \mathrm{Sch}_j = \mathrm{Sch}_j \mathrm{Sch}_i$ for all $i$ and $j$ in $ \left\{0,1,...,n\right\}$. In fact, $\mathrm{Sch}_i \mathrm{Sch}_j = \sum\limits_{k=0}^{\min\left\{n,i+j-n\right\}} \dbinom{n-j}{i-k} \dbinom{n-i}{j-k} \left(n+k-i-j\right)! \mathrm{Sch}_k$, which makes the symmetry maybe not that surprising (no similar equalities hold in cases 1 and 3!). See Manfred Schocker, Idempotents for derangement numbers, Discrete Mathematics, vol. 269 (2003), pp. 239-248 for a proof.
3. Consider the group ring $\mathbb Z\left[S_n\right]$ of the symmetric group $S_n$. For every $i\in \left\{1,2,...,n\right\}$, define an element $\mathrm{RSW}_i \in \mathbb Z\left[S_n\right]$ as
$\sum\limits_{1\leq u_1 < u_2 < ... < u_i\leq n} \sum\limits_{\substack{\sigma\in S_n, \\ \sigma\left(u_1\right) < \sigma\left(u_2\right) < ... < \sigma\left(u_i\right)}} \sigma$.
Then, $\mathrm{RSW}_i \mathrm{RSW}_j = \mathrm{RSW}_j \mathrm{RSW}_i$ for all $i$ and $j$ in $ \left\{1,2,...,n\right\}$. This is Theorem 1.1 in Victor Reiner, Franco Saliola, Volkmar Welker, Spectra of Symmetrized Shuffling Operators, arXiv:1102.2460v2, and a nice proof remains to be found.
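As promised above, a brute-force check of example 1 in $\mathbb{Z}[S_4]$ (a minimal sketch; permutations are stored as 0-indexed tuples, and the helper names are mine):

```python
from collections import Counter

n = 4  # work in the group algebra Z[S_4]

def compose(s, t):
    # (s t)(x) = s(t(x)); permutations as tuples of images of 0..n-1
    return tuple(s[t[i]] for i in range(n))

def transposition(i, j):
    p = list(range(n))
    p[i], p[j] = p[j], p[i]
    return tuple(p)

def multiply(u, v):
    # product of two group-algebra elements stored as Counter {perm: coeff}
    out = Counter()
    for s, cs in u.items():
        for t, ct in v.items():
            out[compose(s, t)] += cs * ct
    return out

def jucys_murphy(i):
    # Y_i = (0,i) + (1,i) + ... + (i-1,i) in 0-indexed notation
    return Counter({transposition(j, i): 1 for j in range(i)})

Y = [jucys_murphy(i) for i in range(1, n)]
print(all(multiply(Ya, Yb) == multiply(Yb, Ya)
          for Ya in Y for Yb in Y))  # True
```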
The Tor symmetry is basically just that $M \otimes N \cong N \otimes M$, and you take the derived functors of both sides. Generalizing, any and all nice properties of (co)homology groups would seem to be mysterious symmetries if you consider the definition to be messing around with projective or injective modules, and not something more intrinsic like derived functors. – Ryan Reich Dec 15 '13 at 5:14
Morley's trisector theorem allows you to build a triangle which is maximally symmetric out of one which has no symmetry at all.
Let $G$ be a finite group with order $n$. For each $d$ dividing $n$, the number of subgroups of $G$ of order $d$ equals the number of subgroups of order $n/d$ if $G$ is abelian. More broadly, the lattice of subgroups of a finite abelian group looks the same if you flip it around by 180 degrees.
This is not at all obvious at the level at which the statement can first be understood, essentially because there is no natural way to construct subgroups of index $d$ from subgroups of order $d$ in a general finite abelian group with order divisible by $d$. It is not clear at a beginning level how the commutativity of the group leads to such conclusions.
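A brute-force sketch for one small case (the group $\mathbb{Z}_2 \times \mathbb{Z}_4$ is an arbitrary choice of mine): enumerate all subgroups as closures of generating sets and compare the counts in complementary orders.

```python
from itertools import combinations
from collections import Counter

G = [(a, b) for a in range(2) for b in range(4)]  # Z_2 x Z_4, order 8

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 4)

def closure(gens):
    # smallest subset containing 0 and gens that is closed under +;
    # in a finite group this is automatically a subgroup
    S = {(0, 0)} | set(gens)
    changed = True
    while changed:
        changed = False
        for x in list(S):
            for y in list(S):
                z = add(x, y)
                if z not in S:
                    S.add(z)
                    changed = True
    return frozenset(S)

subgroups = {closure(gens)
             for r in range(len(G) + 1)
             for gens in combinations(G, r)}
by_order = Counter(len(H) for H in subgroups)
print(by_order)                                           # {1: 1, 2: 3, 4: 3, 8: 1}
print(all(by_order[d] == by_order[8 // d] for d in by_order))  # True
```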
A couple very disparate answers that spring to mind (fortunately, this is community wiki, and actual experts should feel very free to improve my exposition of either):
The negative gradient flow for the Chern-Simons functional on a 3-manifold $M$ naturally satisfies a four-dimensional symmetry. Namely, if one has a principal $G$-bundle on $M$ and some connection $A$ on this $G$-bundle (which I'll carelessly think of as a $\mathfrak{g}$-valued $1$-form on $M$), the Chern-Simons functional $CS(A) = \int_M \mathrm{tr}\Big( A \wedge dA + \frac{2}{3} A \wedge A \wedge A \Big)$ is a perfectly well-defined function on the space of connections, and one can attempt to perform the negative gradient flow with respect to a natural metric on this space of connections (this being a very natural thing to do from the point of view of Morse theory, for example). If you want, you can interpret the solution to this flow as a connection on the bundle pulled back to $M \times \mathbb{R}$, and while this connection clearly transforms nicely under $Diff(M)$, there's no particular reason to think it's a well-behaved object under the diffeomorphism group of the four-manifold $M \times \mathbb{R}$. However, this negative gradient flow equation turns out to be exactly the anti-self-dual equation $F^+ = 0$, where the curvature $F = dA + A \wedge A$ and its self-dual part is $F^+ = \frac{1}{2}(F + *F)$. This equation manifestly respects the symmetries of the entire four-manifold, and this point of view is a very effective one for proving even basic things, like gauge invariance, of the Chern-Simons functional. Witten is very fond of making this point and my understanding is that this insight allowed him to extend his QFT description of the Jones polynomial to a QFT description of its categorification, Khovanov homology.
And now for something completely different: associativity of the quantum cup product. A familiar object to many people is the cohomology ring $H^*(X)$ of a space $X$, which is associative, (graded) commutative, and just generally great. If $X$ is a symplectic manifold, there's an interesting way to deform the multiplication on this ring using counts of $J$-holomorphic curves passing through various cycles. In effect, one picks a compatible almost-complex structure on the symplectic manifold, and then if one writes $\alpha * \beta = \sum_{\gamma} c_{\alpha \beta \gamma} \gamma$, where we think of $\alpha, \beta, \gamma$ as cycles in $X$ (using Poincare duality), the coefficient $c_{\alpha \beta \gamma}$ is a generating function in some formal variables, the coefficients of which are counts of holomorphic curves of fixed genus and homology class intersecting our three cycles $\alpha, \beta, \gamma$. Using this deformed multiplication gives the quantum cohomology ring $QH^*(X)$. Now, some properties of this ring, like graded commutativity, are fairly easy to see from the definition, but associativity is really quite tricky! (I realise this isn't exactly what you asked in your question as it's not just a symmetry of some coefficient, but you can phrase associativity as a symmetry of something or other -- if you want to be technical, a four-point Gromov-Witten invariant -- so I think it qualifies.) The associativity is somehow not so bad to see in the algebro-geometric case (or perhaps this is just my bias as an algebraic geometer), but in symplectic geometry you really need some nontrivial analytic estimates at some point in the proof. And you get a lot out of it! Associativity of this quantum cohomology ring encapsulates a wealth of information on enumerative geometry counts associated to $X$; indeed, it was basically this idea that allowed Kontsevich to find his recursion for the number of rational degree $d$ curves through $3d - 1$ general points in $\mathbb{P}^2$.
Finally, I kind of want to mention strange duality, even though that now really isn't an answer to the question, as you have to modify one side or the other; I'll just copy a very quick summary from the abstract to arxiv.org/abs/math/0602018: "For X a compact Riemann surface of positive genus, the strange duality conjecture predicts that the space of sections of certain theta bundle on moduli of bundles of rank r and level k is naturally dual to a similar space of sections of rank k and level r." The paper itself is a great place to learn more about it if you're interested!
In number theory, Terry Tao already mentioned Quadratic Reciprocity in his first comment, but there's also the reciprocity formula $$ s(b,c) + s(c,b) = \frac1{12}\left( \frac{b}{c} + \frac1{bc} + \frac{c}{b} \right) - \frac14 $$ for Dedekind sums, symmetrized further in Rademacher's formula $$ D(a,b;c) + D(b,c;a) + D(c,a;b) = \frac1{12} \frac{a^2+b^2+c^2}{abc} - \frac14. $$ [Here $D(a,b;c) = \sum_{n\,\bmod\,c} ((an/c)) ((bn/c))$, where $((\cdot))$ is the sawtooth function taking $x$ to $0$ if $x \in {\bf Z}$ and to $x - \lfloor x \rfloor - 1/2$ otherwise; and the Dedekind sum is the special case $s(b,c) = D(1,b;c)$.]
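Both displayed formulas are easy to sanity-check in exact arithmetic (a minimal sketch; the moduli $5, 7$ and $3, 5, 7$ are arbitrary coprime choices of mine):

```python
from math import floor
from fractions import Fraction

def saw(x):
    # the sawtooth ((x)): 0 at integers, else x - floor(x) - 1/2
    return Fraction(0) if x.denominator == 1 else x - floor(x) - Fraction(1, 2)

def D(a, b, c):
    # D(a,b;c) = sum over n mod c of ((an/c)) ((bn/c))
    return sum(saw(Fraction(a * n, c)) * saw(Fraction(b * n, c))
               for n in range(c))

def s(b, c):
    return D(1, b, c)

b, c = 5, 7  # coprime
print(s(b, c) + s(c, b) ==
      Fraction(1, 12) * (Fraction(b, c) + Fraction(1, b * c) + Fraction(c, b))
      - Fraction(1, 4))   # True

a, b, c = 3, 5, 7  # pairwise coprime
print(D(a, b, c) + D(b, c, a) + D(c, a, b) ==
      Fraction(a * a + b * b + c * c, 12 * a * b * c) - Fraction(1, 4))  # True
```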
But I don't understand what is so special about this, at least in terms of symmetry: for about any function $s(\cdot,\cdot)$, including the Legendre symbol, $s(b,c)+s(c,b)$ or $s(b,c)s(c,b)$ is symmetric in $b$ and $c$. Where is the surprise? – Wolfgang Dec 18 '13 at 18:01
@Wolfgang asks a fair question. To add to Matt Young's answer, we can define $s'(b,c) = s(b,c) + 1/8 - b/12c - 1/24bc$, and then the reciprocity formula says that $s'(b,c)$ is antisymmetric: $s'(b,c) = -s'(c,b)$. – Noam D. Elkies Dec 18 '13 at 20:25
@NoamD.Elkies Granted. That reminds me of the relation between $\zeta(1-s)$ and $\zeta(s)$, cast as $\Xi(1-s)=\Xi(s)$ with appropriate $\Xi$. – Wolfgang Dec 19 '13 at 7:56
Here is an example from potential theory where symmetry is a not-so-obvious property: the Green function of a bounded open subset $\Omega \subset \mathbb{C}$. More precisely, having specified a point $a \in \Omega$, one defines the classical Green function for $\Omega$ with pole at $a$, $G_\Omega(\cdot\,; a)$, as a function on $\mathbb{C}$ with the following properties: (i) $G_\Omega(\cdot; a)$ is harmonic in $\Omega \setminus \{a\}$; (ii) $z \mapsto G_\Omega(z;a) + \log |z-a|$ extends to a harmonic function on $\Omega$; (iii) for each $w \in \partial \Omega$, $\lim_{z \to w} G_\Omega(z;a)=0$.
The symmetry property says that $G_\Omega(z;w)=G_\Omega(w;z)$ for any $z,w \in \Omega$ such that $z \ne w$. Note that the functions on either side of the equation are different: one has a pole at $w$ and the other at $z$. It is not very hard to prove the symmetry property, but it is not obvious either.
The existence of such a function is related to the solution of a Dirichlet problem for the Laplace equation in $\Omega$. Analogous functions can be considered for domains in $\mathbb{R}^n, \ n>2$ or in $\mathbb{C}^n, n > 1$, and they also enjoy the symmetry property.
Characters of affine Kac-Moody Lie algebras and of the Virasoro Lie algebra are modular forms. These modular symmetries are not at all evident from the definitions.
Consider a differential inequality, like the Hardy-Sobolev inequality $$\left|\int\int_{{\mathbb R}^N\times{\mathbb R^N}}\frac{\overline{f(x)}g(y)}{|x-y|^\lambda}dxdy\right|\leq C\|f\|_r\|g\|_s.$$ Even if you put the sharp constant $C$ in this inequality, for most functions the inequality is strict. Now look for maximizers, i.e., functions for which the LHS is equal to the RHS: they are highly symmetric functions, actually spherically symmetric and very smooth. This is a general phenomenon, connected with monotonicity of $L^p$ and Sobolev norms with respect to symmetrization procedures.
Maxwell's equations were originally formulated for Newtonian physics. However, special relativity found that these equations have a surprising symmetry under Lorentz transformations. The equations remain true in a moving reference frame. The transformation of the values is such that (loosely speaking) what looks like pure electric charge in one reference frame can be electric current and charge in another reference frame; and what looks like a pure electric field in one reference frame can be a mix of magnetic and electric fields in another reference frame.
See https://en.wikipedia.org/wiki/Covariant_formulation_of_classical_electromagnetism for a precise formulation.
Betti numbers: the symmetry $\dim(H^k(M^n))=\dim(H^{n-k}(M^n))$ does not immediately follow from the definition.
@DanielLitt, I know, I just don't want to deal with torsion, and for the purpose of this question Betti numbers' symmetry is sufficient. – Michael Dec 13 '13 at 21:23
My point is that the symmetry does not come from the Betti numbers, but from the space $M$; I don't think this is an example of what the question asks for. – Daniel Litt Dec 13 '13 at 23:40
There is a philosophy that the functional equation of a zeta function should be a consequence of Poincare duality on some exotic space. For zeta functions of varieties over finite fields, this was made rigorous in the 1960s, but over number fields it's still just a philosophy. So we have two non-obvious symmetries that are the same, but not obviously the same. In other words, we have a non-obvious symmetry between non-obvious symmetries. – JBorger Jan 12 at 19:01
Two (unrelated) examples from combinatorics:
The first is Proposition 7.19.9 of volume 2 of Stanley's "Enumerative Combinatorics." Define a descent of a (skew) Standard Young Tableau $T$ of shape $\lambda/\mu$ to be an index $i$ such that $i+1$ is in a lower row than $i$. Let $D(T)$ denote the set of descents of $T$. Then for any $|\lambda/\mu|=n$ and for any $1 \leq i \leq n-1$, the number of SYTs $T$ of shape $\lambda/\mu$ such that $i \in D(T)$ is independent of $i$.
The second follows from a bijection of De Médicis and Viennot (1994, Adv. Appl. Math.) Let $\mathcal{M}_n$ denote the set of perfect matchings of $[2n]$, i.e. the set of partitions of $[2n] := \{1,2,\ldots,2n\}$ into pairs. Let $M \in \mathcal{M}_n$. For $p = \{a,b\}, q = \{c,d\} \in M$ with $a<b$, $c<d$, and $a<c$, we say that $p$ and $q$ cross if $a < c < b< d$ and we say they nest if $a<c<d<b$. Finally, we say they are aligned if they neither cross nor nest, i.e., $a<b<c<d$. Define:
$\mathrm{ne}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ nest}\}|;$
$\mathrm{cr}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ cross}\}|;$
$\mathrm{al}(M):= |\{\{p,q\}\subset M\colon \textrm{$p$ and $q$ are aligned}\}|.$
Then $\sum_{M \in \mathcal{M}_n}x^{\mathrm{ne}(M)}y^{\mathrm{cr}(M)}=\sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{ne}(M)}$. However, crossings and alignments (or nestings and alignments) are not equidistributed: $\sum_{M \in \mathcal{M}_n}x^{\mathrm{al}(M)}y^{\mathrm{cr}(M)} \neq \sum_{M \in \mathcal{M}_n}x^{\mathrm{cr}(M)}y^{\mathrm{al}(M)}$.
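A brute-force sketch over the matchings of $[6]$ (the choice $n = 3$ keeps the enumeration tiny; all helper names are mine): the joint distribution of crossings and nestings is symmetric, while crossings and alignments already fail to be.

```python
from itertools import combinations
from collections import Counter

def matchings(points):
    # all perfect matchings of a sorted list of points
    if not points:
        yield []
        return
    a = points[0]
    for i in range(1, len(points)):
        rest = points[1:i] + points[i + 1:]
        for m in matchings(rest):
            yield [(a, points[i])] + m

def stats(M):
    cr = ne = al = 0
    for (a, b), (c, d) in combinations(M, 2):
        if a > c:
            (a, b), (c, d) = (c, d), (a, b)
        if a < c < b < d:
            cr += 1
        elif a < c < d < b:
            ne += 1
        else:
            al += 1
    return cr, ne, al

n = 3
dist = Counter(stats(M) for M in matchings(list(range(1, 2 * n + 1))))
print(all(dist[(x, y, z)] == dist[(y, x, z)] for x, y, z in dist))  # True: cr <-> ne
print(all(dist[(x, y, z)] == dist[(z, y, x)] for x, y, z in dist))  # False: cr <-> al
```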
The Jacobson radical of a ring $R$ is defined to be the intersection of all maximal left ideals in $R$. It turns out that the Jacobson radical is the intersection of all maximal right ideals in $R$ as well, so the Jacobson radical does not depend on whether one considers left or right ideals. In particular, the Jacobson radical of a ring is a two-sided ideal. In fact, there are several characterizations of the Jacobson radical that do not appear to be symmetric with respect to "leftness" and "rightness" including the following.
1. The intersection of all maximal left ideals.
2. $\bigcap\{\textrm{Ann}(M)|M\,\textrm{is a simple left}\,R-\textrm{module}\}$
3. $\{x\in R|1-rx\,\textrm{has a left inverse for each}\,r\in R\}$
4. $\{x\in R|1-rx\,\textrm{has a two-sided inverse for each}\,r\in R\}$
Let $r_4(n)$ be the number of $4$-tuples $a,b,c,d\in \bf Z$ satisfying $a^2+b^2+c^2+d^2=n$. Then $\sum_{n\geq 0}r_4(n)e^{2\pi i\, nz}dz$ is a holomorphic differential form on the upper half-plane that is invariant by a subgroup of finite index in ${\rm SL}_2(\bf Z)$ (acting by $\frac{az+b}{cz+d}$).
The same is true if you replace $r_4(n)$ by $a_n(E)$ where:
-- $E$ is an elliptic curve defined over $\bf Q$,
-- if $p$ is a prime number, $a_p(E)=p+1-N_p(E)$ and $N_p(E)$ is the number of points of $E$ in ${\bf Z}/p{\bf Z}$,
-- $a_n(E)$, for $n\in\bf N$, is defined by $\sum_n a_n(E)n^{-s}=\prod_p(1-a_p(E)p^{-s}+p^{1-2s})^{-1}$ (the product has to be taken over the prime numbers $p$ such that $E$ remains an elliptic curve modulo $p$ which excludes finitely many of them).
I would like to add an example coming from the area of additive theory known as Freiman's structure theory. If I am not (too) blind, this has not been mentioned yet, and hopefully it qualifies as an appropriate answer.
Assume that $\mathbb{A} = (A, +)$ is a (possibly non-commutative) semigroup, and let $X$ be a non-empty subset of $A$. Given an integer $n \ge 1$, we write $nX$ for $\{x_1+\cdots + x_n: x_1, \ldots, x_n \in X\}$. In principle, we have $1 \le |nX| \le |X|^n$, and for all $k \in \mathbb{N}^+$ and $i \in \{1, \ldots, k\}$ we can actually find a pair $(\mathbb{A}, X)$ such that $|X| = k$ and $|nX| = i$, with the result that, in general, not much can be concluded about the "structure" of $X$. However, if $|nX|$ is sufficiently small with respect to $|X|$ and $\mathbb{A}$ has suitable properties, then "surprising" things start happening, and for instance we have the following:
Theorem. If $\mathbb{A}$ is a linearly orderable semigroup (i.e., there exists a total order $\preceq$ on $A$ such that $x + z \prec y + z$ and $z + x \prec z + y$ for all $x,y,z \in A$ with $x \prec y$) and $|2X| \le 3|X|-3$, then the smallest subsemigroup of $\mathbb{A}$ containing $X$ is abelian.
This implies at once an analogous result by Freiman and coauthors which is valid for linearly ordered groups; see Theorem 1.2 in [F] (a preprint can be found here). I don't know of any similar result for larger values of $n$.
[F] G. Freiman, M. Herzog, P. Longobardi, and M. Maj, Small doubling in ordered groups, to appear in J. Austr. Math. Soc.
In the definition of "Latin square" there is complete symmetry between the roles of "row", "column" and "symbol", so that any of the 6 permutations of those roles produces another Latin square.
Let $a(m,n)$ be the number of partitions with no more than $m$ parts, each part (strictly) less than $n$, and the sum a multiple of $n$. Then $a(m,n)=a(n,m)$.
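A brute-force check (ranges kept small so the enumeration stays trivial; the encoding is my own choice):

```python
from itertools import product

def a(m, n):
    # partitions with at most m parts, each part < n, and sum divisible by n,
    # encoded as weakly decreasing tuples (p_1 >= ... >= p_m >= 0)
    return sum(1 for parts in product(range(n), repeat=m)
               if all(x >= y for x, y in zip(parts, parts[1:]))
               and sum(parts) % n == 0)

print(all(a(m, n) == a(n, m)
          for m in range(1, 6) for n in range(1, 6)))  # True
```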
|
e169a337668dca36 |
Section 8.2: The Quantum-mechanical Free-particle Solution
In order to tackle the free-particle problem, we begin with the Schrödinger equation in one dimension with V(x) = 0,
−(ħ²/2m) ∂²Ψ(x,t)/∂x² = iħ (∂/∂t)Ψ(x,t) . (8.2)
We can simplify the analysis somewhat by performing a separation of variables and therefore considering the time-independent Schrödinger equation:
−(ħ²/2m)(d²/dx²) ψ(x) = E ψ(x) , (8.3)
which we can rewrite as:
[(d²/dx²) + k²] ψ(x) = 0 , (8.4)
where k² ≡ 2mE/ħ². We find the solutions to the equation above are of the form ψ(x) = A exp(ikx), where we allow k to take both positive and negative values.2 Unlike a bound-state problem, such as the infinite square well, there are no boundary conditions to restrict the k and therefore the E values. However, each plane wave has a definite k value and therefore a definite momentum (and also a definite energy), since pψ(x) = −iħ (d/dx) ψ(x) = ħk ψ(x), again with k taking on both positive and negative values (so that pψ(x) = ±ħ|k| ψ(x)). The time dependence is now straightforward from the Schrödinger equation:
iħ (∂/∂t) Ψ(x,t) = EΨ(x,t) , (8.6)
or by acting with the time-evolution operator, U_T(t) = exp(−iHt/ħ), on ψ(x). Both procedures yield ψ(x,t) = A exp(ikx − iEt/ħ) (again k can take positive and negative values), and since E = p²/2m = ħ²k²/2m, we also have that
ψ(k > 0)(x,t) = A exp(ikx − iħk²t/2m)  or  ψ(k < 0)(x,t) = A exp(−i|k|x − iħk²t/2m), (8.7)
where ħk²/2m ≡ ω. These solutions describe both right-moving (k > 0) and left-moving (k < 0) plane waves. Recall that solutions to the classical wave equation are of the form f(kx ∓ ωt) for a wave moving to the right (−) or left (+). These quantum-mechanical plane waves, however, are complex functions and can be written in the form f(±|k|x − ωt).
In the animation, ħ = 2m = 1. A right-moving plane wave is represented in terms of its amplitude and phase (as color) and also its real, cos(kx − ħk²t/2m), and imaginary, sin(kx − ħk²t/2m), parts.
What is the velocity of this wave? If this were a classical free particle with non-relativistic velocity, E = mv²/2 = p²/2m and v_classical = p/m as expected. But what about our solution? The velocity of our wave is ω/k, which gives ħk/2m = p/2m, half of the expected (classical) velocity! This velocity is the phase velocity. If instead we consider the group velocity,
vg = ∂ω/∂k, (8.8)
we find that
vg = ∂(ħk²/2m)/∂k = ħk/m, (8.9)
the expected (classical) velocity.
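A standalone numerical sketch in the same units as the animation (ħ = 2m = 1, so ω = k²; the packet width and grid sizes are arbitrary choices): superpose plane waves with amplitudes peaked around k0 and track the envelope maximum, which moves at the group velocity 2k0 rather than the phase velocity k0.

```python
import numpy as np

k0, sigma = 5.0, 0.5
k = np.linspace(k0 - 5 * sigma, k0 + 5 * sigma, 401)
g = np.exp(-((k - k0) ** 2) / (2 * sigma ** 2))   # spectral amplitudes
x = np.linspace(-5.0, 25.0, 2001)

def psi(t):
    # psi(x,t) = sum over k of g(k) exp(i(kx - omega t)), omega = k^2
    return np.exp(1j * (np.outer(x, k) - k ** 2 * t)) @ g

t0, t1 = 0.0, 2.0
peak0 = x[np.argmax(np.abs(psi(t0)))]
peak1 = x[np.argmax(np.abs(psi(t1)))]
print("envelope speed:", (peak1 - peak0) / (t1 - t0))  # close to 2*k0 = 10
print("phase velocity at k0:", k0)                     # half as large
```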
Consider the right-moving wave,
ψ(x,t) = A exp(ikx − iħk²t/2m) ,
which has a definite momentum, p = ħk. We notice that the amplitude of the wave function, A, is a finite constant over all space. However, we also find that ∫ ψ*ψ dx = ∞ [integral from −∞ to +∞] even though ψ*ψ = |A|² is finite. While the plane wave is a definite-momentum solution to the Schrödinger equation, it is not a localized solution. In this case, then, we must discard, or somehow modify, these solutions if we want a localized and normalized free-particle description.3
2The most general solution to the differential equation is
ψ(x) = A exp(ikx) + B exp(−ikx) (8.5)
with k values positive.
3We can, however, box normalize the wave function. In box normalization, we require the wave function to have unit norm over a finite region of space.
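For instance (a worked example under the footnote's convention, assuming a box of length L): requiring ∫ ψ*ψ dx = |A|²L = 1 [integral from 0 to L] fixes the amplitude at A = 1/√L, so the box-normalized plane wave is ψ(x) = (1/√L) exp(ikx).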
|
9d98e4822fbc95cf | 2007 Schools Wikipedia Selection. Related subjects: General Physics
Physics (from the Greek, φύσις (phúsis), "nature" and φυσική (phusiké), "knowledge of nature") is the science concerned with the discovery and understanding of the fundamental laws which govern matter, energy, space and time. That is, physics deals with the elementary constituents of the Universe and their interactions, as well as the analysis of systems which are best understood in terms of these fundamental principles. Physics is a study of the inorganic, physical world, as opposed to the organic world of biology, physiology, etc. Chemistry concerning the electro-chemical interactions of substances overlaps with physics.
Physics attempts to describe the natural world by the application of the scientific method, including modelling by theoreticians. Formerly, physics was part of natural philosophy, which from classical times up to the separation of physics from philosophy as a positive science in the 19th century was itself called "physics" (earlier physike): the philosophical study of the changing world. Mixed questions, whose solutions can be attempted through the application of both disciplines (e.g. the divisibility of the atom), can involve natural philosophy in physics (the science) and vice versa.
Connected Studies
Many other sciences and fields of thought are related to physics.
Discoveries in physics find connections throughout the other natural sciences as they regard the basic constituents of the Universe. Some of the phenomena studied in physics, such as the phenomenon of conservation of energy, are common to all material systems. These are often referred to as laws of physics. Other phenomena, such as superconductivity, stem from these laws, but are not laws themselves because they only appear in some systems. Physics is often said to be the "fundamental science", because each of the other sciences (biology, chemistry, geology, physiology, archaeology, anthropology, etc.) deals with particular types of material systems that obey the laws of physics. For example, chemistry is the science of matter (such as atoms and molecules) and the chemical substances that they form in the bulk. The structure, reactivity, and properties of a chemical compound are determined by the properties of the underlying molecules, which can be described by areas of physics such as quantum mechanics (called in this case quantum chemistry), thermodynamics, and electromagnetism. (Refer to Branches of physics)
Physics relies on mathematics, which provides the logical framework in which physical laws can be precisely formulated and their predictions quantified. Physical definitions, models and theories are invariably expressed using mathematical relations. There is a large area of research intermediate between physics and mathematics, known as mathematical physics.
Physics is also closely related to engineering and technology. For instance, electrical engineering is the study of the practical application of electromagnetism. Statics, a subfield of mechanics, is responsible for the building of bridges. Further, physicists, or practitioners of physics, invent and design processes and devices, such as the transistor, whether in basic or applied research. Experimental physicists design and perform experiments with particle accelerators, nuclear reactors, telescopes, barometers, synchrotrons, cyclotrons, spectrometers, lasers, and other equipment.
Beyond the known Universe, the field of theoretical physics also deals with hypothetical issues, such as parallel universes, a multiverse, or whether the universe could have expanded as predominantly antimatter rather than matter.
Branches of physics
Physicists study a wide range of physical phenomena, from quarks to black holes, from individual atoms to the many-body systems of superconductors.
Central theories
While physics deals with a wide variety of systems, there are certain theories that are used by all physicists. Each of these theories was experimentally tested numerous times and found correct as an approximation of nature (within a certain domain of validity). For instance, the theory of classical mechanics accurately describes the motion of objects, provided they are much larger than atoms and moving at much less than the speed of light. These theories continue to be areas of active research; for instance, a remarkable aspect of classical mechanics known as chaos was discovered in the 20th century, three centuries after the original formulation of classical mechanics by Isaac Newton (1642–1727). These "central theories" are important tools for research into more specialized topics, and any physicist, regardless of his or her specialization, is expected to be literate in them.
• Classical mechanics is a model of the physics of forces acting upon bodies. It is often referred to as "Newtonian mechanics" after Newton and his laws of motion. Classical mechanics is subdivided into statics (which models objects at rest), kinematics (which models objects in motion), and dynamics (which models objects subjected to forces). See also mechanics.
• Electromagnetism, or electromagnetic theory, is the physics of the electromagnetic field: a field, encompassing all of space, which exerts a force on those particles that possess the property of electric charge, and is in turn affected by the presence and motion of such particles. Electromagnetism encompasses various real-world electromagnetic phenomena.
• Thermodynamics is the branch of physics that deals with the action of heat and the conversions from one to another of various forms of energy. Thermodynamics is particularly concerned with how these affect temperature, pressure, volume, mechanical action, and work. Historically, it grew out of efforts to construct more efficient heat engines — devices for extracting useful work from expanding hot gases.
• Statistical mechanics, a related theory, is the branch of physics that analyzes macroscopic systems by applying statistical principles to their microscopic constituents and, thus, can be used to calculate the thermodynamic properties of bulk materials from the spectroscopic data of individual molecules.
• Quantum mechanics is the branch of mathematical physics treating atomic and subatomic systems and their interaction with radiation in terms of observable quantities. It is based on the observation that all forms of energy are released in discrete units or bundles called quanta. Quantum theory typically permits only probable or statistical calculation of the observed features of subatomic particles, understood in terms of wave functions.
• The theory of relativity, or relativity theory, is:
• A physical theory which is based on two postulates: (1) that the speed of light in a vacuum is constant and independent of the source or observer, and (2) that it is impossible to determine one's absolute velocity in any inertial system; it leads to the deduction of the equivalence of mass and energy and of the change in mass, dimension, and time with increased velocity (also called special relativity or the special theory of relativity);
• An extension of the theory to include gravitation and related acceleration phenomena (also called general relativity or the general theory of relativity).
• Classical mechanics. Major subtopics: Newton's laws of motion, Lagrangian mechanics, Hamiltonian mechanics, Kinematics, Statics, Dynamics, Chaos theory, Acoustics, Fluid dynamics, Continuum mechanics. Concepts: Density, Dimension, Gravity, Space, Time, Motion, Length, Position, Velocity, Acceleration, Mass, Momentum, Force, Energy, Angular momentum, Torque, Conservation law, Harmonic oscillator, Wave, Work, Power
• Electromagnetism. Major subtopics: Electrostatics, Electrodynamics, Electricity, Magnetism, Maxwell's equations, Optics. Concepts: Capacitance, Electric charge, Current, Electrical conductivity, Electric field, Electric permittivity, Electric potential, Electrical resistance, Electromagnetic field, Electromagnetic induction, Electromagnetic radiation, Gaussian surface, Magnetic field, Magnetic flux, Magnetic monopole, Magnetic permeability
• Thermodynamics and Statistical mechanics. Major subtopics: Heat engine, Kinetic theory. Concepts: Boltzmann's constant, Conjugate variables, Enthalpy, Entropy, Equation of state, Equipartition theorem, Free energy, Heat, Ideal gas law, Internal energy, Laws of thermodynamics, Irreversible process, Ising model, Mechanical action, Partition function, Pressure, Reversible process, Spontaneous process, State function, Statistical ensemble, Temperature, Thermodynamic equilibrium, Thermodynamic potential, Thermodynamic processes, Thermodynamic state, Thermodynamic system, Viscosity, Volume, Work
• Quantum mechanics. Major subtopics: Path integral formulation, Scattering theory, Schrödinger equation, Quantum field theory, Quantum statistical mechanics. Concepts: Adiabatic approximation, Blackbody radiation, Correspondence principle, Free particle, Hamiltonian, Hilbert space, Identical particles, Matrix Mechanics, Planck's constant, Observer effect, Operators, Quanta, Quantization, Quantum entanglement, Quantum harmonic oscillator, Quantum number, Quantum tunneling, Schrödinger's cat, Dirac equation, Spin, Wavefunction, Wave mechanics, Wave-particle duality, Zero-point energy, Pauli Exclusion Principle, Heisenberg Uncertainty Principle
• Theory of relativity. Major subtopics: Special relativity, General relativity, Einstein field equations. Concepts: Covariance, Einstein manifold, Equivalence principle, Four-momentum, Four-vector, General principle of relativity, Geodesic motion, Gravity, Gravitoelectromagnetism, Inertial frame of reference, Invariance, Length contraction, Lorentzian manifold, Lorentz transformation, Mass-energy equivalence, Metric, Minkowski diagram, Minkowski space, Principle of Relativity, Proper length, Proper time, Reference frame, Rest energy, Rest mass, Relativity of simultaneity, Spacetime, Special principle of relativity, Speed of light, Stress-energy tensor, Time dilation, Twin paradox, World line
Major fields of physics
• Condensed matter physics, by most estimates the largest single field of physics, is concerned with how the properties of bulk matter, such as the ordinary solids and liquids we encounter in everyday life, arise from the properties and mutual interactions of the constituent atoms. (A magnet levitating above a high-temperature superconductor, with boiling liquid nitrogen underneath, demonstrates the Meissner effect, a phenomenon of central importance to condensed matter physics.)
• The field of atomic, molecular, and optical physics deals with the behaviour of individual atoms and molecules, and in particular the ways in which they absorb and emit light.
• Finally, the field of astrophysics applies the laws of physics to explain celestial phenomena, ranging from the Sun and the other objects in the solar system to the Universe as a whole.
Since the 20th century, the individual fields of physics have become increasingly specialized, and nowadays it is not uncommon for physicists to work in a single field for their entire careers. "Universalists" like Albert Einstein (1879–1955) and Lev Landau (1908–1968), who were comfortable working in multiple fields of physics, are now very rare.
Many fields and subfields of physics are listed in the table below.
• Astrophysics. Subfields: Cosmology, Gravitation physics, High-energy astrophysics, Planetary astrophysics, Plasma physics, Space physics, Stellar astrophysics. Major theories: Big Bang, Lambda-CDM model, Cosmic inflation, General relativity, Law of universal gravitation. Concepts: Black hole, Cosmic background radiation, Cosmic string, Cosmos, Dark energy, Dark matter, Galaxy, Gravity, Gravitational radiation, Gravitational singularity, Planet, Solar system, Star, Supernova, Universe
• Atomic, molecular, and optical physics. Subfields: Atomic physics, Molecular physics, Atomic and Molecular astrophysics, Chemical physics, Optics, Photonics. Major theories: Quantum optics, Quantum chemistry, Quantum information science. Concepts: Atom, Molecule, Diffraction, Electromagnetic radiation, Laser, Polarization, Spectral line, Casimir effect
• Particle physics. Subfields: Nuclear physics, Nuclear astrophysics, Particle astrophysics, Particle physics phenomenology. Major theories: Standard Model, Quantum field theory, Quantum chromodynamics, Electroweak theory, Effective field theory, Lattice field theory, Lattice gauge theory, Gauge theory, Supersymmetry, Grand unification theory, Superstring theory, M-theory. Concepts: Fundamental force (gravitational, electromagnetic, weak, strong), Elementary particle, Spin, Antimatter, Spontaneous symmetry breaking, Brane, String, Quantum gravity, Theory of everything, Vacuum energy
• Condensed matter physics. Subfields: Solid state physics, High pressure physics, Low-temperature physics, Nanoscale and Mesoscopic physics, Polymer physics. Major theories: BCS theory, Bloch wave, Fermi gas, Fermi liquid, Many-body theory. Concepts: Phases (gas, liquid, solid, Bose-Einstein condensate, superconductor, superfluid), Electrical conduction, Magnetism, Self-organization, Spin, Spontaneous symmetry breaking
Classical, quantum and modern physics
Since the construction of quantum mechanics in the early twentieth century, it generally became evident to the physical community that it would be preferable for many known descriptions of nature to be quantized, that is, to follow the postulates of quantum mechanics. To this effect, all results that are not quantized are called classical: this includes the Special Theory and General Theory of Relativity. Simply because a result is classical does not mean that it was discovered before the advent of quantum mechanics. Classical theories are, generally, much easier to work with, and much research is still being conducted on them without the express aim of quantization. However, there exist problems in physics in which classical and quantum aspects must be combined to attain some approximation or limit; this passage from classical to quantum mechanics can take several forms and is often difficult. Such problems are termed semiclassical.
However, because relativity and quantum mechanics provide the most complete known description of fundamental interactions, and because the changes brought by these two frameworks to the physicist's world view were revolutionary, the term modern physics is used to describe physics which relies on these two theories. Colloquially, modern physics can be described as the physics of extremes: from systems at the extremely small (atoms, nuclei, fundamental particles) to the extremely large (the Universe) and of the extremely fast (relativity).
Theoretical and experimental physics
The culture of physics research differs from that of the other sciences in the separation of theory and experiment. Since the 20th century, most individual physicists have specialized in either theoretical physics or experimental physics. The great Italian physicist Enrico Fermi (1901–1954), who made fundamental contributions to both theory and experiment in nuclear physics, was a notable exception. In contrast, almost all the successful theorists in biology and chemistry (e.g. American quantum chemist and biochemist Linus Pauling) have also been experimentalists, though this has begun to change.
Roughly speaking, theorists seek to develop through abstractions and mathematical models theories that can both describe and interpret existing experimental results and successfully predict future results, while experimentalists devise and perform experiments to explore new phenomena and test theoretical predictions. Although theory and experiment are developed separately, they are strongly dependent on each other. However, theoretical research in physics may further be considered to draw from mathematical physics and computational physics in addition to experimentation. Progress in physics frequently comes about when experimentalists make a discovery that existing theories cannot account for, necessitating the formulation of new theories. Likewise, ideas arising from theory often inspire new experiments. In the absence of experiment, theoretical research can go in the wrong direction; this is one of the criticisms that has been leveled against M-theory, a popular theory in high-energy physics for which no practical experimental test has ever been devised.
Discredited theories
Scientific theories sometimes end up being discredited or superseded. In some of these cases the theory was announced prematurely and gained press attention before being discredited. Other times an established theory is overthrown and a new one erected in its place. Some famous examples are:
• Dynamic theory of gravity — Announced in a press release by Nikola Tesla in 1937 but never published.
• Steady state theory — An established theory of cosmology in the early and middle 20th century, made obsolete by the success of Big Bang theory.
• Luminiferous aether — An established theory in the late 19th century, which was contradicted by observations and made "superfluous" by relativity.
• Cold fusion — Announced in a press conference in 1989 but never confirmed. Still controversial.
• Phlogiston theory — An established theory of the 18th century that attributed combustion to the liberation of phlogiston from a material.
Phenomenology is intermediate between experiment and theory. It is more abstract and includes more logical steps than experiment, but is more directly tied to experiment than theory. The boundaries between theory and phenomenology, and between phenomenology and experiment, are somewhat fuzzy and to some extent depend on the understanding and intuition of the scientist describing them. An example is Einstein's 1905 paper on the photoelectric effect, "On a Heuristic Viewpoint Concerning the Production and Transformation of Light".
Applied physics
Applied physics is physics that is intended for a particular technological or practical use, as for example in engineering, as opposed to basic research. This approach is similar to that of applied mathematics. Applied physics is rooted in the fundamental truths and basic concepts of the physical sciences but is concerned with the utilization of scientific principles in practical devices and systems, and in the application of physics in other areas of science. "Applied" is distinguished from "pure" by a subtle combination of factors such as the motivation and attitude of researchers and the nature of the relationship to the technology or science that may be affected by the work.
Branches of Applied Physics
Accelerator physics, Acoustics, Agrophysics, Biophysics, Chemical Physics, Communication Physics, Econophysics, Engineering physics, Fluid dynamics, Geophysics, Materials physics, Medical physics, Nanotechnology, Optics, Optoelectronics, Photovoltaics, Physical chemistry, Physics of computation, Plasma physics, Solid-state devices, Quantum chemistry, Quantum electronics, Quantum information science, Vehicle dynamics
Since antiquity, people have tried to understand the behaviour of matter: why unsupported objects drop to the ground, why different materials have different properties, and so forth. The character of the Universe was also a mystery, for instance the Earth and the behaviour of celestial objects such as the Sun and the Moon. Several theories were proposed, most of which were wrong. These first theories were largely couched in philosophical terms, and never verified by systematic experimental testing as is standard today. The works of Ptolemy and Aristotle, moreover, were not always found to match everyday observations. There were exceptions, and there are anachronisms: for example, Indian philosophers and astronomers gave many correct descriptions in atomism and astronomy, and the Greek thinker Archimedes derived many correct quantitative descriptions of mechanics and hydrostatics.
The willingness to question previously held truths and search for new answers eventually resulted in a period of major scientific advancements, now known as the Scientific Revolution of the late 17th century. The precursors to the scientific revolution can be traced back to the important developments made in India and Persia, including the elliptical model of the planets based on the heliocentric solar system of gravitation developed by Indian mathematician-astronomer Aryabhata; the basic ideas of atomic theory developed by Hindu and Jaina philosophers; the theory of light being equivalent to energy particles developed by the Indian Buddhist scholars Dignāga and Dharmakirti; the optical theory of light developed by Muslim scientist Ibn al-Haitham (Alhazen); the Astrolabe invented by the Persian astronomer Muhammad al-Fazari; and the significant flaws in the Ptolemaic system pointed out by Persian scientist Nasir al-Din Tusi.
As the influence of the Arab Empire expanded to Europe, the works of Aristotle preserved by the Arabs, and the works of the Indians and Persians, became known in Europe by the 12th and 13th centuries. This eventually led to the scientific revolution which culminated with the publication of the Philosophiae Naturalis Principia Mathematica in 1687 by the mathematician, physicist, alchemist and inventor Sir Isaac Newton (1643–1727).
The Scientific Revolution is held by most historians (e.g., Howard Margolis) to have begun in 1543, when the first printed copy of Nicolaus Copernicus's De Revolutionibus (most of which had been written years prior but whose publication had been delayed) was brought to the influential Polish astronomer from Nuremberg.
Further significant advances were made over the following century by Galileo Galilei, Christiaan Huygens, Johannes Kepler, and Blaise Pascal. During the early 17th century, Galileo pioneered the use of experimentation to validate physical theories, which is the key idea in modern scientific method. Galileo formulated and successfully tested several results in dynamics, in particular the Law of Inertia. In 1687, Newton published the Principia, detailing two comprehensive and successful physical theories: Newton's laws of motion, from which arise classical mechanics; and Newton's Law of Gravitation, which describes the fundamental force of gravity. Both theories agreed well with experiment. The Principia also included several theories in fluid dynamics. Classical mechanics was re-formulated and extended by Leonhard Euler, French mathematician Joseph-Louis Comte de Lagrange, Irish mathematical physicist William Rowan Hamilton, and others, who produced new results in mathematical physics. The law of universal gravitation initiated the field of astrophysics, which describes astronomical phenomena using physical theories.
In 1821, the English physicist and chemist Michael Faraday began integrating the study of magnetism with the study of electricity; in 1831 he demonstrated that a moving magnet induces an electric current in a conductor. Faraday also formulated a physical conception of electromagnetic fields. James Clerk Maxwell built upon this conception, in 1864, with an interlinked set of 20 equations that explained the interactions between electric and magnetic fields. These 20 equations were later reduced, using vector calculus, to a set of four equations by Oliver Heaviside.
One part of the theory of general relativity is Einstein's field equation. This describes how the stress-energy tensor creates curvature of spacetime and forms the basis of general relativity. Further work on Einstein's field equation produced results which predicted the Big Bang, black holes, and the expanding universe. Einstein believed in a static universe and tried (and failed) to fix his equation to allow for this. However, by 1929 Edwin Hubble's astronomical observations suggested that the universe is expanding.
In 1900, Max Planck published his explanation of blackbody radiation. This equation assumed that radiators are quantized, which proved to be the opening argument in the edifice that would become quantum mechanics. By introducing discrete energy levels, Planck, Einstein, Niels Bohr, and others developed quantum theories to explain various anomalous experimental results. Quantum mechanics was formulated in 1925 by Heisenberg and in 1926 by Schrödinger and Paul Dirac, in two different ways that both explained the preceding heuristic quantum theories. In quantum mechanics, the outcomes of physical measurements are inherently probabilistic; the theory describes the calculation of these probabilities. It successfully describes the behaviour of matter at small distance scales. During the 1920s Schrödinger, Heisenberg, and Max Born were able to formulate a consistent picture of the chemical behaviour of matter, a complete theory of the electronic structure of the atom, as a byproduct of the quantum theory.
Chen Ning Yang and Tsung-Dao Lee, in the 1950s, predicted an unexpected asymmetry (parity violation) in the decay of subatomic particles. In 1954, Yang and Robert Mills developed a class of gauge theories which provided the framework for understanding the nuclear forces (Yang, Mills 1954). The theory for the strong nuclear force was first proposed by Murray Gell-Mann. The electroweak force, the unification of the weak nuclear force with electromagnetism, was proposed by Sheldon Lee Glashow, Abdus Salam and Steven Weinberg; in 1964, James Watson Cronin and Val Fitch discovered CP violation in the decays of neutral kaons. This work led to the so-called Standard Model of particle physics in the 1970s, which successfully describes all the elementary particles observed to date.
Quantum mechanics also provided the theoretical tools for condensed matter physics, whose largest branch is solid state physics. It studies the physical behaviour of solids and liquids, including phenomena such as crystal structures, semiconductivity, and superconductivity. The pioneers of condensed matter physics include Felix Bloch, who created a quantum mechanical description of the behaviour of electrons in crystal structures in 1928. The transistor was developed by physicists John Bardeen, Walter Houser Brattain and William Bradford Shockley in 1947 at Bell Telephone Laboratories.
The United Nations declared the year 2005, the centenary of Einstein's annus mirabilis, as the World Year of Physics.
Future directions
Thousands of particles explode from the collision point of two relativistic (100 GeV per nucleon) gold ions in the STAR detector of the Relativistic Heavy Ion Collider; an experiment done in order to investigate the properties of a quark gluon plasma such as the one thought to exist in the ultrahot first few microseconds after the big bang.
Although much progress has been made in high-energy, quantum, and astronomical physics, many everyday phenomena involving complexity, chaos, or turbulence are still poorly understood. Complex problems that seem like they could be solved by a clever application of dynamics and mechanics, such as the formation of sandpiles, nodes in trickling water, the shape of water droplets, mechanisms of surface tension catastrophes, or self-sorting in shaken heterogeneous collections, remain unsolved. These complex phenomena have received growing attention since the 1970s for several reasons, not least of which has been the availability of modern mathematical methods and computers, which enabled complex systems to be modeled in new ways. The interdisciplinary relevance of complex physics has also increased, as exemplified by the study of turbulence in aerodynamics or the observation of pattern formation in biological systems. In 1932, Horace Lamb correctly prophesied the success of the theory of quantum electrodynamics and the near-stagnant progress in the study of turbulence.
2ad2da962ea3db52 |
The Equations: Icons of Knowledge
Sander Bais
Harvard University Press
Reviewed by David M. Bressoud
The world is awash in popularizations of modern physics. Here is one more. The author writes clearly and incisively about the physical ideas, but so have other good popularizers. Bais has chosen an original means of setting himself apart. Most books of this genre take it as axiomatic that they must avoid equations. Bais embraces them, building his book around the equations of physics and using them as the vehicle for describing the counter-intuitive world of relativity theory, quantum mechanics, and string theory.
This book is structured around a comprehensive collection of important equations, starting with the logistic equation and continuing through the equations of Newtonian mechanics, electricity and magnetism, the Korteweg-De Vries and Navier-Stokes equations, the Boltzmann equation, equations of special and general relativity, the Schrödinger and Dirac equations, and ending with the equations of quantum chromodynamics, the Glashow-Weinberg-Salam model for electro-weak interactions, and super-string action. Quite an impressive list, especially when one considers that his audience really is the neophyte, witnessed by the need to begin this book with explanations of the concepts of function, vector, and derivative.
The equations provide useful hooks on which to hang a discussion of the big ideas of physics. Each equation (or collection of equations) gets its own page in which it is displayed in all its incomprehensible glory. In that sense, the equations serve their role as icons, mystical pointers to a meaning beyond what is visible to the uninitiated.
Bais is trying to do more than this. He would like the reader to gain an appreciation for the meaning of these equations. He does attempt an explanation of the significance of each of the symbols and operators, more diligently and successfully for the earlier equations than for the later ones. Ultimately, he fails. To someone with familiarity with the language of mathematics and with some of the physics, there is too little here. To the person who really is unfamiliar with such equations, too much comes too quickly. Once Bais has moved to partial differential equations and Lagrangian operators, the mathematics is treated perfunctorily and cryptically. I cannot fault him for this. Bais is trying to communicate an important physical principle in a few pages. There is neither time nor space for the mathematics.
The result is that the title of this book is something of a cheat. This is really about the physics, not the equations. The best that might be hoped for the general reader is an increased appreciation that equations play an important role in our understanding of the physical universe.
Yet when all is said and done, this is a pretty book that is well-written, and I would not hesitate to give it as a stocking-stuffer to a young person who is just beginning to discover the wonder of our physical universe and to appreciate the power of mathematics as the language in which we express our understanding of this universe.
David M. Bressoud is DeWitt Wallace Professor of Mathematics at Macalester College in St. Paul, Minnesota.
Rise and fall
The logistic equation
Mechanics and gravity
Newton's dynamical equations and universal law of gravity
The electromagnetic force on a charge
The Lorentz force law
A local conservation law
The continuity equation
The Maxwell equations
Electromagnetic waves
The wave equations
Solitary waves
The Korteweg–De Vries equation
The three laws of thermodynamics
Kinetic theory
The Boltzmann equation
The Navier-Stokes equations
Special relativity
Relativistic kinematics
General relativity
The Einstein equations
Quantum mechanics
The Schrödinger equation
The relativistic electron
The Dirac equation
The strong force
Quantum chromodynamics
Electro-weak Interactions
The Glashow-Weinberg-Salam model
String theory
The superstring action
Back to the future
A final perspective
1030729bd7b394cf |
I haven't understood this: physics is invariant under CPT transformations, but the heat (diffusion) equation $\nabla^2 T=\partial_t T$ is not invariant under time reversal, even though it is P invariant. So CPT symmetry would be violated. What have I not understood?
Thank you
2 Answers
The heat equation is a macroscopic equation. It describes the flow of heat from hot objects to cold ones. Of course it cannot be time-reversible, since the opposite movement never happens.
Well, I say 'of course' but you actually have stumbled on something important. As you say, the fundamental laws of nature should be CPT invariant, or at least we expect them to be. The reason the heat equation is not CPT invariant is that it is not a fundamental law, but a macroscopic law emerging from the microscopic laws governing the motions of elementary particles.
There is however a problem here, how does this time asymmetry arise from microscopic laws that are themselves time reversal invariant? The answer to that is given by statistical mechanics. While the microscopic laws are time-reversible (I'll focus on T, and leave CP aside), not all states are equally likely with respect to certain choices of the macroscopic variables. There are more configurations of particles corresponding to a room filled with air than with a room where all the air would be concentrated in one corner. It is this asymmetry that forms the basis of all explanations in statistical mechanics.
I hope that clears things up a bit.
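To make this concrete, here is a minimal numerical sketch (my addition, not part of the original answer): run the 1D heat equation forward and "backward" with a naive finite-difference scheme. The forward run smooths the initial profile; flipping the sign of the time step (anti-diffusion) amplifies the shortest-wavelength noise exponentially, which is exactly the ill-posedness of time-reversed diffusion.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
dt = 0.2 * dx**2                     # stable for the forward problem (r = 0.2 < 0.5)

# Gaussian bump plus a little noise standing in for microscopic fluctuations
u0 = np.exp(-200 * (x - 0.5)**2) + 1e-3 * np.random.randn(n)

def step(u, dt):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    return u + dt * lap              # endpoints held fixed (Dirichlet)

u_fwd, u_bwd = u0.copy(), u0.copy()
for _ in range(100):
    u_fwd = step(u_fwd, dt)          # diffusion: the profile smooths
    u_bwd = step(u_bwd, -dt)         # "time-reversed" diffusion: the noise explodes

print("forward  max|u| =", np.abs(u_fwd).max())   # order 1
print("backward max|u| =", np.abs(u_bwd).max())   # astronomically large
```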
OK! A last question now: the Schrödinger equation isn't invariant under time reversal, because of the first derivative in t. But isn't that a microscopic law? – Boy Simone Dec 2 '10 at 12:54
Actually, the Schrödinger equation is invariant. But you have to take the complex conjugate of $\psi$. Since $\psi^*$ and $\psi$ have the same probability distributions $|\psi|^2$, the physics remains the same. – Raskolnikov Dec 2 '10 at 12:58
Great! Thank you :-) – Boy Simone Dec 2 '10 at 13:14
Nice summary. It is worth noting, however, that this is a deep enough topic that multi-hundred page books have been written on the matter. – dmckee Dec 2 '10 at 19:33
@dmckee: Of course, I didn't mean to give an exhaustive explanation. In fact, I left my explanation open to many attacks on purpose. I hope that Boy will think further and come to these questions by himself. But a thorough answer would indeed need a thorough course in statistical mechanics. – Raskolnikov Dec 2 '10 at 22:43
The CPT theorem is not a theorem for all of physics but only for quantum field theory (QFT). Also, CPT invariance doesn't mean that a QFT is necessarily invariant with respect to any of C, P and T (or PT, TC and CP, which is the same by the CPT theorem) individually. Indeed, all of these symmetries are violated by the weak interaction.
Second, even if the macroscopic laws were completely correct, it wouldn't mean that they need to preserve the symmetries of the microscopic laws. E.g. most of microscopic physics is time symmetric (except for a small violation by the weak interaction), but the second law of thermodynamics (which is universally true for any macroscopic system just by means of logic and statistics) tells you that entropy has to increase with time. We can say that the huge number of particles breaks the microscopic time-symmetry.
Now, the heat equation essentially captures the dynamics of this time asymmetry of the second law. It tells you that temperatures eventually even out, and that is an irreversible process that increases entropy.
Thank you for the answer! And why, in your example, does the huge number of particles break the microscopic time-symmetry? Why don't macroscopic effects preserve the microscopic CPT invariance of quantum field theory? – Boy Simone Dec 2 '10 at 12:39
@Boy: that has to do with statistical mechanics. You should really ask this as a separate question because the answer is not completely simple. But in short: any given macroscopic state (given e.g. by energy and pressure) of the system can be realized by many microscopic states. Now your answer boils down to basic questions in probability theory: the more microscopic states there are, the more likely the resulting macroscopic state is. So the system is more likely to move from the less probable state to the more probable state and not the other way. – Marek Dec 2 '10 at 12:46
8a3f54687e90c1be | Is electric charge truly conserved for bosonic matter?
Is electric charge truly conserved for bosonic matter?
+ 6 like - 0 dislike
Even before quantization, charged bosonic fields exhibit a certain "self-interaction". The body of this post demonstrates this fact, and the last paragraph asks the question.
Notation / Lagrangians
Let me first provide the respective Lagrangians and elucidate the notation.
I am talking about complex scalar QED with the Lagrangian $$\mathcal{L} = \frac{1}{2} D_\mu \phi^* D^\mu \phi - \frac{1}{2} m^2 \phi^* \phi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ Where $D_\mu \phi = (\partial_\mu + ie A_\mu) \phi$, $D_\mu \phi^* = (\partial_\mu - ie A_\mu) \phi^*$ and $F^{\mu \nu} = \partial^\mu A^\nu - \partial^\nu A^\mu$. I am also mentioning usual QED with the Lagrangian $$\mathcal{L} = \bar{\psi}(iD_\mu \gamma^\mu-m) \psi - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$ and "vector QED" (U(1) coupling to the Proca field) $$\mathcal{L} = - \frac{1}{4} (D^\mu B^{* \nu} - D^\nu B^{* \mu})(D_\mu B_\nu-D_\nu B_\mu) + \frac{1}{2} m^2 B^{* \nu}B_\nu - \frac{1}{4} F^{\mu \nu} F_{\mu \nu}$$
The four-currents are obtained from Noether's theorem. Natural units $c=\hbar=1$ are used. $\Im$ means imaginary part.
Noether currents of particles
Consider the Noether current of the complex scalar $\phi$ $$j^\mu = \frac{e}{m} \Im(\phi^* \partial^\mu\phi)$$ Introducing the local $U(1)$ gauge coupling we have $\partial_\mu \to D_\mu=\partial_\mu + ie A_\mu$ (with $-ieA_\mu$ for the complex conjugate). The new Noether current is $$\mathcal{J}^\mu = \frac{e}{m} \Im(\phi^* D^\mu\phi) = \frac{e}{m} \Im(\phi^* \partial^\mu\phi) + \frac{e^2}{m} |\phi|^2 A^\mu$$ Similarly for a Proca field $B^\mu$ (massive spin 1 boson) we have $$j^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))$$ which by the same procedure leads to $$\mathcal{J}^\mu = \frac{e}{m} \Im(B^*_\nu(\partial^\mu B^\nu-\partial^\nu B^\mu))+ \frac{e^2}{m} |B|^2 A^\mu$$
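As a quick symbolic sanity check of the scalar decomposition above (my addition; the overall $e/m$ normalization is omitted), one can verify that gauging the derivative shifts the current by exactly $e|\phi|^2 A^\mu$:

```python
import sympy as sp

# All symbols real; phi and one component of its 4-gradient are built explicitly.
e, A, phi_r, phi_i, dphi_r, dphi_i = sp.symbols(
    'e A phi_r phi_i dphi_r dphi_i', real=True)
phi = phi_r + sp.I * phi_i
dphi = dphi_r + sp.I * dphi_i          # stands for one component of d^mu phi

j_bare = sp.im(sp.conjugate(phi) * dphi)
j_gauged = sp.im(sp.conjugate(phi) * (dphi + sp.I * e * A * phi))

print(sp.expand(j_gauged - j_bare))    # A*e*phi_i**2 + A*e*phi_r**2 = e*|phi|^2*A
```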
Similar $e^2$ terms also appear in the Lagrangian itself as $e^2 A^2 |\phi|^2$. On the other hand, for a bispinor $\psi$ (spin 1/2 massive fermion) we have the current $$j^\mu = \mathcal{J}^\mu = e \bar{\psi} \gamma^\mu \psi$$ which contains no $\partial_\mu$ and therefore receives no such correction.
Now consider very slowly moving or even static particles; then $\partial_0 \phi, \partial_0 B \to \pm im\phi, \pm im B$ and the current is essentially $(\rho,0,0,0)$. For $\phi$ we thus have approximately $$\rho = e (|\phi^+|^2-|\phi^-|^2) + \frac{e^2}{m} (|\phi^+|^2 + |\phi^-|^2) \Phi$$ where $A^0 = \Phi$ is the electrostatic potential and $\phi^\pm$ are the "positive and negative frequency parts" of $\phi$ defined by $\partial_0 \phi^\pm = \pm im \phi^\pm$. A similar term appears for the Proca field.
For the interpretation, let us pass back to SI units; in this case we only get a $1/c^2$ factor. The "extra density" is $$\Delta \rho = e\cdot \frac{e \Phi}{mc^2}\cdot |\phi|^2$$ That is, there is an extra density proportional to the ratio of the energy of the electrostatic field $e \Phi$ and the rest mass of the particle $mc^2$. The sign of this extra density depends only on the sign of the electrostatic potential, and both frequency parts contribute with the same sign (which is superweird). This would mean that classically, the "bare" charge of bosons in strong electromagnetic fields is not conserved, only this generalized charge is.
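For a sense of scale (illustrative numbers of my own, not from the post): the controlling ratio $e\Phi/mc^2$ stays well below one for any realistic charged boson in laboratory fields, e.g. a pion-mass boson in a megavolt potential:

```python
e_phi_eV = 1.0e6            # energy e*Phi for Phi = 1 MV, in eV
m_c2_eV = 139.57e6          # pi+ rest energy, in eV
print(e_phi_eV / m_c2_eV)   # ~0.007, so the extra density is a sub-percent effect
```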
After all, it seems a bad convention to call $\mathcal{J}^\mu$ the electric charge current. By multiplying it by $mc^2/e$ it becomes a matter density current with the extra term corresponding to mass gained by electrostatic energy. However, that does not change the fact that the "bare charge density" $j^0$ seems not to be conserved for bosons.
Now to the questions:
• On a theoretical level, is charge conservation at least temporarily or virtually violated for bosons in strong electromagnetic fields? (Charge conservation will quite obviously not be violated in the final S-matrix, and as an $\mathcal{O}(e^2)$ effect it will probably not be reflected in first order processes.) Is there an intuitive physical reason why such a violation is not true for fermions even on a classical level?
• Charged bosons do not have a high abundance in fundamental theories, but they do often appear in effective field theories. Is this "bare charge" non-conservation anyhow reflected in them and does it have associated experimental phenomena?
• Extra clarifying question: Say we have $10^{23}$ bosons with charge $e$ so that their charge is $e 10^{23}$. Now let us bring these bosons from very far away to very close to each other. As a consequence, they will be in a much stronger field $\Phi$. Does their measured charge change from $e 10^{23}$? If not, how do the bosons compensate in terms of $\phi, B, e, m$? If this is different for bosons rather than fermions, is there an intuitive argument why?
This post imported from StackExchange Physics at 2015-06-09 14:50 (UTC), posted by SE-user Void
asked Sep 24, 2014 in Theoretical Physics by Void (1,635 points) [ revision history ]
edited Jun 9, 2015 by Void
By Noether's theorem, Noether currents are conserved since they are derived from an infinitesimal symmetry; they are observable iff they are gauge invariant. Are you missing something in the answer by Qmechanic?
@ArnoldNeumaier I added an extra clarifying question to what bugs me. I am well aware about the conservation and observability, I mainly wanted to inquire about the deeper physical explanation of these facts.
The charge doesn't change, as it is an integral over the whole space - only the charge density develops a very localized peak. What should need compensation?
Note that bare stuff doesn't matter; it is irrelevant scaffolding removed by renormalization.
Just a dumb idea: maybe this is somehow related to the fact that in the SM, introducing mass terms for the bosons simply as $\frac{1}{2}m^2\phi^{*}\phi$ without a Higgs field or mechanism breaks the gauge symmetry, and therefore there is no conserved current corresponding to the symmetry broken by the mass term?
@Dilaton: Yes, there seems to be something funky about massive or charged elementary bosons. I was just hoping there is an established argument what exactly is the crux of this funkiness -- perhaps through such things as charged pions and their relation to $U(1)$.
@drake I just meant that for example Proca mass terms such as $\frac{1}{2}m^2 B^{*\nu} B_{\nu}$ break gauge symmetries such as $U(1)$, and could therefore spoil charge conservation.
@Dilaton I don't get your point... Here the gauge field is $A$, which doesn't have any mass term. In the SM one wants to give mass to the gauge fields. I think you are wrong.
4 Answers
+ 3 like - 0 dislike
Comments to the question (v3):
1. In contrast to QED with fermionic matter, in QED with bosonic matter, the full Noether current ${\cal J}^{\mu}$ (for global gauge transformations) tends to depend explicitly on the gauge potential $A^{\mu}$, see e.g. Refs. 1-2 and this Phys.SE post.
2. The reason for this difference is because the QED Lagrangian for fermionic (bosonic) matter typically contains one (two) spacetime derivative(s) $\partial_{\mu}$, which after minimal coupling $\partial_{\mu}\to D_{\mu}$ leads to e.g. no (a) quartic matter-matter-photon-photon coupling term, respectively.
3. The full Noether current ${\cal J}^{\mu}$ is a gauge-invariant and conserved quantity, $d_{\mu }{\cal J}^{\mu} \approx 0$. [Here $d_{\mu}\equiv\frac{d}{dx^{\mu}}$ means a total spacetime derivative, and the $\approx$ symbol means equality modulo eom.] The electric charge $Q=\int \! d^3x ~{\cal J}^{0}$ is a conserved quantity.
4. The only physical observables in a gauge theory are gauge-invariant quantities. The quantity $j^{\mu}$, which OP calls the "bare current", is not gauge-invariant, and hence not a consistent physical observable to consider.
5. As Trimok mentions in a comment, the situation for non-Abelian (as opposed to Abelian) Yang-Mills is radically different. The full Noether current ${\cal J}^{\mu a}$ (for global gauge transformations) is conserved, $d_{\mu }{\cal J}^{\mu a} \approx 0$, but ${\cal J}^{\mu a}$ is not gauge-invariant (or even gauge covariant), and hence not a consistent physical observable to consider. There is no well-defined observable for color charge that one can measure. This follows also from the Weinberg-Witten theorem (for spin 1): a theory with a global non-Abelian symmetry under which massless spin-1 particles are charged does not admit a gauge- and Lorentz-invariant conserved current, cf. Ref. 3.
1. M. Srednicki, QFT, Chapter 61.
2. M.D. Schwartz, QFT and the Standard Model, Section 8.3 and Chapter 9.
3. M.D. Schwartz, QFT and the Standard Model, Section 25.3.
answered Sep 24, 2014 by Qmechanic (2,860 points) [ no revision ]
Yes, some of these are the observations which led me to this question. But say we have a macroscopic material with bosonic charged particles, subject it to a very strong electrostatic field, and measure its charge. Would we have to be measuring $\mathcal{J}^0$ under all conditions? I guess 3. implies yes, and that means we would measure the object to have a charge different from the zero-field situation. The extra "non-bare" charge obviously comes from the field, but this is a very different notion from the usual intuition of "charge".
If ${\cal J}^{\mu}$ is a covariant quantity, then it should verify $D_\mu {\cal J}^{\mu}=0$, but a conserved quantity corresponds to $\partial_\mu {\cal J}^{\mu}=0$. So, here, are covariant and conserved currents compatible notions? (For instance, this is not the case in Yang-Mills theories.)
I updated the answer.
+ 1 like - 0 dislike
I have actually taken the time to compute the equations of motion, and the situation is more complicated than I previously thought. The Lagrangian in the static situation $\vec{A} = 0, \partial_t \to 0$ reads
$$\mathcal{L} = -\frac{1}{2} |\nabla \phi|^2 - \frac{1}{2} m^2 |\phi|^2 + e^2 |\Phi|^2 |\phi|^2 + \frac{1}{2} |\nabla \Phi|^2 $$
which leads to EOM:
$$(\Delta - m^2 + 2 e^2 |\Phi|^2) \phi = 0$$
$$ (\Delta - 2 e^2 |\phi|^2) \Phi = 0 $$
Amongst other things, this implies that minimally coupled bosons do not act as a usual source of the electromagnetic field at all. As it stands (a more detailed analysis of the non-stationary equations might show otherwise), the bosons actually "ease" their motion (effectively lose mass) in the presence of the electromagnetic field at the cost of weakening (rendering massive and short-range) the electromagnetic field.
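A quick symbolic check of these equations of motion (my addition, not part of the original answer): applying sympy's Euler-Lagrange routine to the static 1D reduction of the Lagrangian above reproduces both equations.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
e, m = sp.symbols('e m', positive=True)
phi = sp.Function('phi')(x)    # matter field, taken real and static
Phi = sp.Function('Phi')(x)    # electrostatic potential

L = (-sp.Rational(1, 2) * phi.diff(x)**2
     - sp.Rational(1, 2) * m**2 * phi**2
     + e**2 * Phi**2 * phi**2
     + sp.Rational(1, 2) * Phi.diff(x)**2)

for eq in euler_equations(L, [phi, Phi], [x]):
    print(eq)
# phi'' + (2*e**2*Phi**2 - m**2)*phi = 0  and  2*e**2*phi**2*Phi - Phi'' = 0
```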
The coupling constant $e$ really does not have any reasonable interpretation in terms of a usual charge. For instance, the sign of $e$ is irrelevant and the particles and antiparticles of quantized $\phi$ have the same effect on $\Phi$. The $U(1)$ charge is just a conserved quantity with no intuitive interpretation in terms of the usual charge. Hence, the original form of the question does not have a proper meaning; $U(1)$ coupling for bosons simply means something totally different than for fermions.
(If you have any more observations or a different view, please contribute, I am interested.)
answered Jun 10, 2015 by Void (1,635 points) [ no revision ]
Are you allowed to simply put $A=0$? It changes the dynamics.
@ArnoldNeumaier: If we still hold $\partial_t \to 0$ a nonzero $\vec{A}$ would only make $|\Phi|^2 \to |\Phi|^2 - |A|^2$ and an extra $\vec{A}$ equation coupled to $\phi$ similarly as in the $\Phi$ case.
+ 1 like - 0 dislike
Dear mods, I am sorry this answer is not graduate-upward level, but I have not been able to come up with a more sophisticated one.
1) Yes, the charge is truly conserved, but the respective current depends on the 4-potential $A$. What is confusing you, I think, is that the current for a scalar field depends on the 4-potential $A$, whereas that of a spin-1/2 field does not. This is obviously related to the number of derivatives in the Lagrangian kinetic term and, likewise, to the number of derivatives in the current. It can help you understand what is going on to adopt the canonical formalism (also known as the language of gentlemen), in which in both cases the density (and the charge too) involves the product of the canonical momentum and the field, as it could not be otherwise, because the charge is nothing else but the infinitesimal generator of $U(1)$ transformations for both the field and the canonical momentum.
2) What you call the "bare charge", which is probably not a good name since this term is reserved for something else, lacks physical content before fixing a gauge, as it is not a gauge invariant quantity. Note however that one can always choose one's favorite gauge. And if one picks the temporal gauge (\(A_0 = 0\)), the charge does not depend on the 4-potential and its form is the same as your "bare charge", which is conserved in this gauge.
3) The only difference between the motion of spin-one-half particles and spin-zero particles in an electromagnetic field is a term proportional to \[\sigma_{\mu\nu}\, F^{\mu\nu}\]
in the equation for spin-1/2 particles. This term gives rise to the term
\[\mathbf{S}\cdot \mathbf{B} \]
in the non-relativistic limit, that is, the interaction between the spin of the particle and the magnetic field.
4) It can help you to get the equation in your answer to first think of the equation of motion in the non-relativistic limit, which is the Schrödinger equation in an electromagnetic field, that is, the Schrödinger equation with partial derivatives replaced by gauge-covariant ones (for scalar particles; for spin-1/2 there is the additional term I wrote above).
answered Jun 12, 2015 by drake (885 points) [ revision history ]
edited Jun 12, 2015 by drake
+ 0 like - 4 dislike
The charge $e$ introduced into your Lagrangians/equations is a constant in time by definition; no Noether theorem is necessary to "conserve" it: $\frac{de}{dt}=0$.
Another thing is your equations/theory or "charge definition" via equations/solutions (as an integral bla-bla-bla). Here everything depends on your equations. Do not think that equations for bosons are already well established and finalized. For one formulation you get one result, for another you get another. So there is no "truly" thing; keep that firmly in your mind!
answered Jun 9, 2015 by Vladimir Kalitvianski (132 points) [ revision history ]
edited Jun 9, 2015 by Vladimir Kalitvianski
f5da0bc6eaefb27d | Stochastic differential equation
Early work on SDEs was done to describe Brownian motion in Einstein's famous paper, and at the same time by Smoluchowski. However, one of the earlier works related to Brownian motion is credited to Bachelier (1900) in his thesis 'Theory of Speculation'. This work was followed up by Langevin. Later Itô and Stratonovich put SDEs on more solid mathematical footing.
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as the continuous-time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by an "interpretation". The most famous interpretations are provided by the Itô and Stratonovich calculi, with the former being most frequently used in mathematics and quantitative finance. An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to the temporal evolution of differential forms is provided by the concept of stochastic evolution operator.
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure,[1] leading to a N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.
Stochastic calculus
Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003), and conveniently one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
Numerical solutions
The numerical solution of stochastic differential equations, and especially stochastic partial differential equations, is a relatively young field. Almost all algorithms that are used for the solution of ordinary differential equations work very poorly for SDEs, exhibiting very poor numerical convergence. A textbook describing many different algorithms is Kloeden & Platen (1995).
Methods include the Euler–Maruyama method, Milstein method and Runge–Kutta method (SDE).
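A minimal Euler–Maruyama sketch (illustrative parameters, my addition): integrate geometric Brownian motion $\mathrm{d}X_t = \mu X_t\,\mathrm{d}t + \sigma X_t\,\mathrm{d}B_t$ and compare against its exact solution driven by the same Brownian path; the gap shrinks as the step size decreases (strong order 1/2).

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0 = 0.1, 0.3, 1.0
T, n = 1.0, 1000
dt = T / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)   # Brownian increments
x = x0
for k in range(n):
    x += mu * x * dt + sigma * x * dB[k]    # Euler-Maruyama step

# Exact solution of GBM evaluated on the same Brownian path
x_exact = x0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dB.sum())
print(x, x_exact)                           # close, and closer as dt -> 0
```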
Use in physics
In physics, SDEs have widest applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.
There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:
$$\dot{x}(t) = F(x(t)) + \sum_{a=1}^{n} g_a(x(t))\,\xi^a(t),$$
where $x \in X$ is the position in the system in its phase (or state) space, $X$, assumed to be a differentiable manifold, $F$ is a flow vector field representing the deterministic law of evolution, and the $g_a$ are a set of vector fields that define the coupling of the system to Gaussian white noise, $\xi^a$. If $X$ is a linear space and the $g_a$ are constants, the system is said to be subject to additive noise, otherwise it is said to be subject to multiplicative noise. This term is somewhat misleading, as it has come to mean the general case even though it appears to imply the limited case in which $g_a(x) \propto x$.
For a fixed configuration of noise, an SDE has a unique solution differentiable with respect to the initial condition.[2] The nontriviality of the stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when the noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation. In this case, the SDE must be complemented by what is known as an "interpretation of the SDE", such as the Itô or the Stratonovich interpretation. Nevertheless, when the SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to the Stratonovich approach to a continuous time limit of a stochastic difference equation.
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively numerical solutions can be obtained by Monte Carlo simulation. Other techniques include the path integration that draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables) or by writing down ordinary differential equations for the statistical moments of the probability distribution function.[citation needed]
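As a sketch of the Monte Carlo route just mentioned (my own illustrative example): simulate many Ornstein–Uhlenbeck paths, $\mathrm{d}X_t = -\theta X_t\,\mathrm{d}t + \sigma\,\mathrm{d}B_t$, and compare the long-time sample variance with the stationary solution of the corresponding Fokker–Planck equation, a Gaussian with variance $\sigma^2/(2\theta)$.

```python
import numpy as np

rng = np.random.default_rng(1)
theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-2, 2000, 10000    # total time t = 20 >> 1/theta

x = np.zeros(n_paths)
for _ in range(n_steps):
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

print("Monte Carlo variance   :", x.var())
print("Fokker-Planck variance :", sigma**2 / (2 * theta))   # 0.125
```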
Use in probability and mathematical finance
The notation used in probability theory (and in many applications of probability theory, for instance mathematical finance) is slightly different. This notation makes the exotic nature of the random function of time $\xi^a$ in the physics formulation more explicit. It is also the notation used in publications on numerical methods for solving stochastic differential equations. In strict mathematical terms, $\xi^a$ cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.
A typical equation is of the form
$$\mathrm{d}X_t = \mu(X_t, t)\,\mathrm{d}t + \sigma(X_t, t)\,\mathrm{d}B_t,$$
where $B$ denotes a Wiener process (standard Brownian motion). This equation should be interpreted as an informal way of expressing the corresponding integral equation
$$X_{t+s} - X_t = \int_t^{t+s} \mu(X_u, u)\,\mathrm{d}u + \int_t^{t+s} \sigma(X_u, u)\,\mathrm{d}B_u.$$
The equation above characterizes the behavior of the continuous time stochastic process $X_t$ as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length $\delta$ the stochastic process $X_t$ changes its value by an amount that is normally distributed with expectation $\mu(X_t, t)\,\delta$ and variance $\sigma(X_t, t)^2\,\delta$ and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function $\mu$ is referred to as the drift coefficient, while $\sigma$ is called the diffusion coefficient. The stochastic process $X_t$ is called a diffusion process, and satisfies the Markov property.
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process $X_t$ that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space $(\Omega, \mathcal{F}, P)$. A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space.
An important example is the equation for geometric Brownian motion
$$\mathrm{d}X_t = \mu X_t\,\mathrm{d}t + \sigma X_t\,\mathrm{d}B_t,$$
which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
There are also more general stochastic differential equations where the coefficients μ and σ depend not only on the present value of the process Xt, but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, X, is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of X, the defining equation is called a stochastic delay differential equation.
Existence and uniqueness of solutions
As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in n-dimensional Euclidean space Rn and driven by an m-dimensional Brownian motion B; the proof may be found in Øksendal (2003, §5.2).
Let $T > 0$, and let
$$\mu : \mathbb{R}^n \times [0, T] \to \mathbb{R}^n, \qquad \sigma : \mathbb{R}^n \times [0, T] \to \mathbb{R}^{n \times m}$$
be measurable functions for which there exist constants $C$ and $D$ such that
$$|\mu(x, t)| + |\sigma(x, t)| \leq C\big(1 + |x|\big), \qquad |\mu(x, t) - \mu(y, t)| + |\sigma(x, t) - \sigma(y, t)| \leq D |x - y|$$
for all $t \in [0, T]$ and all $x$ and $y \in \mathbb{R}^n$, where
$$|\sigma|^2 = \sum_{i,j} |\sigma_{ij}|^2.$$
Let $Z$ be a random variable that is independent of the $\sigma$-algebra generated by $B_s$, $s \geq 0$, and with finite second moment:
$$\mathbb{E}\big[|Z|^2\big] < +\infty.$$
Then the stochastic differential equation/initial value problem
$$\mathrm{d}X_t = \mu(X_t, t)\,\mathrm{d}t + \sigma(X_t, t)\,\mathrm{d}B_t \quad \text{for } t \in [0, T], \qquad X_0 = Z,$$
has a P-almost surely unique t-continuous solution $(t, \omega) \mapsto X_t(\omega)$ such that $X$ is adapted to the filtration $F_t^Z$ generated by $Z$ and $B_s$, $s \leq t$, and
$$\mathbb{E}\left[\int_0^T |X_t|^2\,\mathrm{d}t\right] < +\infty.$$
Some explicitly solvable SDEs[3]
Linear SDE: general case
Reducible SDEs: Case 1
An SDE of the form
$$\mathrm{d}X_t = \tfrac{1}{2} f(X_t) f'(X_t)\,\mathrm{d}t + f(X_t)\,\mathrm{d}W_t$$
for a given differentiable function $f$ is equivalent to the Stratonovich SDE
$$\mathrm{d}X_t = f(X_t) \circ \mathrm{d}W_t,$$
which has a general solution
$$X_t = h^{-1}\big(W_t + h(X_0)\big), \qquad \text{where } h(x) = \int^x \frac{\mathrm{d}s}{f(s)}.$$
Reducible SDEs: Case 2
An SDE of the form
$$\mathrm{d}X_t = \left(\alpha f(X_t) + \tfrac{1}{2} f(X_t) f'(X_t)\right)\mathrm{d}t + f(X_t)\,\mathrm{d}W_t$$
for a given differentiable function $f$ is equivalent to the Stratonovich SDE
$$\mathrm{d}X_t = \alpha f(X_t)\,\mathrm{d}t + f(X_t) \circ \mathrm{d}W_t,$$
which is reducible to
$$\mathrm{d}Y_t = \alpha\,\mathrm{d}t + \mathrm{d}W_t,$$
where $Y_t = h(X_t)$ with $h$ defined as before. Its general solution is
$$X_t = h^{-1}\big(\alpha t + W_t + h(X_0)\big).$$
SDEs and supersymmetry
In the supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry, which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and scale-free statistics of earthquakes, neuroavalanches, solar flares etc. The theory also offers a resolution of the Itô–Stratonovich dilemma in favor of the Stratonovich approach.
References
1. ^ Parisi, G.; Sourlas, N. (1979). "Random Magnetic Fields, Supersymmetry, and Negative Dimensions". Physical Review Letters. 43 (11): 744–745. Bibcode:1979PhRvL..43..744P. doi:10.1103/PhysRevLett.43.744.
2. ^ Slavík, A. (2013). "Generalized differential equations: Differentiability of solutions with respect to initial conditions and parameters". Journal of Mathematical Analysis and Applications. 402 (1): 261–274. doi:10.1016/j.jmaa.2013.01.027.
3. ^ Kloeden & Platen 1995, p. 118.
Further reading
• Adomian, George (1983). Stochastic systems. Mathematics in Science and Engineering (169). Orlando, FL: Academic Press Inc.
• Adomian, George (1986). Nonlinear stochastic operator equations. Orlando, FL: Academic Press Inc.
• Adomian, George (1989). Nonlinear stochastic systems theory and applications to physics. Mathematics and its Applications (46). Dordrecht: Kluwer Academic Publishers Group.
• Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications. Berlin: Springer. ISBN 3-540-04758-1.
• Teugels, J. and Sund B. (eds.) (2004). Encyclopedia of Actuarial Science. Chichester: Wiley. pp. 523–527.
• C. W. Gardiner (2004). Handbook of Stochastic Methods: for Physics, Chemistry and the Natural Sciences. Springer. p. 415.
• Thomas Mikosch (1998). Elementary Stochastic Calculus: with Finance in View. Singapore: World Scientific Publishing. p. 212. ISBN 981-02-3543-7.
• Seifedine Kadry (2007). A Solution of Linear Stochastic Differential Equation. WSEAS Transactions on Mathematics, April 2007. p. 618. ISSN 1109-2769.
• P. E. Kloeden & E. Platen (1995). Numerical Solution of Stochastic Differential Equations. Springer. ISBN 0-387-54062-8.
• Higham, Desmond J. (January 2001). "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations". SIAM Review. 43 (3): 525–546. Bibcode:2001SIAMR..43..525H. doi:10.1137/S0036144500378302.
3aa595bf8c494afb | All quantum operations must be unitary to allow reversibility, but what about measurement? Measurement can be represented as a matrix, and that matrix is applied to qubits, so that seems equivalent to the operation of a quantum gate. That's definitively not reversible. Are there any situations where non-unitary gates might be allowed?
Unitary operations are only a special case of quantum operations, which are linear, completely positive maps ("channels") that map density operators to density operators. This becomes obvious in the Kraus-representation of the channel, $$\Phi(\rho)=\sum_{i=1}^n K_i \rho K_i^\dagger,$$ where the so-called Kraus operators $K_i$ fulfill $\sum_{i=1}^n K_i^\dagger K_i\leq \mathbb{I}$ (notation). Often one considers only trace-preserving quantum operations, for which equality in the previous inequality holds. If additionally there is only one Kraus operator (so $n=1$), then we see that the quantum operation is unitary.
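For a concrete feel (a minimal numpy sketch of my own, not from the answer above): the amplitude-damping channel on one qubit has two Kraus operators, so it is trace-preserving but not unitary.

```python
import numpy as np

g = 0.3  # damping probability (illustrative)
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)

# CPTP condition: sum_i K_i^dagger K_i = identity
print(np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2)))   # True

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)              # |+><+|
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
print(np.trace(rho_out).real)              # 1.0: trace preserved
print(np.trace(rho_out @ rho_out).real)    # < 1: purity lost, so not unitary
```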
However, quantum gates are unitary, because they are implemented via the action of a Hamiltonian for a specific time, which gives a unitary time evolution according to the Schrödinger equation.
• +1 Everyone interested in quantum mechanics (not just quantum information) should know about quantum operations e.g. from Nielsen and Chuang. I think it is worth mentioning (since the Wikipedia page on Stinespring dilation is too technical) that every finite-dimensional quantum operation is mathematically equivalent to some unitary operation in a larger Hilbert space followed by a restriction to the subsystem (by the partial trace). – Ninnat Dangniam Mar 20 '18 at 5:31
• I've downvoted this because it refers to a concept which is arguably more advanced than the perspective from which the original question is being asked. – Alexander Soare May 21 at 11:55
Short Answer
Quantum operations do not need to be unitary. In fact, many quantum algorithms and protocols make use of non-unitarity.
Long Answer
Measurements are arguably the most obvious example of non-unitary transitions being a fundamental component of algorithms (in the sense that a "measurement" is equivalent to sampling from the probability distribution obtained after the decoherence operation $\sum_k c_k\lvert k\rangle\mapsto\sum_k |c_k|^2\lvert k\rangle\langle k\rvert$).
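A tiny sketch of that decohere-then-sample picture (my addition): a computational-basis measurement of $(\lvert 0\rangle + \lvert 1\rangle)/\sqrt{2}$ is just a draw from the distribution $\{|c_k|^2\}$, and nothing in it is unitary.

```python
import numpy as np

rng = np.random.default_rng(42)
c = np.array([1, 1]) / np.sqrt(2)      # amplitudes c_k
p = np.abs(c)**2                       # Born-rule probabilities |c_k|^2
outcomes = rng.choice(len(c), size=10000, p=p)
print(np.bincount(outcomes) / 10000)   # ~[0.5, 0.5]
```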
More generally, any quantum algorithm that involves probabilistic steps requires non-unitary operations. A notable example that comes to mind is HHL09's algorithm to solve linear systems of equations (see 0811.3171). A crucial step in this algorithm is the mapping $|\lambda_j\rangle\mapsto C\lambda_j^{-1}|\lambda_j\rangle$, where $|\lambda_j\rangle$ are eigenvectors of some operator. This mapping is necessarily probabilistic and therefore non-unitary.
Any algorithm or protocol that makes use of (classical) feed-forward is also making use of non-unitary operations. This is the basis of all one-way quantum computation protocols (which, as the name suggests, require non-reversible operations).
The most notable schemes for optical quantum computation with single photons also require measurements and sometimes post-selection to entangle the states of different photons. For example, the KLM protocol produces probabilistic gates, which are therefore at least partly non-reversible. A nice review on the topic is quant-ph/0512071.
Less intuitive examples are provided by dissipation-induced quantum state engineering (e.g. 1402.0529 or srep10656). In these protocols, one uses an open, dissipative dynamical map and engineers the interaction of the state with the environment in such a way that the long-time stationary state of the system is the desired one.
At risk of going off-topic from quantum computing and into physics, I'll answer what I think is a relevant subquestion of this topic, and use it to inform the discussion of unitary gates in quantum computing.
The question here is: Why do we want unitarity in quantum gates?
The less specific answer is as above: it gives us 'reversibility', or as physicists often talk about it, a type of symmetry for the system. I'm taking a course in quantum mechanics right now, and the way unitary gates cropped up in that course was motivated by the desire to have physical transformations $\hat{U}$ that act as symmetries. This imposed two conditions on the transformation $\hat{U}$:
1. The transformations should act linearly on the state (this is what gives us a matrix representation).
2. The transformations should preserve probability, or more specifically inner product. This means that if we define:
$$|\psi '\rangle = U |\psi\rangle, |\phi'\rangle = U |\phi\rangle$$
Preservation of the inner product means that $\langle \phi | \psi \rangle = \langle \phi' | \psi' \rangle$. From this second specification, unitarity can be derived (for full details see Dr. van Raamsdonk's notes here).
So this answers the question of why operations that keep things "reversible" have to be unitary.
The question of why measurement itself is not unitary is more related to quantum computation. A measurement is a projection on to a basis; in essence, it must "answer" with one or more basis states as the state itself. It also leaves the state in a way that is consistent with the "answer" to the measurement, and not consistent with the underlying probabilities that the state began with. So the operation satisfies specification 1. of our transformation $U$, but definitively does not satisfy specification 2. Not all matrices are created equal!
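A small numerical restatement of the two specifications (my sketch, not part of the original answer): the Hadamard gate preserves inner products between arbitrary states, while the projector $|0\rangle\langle 0|$, a perfectly good linear map, does not, so it cannot be unitary.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)
P0 = np.array([[1, 0], [0, 0]])                # projector onto |0> (not unitary)

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)

inner = lambda a, b: np.vdot(a, b)             # <a|b>
print(np.isclose(inner(H @ phi, H @ psi), inner(phi, psi)))    # True: spec 2 holds
print(np.isclose(inner(P0 @ phi, P0 @ psi), inner(phi, psi)))  # False generically
```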
To round things back to quantum computation, the fact that measurements are destructive and projective (ie. we can only reconstruct the superposition through repeated measurements of identical states, and every measurement only gives us a 0/1 answer), is part of what makes the separation between quantum computing and regular computing subtle (and part of why it's difficult to pin that down). One might assume quantum computing is more powerful because of the mere size of the Hilbert space, with all those state superpositions available to us. But our ability to extract that information is heavily limited.
As far as I understand it, this shows that for information storage purposes, a qubit is only as good as a regular bit, and no better. But we can be clever in quantum computation with the way that information is traded around, because of the underlying linear-algebraic structure.
• I find the last paragraph a bit cryptic. What do you mean by "slippery" separation here? It is also non-obvious how the fact that measurements are destructive implies something about such separation. Could you clarify these points? – glS Mar 15 '18 at 20:05
• @glS, good point, that was worded poorly. Does this help? I don't think I'm saying anything particularly deep, simply that Hilbert space size alone isn't a priori what makes quantum computation powerful (and it doesn't give us any information storage advantages) – Emily Tyhurst Mar 15 '18 at 20:41
There are several misconceptions here; most of them originate from exposure to only the pure state formalism of quantum mechanics, so let's address them one by one:
1. All quantum operations must be unitary to allow reversibility, but what about measurement?
This is false. In general, the states of a quantum system are not just vectors in a Hilbert space $\mathcal{H}$ but density matrices: unit-trace, positive semidefinite operators acting on the Hilbert space $\mathcal{H}$, i.e., $\rho: \mathcal{H} \rightarrow \mathcal{H}$, $Tr(\rho) = 1$, and $\rho \geq 0$. (Note that pure states are not vectors in the Hilbert space but rays in a complex projective space; for a qubit this amounts to the state space being $\mathbb{C}P^1$ and not $\mathbb{C}^2$.) Density matrices are used to describe a statistical ensemble of quantum states.
The density matrix is called pure if $\rho^2 = \rho$ and mixed otherwise (equivalently, mixed means $\mathrm{Tr}(\rho^2) < 1$). For a pure state density matrix (that is, one with no statistical uncertainty involved), since $\rho^2 = \rho$, the density matrix is actually a projection operator and one can find a $|\psi\rangle \in \mathcal{H}$ such that $\rho = |\psi\rangle \langle\psi|$.
The most general quantum operation is a CP-map (completely positive map), i.e., $\Phi: L(\mathcal{H}) \rightarrow L(\mathcal{H})$ such that $$\Phi(\rho) = \sum_i K_i \rho K_i^\dagger, \qquad \sum_i K_i^\dagger K_i \leq \mathbb{I},$$ where the $\{K_i\}$ are called Kraus operators (if $\sum_i K_i^\dagger K_i = \mathbb{I}$, the map is called CPTP (completely positive and trace-preserving), or a quantum channel).
Now, coming to the OP's claim that all quantum operations are unitary to allow reversibility -- this is just not true. The unitarity of the time evolution operator ($e^{-iHt/\hbar}$) in quantum mechanics (for closed-system quantum evolution) is simply a consequence of the Schrödinger equation.
However, when we consider density matrices, the most general evolution is a CP-map (a CPTP map if the trace, and hence total probability, is to be preserved).
2. Are there any situations where non-unitary gates might be allowed?
Yes. An important example that comes to mind is open quantum systems, where the Kraus operators (which are not, in general, unitary) are the "gates" with which the system evolves.
Note that if there is only a single Kraus operator, the condition $\sum_i K_i^\dagger K_i = \mathbb{I}$ reduces to $K^\dagger K = \mathbb{I}$, i.e., $K$ is unitary. The system then evolves as $\rho \rightarrow U \rho U^\dagger$ (which is the standard evolution that you may have seen before). In general, however, there are several Kraus operators, and therefore the evolution is non-unitary.
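To make this concrete, here is a minimal sketch (my own illustration, not from the answer above; the amplitude-damping channel is a standard textbook example, and numpy is assumed) of a two-Kraus-operator channel acting on a density matrix:

```python
import numpy as np

# Amplitude-damping channel with decay probability gamma: two Kraus operators.
gamma = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

# CPTP condition: K0†K0 + K1†K1 = I.
assert np.allclose(K0.conj().T @ K0 + K1.conj().T @ K1, np.eye(2))

rho = np.array([[0.5, 0.5], [0.5, 0.5]])    # the pure state |+><+|
rho_out = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

print(np.trace(rho_out).real)               # 1.0: trace (probability) preserved
print(np.trace(rho_out @ rho_out).real)     # < 1: the output is mixed, so this
                                            # evolution cannot be unitary
```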
Coming to the final point:
In standard quantum mechanics (with wavefunctions etc.), the system's evolution is composed of two parts $-$ a smooth unitary evolution under the system's Hamiltonian and a sudden quantum jump when a measurement is made $-$ also known as wavefunction collapse. Wavefunction collapse is described by some projection operator, say $|\phi\rangle \langle\phi|$, acting on the quantum state $|\psi\rangle$, and $|\langle\phi|\psi\rangle|^2$ gives the probability of finding the system in the state $|\phi\rangle$ after the measurement. Since the measurement operator is after all a projector (or, as the OP suggests, a matrix), shouldn't it be linear and physically similar to the unitary evolution (which also happens via a matrix)? This is an interesting question and, in my opinion, difficult to answer physically. However, I can shed some light on it mathematically.
If we are working in the modern formalism, then measurements are given by POVM elements: Hermitian positive semidefinite operators $\{M_{i}\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity operator (on the Hilbert space), $\sum_{i=1}^{n} M_{i}=\mathbb{I}$. Therefore, a measurement takes the form $$ \rho \rightarrow \frac{E_i \rho E_i^\dagger}{\text{Tr}(E_i \rho E_i^\dagger)}, \text{ where } M_i = E_i^\dagger E_i.$$
Here $\text{Tr}(E_i \rho E_i^\dagger) =: p_i$ is the probability of the measurement outcome $M_i$ and is used to renormalize the state to unit trace. Note that the numerator, $\rho \rightarrow E_i \rho E_i^\dagger$, is a linear operation; it is the dependence on the outcome probability $p_i$ (and the selection of a single outcome $i$) that brings in the non-linearity and irreversibility.
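A minimal sketch of this update rule (my own illustration, assuming a qubit measured projectively in the computational basis, so that $E_i = M_i = |i\rangle\langle i|$, and assuming numpy):

```python
import numpy as np

# Projective measurement of a qubit in the computational basis.
rng = np.random.default_rng(2)
E = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]   # E_0 = |0><0|, E_1 = |1><1|

rho = np.array([[0.5, 0.5], [0.5, 0.5]])         # |+><+|
p = [np.trace(Ei @ rho @ Ei.conj().T).real for Ei in E]  # outcome probabilities

i = rng.choice(2, p=p)                           # sample an outcome
rho_post = E[i] @ rho @ E[i].conj().T / p[i]     # renormalized post-measurement state
print(i, rho_post)   # the numerator is linear; the division by p_i is not
```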
Edit 1: You might also be interested in the Stinespring dilation theorem, which gives an isomorphism between a CPTP map and a unitary operation on a larger Hilbert space followed by a partial trace over the (tensored) Hilbert space (see 1, 2).
I'll add a small bit complementing the other answers, just about the idea of measurement.
Measurement is usually taken as a postulate of quantum mechanics. There are usually some preceding postulates about Hilbert spaces, but following those:
• Every measurable physical quantity $A$ is described by an operator $\hat{A}$ acting on a Hilbert space $\mathcal{H}$. This operator is called an observable, and its eigenvalues are the possible outcomes of a measurement.
• If a measurement is made of the observable $A$, in the state of the system $\psi$, and the outcome is $a_n$, then the state of the system immediately after measurement is $$\frac{\hat{P}_n|\psi\rangle}{\|\hat{P}_n|\psi\rangle\|},$$ where $\hat{P}_n$ is the projector onto the eigen-subspace of the eigenvalue $a_n$.
Normally the projection operators themselves satisfy $\hat{P}^\dagger=\hat{P}$ and $\hat{P}^2=\hat{P}$, which means they are themselves observables by the above postulates, with eigenvalues $1$ and $0$. Taking one of the $\hat{P}_n$ above, we can interpret the $1,0$ eigenvalues as a binary yes/no answer to whether the outcome $a_n$ is available as a result of measuring the state $|\psi\rangle$.
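A quick numerical check of these projector properties (an illustrative sketch of the statements above, assuming numpy; the particular vector is arbitrary):

```python
import numpy as np

# P = |v><v| is Hermitian and idempotent, with eigenvalues in {0, 1}.
v = np.array([1.0, 1.0j]) / np.sqrt(2)
P = np.outer(v, v.conj())                   # projector onto span{|v>}

assert np.allclose(P.conj().T, P)           # P† = P: P is itself an observable
assert np.allclose(P @ P, P)                # P² = P
print(np.round(np.linalg.eigvalsh(P), 10))  # [0., 1.]: a binary yes/no answer
```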
Measurements are unitary operations too; you just don't see it: a measurement is equivalent to some complicated (quantum) operation that acts not just on the system but also on its environment. If one were to model everything as a quantum system (including the environment), one would have unitary operations all the way.
However, usually there is little point in this because we usually don't know the exact action on the environment and typically don't care. If we consider only the system, then the result is the well-known collapse of the wave function, which is indeed a non-unitary operation.
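A minimal sketch of this picture (my own illustration; the "environment" is idealized as a single ancilla qubit, the measurement interaction as a CNOT, and numpy is assumed):

```python
import numpy as np

# A CNOT copies the system's basis information into an ancilla "environment";
# the joint evolution is unitary, but the reduced system state collapses.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
env0 = np.array([1, 0], dtype=complex)
joint = CNOT @ np.kron(plus, env0)          # (|00> + |11>)/sqrt(2), still unitary

rho_joint = np.outer(joint, joint.conj())
# Tracing out the environment leaves the familiar collapsed mixture.
rho_sys = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_sys.real, 3))            # diag(0.5, 0.5): coherences are gone
```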
Quantum states can change in two ways: 1. quantumly, 2. classically.
1. All the state changes taking place quantumly are unitary. All the quantum gates, quantum errors, etc., are quantum changes.
2. There is no obligation for classical changes to be unitary; e.g., measurement is a classical change.
This is all the more reason why it is said that the quantum state is 'disturbed' once it is measured.
• Why would errors be "quantum"? – Norbert Schuch Oct 28 '18 at 22:20
• @NorbertSchuch: Some errors could come in the form of the environment "measuring" the state, which could be considered classical in the language of this user, but other errors may come in the form of rotations/transformations in the Bloch sphere which don't make sense classically. Certainly you need to do full quantum dynamics if you want to model decoherence exactly (non-Markovian and non-perturbative ideally, but even Markovian master equations are quantum). – user1271772 Oct 29 '18 at 1:05
• Surely not all errors are 'quantum', but I meant to say that all 'quantum errors' ($\sigma_x,\sigma_y,\sigma_z$ and their linear combinations) are unitary. Please correct me if I am wrong, thanks. – alphaQuant Oct 29 '18 at 5:49
• To be more precise, errors which are taken care of by QECCs. – alphaQuant Oct 29 '18 at 5:56
• I guess I'm not sure what "quantum" and "classical" means. What would a CP map qualify as? – Norbert Schuch Oct 29 '18 at 6:45
Jean-François Bougron
Quantum formalism and non-equilibrium Repeated Interaction Systems
Wednesday, 14 November, 2018 - 17:00
Abstract:
Starting from common knowledge in quantum mechanics, this talk will introduce a simplified version of the quantum formalism. More precisely, I will recall basic facts about the wave function, the observables and the Schrödinger equation, and explain how in specific cases, one can solve quantum mechanical problems using finite dimensional linear algebra.
The second purpose of this talk is to apply this simplified formalism to a specific problem: the Linear Response Theory and the Entropic Fluctuations of Repeated Interaction Systems. Physically speaking, one can think of a beam of atoms whose temperatures are unequal. This beam goes through a cavity which is filled with an electromagnetic field. On average, this electromagnetic field will take energy from the hottest atoms and give energy to the coldest ones. Some well-known physical results of thermodynamics can be proved in a particular form in this situation.
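A minimal sketch of the "finite dimensional linear algebra" viewpoint mentioned above (my own illustration, not from the talk; it assumes numpy and scipy, with $\hbar = 1$):

```python
import numpy as np
from scipy.linalg import expm

# A two-level system: the Schrödinger equation reduces to exponentiating
# a 2x2 Hamiltonian matrix, U(t) = exp(-iHt).
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # off-diagonal coupling (Pauli X)

psi0 = np.array([1.0, 0.0], dtype=complex)    # start in |0>
for t in (0.0, np.pi / 4, np.pi / 2):
    psi_t = expm(-1j * H * t) @ psi0
    print(t, np.abs(psi_t) ** 2)              # populations oscillate (Rabi flopping)
```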
Thursday, March 21, 2019
actual infinite falling (all-)together &/or chaosmos COLLAPSE
Musée d'Orsay/January 2018 (for more A/Z photography see portfolio here);
Clara Colosimo in Fellini's Prova d'orchestra;
Arquipélago dos Pombos Correios (o soverdouro);
The great abyss inframince (by A/Z, for more see here);
"... the term quantum mechanics is very much a misnomer. It should, perhaps, be called quantum nonmechanics..."
David Bohm
"But he wants The Cold like he wants His Junk—NOT OUTSIDE where it does him no good but INSIDE so he can sit around with a spine like a frozen hydraulic jack..."
"Because Fats' nerves were raw and peeled to feel the death spasms of a million cold kicks... Fats learned The Algebra of Need and survived..."
"Cure is always: Let Go! Jump!"
William S. Burroughs
"Ihr verehrt mich: aber wie, wenn eure Verehrung eines Tages umfällt?"
"Ich komme aus Höhen, die kein Vogel je erflog, ich kenne Abgründe, in die noch kein Fuß sich verirrt hat..."
"Vor mir giebt es diese Umsetzung des dionysischen in ein philosophisches Pathos nicht."
"... ein vollkommnes Ausser-sich-sein mit dem distinktesten Bewusstsein einer Unzahl feiner Schauder und Überrieselungen bis in die Fusszehen; eine Glückstiefe, in der das Schmerzlichste und Düsterste nicht als Gegensatz wirkt, sondern als bedingt, als herausgefordert, sondern als eine nothwendige Farbe innerhalb eines solchen Lichtüberflusses; ein Instinkt rhythmischer Verhältnisse, der weite Räume von Formen überspannt..."
"... la majorité est travaillé par une minorité proliférante et non dénombrable qui risque de détruire la majorité dans son concept même, c'est-à-dire en tant qu'axiome... le étrange concept de non-blanc ne constitue pas un ensemble dénombrable... Le propre de la minorité, c'est de faire valoir la puissance du non-dénombrable, même quand elle est composée d'un seul membre. C'est la formule des multiplicités. Non-blanc, nous avons tous à le devenir, que nous soyons blancs, jaunes ou noirs."
Deleuze & Guattari
"Eighteenth-century masters achieved most pleasing effects with forgrounds of warm brown and fading distances of cool, silvery blues... Constable wanted to try out the effect of respecting the local color of grass somewhat more, in his Wivenhoe Park he is seen pushing the range more in the direction of bright greens. Only in the direction of, it is a transposition, not a copy."
Ernst H. Gombrich (Art and Illusion)
"Oboukhoff acumule alors en un étroit espace les demi-tons; la sensation est déchirante, horripilante... Avec le glissando, c'est tout l'infini du monde sonore qui fait irruption dans la musique tempérée..."
Boris de Schloezer (1925)
"Note the parallels between ordinary awareness, classical physics, and the natural and counting integers..."
Dean Radin (Real Magic)
"É a luz que dá o salto e cria toda mágica do espetáculo."
Ney Matogrosso
This is AGAINST Carlo Rovelli's dictum or pseudo-problem: "visto que tudo se atrai, a única maneira de um Universo finito não desmoronar sobre si mesmo é que se expanda" [since all things attract one another, the only way a finite Universe can avoid collapsing onto itself is to expand] (A realidade não é o que parece, p. 105)—but why should one use the term "finite" (or even "infinite") to describe a universe with no definite borders (like a 3-sphere, or something even more complex)? The infinite is not equivalent to the huge. The infinite is simply (according to Dedekind) what can be matched up to its own parts (the only reason to deny this is hysteria, paradox-freakishness). The universe (the chaosmos) both expands & collapses! As a whole and at the length of its space-time infinitesimals (or epsilon-delta limits, whatever), the macro/micro contractions, the revolving ruminations (what Rovelli confusedly calls "granulations," as if they were incompatible with any notion of continuity) of an autophagic real-virtual Einsteinian mollusk. If you have three fundamental constants (as Rovelli suggests, A realidade não é o que parece, p. 229), velocity [of light], information and Planck's length (c, ħ, ℓp), what matters is the relation among them (which might be revealed in established, finite proportions), not each one of their supposedly fixed (absolute) values (and even the relation might vary, fluctuate). Otherwise you behave like a very stupid painter arguing over the positive value of (what we call) green or brown in the transposition of tonal gradations to canvas.
Main Hall:
Time out of joints or the excessive solution (academically and sophistically called 'the measurement problem'):
"If quantum state evolution proceeds via the Schrödinger equation or some other linear equation, then, as we have seen in the previous section, typical experiments will lead to quantum states that are superpositions of terms corresponding to distinct experimental outcomes. It is sometimes said that this conflicts with our experience, according to which experimental outcome variables, such as pointer readings, always have definite values. This is a misleading way of putting the issue, as it is not immediately clear how to interpret states of this sort as physical states of a system that includes experimental apparatus, and, if we can’t say what it would be like to observe the apparatus to be in such a state, it makes no sense to say that we never observe it to be in a state like that," Wayne Myrvold's "Philosophical Issues in Quantum Mechanics," Stanford Encyclopedia of Philosophy
"... von Neumann makes the logical structure of quantum theory very clear by identifying two very different processes, which he calls process 1 and process 2... Process 2 is the analogue in quantum theory of the process in classic physics that takes the state of a system at one time to its state at a later time. This process 2, like its classic analogue, is local and deterministic. However, process 2 by itself is not the whole story: it generates a host of ‘physical worlds’, most of which do not agree with our human experience. For example, if process 2 were, from the time of the big bang, the only process in nature, then the quantum state (centre point) of the moon would represent a structure smeared out over a large part of the sky, and each human body–brain would likewise be represented by a structure smeared out continuously over a huge region. Process 2 generates a cloud of possible worlds, instead of the one world we actually experience...," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005).
"... a seminal discovery by Heisenberg... in order to get a satisfactory quantum generalization of a classic theory one must replace various numbers in the classic theory by actions (operators). A key difference between numbers and actions is that if A and B are two actions then AB represents the action obtained by performing the action A upon the action B. If A and B are two different actions then generally AB is different from BA: the order in which actions are performed matters. But for numbers the order does not matter: AB=BA. The difference between quantum physics and its classic approximation resides in the fact that in the quantum case certain differences AB–BA are proportional to a number measured by Max Planck in 1900, and called Planck’s constant. Setting those differences to zero gives the classic approximation," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005).
"At their narrowest points, calcium ion channels are less than a nanometre in diameter... The narrowness of the channel restricts the lateral spatial dimension. Consequently, the lateral velocity is forced by the quantum uncertainty principle to become large. This causes the quantum cloud of possibilities associated with the calcium ion to fan out over an increasing area as it moves away from the tiny channel to the target region... This spreading of this ion wave packet means that the ion may or may not be absorbed on the small triggering site. Accordingly, the contents of the vesicle may or may not be released... the quantum state of the brain splits into a vast host of classically conceived possibilities, one for each possible combination of the release-or-no-release options at each of the nerve terminals... a huge smear of classically conceived possibilities," Jeffrey M. Schwartz's, Henry P. Stapp's and Mario Beauregard's "Quantum physics in neuroscience and psychology: a neurophysical model of mind–brain interaction," Philosophical Transactions of the Royal Society (2005).
"... waves make diffraction patterns precisely because multiple waves can be at the same place at the same time, and a given wave can be at multiple places at the same time... by definition particles are localized entities that take up space, they can be here or there, but not in two places at once. However it turns out that particles can produce diffraction patterns under specific circumstances... a given particle can be in a state of superposition... to be in a state of superposition between two positions, for exemple, is not to be here or there or even here and there, but rather it is to be indeterminately here-there. That is, it is not simply that the position is unknown, but rather there is no fact of the matter to whether it is here or there... it is a matter of ontological indeterminacy and not merely epistemological uncertainty... patterns of difference... are arguably at the core or what matter is and are at the heart of how quantum physics understands the world... the quantum probabilities are calculated by taken account of all the possible paths connecting the points. In other words, a given particle that starts out here and winds up there is understood as is understood to be in a superposition of all possible paths between two points. Or in its four dimensional quantum field theory elaboration, all possible space-time histories... the very meaning of superposition is that all possible histories are happening together, they all coexist and mutually contribute to this overall pattern or else there wouldn't be a diffraction pattern..." Karen Barad's "Troubling Time's & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription.
"Quantum physics opens up another possibility beyond the relatively familiar phenomena of spatial diffraction, namely, temporal diffraction. The existence of temporal diffraction is due to a less well-known indeterminacy principle than the usual position/momentum indeterminacy principle... something call the energy/time indeterminacy principle. This indeterminacy principle plays a key role in quantum field theory... temporalities are not merely multiple, but rather temporalities are specifically entangled and threaded through one another such that there is no determinate answer to the question what time is it? Karen Barad's "Troubling Time's & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription.
"During the waning decades of the 20th century, the most murdering century by some accounts in history, the notion that the past might be open to revision through a quantum erasure came to the fore. The quantum erasure experiment is a variation of the two slit diffraction experiment, an experiment which Feynman said contains all the mysteries of quantum physics. Against this fantastic claim of the possibility of erasure, I will claim that in paying close attention to the material labours entailed the claim of erasure possibility fades, at least full erasure, while at the same time bringing to the forth a relational ontology sensibility to questions of time, memory and history... the nature of time and being, or rather time-being itself is in question and can't be assumed. What this experiment tells us is not simply that a given particle would have done something different in the past, but that the very nature of its being, its ontology, in the past remains open to future reworkings... In particular I argue that this experiment offers empirical evidence for a relational ontology or perhaps more accurately a hauntology as against a metaphysics of presence... Remarkably this experiment makes evident that entanglement survives the measurement process and further more that material traces of attempts at erasure can be found in tracing the entanglements... While the past is never finished, and the future is not what would unfold, the world holds or rather is the memories of its iterated reconfigurings" Karen Barad's "Troubling Time's & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription.
"If classical physics insists that the void has no matter and no energy, the quantum principle of ontological indeterminacy, and particularly the indeterminacy relation between energy and time, pose into question the existence of such a zero energy, zero matter state... the indeterminacy principle allows for fluctuations of the vacuum... the vacuum is far from empty, it is fill with all possible indeterminate yearnings of space-time mattering... we can understand vacuum fluctuation in terms of virtual particles. Virtual particles are the quanta of the vacuum fluctuations... the void is a spectral ground, not even nothing can be free of ghosts... there is infinite number of possibilities, but not everything is possible. The vacuum isn't empty but neither is anything in it... particles together with their antiparticles and pairs can be created out of the vacuum by putting the right amount of energy into the vacuum... So, similarly, particles together with their antiparticles and pairs can go back into the vacuum, emitting the excess energy" Karen Barad's "Troubling Time's & Ecologies of Nothingness," European Graduate School Video Lectures (YouTube), my transcription.
Labyrinthine corridors, rooms:
"This was on Friday afternoon. Saturday morning I awoke early and read the two papers. Bohm, in simple clear language, declared that indeed there were conceptual problems in both macro- and microphysics, and that they were not to be swept under the carpet... And, further, Bohm suggested that the root of those problems was the fact that conceptualizations in physics had for centuries been based on the use of lenses which objectify (indeed the lenses of telescopes and microscopes are called objectives). Lenses make objects, particles," Karl Pribram's "The Implicate Brain";
"An equally important step in understanding came at a meeting at the University of California in Berkley, in which Henry Stapp and Geoffrey Chew of the Department of Physics pointed out that most of quantum physics, including their bootstrap formulations based on Heisenberg's scatter matrices, were described in a domain which is the Fourier transform of the spacetime domain. This was of great interest to me because Russell and Karen DeValois of the same university had shown that the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern. ***The Fourier theorem states that any pattern, no matter how complex, can be analyzed into regular waveform components of different frequencies, amplitudes, and (phase) relations among frequencies. Further, given such components, the original pattern can be reconstructed. This theorem was the basis for Gabor's invention of holography," Karl Pribram's "The Implicate Brain";
[***when different wave patterns meet, they add up to form new patterns; you can analyse complex wave patterns as if they were a superposition of simpler waves, which have, for instance, a definite, uniform wavelength; the illustration at left is taken from the site of professor John D. Norton (University of Pittsburgh): "Einstein for Everyone"; it is important to note that real wave patterns studied in physics are much more complex than this two-dimensional representation, and that they are ultimately formed by something that is neither strictly speaking a wave nor a particle as these are classically understood; I shall also say that not all of John D. Norton's explanations given in the referred site seem very enlightening to me]
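A small numerical illustration of the theorem (my own sketch, assuming numpy): a pattern built from two sine components is recovered, component by component, by the discrete Fourier transform.

```python
import numpy as np

# Build a "complex" pattern from two sine waves, then recover each component.
t = np.arange(1024) / 1024.0
pattern = 1.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

spectrum = np.fft.rfft(pattern)
amps = 2 * np.abs(spectrum) / len(pattern)   # per-frequency amplitudes
peaks = np.nonzero(amps > 0.1)[0]
print(peaks, np.round(amps[peaks], 3))       # frequencies 5 and 12, amplitudes 1.0 and 0.5
```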
From Maxwell's equations, we should expect an infinite number of frequencies of electromagnetic waves (or radiation, which includes visible light, waves whose frequencies are below the one which produces the red colour, such as radio waves, and also waves whose frequencies are above the one which produces the violet colour, such as gamma rays). All these electromagnetic waves travel at what is called the speed of light (the frequencies can vary because the wavelength varies in inverse proportion) and constitute the electromagnetic spectrum. High frequency also means high photon energy. The photon energy is related to how single atoms of different material objects can absorb and emit electromagnetic waves, which happens always in quantum discrete amounts. Like concrete musical instruments, atoms can produce oscillations only in certain restricted ways, and they do so very energetically. The physical production of what we perceive as forms and colours has to do, however, more directly with the way electromagnetic waves travel much more freely and continuously in space, through, for instance, air or water, interfering (constructively or destructively) with one another and interacting with molecules—and we are talking here about electromagnetic waves of lower energy and frequencies, which are visible. What we see, however, isn't everything.
Does the continuum (infinitely divisible) preclude plurality? Does the discrete preclude unity? Of course not! Except for the lack of imagination of the purist & prudish. But thank goodness, in philosophy we also have Leibniz's "Natura non facit saltus," and Peirce's synechism (everything is connected), the immemorial and unending, irreducible battles between the one and the many. Why should people be so afraid of a conundrum of straight lines, curves, and points (which, besides holding for these one and two dimensions, can be extrapolated to n)? Infinitesimals, differentials, and limits, what is the real difference? The epsilon-delta definition (Cauchy, Bolzano, Weierstrass) and nonstandard analysis (Abraham Robinson) are all in the end perfectly compatible. Add to that synthetic differential geometry or the smooth infinitesimal (F. W. Lawvere), whatever! The actual infinite—everything else starts from it! Just don't be afraid of lingo—the science wars are an affair of securing university bonuses in times of economic havoc. And don't forget that continuity doesn't have to be only local, that is, the chaosmos is full of nonlocal connections, the innermost separations! What matters is attitude, not content or specific formulations.
["Whenever a point x is within δ units of c, f(x) is within ε units of L," graphic and definition from the Wikipedia's epsilon-delta entry ]
["Infinitesimals (ε) and infinites (ω) on the hyperreal number line (1/ε = ω/1)," graphic and definition from Wikipedia's hyperreal number entry]
"Cusanus... took the circle to be an infinilateral regular polygon, that is, a regular polygon with an infinite number of (infinitesimally short) sides... The idea of considering a curve as an infinilateral polygon was employed by a number of later thinkers, for instance, Kepler, Galileo and Leibniz... Traditionally, geometry is the branch of mathematics concerned with the continuous and arithmetic (or algebra) with the discrete. The infinitesimal calculus that took form in the 16th and 17th centuries, which had as its primary subject matter continuous variation, may be seen as a kind of synthesis of the continuous and the discrete, with infinitesimals bridging the gap between the two. The widespread use of indivisibles and infinitesimals in the analysis of continuous variation by the mathematicians of the time testifies to the affirmation of a kind of mathematical atomism which, while logically questionable, made possible the spectacular mathematical advances with which the calculus is associated. It was thus to be the infinitesimal, rather than the infinite, that served as the mathematical stepping stone between the continuous and the discrete," John L. Bell's "Continuity and Infinitesimals" (Stanford Encyclopedia of Philosophy) [I like this passage very much, and this is a very useful article, but I'm not subscribing in detail to all ideas Bell developed there];
"... science needs calculus; calculus needs the continuum; the continuum needs a very careful definition; and the best definition requires there to be actual infinities (not merely potential infinities) in the micro-structure and the overall macro-structure of the continuum... Informally expressed [for Dedekind], any infinite set can be matched up to a part of itself; so the whole is equivalent to a part. This is a surprising definition because, before this definition was adopted, the idea that actually infinite wholes are equinumerous with some of their parts was taken as clear evidence that the concept of actual infinity is inherently paradoxical... [Cantor's] new idea [similar to Dedekind's] is that the potentially infinite set presupposes an actually infinite one. If this is correct, then Aristotle’s two notions of the potential infinite and actual infinite have been redefined and clarified," Bradley Dowden's "The Infinite" (Internet Encyclopedia of Philosophy) [I like this passage very much, and this is a very useful article, but I'm not subscribing in detail to all ideas Dowden developed there];
"... in Quantum Electrodynamics... processes of much greater complexity [than a simple electron-electron scattering] could intervene in the scattering process. For example, the exchanged photon could convert to an electron-positron pair which would subsequently recombine... or one of the incoming electrons might emit a photon and reabsorb it on the way out... in general, the exchange of arbitrarily large numbers of photons, electrons and positrons can contribute to electromagnetic interactions... very complicated multiparticle exchanges have to be taken into account in the analysis of physical systems. Indeed, no exact solutions to the Quantum Electrodynamics are known, nor have such solutions ever been shown rigorously to exist [but precise approximations are possible]," Andrew Pickering's Constructing Quarks (p. 63);
"... in quantum field theory all forces are mediated by particle exchange... It is equally important to stress that the exchanged particles... are not observable... To explain why this is so, it is necessary to make a distinction between 'real' and 'virtual' particles... particles with unphysical values of energy and momentum are said to be 'virtual' or 'off mass-shell' particles. In classical physics they could not exist at all... In quantum physics, in consequence of the Uncertainty Principle, virtual particles can exist, but only for an infinitesimal and experimentally undetectable length of time. In fact, the lifetime of a virtual particle is inversely dependent upon how far its mass diverges from its physical value," Andrew Pickering's Constructing Quarks (p. 64-65);
"In quantum mechanics the particles themselves can be represented as fields. An electron, for example, can be considered a packet of waves with some finite extension in space. Conversely, it is often convenient to represent a quantum mechanical field as if it were a particle. The interaction of two particles through their interpenetrating fields can then be summed up by saying the two particles exchange a third particle, which is called the quantum of the field. For example, when two electrons, each surrounded by an electromagnetic field, approach each other and bounce apart, they are said to exchange a photon, the quantum of the electromagnetic field. The exchanged quantum has only an ephemeral existence... The larger their energy, the briefer their existence. The range of an interaction is related to the mass of the exchanged quantum. If the field quantum has a large mass, more energy must be borrowed in order to support its existence, and the debt must be repaid sooner lest the discrepancy be discovered. The distance the particle can travel before it must be reabsorbed is thereby reduced and so the corresponding force has a short range. In the special case where the exchanged quantum is massless [such as a photon] the range is infinite," Gerard 't Hooft's "Gauge Theories of the Forces between Elementary Particles" (Scientific American, vol. 242, n. 6, 1980, pp. 104-141);
"It was not immediately apparent that quantum electrodynamics could qualify as a physically acceptable theory. One problem arose repeatedly in any attempt to calculate the result of even the simplest electromagnetic interactions, such as the interaction between two electrons. The likeliest sequence of events in such an encounter is that one electron emits a single virtual photon and the other electron absorbs it. Many more complicated exchanges are also possible, however; indeed, their number is infinite. For example, the electrons could interact by exchanging two photons, or three, and so on. The total probability of the interaction is determined by the sum of the contributions of all the possible events... Perhaps the best defense of the theory is simply that it works very well. It has yielded results that are in agreement with experiments to a n accuracy of about one part in a billion, which makes quantum electrodynamics the most accurate physical theory ever de vised," Gerard 't Hooft's "Gauge Theories of the Forces between Elementary Particles" (Scientific American, vol. 242, n. 6, 1980, pp. 104-141);
"If an electron enters a medium composed of molecules that have positively and negatively charged ends, for example, it will polarize the molecules. The electron will repel their negative ends and attract their positive ends, in effect screening itself in positive charge. The result of the polarization is to reduce the electron's effective charge by an amount that in creases with distance... The uncertainty principle of Werner Heisenberg suggests... that the vacuum is not empty. According to the principle, uncertainty about the energy of a system increases as it is examined on progressively shorter time scales. Particles may violate the law of the conservation of energy for unobservably brief instants; in effect, they may materialize from nothingness. In QED [Quantum Electrodynamics] the vacuum is seen as a complicated and seething medium in which pairs of charged "virtual" particles, particularly electrons and positrons, have a fleeting existence. These ephemeral vacuum fluctuations are polarizable just as are the molecules of a gas or a liquid. Accordingly QED predicts that in a vacuum too electric charge will be screened and effectively reduced at large distances," Chris Quigg's Elementary Particles and Forces (Scientific American, vol. 252, n. 4, 1985, pp. 84-95);
"A nuvem de probabilidades que acompanha os elétrons entre uma interação e outra é um pouco parecida com um campo. Mas os campos de Faraday e Maxwell, por sua vez, são feitos de grãos: os fótons. Não apenas as partículas estão em certo sentido difusas no espaço como campos, mas também os campos interagem como partículas. As noções de campo e de partícula, separadas por Faraday e Maxwell, acabam convergindo na mecânica quântica. A forma como isso acontece na teoria é elegante: as equações de Dirac determinam quais valores cada variável pode assumir. Aplicadas à energia das linhas de Faraday, dizem-nos que essa energia pode assumir apenas certos valores e não outros... As ondas eletromagnéticas são de fato vibrações das linhas de Faraday, mas também, em pequena escala, enxames de fótons... Por outro lado, também os elétrons e todas as partículas de que é feito o mundo são 'quanta' de um campo... semelhante ao de Faraday e Maxwell," Carlo Rovelli's A realidade não é o que parece (Objetiva, 2014, p. 125);
"A 'nuvem' que representa os pontos do espaço onde é provável encontrar o elétron é descrita por um objeto matemático chamado 'função de onda.'O físico austríaco Erwin Schrödinger escreveu uma equação que mostra como essa função de onda evolui no tempo. Schrödinger esperava que a 'onda' explicasse as estranhezas da mecânica quântica... Ainda hoje alguns tentam entender a mecânica quântica pensando que a realidade é a onda de Schrödinger. Mas Heisenberg e Dirac logo compreenderam que esse caminho é equivocado. A função [de onda] não está no espaço físico, está em um espaço abstrato formado por todas as possíveis [virtuais!] configurações do sistema... A realidade do elétron não é uma onda [?]: é esse aparecer intermitente nas colisões," Carlo Rovelli's A realidade não é o que parece (Objetiva, 2014, p. 271);
"When we say that we wish to make sense of something we meant o put it into spacetime terms, the terms of Euclidean geometry, clock time, etc. The Fourier transform domain is potential to this sensory domain. The waveforms which compose the order present in the electromagnetic sea which fills the universe make up an interpenetrating organization similar to that which characterizes the waveforms "broadly cast" by our radio and television stations. Capturing a momentary cut across these airwaves would constitute their hologram. The broadcasts are distributed and at any location they are enfolded among one another. In order to make sense of this cacophany of sights and sounds, one must tune in on one and tune out the others. Radios and television sets provide such tuners. Sense organs provide the mechanisms by which organisms tune into the cacophany which constitutes the quantum potential organization of the elecromagnetic energy which fills the universe," Karl Pribram's "The Implicate Brain";
"... the cloud chamber photograph does not reveal a “solid” particle leaving a track. Rather it reveals the continual unfolding of process with droplets forming at the points where the process manifests itself. Since in this view the particle is no longer a point-like entity, the reason for quantum particle interference becomes easier to understand. When a particle encounters a pair of slits, the motion of the particle is conditioned by the slits even though they are separated by a distance that is greater than any size that could be given to the particle. The slits act as an obstruction to the unfolding process, thus generating a set of motions that gives rise to the interference pattern," Basil J. Hiley's "Mind and matter: aspects of the implicate order described through algebra" (in K. H. Pribram's and J. King's Learning as Self-Organisation, New Jersey, Lawrence Erlbaum Associates, 1996, pp. 569-86);
"Let us... ask what the algebraic structure tells you about the underlying phase space. Because the algebra is non-commutative there is no single underlying manifold. That is a mathematical result. Thus if we take the algebra as primary then there is no underlying manifold we can call the phase space. But we already know this. At present we say this arises because of the 'uncertainty principle,' but nothing is 'uncertain,'" Basil Hiley's "From the Heisenberg Picture to Bohm: a New Perspective on Active Information and its relation to Shannon Information" (in A. Khrennikov, Proc. Conf. Quantum Theory: reconsideration of foundations, Sweden, Växjö University Press, pp. 141-162, 2002).
"What Gelfand showed was that you could either start with an a priori given manifold and construct a commutative algebra of functions upon it or one could start with a given commutative algebra and deduce the properties of a unique underlying manifold. If the algebra is non-commutative it is no longer possible to find a unique underlying manifold. The physicist’s equivalent of this is the uncertainty principle when the eigenvalues of operators are regarded as the only relevant physical variables. What the mathematics of non-commutative geometry tells us is that in the case of a non-commutative algebra all we can do is to find a collection of shadow manifolds... The appearance of shadow manifolds is a necessary consequence of the non-commutative structure of the quantum formalism," Basil Hiley's "Phase Space Descriptions of Quantum Phenomena" (in A. Khrennikov, Quantum theory: Reconsiderations of Foundations, Vaxjo University Press, 2003).
the odd transformation of Der Herr Warum (Gödel with Resnais);
the only three types of ingenuity;
why self-help books are not to be dismissed;
the most auspicious tetrahedron;
what is REAL space? what is REAL number?
Timothy Leary in the 1990s;
5G?! Get real...
list of charming scientists/engineers;
pick a soul (ass you wish);
- en profane: Orsay & Centre Pompidou;
view from Berthe Trépat's apartment;
list des déclencheurs musicaux;
Dark Consciousness;
The Doors of Perception;
Structuralism, Poststructuralism;
List des figures du chaos primordial (Deleuze);
Brazilian Perspectivism;
Piano Playing (Kochevitsky);
- L'Affirmation de l'âne (review of Smolin/Unger's The Singular Universe);
And also:
Journal of Computational Dynamics
December 2019, Volume 6, Issue 2
Special issue in honor of Reinout Quispel
Preface Special issue in honor of Reinout Quispel
Elena Celledoni and Robert I. McLachlan
2019, 6(2): ⅰ-ⅴ doi: 10.3934/jcd.2019007
Efficient time integration methods for Gross-Pitaevskii equations with rotation term
Philipp Bader, Sergio Blanes, Fernando Casas and Mechthild Thalhammer
2019, 6(2): 147-169 doi: 10.3934/jcd.2019008
The objective of this work is the introduction and investigation of favourable time integration methods for the Gross-Pitaevskii equation with rotation term. Employing a reformulation in rotating Lagrangian coordinates, the equation takes the form of a nonlinear Schrödinger equation involving a space-time-dependent potential. A natural approach that combines commutator-free quasi-Magnus exponential integrators with operator splitting methods and Fourier spectral space discretisations is proposed. Furthermore, the special structure of the Hamilton operator permits the design of specifically tailored schemes. Numerical experiments confirm the good performance of the resulting exponential integrators.
Deep learning as optimal control problems: Models and numerical methods
Martin Benning, Elena Celledoni, Matthias J. Ehrhardt, Brynjulf Owren and Carola-Bibiane Schönlieb
2019, 6(2): 171-198 doi: 10.3934/jcd.2019009
We consider recent work of [18] and [9], where deep learning neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first order conditions for optimality, and the conditions ensuring optimality after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters such as the time discretisation. We explore this extension alongside natural constraints (e.g. time steps lie in a simplex). We compare these deep learning algorithms numerically in terms of induced flow and generalisation ability.
Algebraic structure of aromatic B-series
Geir Bogfjellmo
2019, 6(2): 199-222 doi: 10.3934/jcd.2019010
Aromatic B-series are a generalization of B-series. Some of the algebraic structures on B-series can be defined analogously for aromatic B-series. This paper derives combinatorial formulas for the composition and substitution laws for aromatic B-series.
A new class of integrable Lotka–Volterra systems
Helen Christodoulidi, Andrew N. W. Hone and Theodoros E. Kouloukas
2019, 6(2): 223-237 doi: 10.3934/jcd.2019011
A parameter-dependent class of Hamiltonian (generalized) Lotka–Volterra systems is considered. We prove that this class contains Liouville integrable as well as superintegrable cases according to particular choices of the parameters. We determine sufficient conditions which result in integrable behavior, while we numerically explore the complementary cases, where these analytically derived conditions are not satisfied.
Solving the wave equation with multifrequency oscillations
Marissa Condon, Arieh Iserles, Karolina Kropielnicka and Pranav Singh
2019, 6(2): 239-249 doi: 10.3934/jcd.2019012
We explore a new asymptotic-numerical solver for the time-dependent wave equation with an interaction term that is oscillating in time with a very high frequency. The method involves representing the solution as an asymptotic series in inverse powers of the oscillation frequency. Using the new scheme, high accuracy is achieved at a low computational cost. Salient features of the new approach are highlighted by a numerical example.
Principal symmetric space analysis
Charles Curry, Stephen Marsland and Robert I. McLachlan
2019, 6(2): 251-276 doi: 10.3934/jcd.2019013
Principal Geodesic Analysis is a statistical technique that constructs low-dimensional approximations to data on Riemannian manifolds. It provides a generalization of principal components analysis to non-Euclidean spaces. The approximating submanifolds are geodesic at a reference point such as the intrinsic mean of the data. However, they are local methods as the approximation depends on the reference point and does not take into account the curvature of the manifold. Therefore, in this paper we develop a specialization of principal geodesic analysis, Principal Symmetric Space Analysis, based on nested sequences of totally geodesic submanifolds of symmetric spaces. The examples of spheres, Grassmannians, tori, and products of two-dimensional spheres are worked out in detail. The approximating submanifolds are geometrically the simplest possible, with zero exterior curvature at all points. They can deal with significant curvature and diverse topology. We show that in many cases the distance between a point and the submanifold can be computed analytically and there is a related metric that reduces the computation of principal symmetric space approximations to linear algebra.
Integrable reductions of the dressing chain
Charalampos Evripidou, Pavlos Kassotakis and Pol Vanhaecke
2019, 6(2): 277-306 doi: 10.3934/jcd.2019014
In this paper we construct a family of integrable reductions of the dressing chain, described in its Lotka-Volterra form. For each $k, n\in \mathbb N$ with $n \geqslant 2k+1$ we obtain a Lotka-Volterra system $\hbox{LV}_b(n, k)$ on $\mathbb {R}^n$ which is a deformation of the Lotka-Volterra system $\hbox{LV}(n, k)$, which is itself an integrable reduction of the $(2m+1)$-dimensional Bogoyavlenskij-Itoh system $\hbox{LV}({2m+1}, m)$, where $m = n-k-1$. We prove that $\hbox{LV}_b(n, k)$ is both Liouville and non-commutative integrable, with rational first integrals which are deformations of the rational first integrals of $\hbox{LV}({n}, {k})$. We also construct a family of discretizations of $\hbox{LV}_b(n, 0)$, including its Kahan discretization, and we show that these discretizations are also Liouville and superintegrable.
Locally conservative finite difference schemes for the modified KdV equation
Gianluca Frasca-Caccia and Peter E. Hydon
2019, 6(2): 307-323 doi: 10.3934/jcd.2019015
Finite difference schemes that preserve two conservation laws of a given partial differential equation can be found directly by a recently-developed symbolic approach. Until now, this has been used only for equations with quadratic nonlinearity.
In principle, a simplified version of the direct approach also works for equations with polynomial nonlinearity of higher degree. For the modified Korteweg-de Vries equation, whose nonlinear term is cubic, this approach yields several new families of second-order accurate schemes that preserve mass and either energy or momentum. Two of these families contain Average Vector Field schemes of the type developed by Quispel and co-workers. Numerical tests show that each family includes schemes that are highly accurate compared to other mass-preserving methods that can be found in the literature.
Re-factorising a QRT map
Nalini Joshi and Pavlos Kassotakis
2019, 6(2): 325-343 doi: 10.3934/jcd.2019016
A QRT map is the composition of two involutions on a biquadratic curve: one switching the $x$-coordinates of two intersection points with a given horizontal line, and the other switching the $y$-coordinates of two intersections with a vertical line. Given a QRT map, a natural question is to ask whether it allows a decomposition into further involutions. Here we provide new answers to this question and show how they lead to a new class of maps, as well as known HKY maps and quadrirational Yang-Baxter maps.
The Lie algebra of classical mechanics
Robert I. McLachlan and Ander Murua
2019, 6(2): 345-360 doi: 10.3934/jcd.2019017
Classical mechanical systems are defined by their kinetic and potential energies. They generate a Lie algebra under the canonical Poisson bracket. This Lie algebra, which is usually infinite dimensional, is useful in analyzing the system, as well as in geometric numerical integration. But because the kinetic energy is quadratic in the momenta, the Lie algebra obeys identities beyond those implied by skew symmetry and the Jacobi identity. Some Poisson brackets, or combinations of brackets, are zero for all choices of kinetic and potential energy, regardless of the dimension of the system. Therefore, we study the universal object in this setting, the 'Lie algebra of classical mechanics' modelled on the Lie algebra generated by kinetic and potential energy of a simple mechanical system with respect to the canonical Poisson bracket. We show that it is the direct sum of an abelian algebra $\mathfrak{X}$, spanned by 'modified' potential energies and isomorphic to the free commutative nonassociative algebra with one generator, and an algebra freely generated by the kinetic energy and its Poisson bracket with $\mathfrak{X}$. We calculate the dimensions $c_n$ of its homogeneous subspaces and determine the value of its entropy $\lim_{n\to\infty} c_n^{1/n}$. It is $1.8249\dots$, a fundamental constant associated to classical mechanics. We conjecture that the class of systems with Euclidean kinetic energy metrics is already free, i.e., that the only linear identities satisfied by the Lie brackets of all such systems are those satisfied by the Lie algebra of classical mechanics.
A structure-preserving Fourier pseudo-spectral linearly implicit scheme for the space-fractional nonlinear Schrödinger equation
Yuto Miyatake, Tai Nakagawa, Tomohiro Sogabe and Shao-Liang Zhang
2019, 6(2): 361-383 doi: 10.3934/jcd.2019018
We propose a Fourier pseudo-spectral scheme for the space-fractional nonlinear Schrödinger equation. The proposed scheme has the following features: it is linearly implicit; it preserves two invariants of the equation; and its unique solvability is guaranteed without any restrictions on space and time step sizes. The scheme requires solving a complex symmetric linear system per time step. To solve the system efficiently, we also present a certain variable transformation and preconditioner.
Discrete gradients for computational Bayesian inference
Sahani Pathiraja and Sebastian Reich
2019, 6(2): 385-400 doi: 10.3934/jcd.2019019
In this paper, we exploit the gradient flow structure of continuous-time formulations of Bayesian inference in terms of their numerical time-stepping. We focus on two particular examples, namely, the continuous-time ensemble Kalman–Bucy filter and a particle discretisation of the Fokker–Planck equation associated to Brownian dynamics. Both formulations can lead to stiff differential equations which require special numerical methods for their efficient numerical implementation. We compare discrete gradient methods to alternative semi-implicit and other iterative implementations of the underlying Bayesian inference problems.
Geometry of the Kahan discretizations of planar quadratic Hamiltonian systems. Ⅱ. Systems with a linear Poisson tensor
Matteo Petrera and Yuri B. Suris
2019, 6(2): 401-408 doi: 10.3934/jcd.2019020
Kahan discretization is applicable to any quadratic vector field and produces a birational map which approximates the shift along the phase flow. For a planar quadratic Hamiltonian vector field with a linear Poisson tensor and with a quadratic Hamilton function, this map is known to be integrable and to preserve a pencil of conics. In the paper "Three classes of quadratic vector fields for which the Kahan discretization is the root of a generalised Manin transformation" by P. van der Kamp et al. [5], it was shown that the Kahan discretization can be represented as a composition of two involutions on the pencil of conics. In the present note, which can be considered as a comment to that paper, we show that this result can be reversed. For a linear form $\ell(x,y)$, let $B_1,B_2$ be any two distinct points on the line $\ell(x,y) = -c$, and let $B_3,B_4$ be any two distinct points on the line $\ell(x,y) = c$. Set $B_0 = \tfrac{1}{2}(B_1+B_3)$ and $B_5 = \tfrac{1}{2}(B_2+B_4)$; these points lie on the line $\ell(x,y) = 0$. Finally, let $B_\infty$ be the point at infinity on this line. Let $\mathfrak E$ be the pencil of conics with the base points $B_1,B_2,B_3,B_4$. Then the composition of the $B_\infty$-switch and of the $B_0$-switch on the pencil $\mathfrak E$ is the Kahan discretization of a Hamiltonian vector field $f = \ell(x,y)\begin{pmatrix}\partial H/\partial y \\ -\partial H/\partial x \end{pmatrix}$ with a quadratic Hamilton function $H(x,y)$. This birational map $\Phi_f:\mathbb C P^2\dashrightarrow\mathbb C P^2$ has three singular points $B_0,B_2,B_4$, while the inverse map $\Phi_f^{-1}$ has three singular points $B_1,B_3,B_5$.
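For readers unfamiliar with the method, here is a minimal sketch of Kahan's discretization itself (not of the pencil-of-conics result above; my own illustration, assuming numpy and using a standard quadratic Lotka-Volterra field):

```python
import numpy as np

# Kahan's rule for a quadratic vector field: polarize each quadratic monomial,
#   x*y -> (x*Y + X*y)/2,  and average the linear terms,  x -> (x + X)/2,
# where (X, Y) is the next point. The update is linearly implicit:
# one small linear solve per step. Example field: x' = x(1-y), y' = y(x-1).
def kahan_step(x, y, h):
    # (X - x)/h = (x + X)/2 - (x*Y + X*y)/2
    # (Y - y)/h = (x*Y + X*y)/2 - (y + Y)/2
    A = np.array([[1.0/h - 0.5 + 0.5*y, 0.5*x],
                  [-0.5*y, 1.0/h - 0.5*x + 0.5]])
    b = np.array([x/h + 0.5*x, y/h - 0.5*y])
    return np.linalg.solve(A, b)

x, y = 2.0, 1.0
for _ in range(5):
    x, y = kahan_step(x, y, 0.1)
print(x, y)   # the resulting birational map approximates the phase flow
```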
Chains of rigid bodies and their numerical simulation by local frame methods
Nicolai Sætran and Antonella Zanna
2019, 6(2): 409-427 doi: 10.3934/jcd.2019021
We consider the dynamics and numerical simulation of systems of linked rigid bodies (chains). We describe the system using the moving frame method approach of [18]. In this framework, the dynamics of the $j$th body is described in a frame relative to the $(j-1)$th one. Starting from the Lagrangian formulation of the system on $\mathrm{SO}(3)^{N}$, the final dynamic formulation is obtained by variational calculus on Lie groups. The obtained system is solved by using unit quaternions to represent rotations and numerical methods preserving quadratic integrals.
Study of adaptive symplectic methods for simulating charged particle dynamics
Yanyan Shi, Yajuan Sun, Yulei Wang and Jian Liu
2019, 6(2): 429-448 doi: 10.3934/jcd.2019022
In plasma simulations, numerical methods with high computational efficiency and long-term stability are needed. In this paper, symplectic methods with adaptive time steps are constructed for simulating the dynamics of charged particles in an electromagnetic field. With specifically designed step size functions, the motion of charged particles confined in a Penning trap under three different magnetic fields is studied, and the dynamics of runaway electrons in tokamaks is also investigated. Numerical experiments are performed to show the efficiency of the newly derived adaptive symplectic methods.
Linear degree growth in lattice equations
Dinh T. Tran and John A. G. Roberts
2019, 6(2): 449-467 doi: 10.3934/jcd.2019023
We conjecture recurrence relations satisfied by the degrees of some linearizable lattice equations. This helps to prove linear degree growth of these equations. We then use these recurrences to search for lattice equations that have linear growth and hence are linearizable.
Strange attractors in a predator–prey system with non-monotonic response function and periodic perturbation
Johan Matheus Tuwankotta and Eric Harjanto
2019, 6(2): 469-483 doi: 10.3934/jcd.2019024
A system of ordinary differential equations of predator–prey type, depending on nine parameters, is studied. We have included in this model a non-monotonic response function and a time periodic perturbation. Using numerical continuation software, we have detected three codimension two bifurcations for the unperturbed system, namely cusp, Bogdanov–Takens and Bautin bifurcations. Furthermore, we concentrate on two regions in the parameter space: the region where the Bogdanov–Takens bifurcation occurs, and the region where the Bautin bifurcation occurs. As we turn on the time periodic perturbation, we find strange attractors in the neighborhood of invariant tori of the unperturbed system.
Using Lie group integrators to solve two and higher dimensional variational problems with symmetry
Michele Zadra and Elizabeth L. Mansfield
2019, 6(2): 485-511 doi: 10.3934/jcd.2019025
The theory of moving frames has been used successfully to solve one dimensional (1D) variational problems invariant under a Lie group symmetry. In the one dimensional case, Noether's laws give first integrals of the Euler–Lagrange equations. In higher dimensional problems, the conservation laws do not enable the exact integration of the Euler–Lagrange system. In this paper we use the theory of moving frames to help solve, numerically, some higher dimensional variational problems which are invariant under a Lie group action. In order to find a solution to the variational problem, we need first to solve the Euler–Lagrange equations for the relevant differential invariants, and then solve a system of linear, first order, compatible, coupled partial differential equations for a moving frame, evolving on the Lie group. We demonstrate that Lie group integrators may be used in this context. We show first that the Magnus expansions on which one dimensional Lie group integrators are based may be taken sequentially in a well defined way, at least to order 5; that is, the exact result is independent of the order of integration. We then show that efficient implementations of these integrators give a numerical solution of the equations for the frame which is independent of the order of integration, to high order, in a range of examples. Our running example is a variational problem invariant under a linear action of $SU(2)$. We then consider variational problems for evolving curves which are invariant under the projective action of $SL(2)$ and finally the standard affine action of $SE(2)$.
DOE/ER/40561-205-INT95-00-92 UW/PT 95-05
Effective Field Theories
Lectures given at the Seventh Summer School in Nuclear Physics: “Symmetries”
Institute for Nuclear Theory, June 19 - 30, 1995
David B. Kaplan
Institute for Nuclear Theory, NK-12
University of Washington
Seattle WA 98195, USA
Three lectures in which I give an introduction to effective field theories in particle physics.
1. Introduction
Physics, chemistry and biology are all sciences describing different aspects of the same world, yet their practice differs enormously…why is that? To a large extent the cultural differences between the fields can be explained in terms of a fundamental difference between the physical systems being analyzed: A biological system typically has many unrelated energy scales of comparable magnitude, such as the low lying excitation spectra of various different enzymes collaborating in a biological process. While it is possible to crudely explain why the body temperature of a human is on the order of 300 K in terms of the fine structure constant and the masses of the proton and the electron, you will not be able to explain why a human is healthy with a temperature of 37°C but not with a temperature of 40°C without a thorough understanding of human physiology.
In contrast, a physics problem tends to involve energy scales that are widely separated, which allows one — with care — to determine many of the properties of a system using the tool of dimensional analysis. To see how this works, we first choose physical units with the speed of light and Planck's constant set to unity: $\hbar = c = 1$.
Then all physical parameters can be said to have the dimension of mass to some power. In particular, if some quantity $X$ has dimensions of ${\rm (mass)}^d$, we will just say "$X$ has dimension $d$" or $[X] = d$. You should be able to convince yourself that $[x] = [t] = -1$, $[\partial_\mu] = [p_\mu] = [E] = +1$,

and so forth. For practical purposes it is conventional to express everything in units of energy (GeV, MeV, …) rather than mass; thus masses, momenta and inverse lengths are all quoted in GeV, MeV, etc. Of particular interest to us will be the dimension of a Lagrange density; since $S = \int d^4x\, {\cal L}$ is an action — which comes in units of $\hbar$ and is therefore dimensionless — it follows that $[{\cal L}] = 4$.
Let us use dimensional analysis to discuss properties of the hydrogen atom. To a first approximation, the system is described in terms of one dimensionful parameter, the electron mass $m_e$, and one dimensionless number, the fine structure constant $\alpha$. If we want to estimate the size of a hydrogen atom, since length has dimension $-1$ it follows that $a_0 \propto 1/m_e$, where the proportionality constant is dimensionless. What is it? We would guess it is some number of order unity, times some power of $\alpha$. Alas, dimensional analysis doesn't tell us the power…we have to look at the dynamics to realize that the appropriate power is $-1$. We arrive at $a_0 = 1/(\alpha m_e)$, which in fact is the exact expression for the Bohr radius. What about the ground state binding energy $E_0$ of the hydrogen atom? $E_0$ has the dimensions of mass, and dynamics gives us a proportionality factor of $\alpha^2$, so we estimate $E_0 \sim \alpha^2 m_e$, which is in fact only a factor of 2 off from the correct value.
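A minimal numerical sketch of these estimates (the inputs are just $\alpha$, $m_e$, and the standard conversion $1\ {\rm GeV}^{-1} \approx 1.97\times 10^{-16}$ m):

```python
# Dimensional-analysis estimates for hydrogen in natural units (hbar = c = 1).
alpha = 1 / 137.036          # fine structure constant
m_e = 0.5110e-3              # electron mass in GeV
GEV_INV_TO_M = 1.9733e-16    # meters per GeV^-1

a0 = 1 / (alpha * m_e)       # Bohr radius, a quantity of dimension -1
E0 = 0.5 * alpha**2 * m_e    # binding energy; the dynamical factor is 1/2

print(f"Bohr radius    : {a0 * GEV_INV_TO_M:.3e} m")  # ~5.3e-11 m
print(f"binding energy : {E0 * 1e9:.1f} eV")          # ~13.6 eV
```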
We can go on and ask what wavelength of photon allows one to examine crystal structure by means of diffraction. Since atomic sizes are given by $a_0 \sim 1/(\alpha m_e)$, the atomic spacing in a crystal is expected to be similar. It follows that to see crystal structure, we need photons with wavelength $\lambda \lesssim 1/(\alpha m_e)$, or equivalently energy $E \gtrsim \alpha m_e \sim$ a few keV — visible light won't do, we need X-rays. On the other hand, if we wish to estimate the energy of light emitted from an atomic transition in hydrogen, we get $E \sim \alpha^2 m_e \sim$ tens of eV, corresponding to a wavelength $\lambda \sim 1/(\alpha^2 m_e)$, several orders of magnitude larger than the atom itself.
The above analysis may seem familiar and unimpressive — after all, while it is nice that one can easily determine how $\alpha$ enters quantities of physical interest, one also has to keep track of powers of $\alpha$, which involves going back and examining the Schrödinger equation for hydrogen (which we know how to solve anyway). Yet there is something remarkable about the analysis, and it lies in the sentence: ''To a first approximation, the system is described in terms of one dimensionful parameter, the electron mass $m_e$, and one dimensionless number, the fine structure constant $\alpha$.'' Why should this be true, and what is the approximation we are making? Why is the system insensitive to the proton mass? Or the $W$ and $Z$ boson masses? Or Newton's constant $G_N$? Why don't we need to take into account the bottom quark mass, $m_b \simeq 5$ GeV? The ratio $m_e/m_b$ is a dimensionless number; couldn't the ground state energy of the hydrogen atom be some function of it, namely $E_0 = \alpha^2 m_e\, f(m_e/m_b)$, where $f(x)$ could as easily equal $1/x$ as $1$?
The technique of constructing effective theories allows one to answer such questions. (The concept of effective field theory is mainly associated with Ken Wilson, although it is an edifice with many architects; for two quite dissimilar modern treatments see the reviews by J. Polchinski [1] and H. Georgi [2].) The basic idea is not to attempt to construct "a theory of everything", but to construct an effective theory that is appropriate to the energy scale of the experiments one is interested in. A theory of everything is beyond our abilities to construct since we cannot probe everything experimentally, and even if we could, it would contain lots of information extraneous to any particular experiment.
Effective field theory techniques get interesting when we wish to look at effects over a wide range of energies: then we must understand how effective theories at different scales are related to each other. This is useful if one wishes to relate experiments over a large range of energy scales, or if one has a theory of high energy physics and wishes to predict the results of low energy experiments. In the example of the hydrogen atom and the $b$ quark, one can show that the ground state energy depends on the quark mass $m_b$ only in the following mild way: there is a small power law correction to the naive value $E_0 = \frac12 \alpha^2 m_e$, as well as a hidden dependence of $\alpha$ on the quark mass. When one is only concerned with atomic physics, one can ignore the $m_b$ dependence of $\alpha$, since it is already incorporated in the measured physical value $\alpha \simeq 1/137$. Effective field theories however allow you to simply compute electromagnetic scattering of electrons at a center-of-mass energy of order 100 GeV, where one finds that the appropriate value of the fine structure "constant" is closer to $1/128$ — a change due in part to the effects of the $b$ quark.
The outline of these lectures is as follows: 1. First I discuss how to construct effective theories as an expansion in operators consistent with low energy symmetries, and how to use dimensional analysis to extract the interesting physics; 2. Next I explain how the dimension of the operator determines whether it is irrelevant, relevant or “marginal” to low energy physics; 3. I then consider “matching”: how one relates the parameters of a low energy theory to those of a higher energy theory; 4. I then show how quantum corrections can sometimes change the dimension of an operator and therefore radically change low energy physics. 5. Finally I mention the application of some of these ideas to the strong interactions.
Each section is followed by some exercises (of widely varying difficulty); I encourage you to work through them.
Exercise 1. Estimate the energy scale of rotational excitations of water in terms of $\alpha$, $m_e$ and $m_p$. Does your answer explain why microwaves are used to heat food?
2. Dimensional analysis, symmetries, and the separation of scales
The basic idea behind effective field theories is that a physical process typified by some energy $E$ can be described in terms of an expansion in $E/\Lambda_i$, where the $\Lambda_i$ are various physical scales involved in the problem which have dimension 1 and which are bigger than $E$. In this section we show how this simple idea can be incorporated into a predictive framework.
2.1. Example 1: Why the sky is blue.
Consider the question of why the sky is blue. More precisely, consider the problem of low energy light scattering from neutral atoms in their ground state, where by "low energy" I mean that the photon energy $E_\gamma$ is much smaller than the excitation energy $\Delta E$ of the atom, which is of course much smaller than its inverse size or mass: $E_\gamma \ll \Delta E \ll a_0^{-1} \ll M_{\rm atom}$.
Thus the process is necessarily elastic scattering, and to a good approximation we can ignore that the atom recoils, treating it as infinitely heavy. Let's construct an "effective Lagrangian" to describe this process. This means that we are going to write down a Lagrangian with all interactions describing elastic photon-atom scattering that are allowed by the symmetries of the world — namely Lorentz invariance and gauge invariance. Photons are described by a field $A_\mu$ which creates and destroys photons; a gauge invariant object constructed from $A_\mu$ is the field strength tensor $F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu$. The atomic field is defined as $\phi_v$, where $\phi_v$ destroys an atom with four-velocity $v^\mu$ (satisfying $v^2 = 1$, with $v^\mu = (1,0,0,0)$ in the rest-frame of the atom), while $\phi_v^\dagger$ creates an atom with four-velocity $v^\mu$. So what is the most general form for ${\cal L}_{\rm eff}$? Since the atom is electrically neutral, gauge invariance implies that $\phi_v$ can only be coupled to $F_{\mu\nu}$ and not directly to $A_\mu$. So ${\cal L}_{\rm eff}$ is comprised of all local, Hermitian monomials in $\phi_v^\dagger \phi_v$, $F_{\mu\nu}$, $v^\mu$, and $\partial_\mu$. Certain combinations we needn't consider for the problem at hand — for example $\partial_\mu F^{\mu\nu} = 0$ for radiation (by Maxwell's equations); also, if we define the energy of the atom at rest in its ground state to be zero, then $v^\mu \partial_\mu \phi_v = 0$, since in the rest frame $v^\mu \partial_\mu = \partial_t$ and $i\partial_t \phi_v = 0$. Thus we are led to consider the Lagrangian
${\cal L}_{\rm eff} = c_1\, \phi_v^\dagger \phi_v\, F_{\mu\nu}F^{\mu\nu} + c_2\, \phi_v^\dagger \phi_v\, v^\alpha F_{\alpha\mu}\, v_\beta F^{\beta\mu} + c_3\, \phi_v^\dagger \phi_v\, (v^\alpha \partial_\alpha)\, F_{\mu\nu}F^{\mu\nu} + \ldots \qquad (2.1)$
The above expression involves an infinite number of operators and an infinite number of unknown coefficients! Nevertheless, dimensional analysis allows us to identify the leading contribution to low energy scattering of light by neutral atoms. It is straightforward to figure out that $[\partial_\mu] = 1$, $[F_{\mu\nu}] = 2$, and $[\phi_v] = 3/2$.

The first follows from the fact that $\partial_\mu$ has the dimension of 1/length. The second is easily determined by noting that the Maxwell Lagrangian is $-\frac14 F_{\mu\nu}F^{\mu\nu}$, and that $[{\cal L}] = 4$. Finally, $[\phi_v]$ is determined by writing a state with no atom as $|0\rangle$, and one atom as $\phi_v^\dagger |0\rangle$ (smeared against the normalized atomic wavefunction); demanding that the one-atom state be normalized to unity, it follows that $[\phi_v] = 3/2$.
Since the effective Lagrangian has dimension 4, the coefficients $c_1$, $c_2$, etc. also have dimensions. It is easy to see that they all have negative mass dimensions: $[c_1] = [c_2] = -3$, $[c_3] = -4$,

and that operators involving higher powers of $\partial$ and $F$ would have coefficients of even more negative dimension. It is crucial to note that these dimensions must be made from dimensionful parameters describing the atomic system — namely its size $a_0$ and the energy gap $\Delta E$ between the ground state and the excited states. The other dimensionful quantity, the photon energy $E_\gamma$, is explicitly represented by the derivatives acting on the photon field. Thus for small $E_\gamma$ the dominant effect is going to be from the operator in ${\cal L}_{\rm eff}$ which has the lowest dimension. There are in fact two leading operators, the first two in eq. (2.1), both of dimension 7. Thus low energy scattering is dominated by these two operators, and we need only compute $c_1$ and $c_2$.
What are the sizes of the coefficients? To do a careful analysis one needs to go back to the full Hamiltonian for the atom in question interacting with light, and "match" the full theory to the effective theory. We will discuss this process of matching later, but for now we will just estimate the sizes of the coefficients. We first note that extremely low energy photons cannot probe the internal structure of the atom, and so the cross-section ought to be classical, only depending on the size of the scatterer. Since such low energy scattering can be described entirely in terms of the coefficients $c_1$ and $c_2$, we conclude that $c_1 \sim c_2 \sim a_0^3$.
The effective Lagrangian for low energy scattering of light is therefore
${\cal L}_{\rm eff} = a_0^3\left(\tilde c_1\, \phi_v^\dagger \phi_v\, F_{\mu\nu}F^{\mu\nu} + \tilde c_2\, \phi_v^\dagger \phi_v\, v^\alpha F_{\alpha\mu}\, v_\beta F^{\beta\mu}\right), \qquad (2.2)$

where $\tilde c_1$ and $\tilde c_2$ are dimensionless, and expected to be $O(1)$. The cross-section (which goes as the amplitude squared) must therefore be proportional to $a_0^6$. But a cross section has dimensions of area, or $-2$, while $[a_0^6] = -6$. Therefore the cross section must be proportional to
$\sigma \propto a_0^6\, E_\gamma^4, \qquad (2.3)$
growing like the fourth power of the photon energy. Thus blue light is scattered more strongly than red, and the sky looks blue.
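The $E_\gamma^4$ scaling alone already accounts for the color of the sky; a rough illustration (the wavelengths below are assumed round numbers):

```python
# sigma ~ a0^6 * omega^4: compare scattering of blue and red light.
lam_blue, lam_red = 450e-9, 650e-9        # photon wavelengths in meters
ratio = (lam_red / lam_blue) ** 4         # omega is proportional to 1/lambda
print(f"blue light scatters ~{ratio:.1f}x more strongly than red")  # ~4.4x
```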
Is the expression (2.3) valid for arbitrarily high energy? No, because we left out terms in the effective Lagrangian we used. To understand the size of corrections to (2.3) we need to know the size of the $c_3$ operator (and the rest we ignored). Since $[c_3] = -4$, we expect the effect of the $c_3$ operator on the scattering amplitude to be smaller than the leading effects by a factor of $E_\gamma/\Lambda$, where $\Lambda$ is some energy scale. But does $\Lambda$ equal $a_0^{-1}$, or $\Delta E$? The latter is the smallest scale and hence the most important. We expect our approximations to break down as $E_\gamma \to \Delta E$ since for such energies the photon can excite the atom. Hence we predict
$\sigma(E_\gamma) \propto a_0^6\, E_\gamma^4\left[1 + O\!\left(E_\gamma/\Delta E\right)\right]. \qquad (2.4)$
The Rayleigh scattering formula ought to work pretty well for blue light, but not very far into the ultraviolet. Note that eq. (2.4) contains a lot of physics even though we did very little work. More work is needed to compute the constant of proportionality.
2.2. Example 2: The binding energy of charmonium to nuclei.
Closely related to the above example is the calculation of the binding energy of charmonium (a $\bar cc$ bound state, where $c$ is the charm quark) to nuclei. In the limit that the charm quark mass $m_c$ is very heavy, the charmonium meson can be thought of as a Coulomb bound state, with size $r_0 \sim 1/(\alpha_s m_c)$, where $\alpha_s$ is a small number (more on this later). When inserted in a nucleus, it will interact with the nucleons by exchanging gluons with nearby quarks. Typical momenta for gluons in a nucleus are set by the QCD scale $\Lambda_{\rm QCD} \sim 200$ MeV. For large $m_c$ then, the wavelength of gluons will be much larger than the size of the charmonium meson, and so the relevant interaction is the gluon-charmonium analogue of photon-atom scattering considered above. The effective Lagrangian is just given by (2.2), where now $\phi_v$ destroys charmonium mesons, and $F_{\mu\nu}$ is replaced by $G^a_{\mu\nu}$, the field strength for gluons of type $a$. The coefficients may be computed from QCD. To compute the binding energy of charmonium we need to compute the nucleon matrix element of the gluon operator, $\langle N|\, G^a_{\mu\nu} G^{a\,\mu\nu}\, |N\rangle$

(as well as the matrix element of the other operator in (2.2)), which we do not know how to do precisely since the system is strongly interacting. We can estimate its size by dimensional analysis though: the only scale available is $\Lambda_{\rm QCD}$, so the matrix element is of order $\Lambda_{\rm QCD}$ raised to the appropriate power.
This problem is discussed with greater sophistication in ref. [3].
2.3. Example 3: The cross section for low energy neutrino interactions.
The term “weak interactions” refers in general to any interaction mediated by the $W$ or $Z$ bosons, whose masses are $M_W \simeq 80$ GeV and $M_Z \simeq 91$ GeV respectively. Since their couplings are rather weak, it is usually a decent approximation to only consider first order perturbation theory, namely the Feynman graphs of fig. 1(a).
Fig. 1. (a) Tree level $W$ and $Z$ exchange between four fermions. (b) The effective vertex in the low energy effective theory (Fermi interaction).
These interactions describe scattering of fermions, or decays. The $W$ and $Z$ propagators (in a particular choice of gauge) are given by $-i g_{\mu\nu}/(q^2 - M^2)$, where $q$ is the four-momentum transferred. For low energy processes, $q^2 \ll M^2$, and one never has enough energy to make a physical $W$ or $Z$, so there is no reason to include them in the theory. Thus the low energy effective theory just has the contact interactions shown in figure 1b: schematically,
${\cal L}_{\rm eff} \sim G\, (\bar\psi\psi)(\bar\psi\psi), \qquad (2.5)$

where the $\psi$ represent fermion fields (for either quarks or leptons). Since the Lagrangian for a noninteracting fermion is $\bar\psi\,(i\gamma^\mu\partial_\mu - m)\,\psi$, it follows that $[\psi] = 3/2$,

and so the coupling $G$ in eq. (2.5) has dimension $-2$. You can estimate its size by equating the processes fig 1a and fig. 1b, and it is roughly given by $G \sim g^2/M^2$, where $g$ and $M$ are the dimensionless coupling constant and mass of the $W$ or $Z$. (This is "matching", and you will do this more precisely in a later exercise.)
Since neutrinos only interact through the weak force, it follows that low energy neutrinos ($E \ll M_{W,Z}$) interact with matter through an operator of the form (2.5), where two of the $\psi$'s are neutrino fields, and the other two are either quark or lepton fields. Thus the neutrino cross-section $\sigma$, which has dimension $-2$, must be proportional to $G^2$, which has dimension $-4$. Therefore the cross-section must scale with energy as
$\sigma \propto G^2\, s \qquad (2.6)$
for low energy neutrinos, where $s$ equals the square of the total energy in the center of momentum frame.
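A back-of-the-envelope version of this matching and scaling (a sketch; the coupling value and the dropped $O(1)$ factors are approximate, and $G$ stands in for the Fermi constant $G_F$):

```python
# Matching estimate G ~ g^2 / M^2 for the four-fermion coupling.
g = 0.65      # SU(2) gauge coupling, approximate
M_W = 80.4    # W boson mass in GeV
G = g**2 / M_W**2
print(f"G ~ g^2/M_W^2 = {G:.2e} GeV^-2   (compare G_F ~ 1.2e-5 GeV^-2)")

# sigma ~ G^2 * s: the cross-section grows with the square of the CM energy.
for E_cm in (0.1, 1.0, 10.0):            # GeV
    print(f"E_cm = {E_cm:5.1f} GeV -> sigma ~ {G**2 * E_cm**2:.1e} GeV^-2")
```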
Exercise 2. Use the effective Lagrangian (2.2) to explain why the force between two static neutral atoms at large separation $r$ scales like $1/r^8$ (equivalently, a potential $\propto 1/r^7$). You should be able to get this from dimensional analysis of the two photon exchange process. Can you explain why there isn't any contribution from one photon exchange due to an operator linear in $F_{\mu\nu}$? Can you explain why the approximations made in the effective field theory are expected to be invalid for $r \lesssim 1/\Delta E$? For a detailed discussion of why one finds $1/r^7$ instead of the nonrelativistic result $1/r^6$, see ref. [4].
Exercise 3. The $\mu$ and the $\tau$ have the same weak interactions, and so the amplitudes for $\mu$ decay via $W$ exchange and $\tau$ decay via $W$ exchange are equal. Since the $\tau$ is heavier, it has more ways to decay than the $\mu$. The masses and lifetimes of the two particles are $m_\mu \simeq 106$ MeV, $\tau_\mu \simeq 2.2\times 10^{-6}$ s and $m_\tau \simeq 1777$ MeV, $\tau_\tau \simeq 2.9\times 10^{-13}$ s.

Given that the $\mu$ decays 100% of the time via $\mu \to e\,\bar\nu_e\,\nu_\mu$, calculate the fraction of $\tau$ decays which are of the form $\tau \to e\,\bar\nu_e\,\nu_\tau$. All you need to know is that $[G] = -2$ in eq. (2.5). How does your answer compare with the observed branching ratio of about 18%?
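A quick numerical check of the scaling this exercise is after: with $[G] = -2$, a purely leptonic decay rate must scale as $\Gamma \sim G^2 m^5$, so the branching ratio follows from the measured masses and lifetimes (the PDG-style values below are assumed inputs):

```python
# Br(tau -> e nu nu) ~ (tau_tau / tau_mu) * (m_tau / m_mu)^5
m_mu, m_tau = 0.1057, 1.777            # masses in GeV
t_mu, t_tau = 2.197e-6, 2.903e-13      # lifetimes in seconds

br = (t_tau / t_mu) * (m_tau / m_mu) ** 5
print(f"predicted Br(tau -> e nu nu) ~ {br:.3f}")  # ~0.18, vs observed ~0.178
```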
Exercise 4. The partial mean lifetime of the proton in the decay $p \to e^+\pi^0$ is known to be greater than about $10^{32}$ years. Suppose that new physics at a scale $\Lambda$ does give rise to this decay (for example, through the tree level exchange of a particle with mass $\Lambda$, analogous to the $W$ interaction in fig. 1). What is an approximate lower bound on $\Lambda$? (Hint: Find the lowest dimension operators made up of quark and lepton fields that could give rise to this decay mode).
Exercise 5. Suppose that there are $\Delta B = 2$ baryon violating operators due to new physics at a scale $\Lambda$, but no $\Delta B = 1$ operators, so that the proton is stable, but $n\bar n$ oscillations can occur. Such oscillations have not been seen, and the lower bound on the oscillation time is roughly $10^8$ sec. How does this translate into a bound on the scale $\Lambda$?
Exercise 6. Estimate the cross-section for photon-photon scattering at energies well below the electron mass, $E_\gamma \ll m_e$. Since the electric charge $e$ is dimensionless, counting powers of $e$ matters!
3. The relevant, the irrelevant, and the marginal
So far I have only discussed examples where the operator has dimension greater than 4, so that the coefficient has negative dimension and the resulting cross-section or decay width therefore becomes smaller as the energy scale of the interaction gets smaller. Even though these are often the most interesting interactions — since they are harbingers of new physics at energies well above the experimental scale — these sorts of interactions are called irrelevant. The rationale is that at low energies, their effects are small (for example, see eqs. (2.4), (2.6)). In contrast, operators with dimension less than 4, whose coefficients have positive dimension, are called relevant operators because they become more relevant at lower energies. Ignoring quantum corrections, the only relevant operators one can write down in a relativistic field theory in four dimensions are: the unit operator (whose coefficient is the cosmological constant), which is dimension 0; boson mass terms, which are dimension 2; fermion mass terms, which are dimension 3; and 3-scalar ($\phi^3$) interactions, also dimension 3. (Terms linear in a scalar field can be removed by shifting its value.)
An example is the electron mass, arising from the dimension 3 operator $\bar e e$ with coefficient $m_e$. In high energy scattering ($E \gg m_e$) the effects of the electron mass are negligible. However, the effects of the electron mass are very important at energies comparable to $m_e$. In fact, exercise 3 is only simple if one not only assumes that the momentum scales in $\mu$ and $\tau$ decay are low compared to $M_W$, but also that they are high compared to $m_e$, so that one could ignore the electron mass. As another example, consider two real scalar fields $\phi$ and $\Phi$ with a Lagrangian of the form
${\cal L} = \frac12(\partial\phi)^2 - \frac12 m^2\phi^2 + \frac12(\partial\Phi)^2 - \frac12 M^2\Phi^2 - \frac{g}{2}\,\Phi\,\phi^2. \qquad (3.1)$

We will assume $m \ll M$.

We can see that unlike fermion fields, scalar fields have dimension 1, which means that the coupling $g$ does as well: $[g] = 1$.
By our definition above, the three scalar interaction is relevant. Consider $\phi\phi$ scattering at tree level in this model. First take the case where the center of mass energy $E$ is much greater than $m$, $M$, and $g$. Then the scattering amplitude from a graph like fig. 1a — with the $W$ replaced by $\Phi$ and the external fermions replaced by $\phi$ — is proportional to $g^2/E^2$, and the cross section must go as
$\sigma \sim \frac{g^4}{E^6},$

which goes rapidly to zero for large $E$. Now look at the scattering cross section at an energy satisfying $m \ll E \ll M$, so that the particles are still relativistic, but the $\Phi$ propagator can be contracted to a point as in figure 1b. Now the cross section goes as
$\sigma \sim \frac{g^4}{M^4}\,\frac{1}{E^2}.$

Contrasting this low energy cross section with that for neutrinos in §2.3 explains why the $\phi^3$ interaction is said to be relevant at low energies, while the Fermi interaction is called irrelevant.
Operators with dimension 4 lie between relevancy and irrelevancy and are called marginal. Examples of marginal interactions are $\phi^4$ interactions; Yukawa interactions ($\phi\,\bar\psi\psi$); and gauge interactions (interactions of a gauge boson with itself, a scalar, or a fermion). As we will see, marginality is an insecure position to be in, and quantum corrections will almost always change such operators from marginal to either relevant or irrelevant.
In each of the examples in the previous section we focussed on irrelevant interactions. The only reason why this was interesting was that in each case, irrelevant operators gave the leading contribution to the process… and because they weren't too irrelevant. For example, neutrinos only interact with matter through irrelevant operators…so if one sees any evidence of low energy neutrino scattering, one is seeing irrelevant operators. In contrast, electron-electron scattering has an electromagnetic contribution from photon exchange. Since the photon-electron coupling is a marginal operator, at low energies electromagnetic interactions dominate the weak interaction contribution. (No coincidence that these are called weak interactions!) Now imagine a world where the $W$ and $Z$ masses were enormously larger. In this world there would be practically no discernible weak interaction effects. The neutron would have a lifetime far longer than the age of the universe, and there would be no radioactivity; no one would have guessed that the neutrino existed, because it would not interact with anything. All we would discern in particle collisions and spectra would be the strong and electromagnetic interactions.
In fact, in any situation where there is a large gap between the energy where one is doing experiments and the energy scale of new physics, the effective theory one constructs will only consist of marginal and relevant operators… such theories are called “renormalizable” and are a natural outcome when there is a large hierarchy of physical scales. This typically results in a vast simplification of the physics one needs to consider, as seen in the next example.
3.1. Example: the success of Landau liquid theory.
A condensed matter system can be a very complicated environment; there may be various types of ions arranged in some crystalline array, where each ion has a complicated electron shell structure and interactions with neighboring ions that allow electrons to wander around the lattice. Nevertheless, the low energy excitation spectrum for many diverse systems can be described pretty well as a "Landau liquid", whose excitations are fermions with some complicated dispersion relation but no interactions. Why this is the case can be simply understood in terms of effective field theories, modifying the dimension counting used above to suit a nonrelativistic system with a Fermi surface. (The treatment here follows that of Polchinski in ref. [1].)
Let us assume that the low energy spectrum of the condensed matter system has fermionic excitations with arbitrary interactions above a Fermi surface characterized by the Fermi energy $\epsilon_F$; call them "quasi-particles". Ignoring interactions, the action can be written as
$S_0 = \int dt\, d^3p\;\left[\,\psi^\dagger(p)\, i\partial_t\, \psi(p) - \left(\epsilon(\vec p\,) - \epsilon_F\right)\psi^\dagger(p)\psi(p)\,\right], \qquad (3.2)$
where an arbitrary dispersion relation $\epsilon(\vec p\,)$ has been assumed.

Now let us consider higher dimension operators…but how should we count "dimension"? In the relativistic case, we defined mass dimension in a simple way, since we wanted to do an expansion in $E/\Lambda$, where $E$ was the scale of the experiment and $\Lambda$ was a physical scale associated with the system being probed. In a nonrelativistic system we identify scaling dimension with powers of momentum, in which case energy scales like momentum to the first power. Furthermore, it doesn't make sense to expand around $\vec p = 0$ since an excitation cannot have a momentum vector inside the Fermi surface. So we write the momentum as $\vec p = \vec k + \vec l$,
Fig. 2. The momentum of an excitation above the Fermi surface is divided into a component $\vec k$ on the Fermi surface, and a component $\vec l$ perpendicular to the surface. The length of $\vec l$ is the quantity one wants to scale.
where $\vec k$ lies on the Fermi surface and $\vec l$ is perpendicular to it (fig. 2). Then $l = |\vec l\,|$ is the quantity we vary in experiments and so we define the dimension of operators by how they must scale so that the theory is unchanged when we change $l \to s\,l$. If an object scales as $s^d$, then we say it has dimension $d$. Then $[E] = 1$, $[\vec l\,] = 1$, and $[\vec k\,] = 0$. And if we define the Fermi velocity as $\vec v_F(\vec k) = \left.\nabla_{\vec p}\,\epsilon\right|_{\vec p = \vec k}$, then for small $l$, $\epsilon(\vec p\,) - \epsilon_F \simeq l\, v_F(\vec k) + O(l^2)$,
and so $[\epsilon - \epsilon_F] = 1$ and $[\vec v_F] = 0$. Given that the action (3.2) isn't supposed to change under this scaling, and that the measure $dt\, d^2k\, dl$ has dimension $-1 + 0 + 1 = 0$ while each quadratic term carries one power of $E$ or $\epsilon - \epsilon_F$, it follows that $[\psi] = -\tfrac12$.
Now consider a four fermion interaction of the form
$S_4 = \int dt\, \prod_{i=1}^4 d^2k_i\, dl_i\;\; u(\vec k_1,\ldots,\vec k_4)\;\delta^3(\vec p_1 + \vec p_2 - \vec p_3 - \vec p_4)\;\psi^\dagger(p_1)\psi^\dagger(p_2)\psi(p_3)\psi(p_4).$

This will be relevant, marginal or irrelevant depending on the dimension of the coupling $u$. Counting as before, $[u] = -1 - [\delta^3]$. So how does the $\delta$-function scale? For generic vectors, $\delta^3(\vec p_1 + \vec p_2 - \vec p_3 - \vec p_4)$ is a constraint on the vectors $\vec k_i$ that doesn't change much as one changes $l$, so that $[\delta^3] = 0$. It follows that $[u] = -1$ and that the four fermion interaction is irrelevant…and that the system is adequately described in terms of free fermions (with an arbitrarily screwy dispersion relation). This effect is known in nuclear physics, where Pauli blocking allows a strongly interacting system of nucleons to have single particle excitations.
It is amusing that when a pair of momenta are within $l$ of cancelling each other, the scaling dimension of the delta function changes from 0 to $-1$. To see this, set the $\vec l$'s to zero, and fix the incoming momenta $\vec k_1$ and $\vec k_2$. The $\delta$-function then generically constrains three out of the four degrees of freedom in the outgoing momenta $\vec k_3$ and $\vec k_4$ in terms of $\vec k_1 + \vec k_2$. However, if $\vec k_1 = -\vec k_2$, then $\vec k_3 + \vec k_4$ must equal zero, but that only constrains two of the four degrees of freedom (assuming a parity symmetric Fermi surface). Therefore the delta function must scale like $1/l$, and so for these head-on collisions between particles at opposite sides of the Fermi sea, $[u] = 0$, and the interaction is marginal. Quantum corrections then make it either irrelevant or relevant; it turns out that for $u$ attractive, the interaction becomes relevant, and if it is repulsive it becomes irrelevant. In the former case, the interaction between such quasiparticles becomes strong near the Fermi surface, and can lead to pairing and superconductivity. See ref. [1] for more about this.
Exercise 7. How would you couple phonons to the fermions in a Landau liquid? Would the phonon - fermion coupling be relevant, irrelevant, or marginal?
4. Quantum corrections and renormalization
It is fine to call a higher dimension operator irrelevant when one is computing amplitudes at tree level, and the momenta flowing through the vertices are small. But what happens when one calculates quantum corrections (loop graphs) involving these irrelevant interactions and integrates over intermediate states of all energies? Do the irrelevant operators become important? A field theory with irrelevant operators used to fill field theorists with horror, since they were "nonrenormalizable". This meant that rather than having a finite number of counterterms that had to be fixed by some experimental measurement, one needed an infinite number. Such theories were thought to be unpredictive. QED is a good example of a renormalizable theory: Only two measurements are needed to fix the counterterms, namely the electron charge $e$ and mass $m_e$. Once these quantities are measured in one set of experiments, all other QED processes can be predicted. In a theory with irrelevant operators, however, extra insertions of the operator in a graph make it more divergent. In a theory with a Fermi interaction, for example — $G(\bar\psi\psi)^2$ — one finds one needs counterterms for operators of arbitrarily high dimension. Furthermore, these operators can in general renormalize relevant operators, such as the fermion mass, so it seems that all of these infinite number of interactions must be fit to experiment and nothing can be predicted.
This quandary is avoided if one uses a mass independent renormalization scheme (dimensional regularization), and thinks of the effective theory not as an expansion in operators, but as an expansion in inverse powers of some large physical scale $\Lambda$. Let us assume that we wish to do experiments at some momentum scale $p$ and that the relevant operators have coefficients set by a scale $m \ll \Lambda$. In contrast, the irrelevant operators have coefficients which are inverse powers of $\Lambda$. For example, a theory of a fermion with mass $m$ and higher order interactions:
${\cal L} = \bar\psi\left(i\gamma^\mu\partial_\mu - m\right)\psi + \frac{a}{\Lambda^2}\left(\bar\psi\psi\right)^2 + \ldots,$
where $a$ is a dimensionless coupling.
Now consider the divergent graph in fig. 3.
Fig. 3. A divergent one-loop radiative correction to the fermion mass and kinetic term in a theory with a $(\bar\psi\psi)^2/\Lambda^2$ interaction. This graph gives a divergent contribution to the mass operator proportional to $\frac{a}{\Lambda^2}\int \frac{d^4k}{(2\pi)^4}\,{\rm tr}\,\frac{i}{\gamma\cdot k - m}$.

When Wick rotated into Euclidian space and defined by dimensional regularization, the above integral equals (see eq. (A.1) in the Appendix) a result proportional to
$\frac{a\, m^3}{16\pi^2\,\Lambda^2}\left[\frac{1}{\epsilon} - \gamma + \ln 4\pi + \ln\frac{\mu^2}{m^2} + {\rm const}\right],$

where we are in $d = 4 - 2\epsilon$ dimensions, and $\mu$ is the renormalization scale that creeps into the problem. (For a discussion of dimensional regularization, see for example refs. [5], [6]. For those familiar with the concepts, some useful formulas are included as an appendix to these lecture notes.) In a mass independent subtraction scheme we put in a one-loop counterterm that cancels the infinite part of this graph, as well as a mass independent finite part. For example, in the $\overline{\rm MS}$ scheme, we subtract the part proportional to $\frac{1}{\epsilon} - \gamma + \ln 4\pi$.

We are left with a finite contribution to the fermion mass equal to (up to a numerical factor which I have dropped)
$\delta m = \frac{a\, m^3}{16\pi^2\,\Lambda^2}\,\ln\frac{\mu^2}{m^2}. \qquad (4.1)$
We choose a convenient scale $\mu \sim m$ and fit $m$ to experiment (once one has also calculated the one-loop wave function renormalization).
The important point I wish to make is that $\delta m$ is suppressed by two powers of the heavy scale, $\delta m \sim m\,(m^2/\Lambda^2)$:

it is small — it needs to be taken into account when probing effects proportional to $1/\Lambda^2$, but not otherwise. Note that this would not have been the case if we had simply taken $\Lambda$ to be a physical momentum cutoff and not renormalized…then, since the fermion loop graph is quadratically divergent, we would have found a correction dominated by the highest momenta in the theory, sensitive to whatever physics lies at those scales. This would be a ludicrous state of affairs — we would have to understand quantum gravity, for example, to compute radiative corrections to low energy scattering.
The above example has several important features which I wish to draw your attention to: (i) The correction to the electron mass is suppressed by $m^2/\Lambda^2$; (ii) $\delta m$ has a logarithmic dependence on the fermion mass and the renormalization scale $\mu$; (iii) The corrections to the fermion mass are proportional to the fermion mass itself. Each of these three points is worth commenting on:
4.1. The size of radiative corrections
Concerning the first point: it is an obvious and general result that in a mass independent subtraction scheme, corrections to low dimension operators due to high dimension operators are always suppressed by powers of $m/\Lambda$ and $p/\Lambda$. This is not what one finds simply putting in $\Lambda$ as a momentum cutoff for one's integrals. It is an obvious result because the only new mass scale induced by dimensional regularization is $\mu$, and that can be seen to only enter logarithms. Thus an integral with dimension $D$ will be proportional to the $D$-th power of the physical scales in the problem, $m$ and/or $p$. The scale $\Lambda$ only enters the problem raised to negative powers at the vertices. Thus the graph is always proportional to $1/\Lambda^n$, where $n$ is the combined power of $1/\Lambda$ from the vertices. No positive power of $\Lambda$ is generated by the loop integral.
The fermion mass in our theory does receive an infinite number of corrections from the infinite number of higher dimension operators, and they are only computable if I measure all of the coefficients of these operators. However the theory remains predictive, since at any finite order in $1/\Lambda$ there are a finite number of contributions to $\delta m$.
4.2. Radiative logarithms and the scale $\mu$
The renormalization scale $\mu$ enters through logarithms of $\mu/m$ or $\mu/p$. If we could sum up all orders in perturbation theory, all our answers would be $\mu$-independent. However, we stop at finite order, and our choice of $\mu$ can affect how quickly the perturbative expansion converges, since higher loop graphs yield higher powers of these logarithms. Thus we should optimize perturbation theory by choosing $\mu$ to minimize the logarithm. When comparing experiments at widely different physical scales $p_1$ and $p_2$, we may then run across large logarithms of the form $\ln(p_1/p_2)$, since the same $\mu$ cannot make the logs in the two processes simultaneously small. These large logs can be resummed using the renormalization group, discussed in a later section.
4.3. Symmetry and naturalness
I noted that $\delta m \propto m$ in eq. (4.1). This is because $m = 0$ increases the symmetry of the theory: in the above example the discrete chiral symmetry $\psi \to \gamma_5\psi$ is a symmetry of the Fermi interaction (and kinetic term), but not the mass term. If $m = 0$ it follows that $\delta m$ must also vanish. Therefore it is natural that the fermion mass might be small compared to other physical scales in the problem. In contrast, a scalar mass term does not usually break a symmetry — the only exceptions are if the theory is supersymmetric, or if the scalar is a Goldstone boson. The latter is important for pion physics and chiral perturbation theory. Even if the tree level scalar mass is zero, it will get radiatively corrected by other fields it couples to. Thus it is unnatural for there to be a light scalar coupled to high energy fields. Since scalars presumably couple to gravity, typified by the Planck scale $M_{\rm Pl} \simeq 10^{19}$ GeV, one has to wonder why the Higgs boson in the standard model has a mass in the $10^2$ to $10^3$ GeV mass range. (It has been suggested that in fact either there is no scalar Higgs boson, or that it is a Goldstone boson, or that it is a member of a supersymmetric multiplet of particles).
It is ironic that it used to be that people were worried about theories with irrelevant operators being sick. In fact what we see is that irrelevant operators cause no problems; it is the relevant operators that we must worry about. If relevant operators appear in the effective field theory, then they must be set by a scale much less than $\Lambda$ (else they wouldn't be in the effective theory below $\Lambda$). But if their coefficients are much smaller than $\Lambda$ without a symmetry reason, then we are baffled. The prime example is the cosmological constant, namely the coefficient of the unit operator 1 (a coefficient of dimension 4), otherwise known as the vacuum energy density. There is no known symmetry that appears relevant to our world that is increased by setting the vacuum energy density to zero, yet from cosmological observations, the vacuum energy is known to be absurdly small, $\lesssim 10^{-46}$ GeV$^4$ [7]. The smallness of the cosmological constant should be taken as a warning: it appears contrary to effective field theory dogma, so the dogma may be flawed.
Exercise 8. Compute both the wavefunction and mass corrections from the graph in fig. 3, using the $\overline{\rm MS}$ scheme. See the Appendix for dimensional regularization formulas.
5. Matching
Consider doing experiments with photons and electrons entirely within the context of QED. There are three different regimes for scattering experiments which one might consider: 1. Either photons or electrons or both in the incoming state, with momentum transfer large compared to $m_e$; 2. electrons in the incoming state, but at momentum transfer much smaller than $m_e$; 3. photons in the incoming state, but momentum transfer much less than $m_e$.

In the first case one needs to compute the relevant amplitude in the full QED theory, although at high energy one might make the approximation that the electron is massless. Furthermore, the fine structure constant has to be adjusted from its low energy value $\alpha \simeq 1/137$, an effect due to quantum corrections which we discuss in a later section. The second case is a little funny — we can ignore much of the complexity of QED since we do not have enough energy to produce positron-electron pairs, yet we still need to include both electrons and photons in the theory; I briefly mention the techniques one uses in this case in §7. For the third case one need only consider an effective theory with photons…why include electrons if one never sees any?
The low energy theory of photons alone looks like
${\cal L}_{\rm eff} = -\frac14 F_{\mu\nu}F^{\mu\nu} + \frac{a}{m_e^4}\left(F_{\mu\nu}F^{\mu\nu}\right)^2 + \frac{b}{m_e^4}\left(F_{\mu\nu}\tilde F^{\mu\nu}\right)^2 + \ldots,$

the most general local, hermitian theory invariant under Lorentz, gauge, charge conjugation and parity transformations. (Can you show why there are no irrelevant operators of dimension 6?) This is not QED because it distorts high energy physics…but we do care that it correctly reproduces low energy phenomenology. If we did not know about QED, we could treat this as a phenomenological theory and try to fit $a$ and $b$ to measured scattering cross sections. However, we do know QED, and so we can compute $a$ and $b$. To do this we simply require that QED and the effective theory give us the same physical predictions at low energy. In general, ensuring that the effective theory agrees in its predictions with the full theory to any desired order of accuracy is called "matching". What we are matching is the value of Green's functions in the two theories. Effective field theories are designed to reproduce all of the infrared (light particle) physics of the full theory, while distorting the high energy behavior to make calculations simpler. All of the interesting infrared effects in the full theory due to light particles are explicitly included; only the effects of the heavy particles or high energy modes must be mocked up. So the correct thing to do is match all the "one light particle irreducible" (1LPI) diagrams (diagrams that do not fall apart when one light particle line is cut), since these are the graphs that contain either a heavy particle, or high energy modes of a light particle. We cannot do this exactly of course, but we can do it systematically in a "loop" expansion, which is an expansion in powers of the number of loops in a diagram, or equivalently, powers of $\hbar$. (It may seem funny expanding in a dimensionful quantity we set to unity! However the loop expansion can be seen to be consistent with a perturbative expansion in coupling constants — see Coleman's lecture "Secret Symmetries" in ref. [8].)
5.1. Example: the $\Phi\phi^2$ interaction.
Rather than discussing QED, I will consider a toy model that exhibits the matching procedure nicely. It is the theory in eq. (3.1) with a light scalar $\phi$ coupled to a heavy scalar $\Phi$ via the interaction $\frac{g}{2}\Phi\phi^2$. (Never mind that the vacuum energy is unbounded below; one won't see this in perturbation theory). Suppose we are interested in $\phi\phi$ scattering at energies much below the mass $M$. The graphs we have to match to one loop are those in fig. 4.
Fig. 4. Matching conditions for the theory of eq. (3.1). Diagrams on the left are in the full theory, while those on the right are in the effective theory. Heavy lines correspond to the heavy scalar propagator; numbers beneath the vertices count the loop order of the matching condition. The first row is the complete tree level matching condition; second and third rows are the one-loop matching conditions for the two- and four-point vertices respectively. Note that matching conditions are not simply the contraction of heavy propagators to contact interactions.
At tree level, $\Phi$ exchange generates a $\phi^4$ interaction in the effective theory, so we find that
${\cal L}_{\rm eff} \supset -\,c\,\frac{g^2}{M^2}\,\phi^4 + \ldots,$

where the number $c$ is dimensionless, $O(1)$, and computable from the graphs. The $\ldots$ refers to operators such as $\phi^2\partial^2\phi^2$ (suppressed by $1/M^4$) that one finds expanding the one-$\Phi$ exchange diagram to higher order in $q^2/M^2$. If I had included a $\Phi^3$ interaction in the full theory, there would have been more complicated tree diagrams leading to operators with higher powers of $\phi$ in ${\cal L}_{\rm eff}$. The tree level matching condition is shown at the top of fig. 4. The graphs on the left are $\Phi$ exchange graphs in the full theory, while the contact interaction on the right is a local operator in the effective theory. For nonrelativistic particles, the procedure is equivalent to replacing the short range Yukawa potential due to $\Phi$ exchange with a $\delta$-function potential with a suitably matched coefficient.
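In equations, the tree-level matching just described amounts to Taylor expanding the heavy propagator in the exchange graph (a sketch, using the toy couplings assumed above):

```latex
\frac{g^2}{q^2 - M^2} \;=\; -\,\frac{g^2}{M^2}\left(1 + \frac{q^2}{M^2} + \frac{q^4}{M^4} + \cdots\right)
\quad\Longrightarrow\quad
{\cal L}_{\rm eff} \;\supset\; -\,c\,\frac{g^2}{M^2}\,\phi^4 \;+\; O\!\left(\frac{g^2}{M^4}\,\phi^2\partial^2\phi^2\right),
```

each power of $q^2$ in the expansion becoming a derivative operator in the effective theory.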
Now consider matching at one loop. We must consider graphs with both 2 and 4 external $\phi$ fields. First consider the ones with two external fields. The mass renormalization graphs are divergent in both theories and are computed using dimensional regularization. To avoid large logarithms in the matching conditions of the form $\ln(M/\mu)$, we choose the renormalization scale $\mu \sim M$. Then the loop graphs in the second line of fig. 4 are well defined, finite objects, and the equation defines the interactions of the effective theory, labeled by a "1" on the right side of the equation. Including these terms, the kinetic term of $\phi$ receives a correction of relative size $g^2/(16\pi^2 M^2)$ and the mass term a correction of size $g^2/16\pi^2$,

where the coefficients are again dimensionless, $O(1)$, and computable from the graphs. I have explicitly pulled out of the graphs the dimensionful quantities and the factors of $1/16\pi^2$ that arise from the loop integration.
Some of the graphs with four external $\phi$'s are shown on the third line in fig. 4. With zero external momentum, the graphs are approximately equal to $g^4/M^4$ times logarithms. The logarithms blow up in the limit that the $\phi$ mass $m$ goes to zero (an "infrared divergence"). However, the loop graphs in the effective theory have the exact same infrared divergence. Therefore the contribution to a $\phi^4$ interaction in the effective theory (labelled by a "1" on the last line of fig. 4) does not blow up as $m \to 0$. After 1-loop matching has been performed, the effective theory looks like:
${\cal L}_{\rm eff} = \frac{Z}{2}(\partial\phi)^2 - \frac12\, m_{\rm eff}^2\,\phi^2 - \frac{\lambda_{\rm eff}}{4!}\,\phi^4 + \ldots, \qquad (5.1)$

where the coefficients $Z$, $m_{\rm eff}^2$ and $\lambda_{\rm eff}$ are computable from the matching graphs. In addition there are higher dimension operators, such as $\phi^2\partial^2\phi^2$, $\phi^6$, etc. This Lagrangian can be used to compute $\phi\phi$ scattering up to 1 loop. One can perform a rescaling of the field $\phi$ to return to a conventionally normalized kinetic term.
Let me close this section with several comments about the above example: Notice that the loop expansion is equivalent to an expansion in $g^2/(16\pi^2 M^2)$. To the extent that this is a small number, perturbation theory and the loop expansion make sense. We only computed relevant operators. There are in addition effects that are suppressed by powers of $E/M$ in an experiment with energy $E$ (irrelevant operators). These may be as important as a subleading correction to a relevant operator's coefficient. We see an example of naturalness: the matching correction to the scalar mass is not proportional to $m$, so that it is "unnatural" for the physical mass to be much smaller than $g/4\pi$ — that would require a finely tuned conspiracy between the bare mass and the radiative corrections. For $m$ and $g$ both very small there is a symmetry regained in the full theory, namely the shift symmetry $\phi \to \phi + {\rm constant}$, which explains why $\phi$ can be naturally light in this limit. The coefficients of operators in the effective field theory are regularization scheme dependent. Their values differ for different schemes, but physical predictions do not (e.g., the relative cross sections for $\phi\phi$ scattering at two different energies). The coefficients of operators in the effective field theory are $\mu$ dependent, where $\mu$ is the renormalization scale. (More on this below). In the matching conditions the graphs in both theories have pieces depending nonanalytically on light particle masses and momenta (e.g., $\ln m^2$)…these terms cancel on both sides of the matching condition so that the interactions in ${\cal L}_{\rm eff}$ have a local expansion in inverse powers of $M$. This is an important and generic property of effective field theories.
Exercise 9. Compute the graphs in fig. 4, using the $\overline{\rm MS}$ scheme, and determine the coefficients $Z$, $m_{\rm eff}^2$ and $\lambda_{\rm eff}$ in eq. (5.1).
Exercise 10. Draw a graph in the full theory that is not 1LPI ("one light particle irreducible") and convince yourself that it is included in the effective theory, provided one matches all 1LPI graphs.
6. Quantum corrections: the myth of marginality
We have seen that relevant interactions — those with dimension $< 4$ (or $< d$ in $d$ spacetime dimensions) — dominate physics at low energies. Marginal interactions (dimension 4) would appear to be equally important at all scales. In fact, quantum corrections change the scaling dimension of operators from their classical value. This doesn't usually have a dramatic effect on relevant or irrelevant operators, but for marginal operators it means that they become either relevant or irrelevant.
6.1. Renormalization group and $\phi^4$ theory
To be concrete, consider $\phi^4$ theory with the Lagrangian
${\cal L} = \frac12(\partial_\mu\phi)^2 - \frac12 m^2\phi^2 - \frac{\lambda}{4!}\phi^4. \qquad (6.1)$

Consider the calculation of the 1PI Green's functions $\Gamma^{(n)}(p_1,\ldots,p_n)$, which are one particle irreducible graphs that have had the external propagators amputated. They can be directly related to scattering amplitudes. Ignoring the issues of renormalization, one would expect to express these Green's functions in terms of the external momenta, the particle mass $m$, and the coupling constant $\lambda$: $\Gamma^{(n)} = \Gamma^{(n)}(p_i;\, m, \lambda)$.

The dimension of this object follows from its definition: it is the time ordered product of $n$ scalar fields (dimension $+n$) Fourier transformed to momentum space (dimension $-4n$) with external propagators removed (dimension $+2n$) and a factor of $(2\pi)^4\,\delta^4\!\left(\sum p_i\right)$ factored out (dimension $+4$)…this gives $[\Gamma^{(n)}] = 4 - n$. $\Gamma^{(4)}$, in particular, is dimensionless. Therefore if one scales all of the external momenta by a factor $s$, one expects
$\Gamma^{(n)}(s\,p_i;\, m, \lambda) = s^{4-n}\,\Gamma^{(n)}(p_i;\, m/s, \lambda). \qquad (6.2)$

This expresses precisely what I was saying earlier about how the scalar mass is a relevant operator — note that its effects become large for small momentum scales, corresponding to $s \to 0$, where $m/s$ grows. On the other hand, the $\phi^4$ interaction's marginality is the observation that the importance of the coupling $\lambda$ is independent of the scale $s$.
This analysis is incorrect when quantum corrections are taken into account, due to the introduction of a new scale $\mu$. When we compute in perturbation theory, we must include counterterms and define the renormalized Lagrangian. (See Ramond's book [6] for details; also see David Gross' 1975 Les Houches lectures [9].)
Both the renormalized Lagrangian ${\cal L}$ and the counterterm Lagrangian $\Delta{\cal L}$ must be regulated; here I have chosen dimensional regularization, and a factor of $\mu^{2\epsilon}$ is inserted to keep the coupling $\lambda$ dimensionless, where $\mu$ is the arbitrary renormalization scale. The Lagrangian ${\cal L}$ is written in terms of finite parameters, but gives infinite results; $\Delta{\cal L}$ gives the counterterms $\delta Z$, $\delta m^2$, $\delta\lambda$, which all have poles in dimensional regularization and blow up in the $\epsilon \to 0$ limit. Computing graphs with the sum ${\cal L} + \Delta{\cal L}$, which is written in terms of "bare" couplings and fields, yields finite answers. The obvious correspondence between bare and renormalized parameters is:
$\phi_0 = \sqrt{Z}\,\phi, \qquad m_0^2 = \frac{m^2 + \delta m^2}{Z}, \qquad \lambda_0 = \mu^{2\epsilon}\,\frac{\lambda + \delta\lambda}{Z^2}.$

We can treat $\lambda$, $m$, and $\mu$ as independent parameters, and express $\lambda_0$ and $m_0$ in terms of them.
We can now define either bare or renormalized Green's functions, $\Gamma_0^{(n)}$ and $\Gamma^{(n)}$ respectively. The relation between the two is
$\Gamma_0^{(n)}(p_i;\, \lambda_0, m_0, \epsilon) = Z^{-n/2}\,\Gamma^{(n)}(p_i;\, \lambda, m, \mu),$

where $\Gamma^{(n)}$ is finite as $\epsilon \to 0$. Using the fact that $\Gamma_0^{(n)}$ is independent of $\mu$, so that $\mu\,\frac{d}{d\mu}\,\Gamma_0^{(n)} = 0$, one can derive the renormalization group (RG) equation
$\left[\mu\frac{\partial}{\partial\mu} + \beta(\lambda)\frac{\partial}{\partial\lambda} + \gamma_m(\lambda)\, m\frac{\partial}{\partial m} - n\,\gamma(\lambda)\right]\Gamma^{(n)} = 0, \qquad (6.3)$

where $\beta = \mu\,\frac{d\lambda}{d\mu}$, $\gamma_m = \frac{\mu}{m}\,\frac{dm}{d\mu}$, $\gamma = \frac{\mu}{2}\,\frac{d\ln Z}{d\mu}$. One can compute these functions in perturbation theory by relating $\lambda_0$, $m_0$ and $\phi_0$ to $\lambda$, $m$ and $\phi$. For $\phi^4$ theory one finds, to leading nonzero order in perturbation theory,
$\beta(\lambda) = \frac{3\lambda^2}{16\pi^2}, \qquad (6.4)$
together with analogous perturbative expressions for $\gamma$ and $\gamma_m$.
The reason why the RG equation is useful is because it tells one what happens if one scales the external momenta, given that there is a new scale in the problem, $\mu$. On rescaling momenta by $s$, eq. (6.2) must be modified to read
$\Gamma^{(n)}(s\,p_i;\, m, \lambda, \mu) = s^{4-n}\,\Gamma^{(n)}(p_i;\, m/s, \lambda, \mu/s), \qquad (6.5)$

or equivalently
$\left[s\frac{\partial}{\partial s} + m\frac{\partial}{\partial m} + \mu\frac{\partial}{\partial\mu} - (4-n)\right]\Gamma^{(n)}(s\,p_i;\, m, \lambda, \mu) = 0. \qquad (6.6)$

This can be combined with the renormalization group equation (6.3) to yield an equation which relates the scaling of $\Gamma^{(n)}$ to changes in $m$ and $\lambda$ alone, and not $\mu$:
$\left[s\frac{\partial}{\partial s} - \beta(\lambda)\frac{\partial}{\partial\lambda} + \left(1 - \gamma_m(\lambda)\right) m\frac{\partial}{\partial m} + n\,\gamma(\lambda) - (4-n)\right]\Gamma^{(n)}(s\,p_i;\, m, \lambda, \mu) = 0. \qquad (6.7)$

If one uses a mass independent subtraction scheme such as $\overline{\rm MS}$, then the coefficients $\beta$, $\gamma$ and $\gamma_m$ depend only on $\lambda$ and not on the other dimensionless quantity, $m/\mu$. In this case, one can solve eq. (6.7), and one finds
$\Gamma^{(n)}(s\,p_i;\, m, \lambda, \mu) = s^{4-n}\,\exp\left[-n\int_1^s \frac{ds'}{s'}\,\gamma\big(\bar\lambda(s')\big)\right]\Gamma^{(n)}\big(p_i;\, \bar m(s), \bar\lambda(s), \mu\big), \qquad (6.8)$

where $\bar\lambda(s)$ and $\bar m(s)$ satisfy the differential equations
$s\,\frac{d\bar\lambda}{ds} = \beta(\bar\lambda), \qquad s\,\frac{d\bar m}{ds} = \left(\gamma_m(\bar\lambda) - 1\right)\bar m,$
with boundary conditions $\bar\lambda(1) = \lambda$, $\bar m(1) = m$.
First look at this solution at tree level, where $\beta = \gamma = \gamma_m = 0$ and $\bar\lambda$ is independent of $s$. Then the solution (6.8) is equivalent to the simple scaling property (6.5). If only $\gamma$ is nonzero, and it is constant, then the exponential in eq. (6.8) gives an overall factor of $s^{-n\gamma}$ to the scaling of $\Gamma^{(n)}$…the engineering dimension is modified by an additional factor of $\gamma$ for each of the $n$ fields, hence the name "anomalous dimension" for $\gamma$. Finally, if $\beta$ and $\gamma_m$ are nonzero, then changing the momentum scale means one lets the mass and coupling "run". Using the $\beta$ function in eq. (6.4), one finds
$\bar\lambda(s) = \frac{\lambda}{1 - \frac{3\lambda}{16\pi^2}\,\ln s}.$

See fig. 5.
Fig. 5. The solution for the running coupling $\bar\lambda(s)$ as a function of $\ln s$. The one-loop expression becomes infinite at finite $s$, but the result is not to be trusted since the perturbative expansion breaks down.
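A minimal numerical sketch of the one-loop running behind fig. 5 (the initial coupling and reference scale below are arbitrary choices):

```python
import math

def lam_bar(lam0, s):
    """One-loop running coupling: lam0 / (1 - 3*lam0/(16 pi^2) * ln s)."""
    return lam0 / (1 - 3 * lam0 / (16 * math.pi**2) * math.log(s))

lam0 = 0.5                        # coupling at s = 1 (arbitrary)
for s in (1.0, 1e5, 1e20, 1e40):
    print(f"s = {s:8.1e}   lambda_bar = {lam_bar(lam0, s):8.3f}")

# The denominator vanishes (Landau pole) at s* = exp(16 pi^2 / (3 lam0));
# perturbation theory has of course broken down long before that.
print(f"one-loop pole at s ~ {math.exp(16 * math.pi**2 / (3 * lam0)):.1e}")
```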
We see that the $\phi^4$ interaction is an example of a marginal interaction that becomes irrelevant due to quantum corrections: the lower the energy scale probed in a scattering experiment, the weaker the effect of the interaction. QED is another example — the gauge interaction becomes irrelevant due to quantum corrections. There is a simple physical explanation for this: the vacuum acts as a dielectric, with virtual particle-antiparticle pairs which screen charges. The greater the impact parameter in a scattering experiment, the more screened the charge is and the weaker the interaction. This can be parametrized by a scale dependent fine structure constant, $\alpha(r)$. As $r$ grows, $\alpha(r)$ decreases. In QED, the screening ceases over distances longer than the Compton wavelength of the electron, and so $\alpha(r) \simeq 1/137$ for $r \gg 1/m_e$. Theories such as QED and $\phi^4$ all by themselves are called "asymptotically unfree"; they are thought to be meaningless as theories because of what happens in the ultraviolet: In $\phi^4$ theory one finds nonperturbatively (i.e., on the lattice) that the renormalized coupling vanishes as the cutoff is taken to infinity, for any finite bare coupling. QED probably behaves similarly, although people debate whether $\alpha$ may approach a constant at sufficiently short distances (a "nontrivial fixed-point").
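The screening logic can be put into one line of arithmetic: keeping only the electron loop, the one-loop QED running is $\alpha(Q) = \alpha\big/\big(1 - \frac{\alpha}{3\pi}\ln(Q^2/m_e^2)\big)$. A sketch (the full shift of $1/\alpha$ down to $\sim 128$ at high energies needs all the charged particles, not just the electron):

```python
import math

def alpha_qed(Q, alpha0=1/137.036, m_e=0.511e-3):
    """One-loop QED running with only the electron in the loop (Q, m_e in GeV)."""
    return alpha0 / (1 - alpha0 / (3 * math.pi) * math.log(Q**2 / m_e**2))

for Q in (0.001, 1.0, 91.2):   # GeV
    print(f"Q = {Q:7.3f} GeV   1/alpha = {1 / alpha_qed(Q):.1f}")
# the electron loop alone gives 1/alpha ~ 134 at Q ~ 91 GeV; the rest of the
# shift down toward ~128 comes from the other charged fermions.
```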
6.2. Renormalization group and QCD
In contrast, Yang-Mills theories such as QCD have a negative $\beta$-function and are asymptotically free: the gauge interactions, which are marginal at tree level, become relevant. The important physical difference between QED and Yang-Mills theories that accounts for the different sign of the $\beta$-function is that Yang-Mills gauge bosons carry charge, while photons do not. For QCD, the $\beta$ function at one loop order with $n_f$ flavors of (Dirac) quarks is
$\beta(g) = -\frac{g^3}{16\pi^2}\left(11 - \frac{2 n_f}{3}\right). \qquad (6.9)$

For $n_f \le 16$ this is negative, and so it is negative in the standard model, where $n_f = 6$ (u, d, s, c, b, t). Defining $\alpha_s = g^2/4\pi$, eq. (6.9) can be integrated to give
$\alpha_s(\mu) = \frac{2\pi}{\left(11 - \frac{2 n_f}{3}\right)\ln\left(\mu/\Lambda_{\rm QCD}\right)}.$

Notice that a new scale $\Lambda_{\rm QCD}$ has crept into the theory —
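The integrated one-loop formula above is easy to evaluate; a sketch ($\Lambda_{\rm QCD} \approx 0.2$ GeV is an assumed input here, and strictly $n_f$ should change across quark thresholds):

```python
import math

def alpha_s(mu, Lam=0.2, nf=5):
    """One-loop QCD coupling: alpha_s(mu) = 2 pi / ((11 - 2 nf/3) ln(mu/Lam))."""
    b0 = 11 - 2 * nf / 3
    return 2 * math.pi / (b0 * math.log(mu / Lam))

for mu in (2.0, 10.0, 91.2, 1000.0):   # GeV
    print(f"mu = {mu:7.1f} GeV   alpha_s ~ {alpha_s(mu):.3f}")
# asymptotic freedom: alpha_s shrinks logarithmically as mu grows,
# and blows up as mu approaches Lambda_QCD from above.
```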
cabf0fee14825b8a | Friday, June 28, 2019
Quantum Supremacy: What is it and what does it mean?
Rumors are that later this year we will see Google’s first demonstration of “quantum supremacy”. This is when a quantum computer outperforms a conventional computer. It’s about time that we talk about what this means.
Before we get to quantum supremacy, I have to tell you what a quantum computer is. All conventional computers work with quantum mechanics because their components rely on quantum behavior, like electron bands. But the operations that a conventional computer performs are not quantum.
Conventional computers store and handle information in form of bits that can take on two values, say 0 and 1, or up and down. A quantum computer, on the other hand, stores information in form of quantum-bits or q-bits that can take on any combination of 0 and 1. Operations on a quantum computer can then entangle the q-bits, which allows a quantum computer to solve certain problems much faster than a conventional computer.
Calculating the properties of molecules or materials, for example, is one of those problems that quantum computers can help with. In principle, properties like conductivity or rigidity, or even color, can be calculated from the atomic build-up of a material. We know the equations. But we cannot solve these equations with conventional computers. It would just take too long.
To give you an idea of how much more a quantum computer can do, think about this: One can simulate a quantum computer on a conventional computer just by numerically solving the equations of quantum mechanics. If you do that, then the computational burden on the conventional computer increases exponentially with the number of q-bits that you try to simulate. You can do 2 or 4 q-bits on a personal computer. But already with 50 q-bits you need a cluster of supercomputers. Anything beyond 50 or so q-bits cannot presently be calculated, at least not in any reasonable amount of time.
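You can make the blow-up explicit with a back-of-the-envelope estimate: a full statevector of n q-bits has 2^n complex amplitudes, each taking 16 bytes at double precision (a sketch, not a statement about any particular simulator):

```python
# Memory needed to store the statevector of n q-bits at double precision.
for n in (4, 30, 50, 60):
    n_amplitudes = 2 ** n
    gigabytes = n_amplitudes * 16 / 1e9   # 16 bytes per complex amplitude
    print(f"{n:2d} q-bits: 2^{n} amplitudes, ~{gigabytes:.2e} GB")
# 30 q-bits fits on a big workstation (~17 GB); 50 q-bits needs ~18 petabytes.
```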
So what is quantum supremacy? Quantum supremacy is the event in which a quantum computer outperforms the best conventional computers on a specific task. It needs to be a specific task because quantum computers are really special-purpose machines whose powers help with particular calculations.
However, to come back to the earlier example, if you want to know what a molecule does, you need millions of q-bits and we are far away from that. So how then do you test quantum supremacy? You let a quantum computer do what it does best, that is being a quantum computer.
This is an idea proposed by Scott Aaronson. If you set up a quantum computer in a suitable way, it will produce probabilistic distributions of measurable variables. You can try and simulate those measurement outcomes on a conventional computer but this would take a very long time. So by letting a conventional computer compete with a quantum computer on this task, you can demonstrate that the quantum computer does something a classical computer just is not able to do.
Exactly at which point someone will declare quantum supremacy is a little ambiguous because you can always argue that maybe one could have used better conventional computers or a better algorithm. But for practical purposes this really doesn’t matter all that much. The point is that it will show quantum computers really do things that are difficult to calculate with a conventional computer.
But what does that mean? Quantum supremacy sounds very impressive until you realize that most molecules have quantum processes that also exceed the computational capacities of present-day supercomputers. That is, after all, the reason we want quantum computers. And the generation of random variables that can be used to check quantum supremacy is not good for actually calculating anything useful. So that makes it sound as if the existing quantum computers are really just new toys for scientists.
What would it take to calculate anything useful with a quantum computer? Estimates about this vary between half a million and a billion q-bits, depending on just exactly what you think is “useful” and how optimistic you are that algorithms for quantum computers will improve. So let us say, realistically it would take a few million q-bits.
When will we get to see a quantum computer with a few million q-bits? No one knows. The problem is that the presently dominant approaches are unlikely to scale. These approaches are superconducting q-bits and ion traps. In neither case does anyone have any idea how to get beyond a few hundred. This is both an engineering problem and a cost problem.
And this is why, in recent years, there has been a lot of talk in the community about NISQ computers, that is, “noisy intermediate-scale quantum” computers. This is really a term invented to make investors believe that quantum computing will have practical applications in the coming decades. The trouble with NISQs is that while it is plausible that they will soon be practically feasible, no one knows how to calculate something useful with them.
As you have probably noticed, I am not very optimistic that quantum computers will have practical applications any time soon. In fact, I am presently quite worried that quantum computing will go the same way as nuclear fusion, that it will remain forever promising but never quite work.
Nevertheless, quantum supremacy is without doubt going to be an exciting scientific milestone.
Update June 29: Video now with German subtitles. To see those, click CC in the YouTube toolbar and choose the language under the settings/gear icon.
Wednesday, June 26, 2019
Win a free copy of "Lost in Maths" in French
My book “Lost in Math: How Beauty Leads Physics Astray” was recently translated into French. Today is your chance to win a free copy of the French translation! The first three people who submit a comment to this blogpost with a brief explanation of why they are interested in reading the book will be the lucky winners.
The only entry requirement is that you must be willing to send me a mailing address. Comments submitted by email or left on other platforms do not count because I cannot compare time-stamps.
Update: The books are gone.
Monday, June 24, 2019
30 years from now, what will a next larger particle collider have taught us?
The year is 2049. CERN’s mega-project, the Future Circular Collider (FCC), has been in operation for 6 years. The following is the transcript of an interview with CERN’s director, Johanna Michilini (JM), conducted by David Grump (DG).
DG: “Prof Michilini, you have guided CERN through the first years of the FCC. How has your experience been?”
JM: “It has been most exciting. Getting to know a new machine always takes time, but after the first two years we have had stable performance and collected data according to schedule. The experiments have since seen various upgrades, such as replacing the thin gap chambers and micromegas with quantum fiber arrays that have better counting rates and have also installed… Are you feeling okay?”
DG: “Sorry, I may have briefly fallen asleep. What did you find?”
JM: “We have measured the self-coupling of a particle called the Higgs-boson and it came out to be 1.2 plus minus 0.3 times the expected value which is the most amazing confirmation that the universe works as we thought in the 1960s and you better be in awe of our big brains.”
DG: “I am flat on the floor. One of the major motivations to invest into your institution was to learn how the universe was created. So what can you tell us about this today?”
JM: “The Higgs gives mass to all fundamental particles that have mass and so it plays a role in the process of creation of the universe.”
DG: “Yes, and how was the universe created?”
JM: “The Higgs is a tiny thing but it’s the greatest particle of all. We have built a big thing to study the tiny thing. We have checked that the tiny thing does what we thought it does and found that’s what it does. You always have to check things in science.”
DG: “You already said that.”
DG: “Well isn’t it correct that you wanted to learn how the universe was created?”
JM: “That may have been what we said, but what we actually meant is that we will learn something about how nuclear matter was created in the early universe. And the Higgs plays a role in that, so we have learned something about that.”
DG: “I see. Well, that is somewhat disappointing.”
JM: “If you need $20 billion, you sometimes forget to mention a few details.”
DG: “Happens to the best of us. All right, then. What else did you measure?”
JM: “Ooh, we measured many many things. For example we improved the precision by which we know how quarks and gluons are distributed inside protons.”
DG: “What can we do with that knowledge?”
JM: “We can use that knowledge to calculate more precisely what happens in particle colliders.”
DG: “Oh-kay. And what have you learned about dark matter?”
JM: “We have ruled out 22 of infinitely many hypothetical particles that could make up dark matter.”
DG: “And what’s with the remaining infinitely many hypothetical particles?”
JM: “We are currently working on plans for the next larger collider that would allow us to rule out some more of them because you just have to look, you know.”
DG: “Prof Michilini, we thank you for this conversation.”
Thursday, June 20, 2019
Away Note
I'll be in the Netherlands for a few days to attend a workshop on "Probabilities in Cosmology". Back next week. Wish you a good Summer Solstice!
Wednesday, June 19, 2019
No, a next larger particle collider will not tell us anything about the creation of the universe
LHC magnets. Image: CERN.
A few days ago, Scientific American ran a piece by a CERN physicist and a philosopher about particle physicists’ plans to spend $20 billion on a next larger particle collider, the Future Circular Collider (FCC). To make their case, the authors have dug up a quote from 1977 and ignored the 40 years after this, which is a truly excellent illustration of all that’s wrong with particle physics at the moment.
I currently don’t have time to go through this in detail, but let me pick the most egregious mistake. It’s right in the opening paragraph where the authors claim that a next larger collider would tell us something about the creation of the universe:
“[P]article physics strives to push a diverse range of experimental approaches from which we may glean new answers to fundamental questions regarding the creation of the universe and the nature of the mysterious and elusive dark matter.
Such an endeavor requires a post-LHC particle collider with an energy capability significantly greater than that of previous colliders.”
We previously encountered this sales-pitch in CERN’s marketing video for the FCC, which claimed that the collider would probe the beginning of the universe.
But neither the LHC nor the FCC will tell us anything about the “beginning” or “creation” of the universe.
What these colliders can do is create nuclear matter at high density by slamming heavy atomic nuclei into each other. Such matter probably also existed in the early universe. However, even collisions of large nuclei create merely tiny blobs of such nuclear matter, and these blobs fall apart almost immediately. In case you prefer numbers over words, they last about 10⁻²³ seconds.
This situation is nothing like the soup of plasma in the expanding space of the early universe. It is therefore highly questionable already that these experiments can tell us much about what happened back then.
Even optimistically, the nuclear matter that the FCC can produce has a density about 70 orders of magnitude below the density at the beginning of the universe.
And even if you are willing to ignore the tiny blobs and their immediate decay and the 70 orders of magnitude, then the experiments still tell us nothing about the creation of this matter, and certainly not about the creation of the universe.
The argument that large colliders can teach us anything about the beginning, origin, or creation of the universe is manifestly false. The authors of this article either knew this and decided to lie to their readers, or they didn’t know it, in which case they have begun to believe their own institution’s marketing. I’m not sure which is worse.
And as I have said many times before, there is no reason to think a next larger collider would find evidence of dark matter particles. Somewhat ironically, the authors spend the rest of their article arguing against theoretical arguments, but of course the appeal to dark matter is a bona-fide theoretical argument.
In any case, it pains me to see not only that particle physicists are still engaging in false marketing, but that Scientific American plays along with it.
How about sticking with the truth? The truth is that a next larger collider costs a shitload of money and will most likely not teach us much. If progress in the foundations of physics is what you want, this is not the way forward.
Tuesday, June 18, 2019
Brace for the oncoming deluge of dark matter detectors that won’t detect anything
Imagine an unknown disease spreads, causing temporary blindness. Most patients recover after a few weeks, but some never regain eyesight. Scientists rush to identify the cause. They guess the pathogen’s shape and, based on this, develop test strips and antigens. If one guess doesn’t work, they’ll move on to the next.
Doesn’t quite sound right? Of course it does not. Trying to identify pathogens by guesswork is sheer insanity. The number of possible shapes is infinite. The guesses will almost certainly be wrong. No funding agency would pour money into this.
Except they do. Not for pathogen identification, but for dark matter searches.
In the past decades, the searches for the most popular dark matter particles have failed. Neither WIMPs nor axions have shown up in any detector, of which there have been dozens. Physicists have finally understood this is not a promising method. Unfortunately, they have not come up with anything better.
Instead, their strategy is now to fund any proposed experiment that could plausibly be said to maybe detect something that could potentially be a hypothetical dark matter particle. And since there are infinitely many such hypothetical particles, we are now well on the way to building infinitely many detectors. DNA, carbon nanotubes, diamonds, old rocks, atomic clocks, superfluid helium, qubits, Aharonov-Bohm, cold atom gases, you name it. Let us call it the equal opportunity approach to dark matter search.
As it should be, everyone benefits from the equal opportunity approach. Theorists invent new particles (papers will be written). Experimentalists use those invented particles as motivation to propose experiments (more papers will be written). With a little luck they get funding and do the experiment (even more papers). Eventually, experiments conclude they didn’t find anything (papers, papers, papers!).
In the end we will have a lot of papers and still won’t know what dark matter is. And this, we will be told, is how science is supposed to work.
Let me be clear that I am not strongly opposed to such medium-scale experiments, because they typically cost “merely” a few million dollars. A few million here and there don’t put overall progress at risk. Not like, say, building a next larger collider would.
So why not live and let live, you may say. Let these physicists have some fun with their invented particles and their experiments that don’t find them. What’s wrong with that?
What’s wrong with that (besides the fact that a million dollars is still a million dollars) is that it will almost certainly lead nowhere. I don’t want to wait another 40 years for physicists to realize that falsifiability alone is not sufficient to make a hypothesis promising.
My disease analogy, as any analogy, has its shortcomings, of course. You cannot draw blood from a galaxy and put it under a microscope. But metaphorically speaking, that’s what physicists should do. We have patients out there: all those galaxies and clusters which are behaving in funny ways. Study those until you have good reason to think you know what the pathogen is. Then, build your detector.
Not all types of dark matter particles do an equally good job of explaining structure formation and the behavior of galaxies and all the other data we have. And particle dark matter is not the only explanation for the observations. Right now, the community makes no systematic effort to identify the model that best fits the existing data. And, needless to say, that data could be better, both in terms of sky coverage and resolution.
The equal opportunity approach relies on guessing a highly specific explanation and then setting out to test it. This way, null-results are a near certainty. A more promising method is to start with highly non-specific explanations and zero in on the details.
The failures of the past decades demonstrate that physicists must think more carefully before commissioning experiments to search for hypothetical particles. They still haven’t learned the lesson.
Sunday, June 16, 2019
Book review: “Einstein’s Unfinished Revolution” by Lee Smolin
Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum
By Lee Smolin
Penguin Press (April 9, 2019)
[Disclaimer: free review copy.]
Thursday, June 13, 2019
Physicists are out to unlock the muon’s secret
Fermilab g-2 experiment.
[Image: Glukicov/Wikipedia]
Physicists count 25 elementary particles that, for all we presently know, cannot be divided any further. They collect these particles and their interactions in what is called the Standard Model of particle physics.
But the matter around us is made of merely three particles: up and down quarks (which combine into protons and neutrons, which in turn combine into atomic nuclei) and electrons (which surround atomic nuclei). These three particles are held together by a number of exchange particles, notably the photon and gluons.
What’s with the other particles? They are unstable and decay quickly. We only know of them because they are produced when other particles bang into each other at high energies, something that happens in particle colliders and when cosmic rays hit Earth’s atmosphere. By studying these collisions, physicists have found out that the electron has two bigger brothers: The muon (μ) and the tau (τ).
The muon and the tau are pretty much the same as the electron, except that they are heavier. Of these two, the muon has been studied more closely because it lives longer – about 2 × 10⁻⁶ seconds.
The muon turns out to be... a little odd.
Physicists have known for a while, for example, that cosmic rays produce more muons than expected. This deviation from the predictions of the standard model is not hugely significant, but it has stubbornly persisted. It has remained unclear, though, whether the blame lies with the muons or with the way the calculations treat atomic nuclei.
Next, the muon (like the electron and tau) has a partner neutrino, called the muon-neutrino. The muon neutrino also has some anomalies associated with it. No one currently knows whether those are real or measurement errors.
The Large Hadron Collider has seen a number of slight deviations from the predictions of the standard model which go under the name lepton anomaly. They basically tell you that the muon isn’t behaving like the electron, which (all other things being equal) it really should. These deviations may just be random noise and vanish with better data. Or maybe they are the real thing.
And then there is the gyromagnetic moment of the muon, usually denoted just g. This quantity measures how muons spin if you put them into a magnetic field. This value should be 2 plus quantum corrections, and the quantum corrections (the g-2) you can calculate very precisely with the standard model. Well, you can if you have spent some years learning how to do that, because these are hard calculations indeed. The thing is, though, that the result of the calculation doesn’t agree with the measurement.
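For reference, the quantity being computed is the “anomalous magnetic moment,” and the leading quantum correction is Schwinger’s classic one-loop QED term (a standard result, quoted here for orientation):

```latex
a_\mu \;\equiv\; \frac{g-2}{2},
\qquad
a_\mu^{\text{QED, 1-loop}} \;=\; \frac{\alpha}{2\pi} \;\approx\; 0.00116,
```

where α ≈ 1/137 is the fine-structure constant. The full standard-model prediction piles many higher-order QED, electroweak, and (hardest of all) hadronic contributions on top of this.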
This is the so-called muon g-2 anomaly, which we have known about since the 1960s when the first experiments ran into tension with the theoretical prediction. Since then, both the experimental precision as well as the calculations have improved, but the disagreement has not vanished.
The most recent experimental data comes from the Brookhaven National Lab experiment whose final result was reported in 2006; it placed the disagreement at 3.7σ. That’s interesting for sure, but nothing that particle physicists get overly excited about.
A new experiment is now following up on the 2006 result: the muon g-2 experiment at Fermilab. The collaboration projects that (assuming the mean value remains the same) their better data could increase the significance to 7σ, hence surpassing the discovery standard in particle physics (which is somewhat arbitrarily set to 5σ).
For this experiment, physicists first produce muons by firing protons at a target (some kind of solid). This produces a lot of pions (composites of a quark and an antiquark) which decay by emitting muons. The muons are then collected in a ring equipped with magnets, in which they circle until they decay. When the muons decay, they produce two neutrinos (which escape) and a positron that is caught in a detector. From the direction and energy of the positron, one can then infer the magnetic moment of the muon.
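For a rough sense of the numbers involved, here is a back-of-the-envelope Python sketch of the anomalous precession frequency the experiment extracts, using the textbook relation ω_a = a_μ eB/m_μ and the storage ring’s nominal 1.45 T field (treat the field value and the simplified formula as illustrative assumptions):

```python
import math

e = 1.602176634e-19      # elementary charge, C
m_mu = 1.883531627e-28   # muon mass, kg
a_mu = 0.00116592        # anomalous magnetic moment, (g-2)/2
B = 1.45                 # storage-ring field, T (nominal value, assumed here)

omega_a = a_mu * e * B / m_mu                            # rad/s
print(f"f_a = {omega_a / (2 * math.pi) / 1e3:.0f} kHz")  # ~229 kHz
```

It is this roughly 229 kHz wiggle in the positron counts that encodes the g-2.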
The Fermilab g-2 experiment, which reuses parts of the hardware from the earlier Brookhaven experiment, is already running and collecting data. In a recent paper, Alexander Keshavarzi, on behalf of the collaboration, reports that they successfully completed the first physics run last year. He writes that we can expect a publication of the results from the first run in late 2019. After some troubleshooting (something about an underperforming kicker system), the collaboration is now in the second run.
Another experiment to measure more precisely the muon g-2 is underway in Japan, at the J-PARC muon facility. This collaboration too is well on the way.
While we don’t know exactly when the first data from these experiments will become available, it is clear already that the muon g-2 will be much talked about in the coming years. At present, it is our best clue for physics beyond the standard model. So, stay tuned.
Wednesday, June 12, 2019
Guest Post: A conversation with Lee Smolin about his new book "Einstein’s Unfinished Revolution"
[Tam Hunt sent me another lengthy interview, this time with Lee Smolin. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics in Canada and adjunct professor at the University of Waterloo. He is one of the founders of loop quantum gravity. In the past decades, Smolin’s interests have drifted to the role of time in the laws of nature and the foundations of quantum mechanics.]
TH: You make some engaging and bold claims in your new book, Einstein’s Unfinished Revolution, continuing a line of argument that you’ve been making over the course of the last couple of decades and a number of books. In your latest book, you argue essentially that we need to start from scratch in the foundations of physics, and this means coming up with new first principles as our starting point for re-building. Why do you think we need to start from first principles and then build a new system? What has brought us to this crisis point?
LS: The claim that there is a crisis, which I first made in my book, Life of the Cosmos (1997), comes from the fact that it has been decades since a new theoretical hypothesis was put forward that was later confirmed by experiment. In particle physics, the last such advance was the standard model in the early 1970s; in cosmology, inflation in the early 1980s. Nor has there been a completely successful approach to quantum gravity or the problem of completing quantum mechanics.
I propose finding new fundamental principles that go deeper than the principles of general relativity and quantum mechanics. In some recent papers and the book, I make specific proposals for new principles.
TH: You have done substantial work yourself in quantum gravity (loop quantum gravity, in particular) and quantum theory (suggesting your own interpretation called the “real ensemble interpretation”), and yet in this new book you seem to be suggesting that you and everyone else in foundations of physics needs to return to the starting point and rebuild. Are you in a way repudiating your own work or simply acknowledging that no one, including you, has been able to come up with a compelling approach to quantum gravity or other outstanding foundations of physics problems?
LS: There are a handful of approaches to quantum gravity that I would call partly successful. These each achieve a number of successes, which suggest that they could plausibly be at least part of the story of how nature reconciles quantum physics with space, time and gravity. It is possible, for example that these partly successful approaches model different regimes or phases of quantum gravity phenomena. These partly successful approaches include loop quantum gravity, string theory, causal dynamical triangulations, causal sets, asymptotic safety. But I do not believe that any approach to date, including these, is fully successful. Each has stumbling blocks that after many years remain unsolved.
TH: You part ways with a number of other physicists in recent years who have railed against philosophy and philosophers of physics as being largely unhelpful for actual physics. You argue instead that philosophers have a lot to contribute to the foundations of physics problems that are your focus. Have you found philosophy helpful in pursuing your physics for most of your career or is this a more recent finding in your own work? Which philosophers, in particular, do you think can be helpful in this area of physics?
LS: I would first of all suggest we revive the old idea of a natural philosopher, which is a working scientist who is inspired and guided by the tradition of philosophy. An education and immersion in the philosophical tradition gives them access to the storehouse of ideas, positions and arguments that have been developed over the centuries to address the deepest questions, such as the nature of space and time.
Physicists who are natural philosophers have the advantage of being able to situate their work, and its successes and failures, within the long tradition of thought about the basic questions.
Most of the key figures who transformed physics through its history have been natural philosophers: Galileo, Newton, Leibniz, Descartes, Maxwell, Mach, Einstein, Bohr, Heisenberg, etc. In more recent years, David Finkelstein is an excellent example of a theoretical physicist, strongly influenced by the philosophical tradition, who made important advances, such as being the first to untangle the geometry of a black hole and to recognize the concept of an event horizon. Like a number of us, he identified as a follower of Leibniz, who introduced the concepts of relational space and time.
The abstract of Finkelstein’s key 1958 paper on what were soon to be called black holes explicitly mentions the principle of sufficient reason, which is the central principle of Leibniz’s philosophy. None of the important developments of general relativity in the 1960s and 1970s, such as those by Penrose, Hawking, Newman, Bondi, etc., would have been possible without that groundbreaking paper by Finkelstein.
I asked Finkelstein once why it was important to know philosophy to do physics, and he replied, “If you want to win the long jump, it helps to back up and get a running start.”
In other fields, we can recognize people like Richard Dawkins, Daniel Dennett, Lynn Margulis, Steve Gould, Carl Sagan, etc. as natural philosophers. They write books that argue the central issues in evolutionary theory, with the hope of changing each other’s minds. But we, the lay public, are able to read over their shoulders, and so have front-row seats to the debates.
There are also a number of excellent philosophers of physics working now, who contribute in important ways to the progress of physics. One example is a group of philosophers, centred originally at Oxford, who have been doing the leading work on attempting to make sense of the Many Worlds formulation of quantum mechanics. This work involves extremely subtle issues, such as the meaning of probability. These thinkers include Simon Saunders, David Wallace, and Wayne Myrvold; and there are equally good philosophers who are skeptical of this work, such as David Albert and Tim Maudlin.
It used to be the case, half a century ago, that philosophers, such as Hilary Putnam, who opined about physics felt qualified to do so with a bare knowledge of the principles of special relativity and single-particle quantum mechanics. In that atmosphere my teacher Abner Shimony, who had two Ph.D.s – one in physics and one in philosophy – stood out, as did a few others who could talk in detail about quantum field theory and renormalization, such as Paul Feyerabend. Now the professional standard among philosophers of physics requires a mastery of Ph.D.-level physics, as well as the ability to write and argue with the rigour that philosophy demands. Indeed, a number of the people I just mentioned have Ph.D.s in physics.
TH: One of your suggested hypotheses, the next step you take after stating your first principles, is an acknowledgment that time is fundamental, real and irreversible, effectively goring one of the sacred cows of modern physics. You made your case for this approach in your book Time Reborn and I'm curious if you've seen a softening over the last few years in terms of physicists and philosophers beginning to be more open to the idea that the passage of time is truly fundamental? Also, why wouldn't this hypothesis be instead a first principle, if time is indeed fundamental?
LS: In my experience, there have always been physicists and philosophers open to these ideas, even if there is no consensus among those who have carefully thought the issues through.
When I thought carefully about how to state a candidate set of basic principles, it became clear that it was useful to separate principles from hypotheses about nature. Principles such as sufficient reason and the identity of the indiscernible can be realized in formulations of physics in which time is either fundamental or secondary and emergent. Hence those principles are prior to the choice of a fundamental or emergent time. So I think it clarifies the logic of the situation to call the latter choice a hypothesis rather than a principle.
TH: How does viewing time as irreversible and fundamental mesh with your principle of background independence? Doesn’t a preferred spacetime foliation, which would provide an irreversible and fundamental time, provide a background?
LS: Background independence is an aspect of the two principles of Leibniz I just referred to: 1) sufficient reason (PSR) and 2) the identity of the indiscernible (PII). Hence it is deeper than the choice of whether time is fundamental or emergent. Indeed, there are theories which rest on both hypotheses about time (fundamental or emergent). Julian Barbour, for example, is a relationalist who develops background-independent theories in which time is emergent. I am also a relationalist, but I make background-independent models of physics in which time and its passage are fundamental.
Viewing time as fundamental and irreversible doesn’t necessarily imply a preferred foliation; by the latter you mean a foliation of a pre-existing spacetime, specified kinematically in advance of the dynamical evolution. In our energetic causal set models there does arise a notion of the present, but this is determined dynamically by the evolution of the model and so is consistent with what we mean by background independence.
The point is that the solutions to background-independent theories can have preferred frames, so long as they are generated by solving the dynamics. This is, for example, the case with cosmological solutions to general relativity.
TH: You and many other physicists have focused for many years on finding a theory of quantum gravity, effectively unifying quantum mechanics and general relativity. In describing your preferred approach to achieving a theory of quantum gravity worthy of the name, you describe why you think quantum mechanics is incomplete and why general relativity is in some key ways likely wrong. Let’s look first at quantum mechanics, which you describe as “wrong” and “incomplete.” Why is the Copenhagen school of quantum mechanics (still perhaps the most popular version of quantum theory) wrong and incomplete?
LS: Copenhagen is incomplete because it is based on an arbitrarily chosen division of the world into a classical realm and a quantum realm. This reflects our practice as experimenters, and corresponds to nothing in nature. This means it is an operational approach, which conflicts with the expectation that physics should offer a complete description of individual phenomena, with no reference to our existence, knowledge or measurements.
TH: Your objections just stated (what’s known generally as the “measurement problem”) seem to me, even as an obvious non-expert in this area, to be fairly apparent and accurate objections to Copenhagen. If that’s the case, why is Copenhagen still with us today? Why was it ever considered a serious theory?
LS: I don’t think there are many proponents of the Copenhagen view among people working in quantum foundations, or who have otherwise thought about the issues carefully. I don’t think there are many enthusiastic followers of Bohr left alive.
Meanwhile, what most physicists who are not specialists in quantum foundations practice and teach is a very pragmatic, operational set of rules, which suffices because it closely parallels the practice of actual experimenters. They can get on with the physics without having to take a stand on realism.
What Bohr had in mind was a much more radical rejection of realism and its replacement by a view of the world in which nature and us co-create phenomena. My sense is that most living physicists haven’t read Bohr’s actual writings. There are of course some exceptions, like Chris Fuchs’s QBism, which is, to the extent that I understand it, an even more radical view. Even if I disagree, I very much admire Chris for the clarity of his thinking and his insistence on taking his view to its logical conclusions. But, in the end, for a realist who sees the necessity of completing quantum mechanics by the discovery of new physics, the intellectual contortions of anti-realists are, however elegant, no help for my projects.
TH: Could this be a good example of why philosophical training could actually be helpful for physicists?
LS: I would agree, in some cases it could be helpful for some physicists to study philosophy, especially if they are interested in discovering deeper foundational laws. But I would never say anyone should study philosophy, because it can be very challenging reading, and if someone is not inclined to think “philosophically” they are unlikely to get much from the effort. But I would say that if someone is receptive to the care and depth of the writing, it can open doors to new ideas and to a highly critical style of thinking, which could greatly aid someone’s research.
The point I would like to make here is rather different. As I discussed in my earlier books, there are different periods in the development of science during which different kinds of problems present themselves. These require different strategies, different educations and perhaps even different styles of research to move forward.
There are pragmatic periods where the laws needed to understand a wide range of phenomena are in place and the opportunities of greatly advancing our understanding of diverse physical phenomena dominate. These kinds of periods require a more pragmatic approach, which ignores whatever foundational issues may be present (and indeed, there are always foundational issues lurking in the background), and focuses on developing better tools to work out the implications of the laws as they stand.
Then there are (to follow Kuhn) revolutionary periods in science, when the foundations are in question and the priority is to discover and express new laws.
The kinds of people and the kinds of education needed to succeed are different in these two kinds of periods. Pragmatic times require pragmatic scientists, and philosophy is unlikely to be important. But foundational periods require foundational people, many of whom will, as in past foundational periods, find inspiration from philosophy. Of course, what I just said is an oversimplification. At all times, science needs a diverse mix of research styles. We always need pragmatic people who are very good at the technical side of science. And we always need at least a few foundational thinkers. But the optimal balance is different in different periods.
The early part of the 20th Century, through around 1930, was a foundational period. That was followed by a pragmatic period during which the foundational issues were ignored and many applications of the quantum mechanics were developed.
Since the late 1970s, physics has been again in a foundational period, facing deep questions in elementary particle physics, cosmology, quantum foundations and quantum gravity. The pragmatic methods which got us to that point no longer suffice; during such a period we need more foundational thinkers and we need to pay more attention to them.
TH: Turning to general relativity, you also don’t mince your words and you describe the notion of reversible time, thought to be at the core of general relativity, as “wrong.” What does general relativity look like with irreversible and fundamental time?
LS: We posed exactly this question: can we invent an extension of general relativity in which time evolution is asymmetric under a transformation that reverses a measure of time? We found two ways to do this.
TH: You touched on consciousness as a physical phenomenon and a necessary ingredient in our physics in your book, Time Reborn (as have many other physicists over the last century, of course). You spend less time on consciousness in your new book — stating “Let us tiptoe past the hard question of consciousness to simpler questions” — but I’m curious if you’ve considered including as a first principle the notion that consciousness is a fundamental aspect of nature (or not) in your ruminations on these deep topics?
LS: I am thinking slowly about the problems of qualia and consciousness, in the rough direction set out in the epilogue of Time Reborn. But I haven’t yet come to conclusions worth publishing. An early draft of Einstein’s Unfinished Revolution had an epilogue entirely devoted to these questions, but I decided it was premature to publish; it also would have distracted attention from the central themes of that book.
TH: David Bohm, one of the physicists you discuss with respect to alternative versions of quantum theory, delved deeply into philosophy and spirituality in relation to his work in physics, as you discuss briefly in your new book. Do you find Bohm’s more philosophical notions such as the Implicate Order (the metaphysical ground of being in which the “explicate” manifest world that we know in our normal every day life is enfolded, and thus “implicate”) helpful for physics?
LS: I am afraid I’ve not understood what Bohm was aiming for in his book on the implicate order, or his dialogues with Krishnamurti, but it is also true that I haven’t tried very hard. I think one can admire greatly the practical and psychological knowledge of Buddhism and related traditions, while remaining skeptical of their more metaphysical teachings.
TH: Bohm’s Implicate Order has much in common with physical notions such as the (nonluminiferous) ether, which has been revived in today’s physics by some heavyweights such as Nobel Prize winner Frank Wilczek (The Lightness of Being: Mass, Ether, and the Unification of Forces) as another term for the set of space-filling fields that underlie our reality. Do you take the idea of reviving some notion of the ether as a physical/metaphysical background at all seriously in your work?
LS: The important part of the idea of the ether was that it is a smooth, fundamental, physical substance, which had the property that vibrations and stresses within it reproduced the phenomena described by Maxwell’s field theory of electromagnetism. It was also important that there was a preferred frame of reference associated with being at rest with respect to this substance.
We no longer believe any part of this. The picture we now have is that any such substance is made of a large collection of atoms. Therefore the properties of any substance are emergent and derivative. I don’t think Frank Wilczek disagrees with this; I suspect he is just being metaphorical.
TH: He doesn’t seem to be metaphorical, writing in a 1999 article: “Quite undeservedly, the ether has acquired a bad name. There is a myth, repeated in many popular presentations and textbooks, that Albert Einstein swept it into the dustbin of history. The real story is more complicated and interesting. I argue here that the truth is more nearly the opposite: Einstein first purified, and then enthroned, the ether concept. As the 20th century has progressed, its role in fundamental physics has only expanded. At present, renamed and thinly disguised, it dominates the accepted laws of physics. And yet, there is serious reason to suspect it may not be the last word.” In his 2008 book mentioned above, he reframes the set of accepted physical fields as “the Grid” (which is “the primary world-stuff”) or ether. Sounds like you don’t find this re-framing very compelling?
LS: What is true is that quantum field theory (QFT) treats all propagating particles and fields as excitations of a (usually unique) vacuum state. This is analogized to the ether, but in my opinion it’s a bad analogy. One big difference is that the vacuum of a QFT is invariant under all the symmetries of nature, whereas the ether breaks many of them by defining a preferred state of rest.
TH: You consider Bohm’s alternative quantum theory in some depth, and say that “it makes complete sense,” but after further discussion you consider it inadequate because it is generally considered to be incompatible with special relativity, among other problems.
LS: This is not the main reason I don’t think pilot wave theory describes nature.
Pilot wave theory is based on two equations. One, the Schrödinger equation, which is the same as in ordinary QM, propagates the wave-function, while the second, the guidance equation, guides the “particles.” The first can be made compatible with special relativity, while the second cannot. But when one adds an assumption about probabilities, the averages of the guided particles follow the waves and so agree with both ordinary QM and special relativity. In this way you can say that pilot wave theory is “weakly compatible” with special relativity, in the sense that, while there is a preferred sense of rest, it can’t be measured.
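For concreteness, here are the two equations for a single nonrelativistic particle, in textbook notation (this is the standard formulation, not Smolin’s own): the Schrödinger equation evolves the wave-function ψ, and the guidance equation moves the actual particle position Q along with it:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  \;=\; -\frac{\hbar^2}{2m}\,\nabla^2\psi + V\psi,
\qquad
\frac{dQ}{dt}
  \;=\; \frac{\hbar}{m}\,
  \mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{x=Q}.
```

The first has well-known relativistic versions; it is the second, with its instantaneous dependence on the full wave-function, that resists a relativistic formulation.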
TH: If one considers time to be fundamental and irreversible, isn’t there a relativistic version of Bohmian mechanics readily available by adopting some version of Lorentzian or neo-Lorentzian relativity (which are background-dependent)?
LS: Maybe — you are describing research to be done.
TH: Last, how optimistic are you that your view, that today’s physics needs some really fundamental re-thinking, will catch on with the majority of today’s physicists in the next decade or so?
LS: I’m not, but I wouldn’t expect any such call for a reconsideration of the basic principles to be popular until it has results that make it hard to avoid thinking about.
Monday, June 10, 2019
Sometimes giving up is the smart thing to do.
[likely image source]
A few years ago I signed up for a 10k race. It had an entry fee, it was a scenic route, and I had qualified for the first group. I was in the best shape of my life. The weather forecast was brilliant.
Two days before the race I got a bad cold. But that wouldn’t deter me. Oh, no, not me. I’m not a quitter. I downed a handful of pills and went nevertheless. I started with a fever, a bad cough, and a banging head.
It didn’t go well. After half a kilometer I developed a chest pain. After one kilometer it really hurt. After two kilometers I was sure I’d die. Next thing I recall is someone handing me a bottle of water after the finish line.
Needless to say, my time wasn’t the best.
But the real problem began afterward. My cold refused to clear out properly. Instead I developed a series of respiratory infections. That chest pain stayed with me for several months. When the winter came, each little virus the kids brought home knocked me down.
I eventually went to see a doctor. She sent me to have a chest X-ray taken on the suspicion of tuberculosis. When the X-ray didn’t reveal anything, she put me on a two-week regimen of antibiotics.
The antibiotics indeed finally cleared out whatever lingering infection I had been carrying. It took another month until I felt like myself again.
But this isn’t a story about the misery of aging runners. It’s a story about endurance sport of a different type: academia.
In academia we write Perseverance with capital P. From day one, we are taught that pain is normal, that everyone hurts, and that self-motivation is the highest of virtues. In academia, we are all over-achievers.
This summer, as every summer for the past two decades, I receive notes about who is leaving. Leaving because they didn’t get funding, because they didn’t get another position, or because they’re just no longer willing to sacrifice their life for so little in return.
And this summer, as every summer for the past two decades, I find myself among the ones who made it into the next round, find myself sitting here, wondering if I’m worthy and if I’m in the right place doing the right thing at the right time. Because, let us be honest. We all know that success in academia has one or two elements of luck. Or maybe three. We all know it’s not always fair.
I’m writing this for the ones who have left and the ones who are about to leave. Because I have come within an inch of leaving half a dozen times and I have heard the nasty, nagging voice in the back of my head. “Quitter,” it says and laughs, “Quitter.”
Don’t listen. Of the people I know who left academia, few have regrets. And the few with regrets found ways to continue some research along with their new profession. The loss isn’t yours. The loss is academia’s. I understand your decision and I think you chose wisely. Just because everyone you know is on a race to nowhere doesn’t mean going with them makes sense. Sometimes, giving up is the smart thing to do.
A year after my miserable 10k experience, I signed up for a half-marathon. A few kilometers into the race, I tore a muscle.
I don’t get a runner’s high, but running increases my pain tolerance to unhealthy levels. After a few kilometers, you could probably stab me in the back and I wouldn’t notice. I could well have finished that race. But I quit.
Saturday, June 08, 2019
Book Review: “Beyond Weird” by Philip Ball
Beyond Weird: Why Everything You Thought You Knew about Quantum Physics Is Different
By Philip Ball
University of Chicago Press (October 18, 2018)
I avoid popular science articles about quantum mechanics. It’s not that I am not interested, it’s that I don’t understand them. Give me a Hamiltonian, a tensor-product expansion, and some unitary operators, and I can deal with that. But give me stories about separating a cat from its grin, the many worlds of Wigner’s friend, or suicides in which you both die and not die, and I admit defeat on paragraph two.
Ball is guilty of some of that. I got lost halfway through his explanation of how a machine outputs plush cats and dogs when Alice and Bob put in quantum coins, and still haven’t figured out why the seer’s daughter wanted to be wed to a man evidently more stupid than she.
But then, clearly, I am not the book’s intended audience, so let me instead tell you something more helpful.
Ball knows what he writes about, that’s obvious from page one. For all I can tell the science in his book is flawless. It is also engagingly told, with some history but not too much, with some reference to current research, but not too much, with some philosophical discourse but not too much. Altogether, it is a well-balanced mix that should be understandable for everyone, even those without prior knowledge of the topic. And I entirely agree with Ball that calling quantum mechanics “weird” or “strange” isn’t helpful.
In “Beyond Weird,” Ball does a great job sorting out the most common confusions about quantum mechanics, such as that it is about discretization (it is not), that it defies the speed-of-light limit (it does not), or that it tells you something about consciousness (huh?). Ball even clears up the myths that Einstein hated quantum mechanics (he did not) and that Feynman dubbed the Copenhagen interpretation “Shut up and calculate” (he did not; also, there isn’t really such a thing as the Copenhagen interpretation), and, best of all, dispels the idea that many worlds solves the measurement problem (it does not).
In Ball’s book, you will learn just what quantum mechanics is (uncertainty, entanglement, superpositions, (de)coherence, measurement, non-locality, contextuality, etc), what the major interpretations of quantum mechanics are (Copenhagen, QBism, Many Worlds, Collapse models, Pilot Waves), and what the currently discussed issues are (epistemic vs ontic, quantum computing, the role of information).
As someone who still likes to read printed books, let me also mention that Ball’s is just a pretty book. It’s a high quality print in a generously spaced and well-readable font, the chapters are short, and the figures are lovely, hand-drawn illustrations. I much enjoyed reading it.
It is also remarkable that “Beyond Weird” has little overlap with two other recent books on quantum mechanics which I reviewed: Chad Orzel’s “Breakfast With Einstein” and Anil Ananthaswamy’s “Through Two Doors At Once.” While Ball focuses on the theory and its interpretation, Orzel’s book is about applications of quantum mechanics, and Ananthaswamy’s is about experimental milestones in the development and understanding of the theory. The three books together make an awesome combination.
And luckily the subtitle of Philip Ball’s book turned out to be wrong. I would have been disturbed indeed had everything I thought I knew about quantum physics been different.
[Disclaimer: Free review copy.]
Related: Check out my list of 10 Essentials of Quantum Mechanics.
Wednesday, June 05, 2019
If we spend money on a larger particle collider, we risk that progress in physics stalls.
[Image: CERN]
Particle physicists have a problem. For 40 years they have been talking about new particles that never appeared. The Large Hadron Collider was supposed to finally reveal them. It didn’t. This $10 billion machine has found the Higgs-boson, thereby completing the standard model of particle physics, but no other fundamentally new particles.
With this, the Large Hadron Collider (LHC) has demonstrated that arguments used by particle physicists for the existence of new particles beyond those in the standard model were wrong. With these arguments now falsified, there is no reason to think that a next larger particle collider will do anything besides measuring the parameters of the standard model to higher precision. And with the cost of a next larger collider estimated at $20 billion or so, that’s a tough sell.
Particle physicists have meanwhile largely given up spinning stories about discovering dark matter or recreating the origin of the universe, because it is clear to everyone now that this is marketing one cannot trust. Instead, they have a new tactic which works like this.
First, they will refuse to admit anything went wrong in the past. They predicted all these particles, none of which was seen, but now they won’t mention it. They hyped the LHC for two decades, but now they act like it didn’t happen. The people who previously made wrong predictions cannot be bothered to comment. Except for those like Gordon Kane and Howard Baer, who simply make new predictions and hope you have forgotten they ever said anything else.
Second, in case they cannot get away with outright denial, they will try to convince you it is somehow interesting they were wrong. Indeed, it is interesting – if you are a sociologist. A sociologist would be thrilled to see such an amazing example of groupthink, leading a community of thousands of intelligent people to believe that relying on beauty is a good method to make predictions. But as far as physics is concerned, there’s nothing to learn here, except that beauty isn’t a scientific criterion, which is hardly a groundbreaking insight.
Third, they will sure as hell not touch the question whether there might be better ways to invest the money, because that can only work to their disadvantage. So they will tell you vague tales about the need to explore nature, but not ever discuss whether other methods to explore nature would advance science more.
But the fact is that building a large particle collider presently has a high cost for little expected benefit. This money would be better invested in less costly experiments with higher discovery potential, such as astrophysical searches for dark matter (I am not talking about direct detection experiments), table-top searches for quantum gravity, 21cm astronomy, gravitational wave interferometers, and high-precision but low-energy measurements, just to mention a few.
And that is only considering the foundations of physics, leaving aside the overarching question of societal benefit. $20 billion that go into a particle collider are $20 billion that do not go into nuclear fusion, drug development, climate science, or data infrastructure, all of which can be reasonably expected to have a larger return on investment. At the very least it is a question one should discuss.
Add to this that the cost for a larger particle collider could go down dramatically in the next 20-30 years with future technological advances, such as wake-field acceleration or high-temperature superconductors. In the current situation, with colliders so extremely costly, it makes more economic sense to wait and see whether one of these technologies reaches maturity. Who wants to spend some billions digging a 100km tunnel when that tunnel may no longer be necessary by the time the collider could be in operation?
Anyone who talks about building a larger particle collider, but does not mention the above-named issues, demonstrates that they care neither about progress in physics nor about social responsibility. They do not want to have a sincere discussion. Instead, they are presenting a one-sided view. They are merely lobbying.
If you encounter any such person, I recommend you ask them the following: Why were all these predictions wrong and what have particle physicists learned from it? Why is a larger particle collider a good way to invest such large amounts of money in the foundations of physics now? What is the benefit of such an investment for society?
And do not take as response arguments about benefiting collaborations, scientific infrastructure, or education, because such arguments can be made in favor of any large investment into science. Such generic arguments do not explain why a particle collider in particular is the thing to do. I have a handy list with responses to further nonsense arguments here.
A prediction. If you give particle physicists money for a next larger collider this is what will happen: This money will be used to hire more people who will tell you that particle physics is great. They will continue to invent new particles according to some new fad, and then claim they learned something when their expensive machine falsifies these inventions. In 40 years, we will still not know what dark matter is made of or how to quantize gravity. We will still not have a working fusion reactor, will still not have quantum computers, and will still have group-think in science. Particle physicists will then begin to argue they need a larger collider. Rinse and repeat.
Of course it is possible that a larger collider will find something new. The only way to find out with certainty is to build it and look. But the same “Just Look” argument can be made about any experiment that explores new frontiers. Point is: Particle physicists have so far failed to come up with any reason why going to higher energies is currently a promising route forward. The conservative expectation therefore is that the next larger collider would be much like the LHC, but for twice the price and without the Higgs.
Particle physics is a large and very influential community. Do not fall for their advertisements. Ask the hard questions.
Monday, June 03, 2019
The multiverse hypothesis: Are there other universes besides our own?
You are one of several billion people on this planet. This planet is one of some hundred billion planets in this galaxy. This galaxy is one of some hundred billion galaxies in the universe. Is our universe the only one? Or are there other universes?
In the past decades, the idea that our universe is only one of many has become popular among physicists. If there are several universes, their collection is called the “multiverse”, and physicists have a few theories for this that I want to briefly tell you about.
1. Eternal Inflation.
We do not know how our universe was created and maybe we will never know. But according to a presently popular theory, called “inflation”, our universe was created from a quantum fluctuation of a field called the “inflaton”. In this case, there would be infinitely many such fluctuations giving rise to infinitely many universes. This process of universe-creation never stops, which is why it is called eternal inflation.
These other universes may contain the same matter as ours, but in different arrangements, or they may contain different types of matter. They may have the same laws of nature, or entirely different laws. Really, pretty much anything goes, as long as you have space, time, and matter.
2. The String Theory Landscape
The string theory landscape came out of the realization that string theory does not, as originally hoped, uniquely predict the laws of nature we observe. Instead, the theory allows for many different laws of nature, which would give rise to universes different from our own. The idea that all of them exist goes together well with eternal inflation, and so the two theories are often lumped together.
3. Many Worlds
Many Worlds is an interpretation of quantum mechanics. In quantum mechanics, we can make predictions only for probabilities. We can say, for example, a particle goes left or right, each with 50% probability. But then, when we measure it, we find it either left or right. And then we know where it is with 100% probability. So what happened with the other option?
The most common attitude you find among physicists is “who cares?” We are here and that’s what we have measured, now let’s move on.
The many worlds interpretation, however, postulates that all possible outcomes of an experiment exist, each in a separate universe. It’s just that we happen to live in only one of those universes, and never see the other ones.
4. The Simulation Hypothesis
Video games are getting better by the day, and it’s easy to imagine that maybe one day they will be so good we can no longer tell the virtual world apart from the real world.
This brings up the question of whether we may already live in a virtual world, one that is programmed by some being more intelligent than us and technologically ahead of us. If that is so, there is no reason to think that our universe is the only simulation that is going on. There may be many other universe simulations, programmed by superintelligent beings. This, too, is a variant of the multiverse.
5. The Mathematical Universe
Finally, let me briefly mention the idea, popularized by Max Tegmark, that all of mathematics exists, and that we merely observe a very small part of it. It is this small part of mathematics that we call our universe.
Are these theories science? Or are they fiction? Let me know what you think.
Does God exist? Science does not have an answer.
I know that some of you have been wondering what has happened to me that I go on about the existence of gods, but if you make it to the end of this blogpost, I am sure it will all make sense!
Before we can talk about whether God exists, I have to be clear what kind of god I am talking about. I am talking about the old-fashioned personal god, the one who listens to prayers, and tells you how to be a good person, and who sorts the good from the bad in the afterlife, and so on.
Some variants of this god are in actual conflict with evidence. Say, if you believe that evolution does not happen, or that praying cures cancer, and so on. If you want to defend such beliefs, you are in the wrong channel, good bye. I will assume that you are here because, like me, you want to understand what we can learn from nature, so ignoring evidence is not an option.
What we have then is a god who is consistent with all our observations, but who himself does not result in any additional observable consequences. If you want to explain observations, then the scientific theories of the day are the best you can do. Adding god on top does not make the theories any more useful. By useful I mean concretely that a theory allows you to calculate patterns in data in a way that is quantifiably simpler than just collecting the data.
Example: The standard model of particle physics. It allows you to calculate what happens in particle collisions at the Large Hadron Collider. Now you can say, I take the standard model plus the hypothesis that it was made by god. But adding god does not simplify the calculations. So, god is superfluous.
The scientific approach is then to prefer the standard model without god. This, of course, is nothing else but Occam’s razor. You make a theory as simple as possible. Without this requirement, science just becomes dysfunctional, because you would be allowed to add all kinds of unnecessary clutter.
Now, as we discussed previously, scientists say something “exists” if it is an element of a theory that is useful to explain observations. The Higgs-boson exists in this very sense. So do black holes and gravitational waves.
On the other hand, if something is not useful to explain observations, as it is the case with god, science does not say it does not exist. Instead, it doesn’t say anything about whether it exists or not. It cannot say anything, because science is about what’s observable.
Personally, I am not sure what sense it makes to postulate the existence of something that has no observable consequences. But it is certainly something you can believe if you wish. It’s just that science cannot say anything about it.
So, as some of you have pointed out correctly, God could be said to exist in a different way than, say, elementary particles. Some have suggested to call it “immaterial existence”. But I find this misleading because space and time are also immaterial, yet they do exist in the scientific sense.
Some have suggested to call it “non-physical existence”, but this raises the impression it has something to do with physics in particular, which is also misleading.
What it really is, is a non-scientific type of existence. Or, let us call it what it is: a religious existence. God exists in the religious way. An element of a hypothesis that does not result in observable consequences exists in the religious way.
So, here is my next homework assignment: Does the multiverse exist?
I think most of you will understand now what I am getting at. If you are not sure just what the multiverse is, I have another blogpost/video upcoming in a few hours that briefly summarizes what this is all about. So, stay tuned.
What is the reason for the observation that across the board fields in physics are generally governed by second order (partial) differential equations?
If someone on the street flat out asked me that question, I'd probably mumble something about physicists wanting to be able to use the Lagrangian approach. And to allow a positive, rotation- and translation-invariant energy term that permits local propagation, you need something like $-\phi\Delta\phi$.
I assume the answer goes in this direction, but I can't really justify why more complex terms in the Lagrangian are not allowed or why higher orders are a physical problem. Even if these require more initial data, I don't see the a priori problem.
Furthermore, you could come up with quantities in the spirit of $F\wedge F$ and $F \wedge *F$ and, okay, yes... maybe any made-up scalar just doesn't describe physics or misses valuable symmetries. On the other hand, in the whole renormalization business they seem to be allowed to use lots and lots of terms in their Lagrangians. And if I understand correctly, supersymmetry theory is basically a method of introducing new Lagrangian densities too.
Do we know the limit for making up these objects? What is the fundamental justification for order two?
First of all, it's not true that all important differential equations in physics are second-order. The Dirac equation is first-order.
The number of derivatives in the equations is equal to the number of derivatives in the corresponding relevant term of the Lagrangian. These kinetic terms have the form $$ {\mathcal L}_{\rm Dirac} = \bar \Psi \gamma^\mu \partial_\mu \Psi $$ for Dirac fields. Note that the term has to be Lorentz-invariant – a generalization of rotational invariance for the whole spacetime – and for spinors, one may contract them with $\gamma_\mu$ matrices, so it's possible to include just one derivative $\partial_\mu$.
However, for bosons which have an integer spin, there is nothing like $\gamma_\mu$ acting on them. So the Lorentz-invariance i.e. the disappearance of the Lorentz indices in the terms with derivatives has to be achieved by having an even number of them, like in $$ {\mathcal L}_{\rm Klein-Gordon} = \frac{1}{2} \partial^\mu \Phi \partial_\mu \Phi $$ which inevitably produce second-order equations as well. Now, what about the terms in the equations with fourth or higher derivatives?
They're actually present in the equations, too. But their coefficients are powers of a microscopic distance scale $L$ – because these terms originate in short-distance phenomena. Every time you add a derivative $\partial_\mu$ to a term, you must add a factor of $L$ as well, so as not to change the units of the term. Consequently, the coefficients of higher-derivative terms are positive powers of $L$, which means that these coefficients, including the derivatives, when applied to a typical macroscopic situation, are of order $(L/R)^k$, where $1/R^k$ comes from the extra derivatives $\partial_\mu^k$ and $R$ is a distance scale of the macroscopic problem we are solving (the typical scale over which the field changes by 100 percent or so).
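To make the counting explicit, here is a schematic example (my notation; the four-derivative term and its coefficient $c$ are illustrative, not taken from any specific theory). For a field varying on a macroscopic scale $R$, each derivative contributes a factor of order $1/R$, so

$$ {\mathcal L} = \frac{1}{2}\,\partial^\mu\Phi\,\partial_\mu\Phi + c\,L^2\,(\partial^2\Phi)^2 + \dots, \qquad \frac{c\,L^2\,(\partial^2\Phi)^2}{\frac{1}{2}\,\partial^\mu\Phi\,\partial_\mu\Phi} \sim c\left(\frac{L}{R}\right)^2, $$

which is tiny whenever $L\ll R$.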
Consequently, the coefficients with higher derivatives may be neglected in all classical limits. They are there but they are negligible. Einstein believed that one should construct "beautiful" equations without the higher-derivative terms and he could guess the right low-energy approximate equations as a result. But he was wrong: the higher derivative terms are not really absent.
Now, why don't we encounter equations whose lowest-order derivative terms are absent? It's because their coefficient in the Lagrangian would have to be strictly zero but there's no reason for it to be zero. So it's infinitely unlikely for the coefficient to be zero. It is inevitably nonzero. This principle is known as Gell-Mann's anarchic (or totalitarian) principle: everything that isn't prohibited is mandatory.
Thanks for the answer. What is the reason that "their coefficients are powers of a microscopic scale or distance scale $L$"? In the last paragraph you use this again, where it's implied that the lower order derivatives are a priori related to a bigger scale, which then outweighs the later ones associated with higher orders. Is there a justification, which goes back to axiomatic assumptions or is it "just" an empirical insight from dealing with effective field theories? – NikolajK Dec 21 '11 at 15:08
Dear @Nikolaj, $L$ determining the coefficients is microscopic because microscopic scales are the natural ones for the formulation of the laws of physics. By definition, microscopic scales are the scales associated with the elementary particles. These general discussions talk about many things at the same moment. For example, in GR, the typical scale is the Planck length, $10^{-35}$ meters, which is the shortest one. In other theories, the typical scale is longer. But it's always microscopic because it determines the internal structure/behavior of the fields and particles which are small. – Luboš Motl Dec 21 '11 at 16:06
My comment that the derivatives are not just related to the long scale, but actually produce factors of it, was meant as a self-evident tautology. What I mean is that if we consider a field that is changing in space, e.g. as a wave with wavelength $R$, then the derivative will pick up a factor of order $1/R$, too. For example, the derivative of $\sin(x/R)$, the wave of length $2\pi R$, is $\cos(x/R)/R$. Cos and sin are almost the same thing, both of order 1, so we picked up an extra factor of $1/R$. All these things are order-of-magnitude estimates. Macroscopic usage of field theory has a macroscopic $R$. – Luboš Motl Dec 21 '11 at 16:09
I'm not sure if I successfully pointed out my problem in the comment. My question is: what is the justification for assuming the coefficients of lower orders would describe a bigger scale? What speaks against a situation where the fourth-order term has a small coefficient, but the second-order term has an even smaller one? Then in the classical limit, just the fourth-order expression would survive. – NikolajK Dec 21 '11 at 18:36
Dear @Nikolaj, it's likely that I don't understand your continued confusion at all. Whether a term may be neglected depends on the relative magnitude of the two terms, the neglected one and the surviving one. So I am estimating the ratio of higher-derivative terms and two-derivative terms and it scales like $(L/R)^k$, a small number, so the higher-derivative terms may be neglected if the two-derivative terms are there. It doesn't matter how you normalize both of these terms in an "absolute way". What matters for being able to neglect one term is the ratio of the two terms. – Luboš Motl Dec 21 '11 at 18:53
One can rewrite any PDE of any order as a system of first-order PDEs, hence the assumption behind the question is somewhat questionable. Also, there exist first-order PDEs of relevance to physics (the Dirac equation and the Burgers equation, to name just two).
However, it is common that quantities in physics appear in conjugate pairs of potential fields and their associated field strength, defined by the potential gradient. Now the gradients of field strength act as generalized forces that try to move the system to an equilibrium state at which these gradients vanish. (They will succeed only if there is sufficient friction and no external force.)
In a formulation where only one half of each conjugate pair is explicit in the equations, a second order differential equation results.
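A concrete instance of this mechanism (a minimal illustration using electrostatics, rather than the general setting of the answer): the conjugate pair is the potential $\phi$ and the field strength $\mathbf E = -\nabla\phi$; eliminating $\mathbf E$ from the two first-order equations leaves a second-order equation for the potential alone,

$$ \mathbf E = -\nabla\phi, \qquad \nabla\cdot\mathbf E = \rho/\varepsilon_0 \quad\Longrightarrow\quad -\Delta\phi = \rho/\varepsilon_0. $$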
Here we will for simplicity limit ourselves to systems that have an action principle. (For fundamental and quantum mechanical systems, this is often the case.) Let us reformulate OP's question as follows:
Why do the Euler-Lagrange equations of motion for a relativistic (non-relativistic) system have at most two spacetime-derivatives (time-derivatives), respectively?
(Here the precise number of derivatives depends on whether one considers the Lagrangian or the Hamiltonian formulation, which are related via Legendre transformation. In case of a singular Legendre transformation, one should use the Dirac-Bergmann or the Jackiw-Faddeev method to go back and forth between the two formalisms. See also this Phys.SE post.)
The higher-derivative terms are in certain theories suppressed for dimensional reasons by the natural scales of the problem. This may e.g. happen in renormalizable theories.
But the generic answer is that the equations of motion actually don't have to be of order $\leq 2$.
However, for a generic higher-order quantum theory, if higher-derivative terms are not naturally suppressed, this typically leads to ghosts of the so-called bad type with wrong sign of the kinetic term, negative norm states and unitarity violation.
At the naive level, explicit appearances of higher time-derivatives may be removed in formulas by introducing more variables, either via the Ostrogradsky method, or equivalently, via the Lagrange multiplier method. However, the positivity problem is not cured by such rewritings due to the Ostrogradsky instability, and the quantum system remains ill-defined. See also e.g. this and this Phys.SE answer.
Hence one often cannot make consistent sense of higher-order theories, and this may be why OP seldom faces them.
Finally, let us mention that it is nowadays popular to study effective higher-derivative field theory, with the possibly unfounded hope, that an underlying, supposedly well-defined, unitary description, e.g. string theory, will cure all pathologies.
The reason that the equations of physics are of at most second order is the so-called Ostrogradskian instability (see the paper by Woodard). This is a theorem which states that equations of motion with higher-order derivatives are in principle unstable or non-local. This is easily shown using the Lagrangian and Hamiltonian formalisms.
The key point is that in order to get an equation of motion of third order in the derivatives, we need a Lagrangian that depends on the coordinates and the generalized velocities and accelerations: $L(q,\dot{q},\ddot{q})$. By performing a Legendre transformation to obtain the Hamiltonian, this implies that we need two generalized momenta. The Hamiltonian turns out to be linear in at least one of the momenta and is therefore unbounded from below (it can become arbitrarily negative). This corresponds to a phase space in which there are no stable orbits.
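Sketched explicitly (the standard Ostrogradsky construction; conventions vary between references): for a nondegenerate $L(q,\dot q,\ddot q)$ one sets

$$ Q_1 = q,\quad Q_2 = \dot q,\quad P_1 = \frac{\partial L}{\partial \dot q} - \frac{d}{dt}\frac{\partial L}{\partial \ddot q},\quad P_2 = \frac{\partial L}{\partial \ddot q}, $$

solves $\ddot q = a(Q_1,Q_2,P_2)$, and obtains

$$ H = P_1 Q_2 + P_2\,a(Q_1,Q_2,P_2) - L\big(Q_1,Q_2,a(Q_1,Q_2,P_2)\big), $$

which depends on $P_1$ only through the linear term $P_1 Q_2$ and is therefore unbounded from below.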
I would like to write the proof here, but it was already answered in this post. There the question is why Lagrangians only have one derivative, but it is actually closely related, since one can always find the equations of motion from a Lagrangian and vice versa.
Citing Woodard: "It has long seemed to me that the Ostrogradskian instability is the most powerful, and the least recognized, fundamental restriction upon Lagrangian field theory. It rules out far more candidate Lagrangians than any symmetry principle. Theoretical physicists dislike being told they cannot do something and such a bald no-go theorem provokes them to envisage tortuous evasions. ... The Ostrogradskian instability should not seem surprising. It explains why every single system we have so far observed seems to be described, on the fundamental level, by a local Lagrangian containing no higher than first time derivatives. The bizarre and incredible thing would be if this fact was simply an accident."
This is correct. However, physical evolution equations are second-order (in time) hyperbolic equations. In fact, each component of the Dirac spinor obeys a second-order equation, namely the Klein-Gordon equation.
"They're actually present in the equations, too."
Neither the Standard Model (SM) Lagrangian nor the Einstein-Hilbert (EH) action contains higher than second-order temporal derivatives. These are the actions which have been experimentally tested, and these two theories are the most fundamental scientific theories we have. We know that there is physics beyond these two theories, and people have good candidates for the underlying theories, but physics is an experimental science and these candidates are not experimentally verified. The effective SM Lagrangian (a Lorentz-invariant theory with the gauge symmetries of the SM but with irrelevant operators) does contain higher than second-order temporal derivatives. The same holds for the EH action plus higher-order scalars. Two clarifications are, however, in order:
• These irrelevant terms are not experimentally verified. Almost everyone is sure that neutrino mass terms (which are irrelevant operators but do not contain higher order derivatives) exist in order to explain neutrino oscillations, but so far we do not have direct measurements of neutrino masses thus we are not allowed to claim that these terms exist. Summarizing: the effective SM is not a verified theory.
• The origin of these irrelevant terms is a consequence of integrating out fields with a mass much greater than the energy scale we are interested in. This could be the case for the neutrino mass term and a right-handed neutrino. For instance, in quantum electrodynamics, if one is interested in physics at energies much lower than the electron mass, one can integrate out (or nature integrates out) the electron field, obtaining an effective Lagrangian (the Euler-Heisenberg Lagrangian) with higher-derivative terms like $\frac{\alpha ^2}{m_e^4}~F_{\mu\nu}~F^{\mu\nu}~F_{\rho\sigma}~F^{\rho\sigma}$ (which contains four derivatives). These terms are suppressed by coupling constants ($\alpha$) and high-energy scales ($m_e$). There are terms with an arbitrarily high number of derivatives, and they come from inverses of differential operators. This ensures that the higher-order derivatives do not enter the zeroth-order equations of motion.
However, in a fundamental theory (in contrast to an effective one), finite higher-order derivatives are not allowed in interacting theories (there are some exceptions with gauge fields, but for example a generic $f(R)$ theory of gravity is inconsistent). The reason is that those theories are not bounded from below (see Why are only derivatives to the first order relevant?) or, in some quantizations, contain negative norm states. These terms are among the forbidden operators in Gell-Mann's totalitarian principle.
In summary, evolution equations are of order two because of the existence of a normalizable vacuum state and unitarity (including the fact that physical states must have positive norm). Newton was right when he wrote $$\ddot x=f(x,\dot x)$$
Weinberg gives a pretty good answer for this in Volume 1 of his QFT opus: 2nd order differential equations appear in the field theories relevant to particle physics because of the relativistic mass-shell condition $p^2 = m^2$.
If we have a quantum field $\phi$, and we think of its fourier modes $\phi(p)$ as creating particles with 4-momentum $p$, then the mass-shell condition provides a constraint: $(p^2 - m^2)\phi(p) = 0$, because we don't want particle creation off-shell. Fourier-transform this back to position space, and you find that $\phi$ has to obey a 2nd order differential equation.
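Spelled out (a short sketch, using the $(+,-,-,-)$ signature and the substitution $p_\mu \to i\partial_\mu$):

$$ (p^2 - m^2)\,\phi(p) = 0 \quad\longrightarrow\quad \left(\Box + m^2\right)\phi(x) = 0, \qquad \Box \equiv \partial^\mu\partial_\mu, $$

i.e. the Klein–Gordon equation, which is second order in spacetime derivatives; for the massless photon field one gets $\Box A_\mu = 0$ in a suitable gauge.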
This doesn't apply to general relativity, where nevertheless equations are of second order. – Arnold Neumaier Nov 12 '12 at 16:39
It does tell you that the linearized Einstein equations should be second order. And it explains why the renormalization flow should be defined in such a way that the kinetic term is fixed, which is an important assumption implicit in Lubos' answer. – user1504 Nov 12 '12 at 16:42
Actually, evolution equations are even more than just second order in time: they don't depend naively on the first-order derivative, that is, on "velocity". This can be easily understood as the fact that there exist no privileged inertial frames. The change (that is, what is absolute) is given by acceleration, not velocity. If the equations depended naively on some velocity terms, it would imply that there is a privileged frame.
Let us make an analogy with Newtonian mechanics. If we were living in an Aristotelian universe with a privileged frame of reference, then $F = mv$. Motion would therefore be absolute, and so would velocity. Because there is no such privileged frame of reference, but a whole class of privileged ones (the inertial ones), $F = ma$. Why couldn't it be that we live in a universe where $F = m \dot a$? Simply because of Galilean principles.
If you believe that acceleration and velocities are "cancellable", and that real change is given by the derivative of acceleration, then you would have to believe in a second-order Galilean principle of invariance and inertia. A second-order principle of invariance would tell you that the laws of physics have to be the same in all inertial frames and all uniformly accelerated frames; otherwise there would be a way to discriminate between them, and thus no equivalence between being inertial and being uniformly accelerated. This in particular implies that if you are inside one of these frames and you see someone uniformly accelerated with respect to your $x$ axis, that is, $x_1(t) = gt^2/2$, and you also see someone accelerated in the opposite direction, that is, $x_2(t) = -gt^2/2$, then from the point of view of the second observer, the first object is described by $x_1(t) - x_2(t) = g t^2$. This implies that you would be able to see objects with arbitrarily high acceleration, and this without the need to consume any "energy".
This is not what we observe in this universe; you don't uniformly accelerate an object "for free". So it looks like nature chose to be as simple as possible in order to keep a symmetry between all inertial frames: physics is second order in time, not third or even worse. Note that one could ask whether the universe is Machian, that is, symmetric up to all orders in acceleration. This would imply that there is no difference at all between rotating and being inertial. That is to say, if I look at a guy spinning with a ball in his hands who eventually lets it go, the ball would then make a spiral movement and its angular velocity would keep increasing as it goes further from the guy who launched it (indeed, the latter has to see it going in a straight line by Galileo's principle of inertia). The universe is therefore not Machian either.
Then why does Schrödinger's equation depend on a first-order time derivative? Because it is a modal equation: it needs an observer to make sense and to make measurements. Hence, there is one Schrödinger equation per observer (the Hamiltonian depends on the observer and the system he is looking at; see the relational interpretations). At least, this is my interpretation of it.
It was already noted in other answers that fields in physics are not always governed by second order partial differential equations (PDEs). It was said, e.g., that the Dirac equation is a first-order PDE. However, the Dirac equation is a system of PDEs for four complex functions - components of the Dirac spinor. It was also mentioned that any PDE is equivalent to a system of PDEs of the first order.
I mentioned previously that the Dirac equation in an electromagnetic field is generally equivalent to a fourth-order partial differential equation for just one complex component, which can also be made real by a gauge transform (my article published in the Journal of Mathematical Physics). Let me also mention my article where it is shown that the equations of spinor electrodynamics (the Dirac-Maxwell electrodynamics) are generally equivalent to a system of PDEs of the third order for the complex four-potential of the electromagnetic field (producing the same electromagnetic field as the usual real four-potential).
(adding comment as answer)
Actually, all of classical mechanics (and quantum mechanics) can be formulated with only first-order derivatives, at the expense of adding extra dimensions (i.e., phase space, the Hamiltonian formalism).
This indeed makes for a dynamic description of a physical system. Furthermore, a differential equation of any order can be made first-order by the same token, as the sketch below shows.
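For instance (a standard reduction, not tied to any particular system): a second-order equation $\ddot q = f(q,\dot q)$ becomes the first-order system

$$ \frac{d}{dt}\begin{pmatrix} q \\ v \end{pmatrix} = \begin{pmatrix} v \\ f(q,v) \end{pmatrix}, $$

at the price of doubling the number of variables.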
Non-linear dynamics (i.e., chaos theory) makes heavy use of first-order dynamical laws in its studies.
Adding more orders to dynamical laws requires more information (initial conditions) and becomes intractable to solve explicitly or algorithmically in most cases.
Furthermore, first-order dynamical laws do provide (at least) good approximations to, or even complete coverage of, the dynamical evolution of a system under study.
The Schrödinger equation describes the quantum mechanics of a single massive non-relativistic particle. The Dirac equation governs a single massive relativistic spin-½ particle. The photon is a massless, relativistic spin-1 particle.
What is the equivalent equation giving the quantum mechanics of a single photon?
related: physics.stackexchange.com/q/47105 – Ben Crowell Jun 3 '13 at 15:41
There is no quantum mechanics of a photon, only a quantum field theory of electromagnetic radiation. The reason is that photons are never non-relativistic and they can be freely emitted and absorbed, hence no photon number conservation.
Still, there exists a direction of research where people try to reinterpret certain quantities of electromagnetic field in terms of the photon wave function, see for example this paper.
You can also say that the wavefunction of a photon is defined as long as the photon is not emitted or absorbed. The wavefunction of a single photon is used in single-photon interferometry, for example. In a sense, it is not much different from the electron, where the wavefunction starts to be problematic when electrons start to be created or annihilated... – Frédéric Grosshans Nov 17 '10 at 10:19
I agree. For electrons there is a possibility to slow them down to non-relativistic speeds, but there is no such possibility for photons. I would also add that there is an interesting discussion about photons and electrons in Peierls's book "Surprises in Theoretical Physics". – Igor Ivanov Nov 18 '10 at 21:46
Igor, I can't reconcile your wording with Frédéric's comment. Yes, there is no possibility for photons to slow down to non-relativistic speeds, but so what? Unless I misunderstand, there is still a spatial wavefunction (complex-valued over $\mathbb{R}^3$) for the photon which obeys a relativistic Schrodinger equation. Yes, we have to assume the photon is not emitted or absorbed, but the same is true of electrons! The description of the latter in terms of a spatial wavefunction also breaks down when they are emitted or absorbed. – Jess Riedel Jun 3 '13 at 20:36
You can describe an individual photon in a 2D system since then they will gain an effective mass. The 2D system can be constructed in real-life using Bragg mirrors. Search around for polaritons (=photon+exciton(=electron+hole)) if you want to know more. – Didii Apr 30 '14 at 10:53
The Maxwell equations, just like in classical electrodynamics. You'll need to use quantum field theory to work with them, though.
There is a slight confusion in this question. In quantum field theory, the Dirac equation and the Schrödinger equation have very different roles. The Dirac equation is an equation for the field, which is not a particle. The time evolution of a particle, i.e., a quantum state, is always given by the Schrödinger equation. The Hamiltonian for this time evolution is written in terms of fields which obey a certain equation themselves. So the proper answer is: the Schrödinger equation, with a Hamiltonian given in terms of a massless vector field whose equation is nothing else but Maxwell's equations.
The general concept of quantum mechanics is that particles are waves. One of the hand-waving "derivations" of quantum mechanics is the assumption that the phase of particles behaves in the same way as the phase of light, $\exp( i \vec{k}\cdot \vec{x} - iE t / \hbar)$ (see Feynman Lectures on Physics, Volume 3, Chapter 7-2).
For light that is monochromatic (or almost monochromatic), just take the Maxwell equations and add the assumption that one photon can't be partially absorbed. Most of the time it suffices to use the paraxial approximation, or even the plane-wave approximation. It works for standard quantum mechanics setups like the Elitzur–Vaidman bomb tester.
For nonmonochromatic light it's much more complicated. More on the nature of the quantum mechanics of one photon: Iwo Bialynicki-Birula, On the Wave Function of the Photon, Acta Physica Polonica 86, 97-116 (1994).
A single photon is described quantum mechanically by the Maxwell equations, where the solutions are taken to be complex. The Maxwell equations can be written in the form of the matrix Dirac equation, where the Pauli two-component matrices, corresponding to spin 1/2 electrons, are replaced by analogous three-component matrices, corresponding to spin 1 photons. Since the Dirac equation and corresponding Maxwell equation are fully relativistic, there is no problem with the mass of the photon being zero, as there would be for a Schroedinger-like equation. See http://www.nist.gov/pml/div684/fcdc/upload/preprint.pdf.
According to Wigner's analysis, the single photon Hilbert space is spanned by a basis parameterized by energy-momenta on the forward light cone boundary, and a helicity of $\pm 1$.
However, a manifestly Lorentz covariant description in position space has to include a fictitious longitudinal photon with a helicity of 0. This degree of freedom is pure gauge, and decouples. Interestingly enough, the state norm is now positive semidefinite, instead of positive definite, with the transverse modes having positive norm and the longitudinal ones having zero norm.
There are several different waves associated with a photon. In QED the photon is associated with a classical solution for the (4-)vector potential. The vector potential contains features that are not physical, as a change of gauge is not reflected in any change of physical properties. Thus its role as a wave function might be somewhat questionable. But still, there must be a wave which explains the well-known interference and diffraction patterns.
When we see a screen illuminated by laser light that has passed through a double slit, our eyes receive photons scattered by the atoms on the surface of the screen. Atoms absorb and emit photons as quantum electric dipole antennas. This implies that the atoms are sensitive to the electric field. From the vector field associated with the photon, an electric field can be calculated. This field is gauge-independent, thus a physical field. It is a solution to Maxwell's equations and describes the usual interference and diffraction patterns.
Yup. Like this? http://www.nist.gov/pml/div684/fcdc/upload/preprint.pdf
You should include more than a bare link. Especially a bare link to a PDF. Copying the abstract, for instance, would make this a much better answer. – dmckee Jun 1 '11 at 4:46
@kaonix , a +1 because I learned something, but dmckee is right that you should quote the abstract for a good answer. Also use the link tab on the editor to insert a link, or at least put the http:// in front so that one does not have to copy and paste to see the link. I will edit it for you. – anna v Jun 1 '11 at 4:54
My answer is more of a comment on other correct answers: you cannot build a delta function for the photon in 3D because the longitudinal component of a massless vector field is missing. But that does not mean there is no useful and meaningful concept of a wave function in the single-photon sector. This is just a peculiar fact about the free electromagnetic field: you basically cannot localize light to a region smaller than the characteristic wavelength. Maxwell's equations for the source-less (solenoidal) component of the vector potential field $\bf{A}$ play the role of the Schrodinger equation.
I recommend the book by Rodney Loudon, "The Quantum Theory of Light", as a good resource to really understand the quantum-level description of light.
The far field seems straightforward. The near field is where the energy is passed from the massless photon to the massive electron. This is the part that requires clarification. The impedance concept seems helpful. http://vixra.org/abs/1207.0022 discusses how quantum impedances are related to the unstable particle spectrum, and http://vixra.org/abs/1303.0039 presents the role of impedances in entanglement and state reduction.
Dear @peter cameron: Welcome to Phys.SE. For your information, Physics.SE has a policy that it is OK to cite oneself, but it should be stated clearly and explicitly in the answer itself, not in attached links. – Qmechanic Jun 1 '13 at 18:39
Heat equation
In this example, the heat equation in two dimensions predicts that if one area of an otherwise cool metal plate has been heated, say with a torch, over time the temperature of that area will gradually decrease, starting at the edge and moving inward. Meanwhile the part of the plate outside that region will be getting warmer. Eventually the entire plate will reach a uniform intermediate temperature. In this animation, both height and color are used to show temperature.
The heat equation is a parabolic partial differential equation that describes the distribution of heat (or variation in temperature) in a given region over time.
Statement of the equation
The behaviour of temperature when the sides of a 1D rod are at fixed temperatures (in this case, 0.8 and 0, with an initial Gaussian distribution). The temperature approaches a linear function, because that is the stable solution of the equation: wherever the temperature has a nonzero second spatial derivative, the time derivative is nonzero as well.
For a function u(x,y,z,t) of three spatial variables (x,y,z) (see Cartesian coordinates) and the time variable t, the heat equation is
$$\frac{\partial u}{\partial t} = \alpha\left(\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}+\frac{\partial^2 u}{\partial z^2}\right).$$
More generally, in any coordinate system:
$$\frac{\partial u}{\partial t} = \alpha\,\Delta u,$$
where α is a positive constant, and Δ or ∇2 denotes the Laplace operator. In the physical problem of temperature variation, u(x,y,z,t) is the temperature and α is the thermal diffusivity. For the mathematical treatment it is sufficient to consider the case α = 1.
Note that the state equation, given by the first law of thermodynamics (i.e. conservation of energy), is written in the following form (assuming no mass transfer or radiation):
$$\rho\,c_p\,\frac{\partial T}{\partial t} - \nabla\cdot\left(k\,\nabla T\right) = \dot q_V,$$
where $\dot q_V$ is the volumetric heat flux. This form is more general and particularly useful to recognize which property (e.g. $c_p$ or $\rho$) influences which term.
The heat equation is of fundamental importance in diverse scientific fields. In mathematics, it is the prototypical parabolic partial differential equation. In probability theory, the heat equation is connected with the study of Brownian motion via the Fokker–Planck equation. In financial mathematics it is used to solve the Black–Scholes partial differential equation. The diffusion equation, a more general version of the heat equation, arises in connection with the study of chemical diffusion and other related processes.
General description
Solution of a 1D heat partial differential equation. The temperature (u) is initially distributed over a one-dimensional, one-unit-long interval (x = [0,1]) with insulated endpoints. The distribution approaches equilibrium over time.
Suppose one has a function u that describes the temperature at a given location (x, y, z). This function will change over time as heat spreads throughout space. The heat equation is used to determine the change in the function u over time. The rate of change of u is proportional to the "curvature" of u. Thus, the sharper the corner, the faster it is rounded off. Over time, the tendency is for peaks to be eroded, and valleys filled in. If u is linear in space (or has a constant gradient) at a given point, then u has reached steady-state and is unchanging at this point (assuming a constant thermal conductivity).
The image to the right is animated and describes the way heat changes in time along a metal bar. One of the interesting properties of the heat equation is the maximum principle that says that the maximum value of u is either earlier in time than the region of concern or on the edge of the region of concern. This is essentially saying that temperature comes either from some source or from earlier in time because heat permeates but is not created from nothingness. This is a property of parabolic partial differential equations and is not difficult to prove mathematically (see below).
Another interesting property is that even if u has a discontinuity at an initial time t = t0, the temperature becomes smooth as soon as t > t0. For example, if a bar of metal has temperature 0 and another has temperature 100 and they are stuck together end to end, then very quickly the temperature at the point of connection will become 50 and the graph of the temperature will run smoothly from 0 to 50.
The heat equation is used in probability and describes random walks. It is also applied in financial mathematics for this reason.
It is also important in Riemannian geometry and thus topology: it was adapted by Richard S. Hamilton when he defined the Ricci flow that was later used by Grigori Perelman to solve the topological Poincaré conjecture.
The physical problem and the equation
Derivation in one dimension
The heat equation is derived from Fourier's law and conservation of energy (Cannon 1984). By Fourier's law, the rate of flow of heat energy per unit area through a surface is proportional to the negative temperature gradient across the surface,
$$\mathbf q = -k\,\nabla u,$$
where k is the thermal conductivity and u is the temperature. In one dimension, the gradient is an ordinary spatial derivative, and so Fourier's law is
$$q = -k\,\frac{\partial u}{\partial x}.$$
In the absence of work done, a change in internal energy per unit volume in the material, ΔQ, is proportional to the change in temperature, Δu (in this section only, Δ is the ordinary difference operator with respect to time, not the Laplacian with respect to space). That is,
$$\Delta Q = c_p\,\rho\,\Delta u,$$
where cp is the specific heat capacity and ρ is the mass density of the material. Choosing zero energy at absolute zero temperature, this can be rewritten as
$$Q = c_p\,\rho\,u.$$
The increase in internal energy in a small spatial region of the material
$$x-\Delta x \le \xi \le x+\Delta x$$
over the time period
$$t-\Delta t \le \tau \le t+\Delta t$$
is given by[1]
$$c_p\rho\int_{x-\Delta x}^{x+\Delta x}\big[u(\xi,t+\Delta t)-u(\xi,t-\Delta t)\big]\,d\xi
= c_p\rho\int_{t-\Delta t}^{t+\Delta t}\int_{x-\Delta x}^{x+\Delta x}\frac{\partial u}{\partial\tau}\,d\xi\,d\tau,$$
where the fundamental theorem of calculus was used. If no work is done and there are neither heat sources nor sinks, the change in internal energy in the interval [x−Δx, x+Δx] is accounted for entirely by the flux of heat across the boundaries. By Fourier's law, this is
$$k\int_{t-\Delta t}^{t+\Delta t}\left[\frac{\partial u}{\partial x}(x+\Delta x,\tau)-\frac{\partial u}{\partial x}(x-\Delta x,\tau)\right]d\tau
= k\int_{t-\Delta t}^{t+\Delta t}\int_{x-\Delta x}^{x+\Delta x}\frac{\partial^2 u}{\partial\xi^2}\,d\xi\,d\tau,$$
again by the fundamental theorem of calculus.[2] By conservation of energy,
$$\int_{t-\Delta t}^{t+\Delta t}\int_{x-\Delta x}^{x+\Delta x}\left[c_p\rho\,u_\tau - k\,u_{\xi\xi}\right]d\xi\,d\tau = 0.$$
This is true for any rectangle [t−Δt, t+Δt] × [x−Δx, x+Δx]. By the fundamental lemma of the calculus of variations, the integrand must vanish identically:
$$c_p\rho\,u_t - k\,u_{xx} = 0,$$
which can be rewritten as
$$u_t = \frac{k}{c_p\rho}\,u_{xx},$$
which is the heat equation, where the coefficient (often denoted α)
$$\alpha = \frac{k}{c_p\rho}$$
is called the thermal diffusivity.
An additional term may be introduced into the equation to account for radiative loss of heat, which depends upon the excess temperature u = T − Ts at a given point compared with the surroundings. At low excess temperatures, the radiative loss is approximately μu, giving a one-dimensional heat-transfer equation of the form
$$u_t = \alpha\,u_{xx} - \mu u.$$
At high excess temperatures, however, the Stefan–Boltzmann law gives a net radiative heat-loss proportional to $(u+T_s)^4 - T_s^4$, and the above equation is inaccurate. For large excess temperatures, $(u+T_s)^4 - T_s^4 \approx u^4$, giving a high-temperature heat-transfer equation of the form
$$u_t = \alpha\,u_{xx} - m\,u^4,$$
where m is a constant built from σ, ε, p, and A. Here, σ is Stefan's constant, ε is a characteristic constant of the material, p is the sectional perimeter of the bar and A is its cross-sectional area. However, using T instead of u gives a better approximation in this case.
Three-dimensional problem
In the special case of propagation of heat in an isotropic and homogeneous medium in 3-dimensional space, this equation is
$$u_t = \alpha\left(u_{xx} + u_{yy} + u_{zz}\right),$$
where:
• u = u(x, y, z, t) is temperature as a function of space and time;
• $u_t = \frac{\partial u}{\partial t}$ is the rate of change of temperature at a point over time;
• uxx, uyy, and uzz are the second spatial derivatives (thermal conductions) of temperature in the x, y, and z directions, respectively;
• $\alpha = \frac{k}{c_p\rho}$ is the thermal diffusivity, a material-specific quantity depending on the thermal conductivity k, the mass density ρ, and the specific heat capacity cp.
The heat equation is a consequence of Fourier's law of conduction (see heat conduction).
If the medium is not the whole space, in order to solve the heat equation uniquely we also need to specify boundary conditions for u. To determine uniqueness of solutions in the whole space it is necessary to assume an exponential bound on the growth of solutions.[3]
Solutions of the heat equation are characterized by a gradual smoothing of the initial temperature distribution by the flow of heat from warmer to colder areas of an object. Generally, many different states and starting conditions will tend toward the same stable equilibrium. As a consequence, to reverse the solution and conclude something about earlier times or initial conditions from the present heat distribution is very inaccurate except over the shortest of time periods.
The heat equation is the prototypical example of a parabolic partial differential equation.
Using the Laplace operator, the heat equation can be simplified, and generalized to similar equations over spaces of an arbitrary number of dimensions, as
$$u_t = \alpha\,\Delta u,$$
where the Laplace operator, Δ or ∇2, the divergence of the gradient, is taken in the spatial variables.
The heat equation governs heat diffusion, as well as other diffusive processes, such as particle diffusion or the propagation of action potential in nerve cells. Although they are not diffusive in nature, some quantum mechanics problems are also governed by a mathematical analog of the heat equation (see below). It also can be used to model some phenomena arising in finance, like the Black–Scholes or the Ornstein-Uhlenbeck processes. The equation, and various non-linear analogues, has also been used in image analysis.
The heat equation is, technically, in violation of special relativity, because its solutions involve instantaneous propagation of a disturbance. The part of the disturbance outside the forward light cone can usually be safely neglected, but if it is necessary to develop a reasonable speed for the transmission of heat, a hyperbolic problem should be considered instead – like a partial differential equation involving a second-order time derivative. Some models of nonlinear heat conduction (which are also parabolic equations) have solutions with finite heat transmission speed.[4][5]
Internal heat generation
The function u above represents temperature of a body. Alternatively, it is sometimes convenient to change units and represent u as the heat density of a medium. Since heat density is proportional to temperature in a homogeneous medium, the heat equation is still obeyed in the new units.
Suppose that a body obeys the heat equation and, in addition, generates its own heat per unit volume (e.g., in watts/litre - W/L) at a rate given by a known function q varying in space and time.[6] Then the heat per unit volume u satisfies an equation
$$\frac{\partial u}{\partial t} = \alpha\,\Delta u + q.$$
For example, a tungsten light bulb filament generates heat, so it would have a positive nonzero value for q when turned on. While the light is turned off, the value of q for the tungsten filament would be zero.
Solving the heat equation using Fourier series
Idealized physical setting for heat conduction in a rod with homogeneous boundary conditions.
The following solution technique for the heat equation was proposed by Joseph Fourier in his treatise Théorie analytique de la chaleur, published in 1822. Let us consider the heat equation for one space variable. This could be used to model heat conduction in a rod. The equation is
$$u_t = \alpha\,u_{xx} \qquad (1)$$
where u = u(x, t) is a function of two variables x and t. Here
• x is the space variable, so x ∈ [0, L], where L is the length of the rod.
• t is the time variable, so t ≥ 0.
We assume the initial condition
$$u(x,0) = f(x), \qquad (2)$$
where the function f is given, and the boundary conditions
$$u(0,t) = 0 = u(L,t). \qquad (3)$$
Let us attempt to find a solution of (1) that is not identically zero satisfying the boundary conditions (3) but with the following property: u is a product in which the dependence of u on x, t is separated, that is:
$$u(x,t) = X(x)\,T(t). \qquad (4)$$
This solution technique is called separation of variables. Substituting u back into equation (1),
$$\frac{T'(t)}{\alpha\,T(t)} = \frac{X''(x)}{X(x)}.$$
Since the right hand side depends only on x and the left hand side only on t, both sides are equal to some constant value −λ. Thus:
$$T'(t) = -\lambda\,\alpha\,T(t) \qquad (5)$$
and
$$X''(x) = -\lambda\,X(x). \qquad (6)$$
We will now show that nontrivial solutions for (6) for values of λ ≤ 0 cannot occur:
1. Suppose that λ < 0. Then there exist real numbers B, C such that
$$X(x) = B\,e^{\sqrt{-\lambda}\,x} + C\,e^{-\sqrt{-\lambda}\,x}.$$
From (3) we get X(0) = 0 = X(L) and therefore B = 0 = C which implies u is identically 0.
2. Suppose that λ = 0. Then there exist real numbers B, C such that X(x) = Bx + C. From equation (3) we conclude in the same manner as in 1 that u is identically 0.
3. Therefore, it must be the case that λ > 0. Then there exist real numbers A, B, C such that
$$T(t) = A\,e^{-\lambda\alpha t}
\qquad\text{and}\qquad
X(x) = B\sin(\sqrt{\lambda}\,x) + C\cos(\sqrt{\lambda}\,x).$$
From (3) we get C = 0 and that for some positive integer n,
$$\sqrt{\lambda} = n\,\frac{\pi}{L}.$$
This solves the heat equation in the special case that the dependence of u has the special form (4).
In general, the sum of solutions to (1) that satisfy the boundary conditions (3) also satisfies (1) and (3). We can show that the solution to (1), (2) and (3) is given by
$$u(x,t) = \sum_{n=1}^{\infty} D_n \sin\left(\frac{n\pi x}{L}\right) e^{-\frac{n^2\pi^2\alpha t}{L^2}},
\qquad\text{where}\qquad
D_n = \frac{2}{L}\int_0^L f(x)\,\sin\left(\frac{n\pi x}{L}\right)dx.$$
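As a quick numerical illustration of this series (a minimal sketch; the rod length, diffusivity, initial profile, and truncation order below are arbitrary choices, not from the text), the coefficients D_n can be computed with a quadrature rule:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the article).
L, alpha, n_terms = 1.0, 0.1, 50
x = np.linspace(0.0, L, 201)

f = lambda s: np.where((s > 0.4) & (s < 0.6), 1.0, 0.0)  # initial temperature profile

def heat_series(x, t):
    """Evaluate u(x,t) = sum_n D_n sin(n pi x / L) exp(-n^2 pi^2 alpha t / L^2)."""
    u = np.zeros_like(x)
    xs = np.linspace(0.0, L, 2001)          # quadrature grid for the D_n integrals
    for n in range(1, n_terms + 1):
        D_n = (2.0 / L) * np.trapz(f(xs) * np.sin(n * np.pi * xs / L), xs)
        u += D_n * np.sin(n * np.pi * x / L) * np.exp(-(n * np.pi / L) ** 2 * alpha * t)
    return u

print(heat_series(x, 0.05).max())  # the peak decays as heat spreads out
```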
Generalizing the solution technique
The solution technique used above can be greatly extended to many other types of equations. The idea is that the operator uxx with the zero boundary conditions can be represented in terms of its eigenvectors. This leads naturally to one of the basic ideas of the spectral theory of linear self-adjoint operators.
Consider the linear operator Δu = uxx. The infinite sequence of functions
$$e_n(x) = \sqrt{\frac{2}{L}}\,\sin\left(\frac{n\pi x}{L}\right)$$
for n ≥ 1 are eigenvectors of Δ. Indeed
$$\Delta e_n = -\frac{n^2\pi^2}{L^2}\,e_n.$$
Moreover, any eigenvector f of Δ with the boundary conditions f(0) = f(L) = 0 is of the form en for some n ≥ 1. The functions en for n ≥ 1 form an orthonormal sequence with respect to a certain inner product on the space of real-valued functions on [0, L]. This means
$$\langle e_n, e_m\rangle = \int_0^L e_n(x)\,e_m(x)\,dx = \delta_{mn}.$$
Finally, the sequence $\{e_n\}_{n\in\mathbb N}$ spans a dense linear subspace of $L^2((0,L))$. This shows that in effect we have diagonalized the operator Δ.
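In this diagonalized form the time evolution becomes transparent; the following simply restates the Fourier-series solution above in spectral language (same content, written as an operator exponential):

$$ f = \sum_{n\ge1} \langle f, e_n\rangle\,e_n \quad\Longrightarrow\quad u(\cdot,t) = e^{\alpha t\Delta} f = \sum_{n\ge1} e^{-\alpha\frac{n^2\pi^2}{L^2}t}\,\langle f,e_n\rangle\,e_n. $$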
Heat conduction in non-homogeneous anisotropic media
In general, the study of heat conduction is based on several principles. Heat flow is a form of energy flow, and as such it is meaningful to speak of the time rate of flow of heat into a region of space.
• The time rate of heat flow into a region V is given by a time-dependent quantity qt(V). We assume q has a density Q(x,t), so that
$$q_t(V) = \int_V Q(x,t)\,dx.$$
• Heat flow is a time-dependent vector function H(x) characterized as follows: the time rate of heat flowing through an infinitesimal surface element with area dS and with unit normal vector n is
$$\mathbf H(x)\cdot\mathbf n(x)\,dS.$$
Thus the rate of heat flow into V is also given by the surface integral
$$q_t(V) = -\oint_{\partial V}\mathbf H(x)\cdot\mathbf n(x)\,dS,$$
where n(x) is the outward pointing normal vector at x.
• The Fourier law states that heat energy flow has the following linear dependence on the temperature gradient
$$\mathbf H(x) = -\mathbf A(x)\,\nabla u(x),$$
where A(x) is a 3 × 3 real matrix that is symmetric and positive definite.
• By the divergence theorem, the previous surface integral for heat flow into V can be transformed into the volume integral
$$q_t(V) = \int_V \nabla\cdot\big(\mathbf A(x)\,\nabla u(x)\big)\,dx.$$
• The time rate of temperature change at x is proportional to the heat flowing into an infinitesimal volume element, where the constant of proportionality is dependent on a constant κ:
$$\partial_t u(x,t) = \kappa(x)\,Q(x,t).$$
Putting these equations together gives the general equation of heat flow:
$$\partial_t u(x,t) = \kappa(x)\,\nabla\cdot\big(\mathbf A(x)\,\nabla u(x)\big).$$
• The coefficient κ(x) is the inverse of the specific heat of the substance at x times the density of the substance at x: $\kappa = \frac{1}{c\,\rho}$.
• In the case of an isotropic medium, the matrix A is a scalar matrix equal to the thermal conductivity k.
• In the anisotropic case where the coefficient matrix A is not scalar and/or if it depends on x, an explicit formula for the solution of the heat equation can seldom be written down. Though, it is usually possible to consider the associated abstract Cauchy problem and show that it is a well-posed problem and/or to show some qualitative properties (like preservation of positive initial data, infinite speed of propagation, convergence toward an equilibrium, smoothing properties). This is usually done by one-parameter semigroup theory: for instance, if A is a symmetric matrix, then the elliptic operator defined by
$$(\mathcal A u)(x) = \nabla\cdot\big(\mathbf A(x)\,\nabla u(x)\big)$$
is self-adjoint and dissipative, thus by the spectral theorem it generates a one-parameter semigroup.
Fundamental solutions
A fundamental solution, also called a heat kernel, is a solution of the heat equation corresponding to the initial condition of an initial point source of heat at a known position. These can be used to find a general solution of the heat equation over certain domains; see, for instance, (Evans 1998) for an introductory treatment.
In one variable, the Green's function is a solution of the initial value problem
$$\begin{cases} u_t(x,t) = \alpha\,u_{xx}(x,t), & -\infty < x < \infty,\ 0 < t < \infty,\\ u(x,0) = \delta(x), \end{cases}$$
where δ is the Dirac delta function. The solution to this problem is the fundamental solution
$$\Phi(x,t) = \frac{1}{\sqrt{4\pi\alpha t}}\,\exp\left(-\frac{x^2}{4\alpha t}\right).$$
One can obtain the general solution of the one variable heat equation with initial condition u(x, 0) = g(x) for −∞ < x < ∞ and 0 < t < ∞ by applying a convolution:
$$u(x,t) = \int_{-\infty}^{\infty}\Phi(x-y,t)\,g(y)\,dy.$$
In several spatial variables, the fundamental solution solves the analogous problem
$$\begin{cases} u_t = \alpha\,\Delta u, & \mathbf x\in\mathbb R^n,\ 0 < t < \infty,\\ u(\mathbf x,0) = \delta(\mathbf x). \end{cases}$$
The n-variable fundamental solution is the product of the fundamental solutions in each variable; i.e.,
$$\Phi(\mathbf x,t) = \prod_{i=1}^{n}\Phi(x_i,t) = \frac{1}{(4\pi\alpha t)^{n/2}}\,\exp\left(-\frac{|\mathbf x|^2}{4\alpha t}\right).$$
The general solution of the heat equation on Rn is then obtained by a convolution, so that to solve the initial value problem with u(x, 0) = g(x), one has
$$u(\mathbf x,t) = \int_{\mathbb R^n}\Phi(\mathbf x-\mathbf y,t)\,g(\mathbf y)\,d\mathbf y.$$
The general problem on a domain Ω in Rn is
$$\begin{cases} u_t = \alpha\,\Delta u & \text{in } \Omega\times(0,\infty),\\ u(\mathbf x,0) = g(\mathbf x), & \mathbf x\in\Omega, \end{cases}$$
with either Dirichlet or Neumann boundary data. A Green's function always exists, but unless the domain Ω can be readily decomposed into one-variable problems (see below), it may not be possible to write it down explicitly. Other methods for obtaining Green's functions include the method of images, separation of variables, and Laplace transforms (Cole, 2011).
Some Green's function solutions in 1D
A variety of elementary Green's function solutions in one dimension are recorded here; many others are available elsewhere.[7] In some of these, the spatial domain is (−∞,∞). In others, it is the semi-infinite interval (0,∞) with either Neumann or Dirichlet boundary conditions. One further variation is that some of these solve the inhomogeneous equation
$$u_t = \alpha\,u_{xx} + f(x,t),$$
where f is some given function of x and t.
Homogeneous heat equation
Initial value problem on (−∞,∞)
$$\begin{cases} u_t = \alpha\,u_{xx}, & (x,t)\in\mathbb R\times(0,\infty),\\ u(x,0) = g(x), \end{cases}
\qquad
u(x,t) = \int_{-\infty}^{\infty}\Phi(x-y,t)\,g(y)\,dy.$$
Comment. This solution is the convolution with respect to the variable x of the fundamental solution
$$\Phi(x,t) = \frac{1}{\sqrt{4\pi\alpha t}}\,\exp\left(-\frac{x^2}{4\alpha t}\right)$$
and the function g(x). (The Green's function number of the fundamental solution is X00.) Therefore, according to the general properties of the convolution with respect to differentiation, u = g ∗ Φ is a solution of the same heat equation, for
$$\left(\partial_t - \alpha\,\partial_x^2\right)(\Phi * g) = \left[\left(\partial_t - \alpha\,\partial_x^2\right)\Phi\right] * g = 0.$$
Moreover,
$$\Phi(x,t) = \frac{1}{\sqrt t}\,\Phi\!\left(\frac{x}{\sqrt t},\,1\right),$$
so that, by general facts about approximation to the identity, Φ(⋅, t) ∗ g → g as t → 0 in various senses, according to the specific g. For instance, if g is assumed bounded and continuous on R then Φ(⋅, t) ∗ g converges uniformly to g as t → 0, meaning that u(x, t) is continuous on R × [0, ∞) with u(x, 0) = g(x).
Initial value problem on (0,∞) with homogeneous Dirichlet boundary conditions
$$u(x,t) = \int_{0}^{\infty}\big[\Phi(x-y,t)-\Phi(x+y,t)\big]\,g(y)\,dy.$$
Comment. This solution is obtained from the preceding formula as applied to the data g(x) suitably extended to R, so as to be an odd function, that is, letting g(−x) := −g(x) for all x. Correspondingly, the solution of the initial value problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0. The Green's function number of this solution is X10.
Initial value problem on (0,∞) with homogeneous Neumann boundary conditions
$$u(x,t) = \int_{0}^{\infty}\big[\Phi(x-y,t)+\Phi(x+y,t)\big]\,g(y)\,dy.$$
Comment. This solution is obtained from the first solution formula as applied to the data g(x) suitably extended to R so as to be an even function, that is, letting g(−x) := g(x) for all x. Correspondingly, the solution of the initial value problem on R is an even function with respect to the variable x for all values of t > 0, and in particular, being smooth, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0. The Green's function number of this solution is X20.
Problem on (0,∞) with homogeneous initial conditions and non-homogeneous Dirichlet boundary conditions
$$\begin{cases} u_t = \alpha\,u_{xx}, & (x,t)\in(0,\infty)\times(0,\infty),\\ u(x,0) = 0,\quad u(0,t) = h(t), \end{cases}
\qquad
u(x,t) = \int_{0}^{t}\psi(x,t-s)\,h(s)\,ds.$$
Comment. This solution is the convolution with respect to the variable t of
$$\psi(x,t) = -2\alpha\,\partial_x\Phi(x,t) = \frac{x}{\sqrt{4\pi\alpha\,t^3}}\,\exp\left(-\frac{x^2}{4\alpha t}\right)$$
and the function h(t). Since Φ(x, t) is the fundamental solution of
$$u_t = \alpha\,u_{xx},$$
the function ψ(x, t) is also a solution of the same heat equation, and so is u := ψ ∗ h, thanks to general properties of the convolution with respect to differentiation. Moreover,
$$\int_0^\infty \psi(x,t)\,dt = 1,$$
so that, by general facts about approximation to the identity, ψ(x, ⋅) ∗ h → h as x → 0 in various senses, according to the specific h. For instance, if h is assumed continuous on R with support in [0, ∞) then ψ(x, ⋅) ∗ h converges uniformly on compacta to h as x → 0, meaning that u(x, t) is continuous on [0, ∞) × [0, ∞) with u(0, t) = h(t).
Inhomogeneous heat equation
Problem on (−∞,∞) with homogeneous initial conditions
$$\begin{cases} u_t = \alpha\,u_{xx} + f(x,t), & (x,t)\in\mathbb R\times(0,\infty),\\ u(x,0) = 0, \end{cases}
\qquad
u(x,t) = \int_0^t\int_{-\infty}^{\infty}\Phi(x-y,t-s)\,f(y,s)\,dy\,ds.$$
Comment. This solution is the convolution in R2, that is with respect to both the variables x and t, of the fundamental solution
$$\Phi(x,t) = \frac{1}{\sqrt{4\pi\alpha t}}\,\exp\left(-\frac{x^2}{4\alpha t}\right)$$
and the function f(x, t), both meant as defined on the whole R2 and identically 0 for all t < 0. One verifies that
$$\left(\partial_t - \alpha\,\partial_x^2\right)u = f,$$
which expressed in the language of distributions becomes
$$\left(\partial_t - \alpha\,\partial_x^2\right)\Phi = \delta,$$
where the distribution δ is the Dirac delta function, that is the evaluation at 0.
Problem on (0,∞) with homogeneous Dirichlet boundary conditions and initial conditions
$$u(x,t) = \int_0^t\int_{0}^{\infty}\big[\Phi(x-y,t-s)-\Phi(x+y,t-s)\big]\,f(y,s)\,dy\,ds.$$
Comment. This solution is obtained from the preceding formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an odd function of the variable x, that is, letting f(−x, t) := −f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an odd function with respect to the variable x for all values of t, and in particular it satisfies the homogeneous Dirichlet boundary conditions u(0, t) = 0.
Problem on (0,∞) with homogeneous Neumann boundary conditions and initial conditions
$$u(x,t) = \int_0^t\int_{0}^{\infty}\big[\Phi(x-y,t-s)+\Phi(x+y,t-s)\big]\,f(y,s)\,dy\,ds.$$
Comment. This solution is obtained from the first formula as applied to the data f(x, t) suitably extended to R × [0,∞), so as to be an even function of the variable x, that is, letting f(−x, t) := f(x, t) for all x and t. Correspondingly, the solution of the inhomogeneous problem on (−∞,∞) is an even function with respect to the variable x for all values of t, and in particular, being a smooth function, it satisfies the homogeneous Neumann boundary conditions ux(0, t) = 0.
Since the heat equation is linear, solutions of other combinations of boundary conditions, inhomogeneous term, and initial conditions can be found by taking an appropriate linear combination of the above Green's function solutions.
For example, to solve
$$u_t = \alpha\,u_{xx} + f, \qquad u(x,0) = g(x),$$
let u = w + v, where w and v solve the problems
$$w_t = \alpha\,w_{xx} + f,\quad w(x,0) = 0,
\qquad\text{and}\qquad
v_t = \alpha\,v_{xx},\quad v(x,0) = g(x).$$
Similarly, to solve
$$u_t = \alpha\,u_{xx} + f, \qquad u(0,t) = h(t), \qquad u(x,0) = g(x),$$
let u = w + v + r, where w, v, and r solve the problems
$$w_t = \alpha\,w_{xx} + f,\ \ w(0,t) = 0,\ \ w(x,0) = 0; \qquad
v_t = \alpha\,v_{xx},\ \ v(0,t) = 0,\ \ v(x,0) = g(x); \qquad
r_t = \alpha\,r_{xx},\ \ r(0,t) = h(t),\ \ r(x,0) = 0.$$
Mean-value property for the heat equation
Solutions of the heat equation
$$u_t = \Delta u$$
satisfy a mean-value property analogous to the mean-value properties of harmonic functions, solutions of
$$\Delta u = 0,$$
though a bit more complicated. Precisely, if u solves
$$u_t = \Delta u$$
and (x, t) lies in the domain of u, then
$$u(x,t) = \frac{\lambda}{4}\iint_{E_\lambda(x,t)} u(y,s)\,\frac{|x-y|^2}{(t-s)^2}\,dy\,ds,$$
where Eλ is a "heat-ball", that is a super-level set of the fundamental solution of the heat equation:
$$E_\lambda(x,t) = \left\{(y,s) : s \le t,\ \Phi(x-y,\,t-s) \ge \lambda\right\}.$$
Notice that
$$\operatorname{diam}\, E_\lambda(x,t) \to 0$$
as λ → ∞, so the above formula holds for any (x, t) in the (open) set dom(u) for λ large enough. Conversely, any function u satisfying the above mean-value property on an open domain of Rn × R is a solution of the heat equation. This can be shown by an argument similar to the analogous one for harmonic functions.
Steady-state heat equation
The steady-state heat equation is by definition not dependent on time. In other words, it is assumed conditions exist such that:
$$\frac{\partial u}{\partial t} = 0.$$
Whether this condition holds depends on the time constant of the system and on the time elapsed since the boundary conditions were imposed. The condition is fulfilled when equilibration is fast enough that the more complex, time-dependent heat equation can be approximated by the steady-state case. Equivalently, the steady-state condition holds in all cases in which enough time has passed that the thermal field u no longer evolves in time.
In the steady-state case, a spatial thermal gradient may (or may not) exist, but if it does, it does not change in time. This equation therefore describes the end result in all thermal problems in which a source is switched on (for example, an engine started in an automobile) and enough time has passed for all permanent temperature gradients to establish themselves in space, after which these spatial gradients no longer change in time (as, again, with an automobile in which the engine has been running long enough). The other (trivial) solution is for all spatial temperature gradients to disappear as well, in which case the temperature becomes uniform in space.
The steady-state equation is much simpler and can help in understanding the physics of a material without the complication of the dynamics of the heat-transport process. It is widely used for simple engineering problems in which the temperature field and the heat transport are assumed to be in equilibrium in time.
Steady-state condition:

\[ \frac{\partial u}{\partial t} = 0. \]
The steady-state heat equation for a volume that contains a heat source (the inhomogeneous case) is Poisson's equation:

\[ -k \nabla^2 u = q, \]

where u is the temperature, k is the thermal conductivity and q is the heat-flux density of the source.
In electrostatics, this is equivalent to the case where the space under consideration contains an electrical charge.
The steady-state heat equation without a heat source within the volume (the homogeneous case) is the equation in electrostatics for a volume of free space that does not contain a charge. It is described by Laplace's equation:

\[ \nabla^2 u = 0. \]
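As a minimal sketch of how such a steady-state problem is solved in practice (the square domain and boundary data are illustrative assumptions), Jacobi iteration relaxes a grid toward the discrete solution of Laplace's equation, each interior point converging to the average of its four neighbors:

```python
import numpy as np

# Laplace's equation on the unit square, Dirichlet data (assumed): u = 1 on
# the top edge, u = 0 on the other three edges.
n = 50
u = np.zeros((n, n))
u[0, :] = 1.0                                  # hot top edge

for _ in range(5000):
    u_new = u.copy()
    # Each interior point becomes the mean of its four neighbors.
    u_new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
    if np.abs(u_new - u).max() < 1e-6:         # stop when essentially converged
        u = u_new
        break
    u = u_new

print("temperature at the center:", u[n // 2, n // 2])
```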
Particle diffusion
One can model particle diffusion by an equation involving either:
• the volumetric concentration of particles, denoted c, in the case of collective diffusion of a large number of particles, or
• the probability density function associated with the position of a single particle, denoted P.
In either case, one uses the heat equation

\[ \frac{\partial c}{\partial t} = D \, \nabla^2 c \qquad \text{or} \qquad \frac{\partial P}{\partial t} = D \, \nabla^2 P. \]

Both c and P are functions of position and time. D is the diffusion coefficient that controls the speed of the diffusive process, and is typically expressed in square meters per second. If the diffusion coefficient D is not constant, but depends on the concentration c (or P in the second case), then one gets the nonlinear diffusion equation.
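A hedged sketch of the nonlinear case just mentioned, with the illustrative assumption D(c) = D₀c (a porous-medium-type coefficient, in the spirit of the notes below); the scheme is written in conservative form so that total mass is preserved:

```python
import numpy as np

# Nonlinear diffusion c_t = (D(c) c_x)_x with D(c) = D0 * c (an assumption;
# a porous-medium-like case). Conservative explicit scheme: fluxes are
# evaluated at cell interfaces using averaged D.
D0 = 1.0
nx = 200
x = np.linspace(-1.0, 1.0, nx)
dx = x[1] - x[0]
c = np.where(np.abs(x) < 0.2, 1.0, 0.0)      # initial "box" of concentration
dt = 0.2 * dx**2 / D0                         # conservative stability choice

for _ in range(2000):
    D_face = D0 * 0.5 * (c[1:] + c[:-1])          # D(c) at cell interfaces
    flux = -D_face * (c[1:] - c[:-1]) / dx        # Fick's law at interfaces
    c[1:-1] -= dt * (flux[1:] - flux[:-1]) / dx   # finite-volume update
    c = np.clip(c, 0.0, None)                     # keep concentration nonnegative

print("total mass (conserved up to boundaries):", c.sum() * dx)
```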
Brownian motion
Let the stochastic process X_t be the solution of the stochastic differential equation

\[ \mathrm{d}X_t = \sqrt{2D} \, \mathrm{d}B_t, \]

where B_t is the Wiener process (standard Brownian motion). Then the probability density function p(x, t) of X_t is given at any time t by

\[ p(x,t) = \frac{1}{(4\pi D t)^{1/2}} \exp\left( -\frac{x^2}{4Dt} \right), \]

which is the solution of the initial value problem

\[ \frac{\partial p}{\partial t} = D \, \frac{\partial^2 p}{\partial x^2}, \qquad p(x,0) = \delta(x), \]

where δ is the Dirac delta function.
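A minimal simulation sketch of this correspondence (parameters are illustrative, and the normalization dXt = √(2D) dBt matches the density written above): Euler–Maruyama paths are generated and their empirical density at time T is compared against the heat kernel.

```python
import numpy as np

# Euler-Maruyama simulation of dX = sqrt(2 D) dB, X_0 = 0. The histogram of
# X at time T should approach the heat-kernel density
# p(x, T) = exp(-x^2 / (4 D T)) / sqrt(4 pi D T).
rng = np.random.default_rng(0)
D, T, n_steps, n_paths = 0.5, 1.0, 1000, 200_000
dt = T / n_steps

X = np.zeros(n_paths)
for _ in range(n_steps):
    X += np.sqrt(2 * D * dt) * rng.standard_normal(n_paths)

hist, edges = np.histogram(X, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
p = np.exp(-centers**2 / (4 * D * T)) / np.sqrt(4 * np.pi * D * T)
print("max |empirical - heat kernel| =", np.abs(hist - p).max())  # small
```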
Schrödinger equation for a free particle
With a simple division, the Schrödinger equation for a single particle of mass m in the absence of any applied force field can be rewritten in the following way:

\[ \frac{\partial \psi}{\partial t} = \frac{i\hbar}{2m} \nabla^2 \psi, \]

where i is the imaginary unit, ħ is the reduced Planck constant, and ψ is the wave function of the particle.
This equation is formally similar to the particle diffusion equation, which one obtains through the following transformation:

\[ c(\mathbf{x}, t) \to \psi(\mathbf{x}, t), \qquad D \to \frac{i\hbar}{2m}. \]
Applying this transformation to the expressions of the Green functions determined in the case of particle diffusion yields the Green functions of the Schrödinger equation, which in turn can be used to obtain the wave function at any time through an integral on the wave function at t = 0:
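As a hedged numerical sketch of this propagation (units ħ = m = 1 and the Gaussian initial packet are assumptions), the Green's-function integral can be carried out in Fourier space, where the free propagator is simply multiplication by exp(−ik²t/2):

```python
import numpy as np

# Free-particle Schrödinger evolution in units hbar = m = 1 (assumption).
# In Fourier space the free Green's function acts as multiplication by
# exp(-i k^2 t / 2), so one FFT round trip propagates psi(x, 0) to psi(x, t).
n, L = 2048, 40.0
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

psi0 = np.exp(-x**2) * np.exp(1j * 5 * x)       # Gaussian packet, momentum ~ 5
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)   # normalize

t = 1.0
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))

print("norm preserved:", (np.abs(psi_t)**2).sum() * dx)     # ~ 1
print("packet center:", (x * np.abs(psi_t)**2).sum() * dx)  # ~ 5 * t = 5
```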
Remark: this analogy between quantum mechanics and diffusion is a purely formal one. Physically, the evolution of the wave function satisfying Schrödinger's equation might have an origin other than diffusion.
Thermal diffusivity in polymers
A direct practical application of the heat equation, in conjunction with Fourier theory, in spherical coordinates, is the prediction of thermal transfer profiles and the measurement of the thermal diffusivity in polymers (Unsworth and Duarte). This dual theoretical-experimental method is applicable to rubber, various other polymeric materials of practical interest, and microfluids. These authors derived an expression for the temperature at the center of a sphere TC
where T0 is the initial temperature of the sphere and TS the temperature at the surface of the sphere, of radius L. This equation has also found applications in protein energy transfer and thermal modeling in biophysics.
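A short sketch evaluating the classical separation-of-variables series for the center temperature of a sphere of radius L whose surface is held at TS; the identification with the expression referred to above is an assumption, and all numerical values are illustrative:

```python
import numpy as np

# Center temperature of a sphere of radius L, uniform initial temperature T0,
# surface held at TS. Classical series solution (assumed form):
#   (TC - TS) / (T0 - TS) = 2 * sum_{n>=1} (-1)^(n+1) exp(-n^2 pi^2 alpha t / L^2)
def center_temperature(t, T0, TS, alpha, L, n_terms=200):
    n = np.arange(1, n_terms + 1)
    series = np.sum((-1.0) ** (n + 1) * np.exp(-n**2 * np.pi**2 * alpha * t / L**2))
    return TS + (T0 - TS) * 2.0 * series

# Illustrative numbers (assumptions): a 1 cm rubber sphere, alpha ~ 1e-7 m^2/s.
for t in [10.0, 100.0, 1000.0]:
    print(t, "s:", center_temperature(t, T0=20.0, TS=100.0, alpha=1e-7, L=0.01))
```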
Further applications
The heat equation arises in the modeling of a number of phenomena and is often used in financial mathematics in the modeling of options. The famous Black–Scholes option pricing model's differential equation can be transformed into the heat equation, allowing relatively easy solutions from a familiar body of mathematics. Many of the extensions to the simple option models do not have closed form solutions and thus must be solved numerically to obtain a modeled option price. The equation describing pressure diffusion in a porous medium is identical in form to the heat equation. Diffusion problems dealing with Dirichlet, Neumann and Robin boundary conditions have closed form analytic solutions (Thambynayagam 2011). The heat equation is also widely used in image analysis (Perona & Malik 1990) and in machine learning as the driving theory behind scale-space or graph Laplacian methods. The heat equation can be efficiently solved numerically using the implicit Crank–Nicolson method of Crank & Nicolson (1947). This method can be extended to many of the models with no closed form solution; see for instance Wilmott, Howison & Dewynne (1995).
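A minimal sketch of the Crank–Nicolson scheme for the one-dimensional heat equation (grid, time step, and initial data are illustrative assumptions): the update averages the explicit and implicit second-difference operators, which is what makes the method unconditionally stable and second-order accurate in time.

```python
import numpy as np

# Crank-Nicolson for u_t = k u_xx on [0, 1], u(0) = u(1) = 0:
# (I - r/2 A) u^{n+1} = (I + r/2 A) u^n, with A the second-difference matrix.
k, nx, nt, T = 1.0, 101, 200, 0.1
x = np.linspace(0.0, 1.0, nx)
dx, dt = x[1] - x[0], T / nt
r = k * dt / dx**2

m = nx - 2                                      # interior points
A = -2 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
lhs = np.eye(m) - 0.5 * r * A
rhs = np.eye(m) + 0.5 * r * A

u = np.sin(np.pi * x)                           # initial data (assumption)
for _ in range(nt):
    u[1:-1] = np.linalg.solve(lhs, rhs @ u[1:-1])

exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * k * T)
print("max error vs exact mode:", np.abs(u - exact).max())
```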
An abstract form of heat equation on manifolds provides a major approach to the Atiyah–Singer index theorem, and has led to much further work on heat equations in Riemannian geometry.
1. ^ Here we are assuming that the material has constant mass density and heat capacity through space as well as time, although generalizations are given below.
2. ^ In higher dimensions, the divergence theorem is used instead.
3. ^ Stojanovic, Srdjan (2003), "Uniqueness for heat PDE with exponential growth at infinity", Computational Financial Mathematics using MATHEMATICA®: Optimal Trading in Stocks and Options, Springer, pp. 112–114, ISBN 9780817641979.
4. ^ MathWorld: the porous medium equation and other related models have solutions with finite wave propagation speed.
5. ^ Juan Luis Vazquez (2006-12-28), The Porous Medium Equation: Mathematical Theory, Oxford University Press, USA, ISBN 0-19-856903-3
6. ^ Note that the units of u must be selected in a manner compatible with those of q. Thus instead of being for thermodynamic temperature (kelvin, K), units of u should be J/L.
7. ^ The Green's Function Library contains a variety of fundamental solutions to the heat equation.
• Crank, J.; Nicolson, P. (1947), "A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type", Proceedings of the Cambridge Philosophical Society, 43: 50–67, Bibcode:1947PCPS...43...50C, doi:10.1017/S0305004100023197
• Einstein, Albert (1905), "Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen", Annalen der Physik, 322 (8): 549–560, Bibcode:1905AnP...322..549E, doi:10.1002/andp.19053220806
• Evans, L.C. (1998), Partial Differential Equations, American Mathematical Society, ISBN 0-8218-0772-2
• Cole, K.D.; Beck, J.V.; Haji-Sheikh, A.; Litkouhi, B. (2011), Heat Conduction Using Green's Functions (2nd ed.), CRC Press, ISBN 978-1-43-981354-6
• John, Fritz (1991), Partial Differential Equations (4th ed.), Springer, ISBN 978-0-387-90609-6
• Wilmott, P.; Howison, S.; Dewynne, J. (1995), The Mathematics of Financial Derivatives: A Student Introduction, Cambridge University Press
• Thambynayagam, R. K. M. (2011), The Diffusion Handbook: Applied Solutions for Engineers, McGraw-Hill Professional, ISBN 978-0-07-175184-1
• Perona, P; Malik, J. (1990), "Scale-Space and Edge Detection Using Anisotropic Diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, 12 (7): 629–639, doi:10.1109/34.56205
• Unsworth, J.; Duarte, F. J. (1979), "Heat diffusion in a solid sphere and Fourier theory", Am. J. Phys., 47 (11): 981–983, Bibcode:1979AmJPh..47..981U, doi:10.1119/1.11601
The Many Worlds Interpretation of Quantum Mechanics and the Emperor's New Clothes
Encountering the Many Worlds Interpretation
Several years ago, I looked into the Many Worlds Interpretation (MWI) of quantum mechanics and concluded that it was not on the right track. It seemed to be creating more conceptual and technical problems than it solved. However, I frequently come across mention of it in the physics literature and in documentaries. Several leading scientists refer to it as a ‘viable’ alternative to the canonical Copenhagen Interpretation (CI); some even calling it the ‘preferred’ interpretation. So, I recently decided to take another look at the MWI. Perhaps there was something I missed, or something important that I did not understand on the first go-around.
My initial instincts have been validated. Reading about the MWI, including papers by its proponents as well as by its detractors, reminded me of the Hans Christian Andersen story The Emperor's New Clothes. The Emperor and his ministers believe the hype about a fabric that is allegedly invisible to anyone who is unfit for his position. They pretend that they can see the fabric so as not to feel left out. While the Emperor is parading naked through the town, believing that he is wearing the finest suit of clothes, a naïve young boy blurts out that the Emperor is naked! Perhaps I can be that naïve young boy when it comes to untestable ideas like the MWI. I may not be young, but bear with me.
So what is the Many Worlds Interpretation?
As advertised, the main advantage of the MWI is that it solves the measurement problem. I discussed the measurement problem in two previous posts: Quantum Weirdness: The unbridled ability of quantum physics to shock us and Contrary to Popular Belief, Einstein Was Not Mistaken About Quantum Mechanics. The measurement problem results from the apparent need for two distinct processes for the evolution of the state vector: (1) continuous and deterministic evolution according to the Schrödinger equation when no one is looking, followed by (2) spontaneous non-unitary evolution, or collapse, of the state vector upon measurement of an observable. What constitutes a measurement and the dynamics of wave function collapse are not defined in the CI. Additionally, special status is assigned to an intelligent observer who is treated as being outside the quantum system.
As an added bonus, proponents of MWI claim that it enables independent derivation of quantum probability distributions without assuming the Born rule. The Born rule for computing the probability of potential outcomes of a quantum event is an additional postulate of canonical quantum mechanics. According to this rule (which has enjoyed phenomenal experimental verification time and time again throughout the past roughly ninety years), the probability for each potential outcome to become the realized outcome is given by the amplitude squared from the applicable terms in the state vector.
Hugh Everett developed the relative state formulation in his dissertation and his subsequent publication of “Relative State” Formulation of Quantum Mechanics (also available at this link). It was later given new life by Bryce DeWitt in 1970, with his work applying rational decision theory and game theory to quantum mechanics; see Quantum Mechanics and reality. Since then, dozens of papers have been written attempting to patch holes in the theory, or to take it apart.
See the recent article by Sabine Hossenfelder, “The Multiverse is not a paradigm and it’s not shifting anything” for another perspective on multiverses in general.
The MWI hypothesis avoids the measurement problem by assuming that wave function collapse never happens. A single result never emerges from an interaction or quantum measurement. Instead, all possibilities are realized. Each possibility is manifested in a new branching universe. With each observation, measurement, or interaction, the observer state branches into a number of different states, each on a separate branch of a multiverse. All branches exist simultaneously and each branch is ‘equally real’. All potential outcomes are realized, regardless of how small their probabilities.
What is wrong with the Many Worlds Interpretation?
If you have read my earlier post Three Roads to What Lies Beyond Quantum Mechanics, you have already glimpsed my discontent with MWI. You will find statements in the literature that claim MWI solves the paradoxes of the CI, and that it derives quantum probabilities without the use of an ad hoc assumption (as in the case of the Born rule in the CI). Hugh Everett's main goals when he gave birth to the 'relative state formulation', which subsequently became known as the MWI, were to get rid of non-unitary wave function collapse and to relegate the observer to just another part of the quantum system. Unfortunately, the MWI and its many variants do not live up to the product's claims.
The MWI hypothesis requires an unimaginably large, perhaps infinite, number of universes, each spawned essentially instantaneously in a fully evolved state from its parent. Your present universe is constantly branching, sprouting multiple universes at a fantastic rate. Each new universe is identical to its parent IN EVERY WAY, except for the record of a single quantum event. I don't just mean that in one you are the Queen or King of your senior prom, and in another you decide not to run for prom royalty. Every quantum interaction, every quantum measurement, a countless infinity of which happen every day in what we conventionally call the universe, leads to multiple new universes.
According to Bryce DeWitt in Quantum Mechanics and reality, "every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself."
Cloning and quantum teleportation Star Trek-style should be a breeze if quantum mechanics allows cloning the entire universe a countless number of times each second! This may make for interesting and fun science fiction, but without testable predictions it is not physics.
This multiverse evolves in a continuous and deterministic way. The apparent randomness that an observer in a particular universe (branch) perceives is in his/her mind; a consequence of the particular branch he/she finds him/herself in. The emergence of macroscopic uniqueness, a consequence of state vector collapse in the CI, is just an illusion in the MWI. That sounds like progress, right? But wait.
There’s More
The different branches are incoherent; they do not interfere with each other and observers in one branch cannot detect the existence of any of the other branches (this is the “no-communication” hypothesis). The wave function collapse hypothesis has been replaced by the no-communication hypothesis. Quantum decoherence has been used to justify and explain the no-communication hypothesis, with varying success. But, it has also been used to justify and explain the wave function collapse hypothesis. So there is nothing gained here by postulating a countless number of universes branching out from all of the interactions occurring throughout our universe.
As John Bell stated (while writing about the MWI; see p. 133 of Speakable and Unspeakable in Quantum Mechanics):
“Now it seems to me that this multiplication of universes is extravagant, and serves no real purpose in the theory, and can simply be dropped without repercussions.”
Probabilities in the Many Worlds Interpretation
Everett sets out to show that the Born probability rule can be derived from within his model, as opposed to having to assume it. He does this by assuming that the square of the amplitudes (from the state vector, same values that the Born rule uses) represent the ‘measure’ that should be assigned to each of the branches. When an observer repeats the same experiment a large number of times, multiple branches appear corresponding to each of the possible outcomes for each performance of the experiment. A particular observer will traverse a particular series of branches out of all the possible combinations of outcomes from all the trials. By applying his weighting scheme, Everett shows that, in most cases, the observer is part of a branch where the relative frequency of the observed results agrees with the Born rule.
What exactly does it mean for different branches to have different weights, if each and every branch is ‘equally real’? Are we to assume that the number of realizations of branches associated with a particular outcome of a particular measurement or interaction is proportional to the branch weight? You may naively think that the probabilities of various outcomes should be related to the number of branches with that outcome (a simple counting measure). What would then happen if the probability was an irrational number? Combinatorial methods fail. Even if you could use simple combinatorial methods, many observers would see outcome distributions that conflict with the Born rule. The Born probability rule has been validated in countless experiments over the past 87 years. Why have we never witnessed a deviation from it in any of the uncountable combinations of branches we have traversed to get where we are today?
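To make the contrast concrete, here is a small computational illustration (a toy model of my own, not Everett's construction): for N repeated binary measurements with Born probabilities p and 1 − p, the relative frequency seen by a 'typical' observer differs sharply depending on whether branches are weighted equally (simple counting) or by amplitude squared.

```python
import numpy as np
from math import comb

# N repeated binary measurements with Born-rule outcome probabilities p and
# 1 - p. Branches are labeled by j, the number of "1" outcomes among the N.
N, p = 100, 0.9
j = np.arange(N + 1)
n_branches = np.array([comb(N, jj) for jj in j], dtype=float)

count_measure = n_branches / n_branches.sum()          # every branch equal
born_measure = n_branches * p**j * (1 - p)**(N - j)    # amplitude-squared weight

print("typical frequency, counting measure:", (j * count_measure).sum() / N)  # ~0.5
print("typical frequency, Born measure:    ", (j * born_measure).sum() / N)   # ~0.9
```

Under simple branch counting, the typical observer records a frequency near 1/2 no matter what p is, in conflict with the Born rule.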
In Everett’s theorem, the observer is considered as a purely physical system. This is a central part of his relative state formulation. The observer is just one subsystem in the overall system under consideration. Once one state is chosen for one part of the overall system, then the rest of the system is in a relative state; state X given that the one subsystem is in state Y. This was, initially, an advantage of the MWI compared to the CI. However, attempts to patch some of the holes in the theory have relied heavily on rational decision theory and game theory, thrusting a conscious observer back into the spotlight.
Throwing in Rational Decision Theory and Game Theory
Unfortunately, Everett’s approach to deriving the Born rule has been taken apart due to its use of circular reasoning. David Deutsch used decision theory and game theory to derive the Born rule; see Quantum Theory of Probability and Decisions. He demonstrated that if the amplitude squared measure is applied to each branch, then this value is also the probability measure for those branches. He did this by arguing that it represents the preferences of a rational agent. He considered the behavior of a rational decision maker who is making decisions about future quantum measurements. By rational, he meant that the decision maker’s preferences must be transitive: if he/she prefers A to B, and B to C, then he/she must also prefer A to C. (On a side note – many psychology studies have shown that personal preferences of so-called rational agents in the macro world are often not transitive).
According to Deutsch, if a rational decision maker believes all of quantum theory with the exception of assuming a probability postulate, he/she necessarily will make decisions (behave) as if the canonical probability rule is true. I am not an expert on decision theory, but it seems to me that the strategy chosen by Deutsch’s rational observer is not unique; it just happens to be the one that correlates with the desired end point – the Born probability rule when the amplitude squared values are used as branch weights. Additionally, if you accept Deutsch’s reasoning, methodology, and assumptions, I should think his results could equally well be used to demonstrate why the Born probability rule works in the CI, as well as in the MWI.
Attempts to Make it Consistent
Many attempts to formulate a consistent and defensible version of Everett’s initial ideas have been discussed in the literature since Deutsch’s work. Adrian Kent addresses many of them in One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation. Kent points out some of the inconsistencies and contradictions that these attempts fall victim to, either when compared to each other or within themselves. Given that every potential outcome is actually realized in a branch, regardless of likelihood, a rather tortured path has to be taken to explain the meaning of probability and uncertainty when applying decision theory. Additionally, Kent is concerned by the lack of uniqueness in the assumptions and conclusions that can be made about the so-called rational decision-maker. To apply decision theory or game theory reasoning to quantum mechanical events seems rather surreal to me. But regardless of whether you take the approach seriously, there is little gained from it, unless you want to get extremely metaphysical about the role of consciousness. Which I do not.
So Where Does This Leave Us With Respect to the Many Worlds Interpretation?
A comment of Alvan Stewart, taken out of context, is applicable to the Many Worlds Interpretation:
“…no matter how high you pile considerations upon nothing, and extend the boundaries of nothing, to nothing it must come at last”
From Writings and Speeches of Alvan Stewart on Slavery, edited by Luther Rawson Marsh (Alvan Stewart, 1790–1849; Luther Rawson Marsh, ed., 1813–1902).
The MWI does not deliver on its promises. In particular, it does not solve the measurement problem unless you ignore the extra baggage that comes with the theory, such as the no communication hypothesis, the song and dance concerning rational decision theory, and the surreal role of the observer. Nonetheless, the idea of countless multiple universes has mesmerized popular culture and theoretical physics. The image of an infinite number of copies of ourselves, with slight variations in each universe, is quite tempting. Some people claim that multiverses must be real because we are getting hints of one from multiple theories, including superstring theory, inflationary cosmology, and anthropic reasoning. But each of these predictions is perched upon a mountain of assumptions. And each posits a different cause for the multiverse. It is not at all clear to me that satisfying the multiverse hypothesis of one model would necessarily satisfy that of the others.
The idea that the MWI is the only viable alternative to the CI is a myth. Other viable alternatives already exist; and it is premature to assume no one will ever discover another. These alternatives, such as de Broglie-Bohm mechanics and the Transactional Interpretation, need more work. But at the very least, they serve as proof of concept that we should not be so eager to believe any wild idea offered to us, without evidence. So, if you come across someone endorsing the Many Worlds Interpretation of quantum mechanics, remember the story of the Emperor’s New Clothes. Let them know that you are aware the emperor is naked. MWI does not provide a unique and independent derivation of probability, it does not remove the special treatment of the observer, and it replaces the collapse hypothesis with run-away multiverse branching and the no-communication hypothesis.
My upcoming posts will include:
• Discussion of hydrodynamic quantum analogues. These experiments demonstrate how phenomena and probability distributions normally associated only with the quantum world can be produced by macroscopic systems and classical dynamics.
• So-called weak measurements that are allowing physicists to directly measure the quantum wave function itself, and monitor its evolution.
• Introduction to de Broglie-Bohm mechanics. Incidentally, wave function collapse does not occur in de Broglie-Bohm mechanics, and it does not require an infinite number of universes (just empty waves…).
huelsnitz moderator
Hi David,
I have not read the papers or the book you refer to. But the Price paper is on my stack of things to get to. Lately, I have been pondering the significance of pre-selection and post-selection in weak measurements; and also how the Transactional Interpretation and Two-State Vector Formalism specify forward and backward evolving state vectors. It is as if nature always specifies the initial and final states before proceeding with a quantum "event" – we get a whiff of what is going on in weak measurements and it is what TSVF or TI stumbled into.
For indeterminism – this has always bothered me, as well. But, it is what protects non-locality from leading to a violation of causality (i.e. why non-locality does not allow faster-than-light transfer of information). So, it seems to be indispensable, at least for now.
For Gleason – what I recall reading about it is a general lack of enthusiasm for it because it does not explain why nature would use that particular probability measure (why the state vector, and how does nature convert that to the probability for outcomes?)
From: David S
Thanks for your reply.
Firstly: I don't mind the time symmetric philosophy, but I personally find it extremely hard to believe that there is fundamental indeterminism in the fabric of nature, as is required in the transactional interpretation.
For some reason it just seems a bit "cheap" to me. I know Huw Price has written a recent paper on time symmetry in QM foundations, but I haven't yet read it. Perhaps you have?
Thanks for the links, it refreshed my memory a bit. I have read or skimmed them a couple of years ago, but I am still very much undecided on the issue.
I have been in contact with Zurek in order to get more info, but he is extremely vague. In some comments he seems to be an MWI'er; in others he seems to adopt some sort of "information is fundamental" view.
David Wallace argues that the problem of preferred basis is solved through what he calls "emergent functionalism". I wonder if you have read his Multiverse book (2012)?
Additionally, there was a paper last year that resurrected the problem, but nothing seems to have come of it, so it seems most people consider the preferred basis issue to be solved by Zurek, Zeh and Wallace.
As for the Born rule issue, yes it's a huge controversy, but quite a few seem to have accepted it despite all the papers arguing against it.
And besides, isn't Gleason's Theorem enough anyway? At least that seems to be the argument I hear most.
All the best,
huelsnitz moderator
Hi David,
Thanks for reading the article and your feedback.
My list of articles that I want to write includes expanding upon where I left off in the Decoherence article and in the Transactional Interpretation (TI) article. I don't know when I will get around to these, but when I do, it may address some of your concerns.
People (Zurek in particular) have tried to use the decoherence program to show how a preferred basis could naturally arise. I am not convinced that their approach is correct, but I am also not certain that it is wrong. I tend to think that a time-symmetric interpretation of quantum mechanics is necessary (such as TI or the Two-State Vector Formalism, or something along those lines). In a time-symmetric interpretation, the initial and final conditions (i.e. wave functions) are specified. Perhaps a preferred basis is a natural consequence of how the initial and final states are (physically) selected, and nothing more will need to be said about it.
You might enjoy chewing on these papers about decoherence:
Concerning the MWI: it's true that many smart people claim to like it. However, the reasons I often hear them cite are either (1) that it is the only alternative to the Copenhagen Interpretation, or (2) that it provides a derivation or proof of the Born rule. Both of those claims are false. There are other viable interpretations, such as de Broglie-Bohm pilot wave theory or the Transactional Interpretation. And, to attempt to prove the Born rule with MWI, people have to make circular assumptions, or poorly defined assumptions, or assumptions that are quite a stretch (like applying rational decision theory to quantum physics). So, I think some of these people heard or casually read some erroneous claim about MWI and believed it without question.
In the Transactional Interpretation, the Born rule arises naturally from the theoretical formalism: the link between amplitude squared and likelihood.
-----Original Message-----
Name: David
Comment: Hello,
Just came across this site and read the Many Worlds article.
I was wondering if you could do another one where you pick apart the Born Rule issue and also the Preferred basis issue?
Because the feeling I was left with after reading the article was basically that you reject MWI because it's mindboggling and that you are unsure of the Born Rule issue.
I am sure that's not the case, at least I hope that's not the case.
As a fellow realist who also has a gut feeling that there is a deeper theory, I still haven't been able to reject MWI completely, because a lot of smart people adhere to it.
I know the Born Rule issue is a bit controversial, but is that really all there is against it? Has the preferred basis issue been solved thoroughly in your opinion?
Monday, July 15, 2013
Bohmian mechanics, a ludicrous caricature of Nature
A constraint that defines Bohmian mechanics is simple: it should be a classical theory that emulates quantum mechanics as well as it can. The champions of the Bohmian theory know that getting the same predictions as quantum mechanics is the maximum goal they may dream about – they can never beat quantum mechanics – and they sort of realize that even this tie is too much to ask in general. Most of the Bohmian advocates seem to know that their theory can't be accurate, especially because of its fundamental conflict with relativity – but they don't seem to care. The fact that the Bohmian mechanics agrees with their fully discredited preconception that Nature is fundamentally classical is more important for them than the (in)accuracy of the predictions extracted from their pet theory.
It's straightforward to explain why it's possible to design a classical theory that parrots quantum mechanics when it comes to certain questions.
Bohmian mechanics is at least vaguely defensible in the non-relativistic quantum mechanical models only; in more general theories, it collapses completely. How does it rebuild non-relativistic quantum mechanics for one particle, for example?
Proper quantum mechanics of this system may be written down in Schrödinger's picture that dictates the following time evolution to the wave function:\[
i\hbar\frac{\partial}{\partial t}\psi(q,t)=-\sum_{j=1}^{N}\frac{\hbar^2}{2m_j}\nabla_j^2\psi(q,t) + V(q)\psi(q,t)
\] The way this wave evolves in agreement with the equation above contains all the "mathematical beef" of quantum mechanics for the given system, and to get the right numbers, any classical caricature of quantum mechanics simply has to contain some objects that are pretty much equivalent to \(\psi(q,t)\). These objects are then assigned totally different, wrong interpretations in the caricatures but they must be there and they must evolve according to the same Schrödinger's equation.
Bohmian mechanics buys \(\psi(q,t)\) and incorrectly interprets it as a classical wave – a field that has objective values and is in principle measurable. Of course, we know from quantum mechanics as well as experiments that the value of the wave function simply shouldn't be and isn't measurable in a single repetition of an experiment. So the Bohmian apologists must also invent convoluted mechanisms to make the wave unmeasurable – because it is unmeasurable according to the experiments – despite the fact that the wave function is fundamentally measurable in their theory.
Bohmian Rhapsody, via Dilaton.
Is this the real life? Is this just fantasy?
Caught by the guiding wave. No escape from reality.
Open your eyes. Look up to the skies and see:
I'm just [a] state vector, I need no images.
Because I'm easy come, easy go.
A little high, little low.
Anyway the [pilot] wave blows, doesn't really matter to me, to me.
The pilot-wave theory adopts \(\psi(q,t)\) as an objective classical wave – which it gives a new name, the "guiding wave" or "pilot wave" – but in order to agree with the fact that particles may be observed at sharp locations despite the fuzziness of the wave functions associated with them, they must add some additional degrees of freedom: the actual classical position of the particle. The defining philosophy of Bohmian mechanics is that the actual, classical position of the particle is "guided" by a function of the classical field emulating the wave function so that the probability distribution for the particle's positions remains what it should be according to quantum mechanics. For example, the laws that guide the actual classical particle must be such that they repel the particle from the interference minima in a double-slit experiment:
The right end of the picture (the photographic plate) shows denser and less dense regions, the interference maxima and minima.
Can you find the appropriate rules for one non-relativistic spinless quantum particle that is able to do it in a way that imitates quantum mechanics? You bet. All the tools are available in conventional quantum mechanics for this system. Recall that in quantum mechanics, \(\rho=|\psi(q,t)|^2\) is the probability density that the particle is sitting near location \(q\) at time \(t\). But quantum mechanics also allows you to define the probability current\[
\bold j = \frac{1}{m} \mathrm{Re}\left ( \psi^*\bold{\hat{p}}\psi \right )
\] Note that it is again sesquilinear (bilinear with one star) in the wave function. We act on the wave function by the momentum operator \(\bold{\hat{p}}=-i\hbar\nabla\), multiply the result by \(\psi^*\) just like when we calculated the probability density, take the real part, and divide it by the mass \(m\). You see that it only differs from the formula for the probability density by the extra operator \(\bold{\hat{p}}/m\), the operator of the velocity \(\bold{\hat v}\), inserted in the middle. The real part could have been added to the probability density as well because it was real to start with.
At any rate, if you define the probability density and the probability current correctly, they obey the continuity equation\[
\frac{\partial \rho}{\partial t} + \bold \nabla \cdot \bold j = 0.
\] The divergence of the probability current exactly agrees with the decrease of the probability density in the given region. It means that the probability current measures how the probability has to flow into/from a given infinitesimal volume if you want the probability density to change just like it should according to Schrödinger's equation.
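A hedged numerical check of this continuity equation for a concrete state (a free Gaussian packet in one dimension, with ħ = m = 1 as an assumption): compute ρ and j at two nearby times and compare ∂ρ/∂t with −∇·j.

```python
import numpy as np

# Check d(rho)/dt + d(j)/dx = 0 for a free Gaussian packet (hbar = m = 1).
n, L = 4096, 80.0
dx = L / n
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)

def evolve(psi, t):
    # Exact free evolution: multiply by the free propagator in Fourier space.
    return np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi))

def rho_and_j(psi):
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))      # spectral d(psi)/dx
    # j = Re( psi* (-i d/dx) psi ) / m, with m = 1
    return np.abs(psi)**2, np.real(np.conj(psi) * (-1j) * dpsi)

psi0 = np.exp(-x**2 + 2j * x)
psi0 /= np.sqrt((np.abs(psi0)**2).sum() * dx)

t, eps = 0.5, 1e-4
rho1, j1 = rho_and_j(evolve(psi0, t - eps))
rho2, j2 = rho_and_j(evolve(psi0, t + eps))
drho_dt = (rho2 - rho1) / (2 * eps)
dj_dx = np.fft.ifft(1j * k * np.fft.fft(0.5 * (j1 + j2))).real

print("max |drho/dt + dj/dx| =", np.abs(drho_dt + dj_dx).max())  # ~ 0
```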
Now it's easy to realize that if you define a classical "velocity field"\[
\bold v(\bold q, t) = \frac{\bold j(\bold q, t)}{\rho(\bold q, t)},
\] it will be very useful for emulating quantum mechanics. It's not hard to prove that if you define Bohmian mechanics as the "classicalized" wave function together with a classical position \(\bold q(t)\) that evolves according to the "guiding equation"\[
\frac{\mathrm{d}\bold q(t)}{\mathrm{d}t} = \bold v(\bold q(t), t),
\] the trajectories of the classical particles will be repelled from the interference minima, attracted to the interference maxima, and will obey a more specific rule: If you imagine that the particles in the initial state are distributed according to the probability distribution given by \(\rho(\bold q, t)\), it will be true for the final state, too.
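A minimal sketch of this construction for a toy 'two-slit' superposition of two free Gaussian packets (ħ = m = 1; all parameters are illustrative assumptions): the wave function is evolved exactly in Fourier space, and the guiding equation is integrated for an ensemble of particles drawn from ρ at the initial time.

```python
import numpy as np

# Bohmian trajectories for a superposition of two free Gaussian packets
# (hbar = m = 1, all parameters illustrative). The velocity field is
# v = j / rho, evaluated on a grid and interpolated at particle positions.
n, L = 4096, 80.0
x = np.linspace(-L/2, L/2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L/n)

psi0 = np.exp(-(x - 3)**2) + np.exp(-(x + 3)**2)    # two "slits"
fft0 = np.fft.fft(psi0)

def velocity(t):
    psi = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * fft0)   # exact evolution
    dpsi = np.fft.ifft(1j * k * np.fft.fft(psi))           # spectral derivative
    rho = np.abs(psi)**2 + 1e-30                           # avoid division by zero
    return np.real(np.conj(psi) * (-1j) * dpsi) / rho, rho

rng = np.random.default_rng(1)
_, rho0 = velocity(0.0)
q = rng.choice(x, size=2000, p=rho0 / rho0.sum())   # sample q(0) from rho

dt, nt = 0.002, 2500
for step in range(nt):
    v, _ = velocity(step * dt)
    q += dt * np.interp(q, x, v)   # crude Euler step of dq/dt = v(q, t)

# The final positions cluster at interference maxima and avoid the minima;
# by equivariance their histogram matches |psi(x, t)|^2 at the final time.
print("final spread of trajectories:", q.std())
```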
This trick may be generalized to the case of \(N\) non-relativistic particles. In this case, the wave function \(\psi\) becomes a classical wave that is a function of the \(3N\)-dimensional configuration space. This configuration space is larger than the ordinary space and is "multi-local" and because we have this "multi-local" old-fashioned classical field, the theory becomes explicitly non-local and a violation of the Lorentz symmetry, at least in principle, is inevitable.
I would like to emphasize that it's no surprise at all that it's possible to find the equation that evolves the probability distribution in the right way. Imagine that you start with a wave function \(\psi(\bold q)\) at some time \(t_0\). Throw a trillion dots – particles – into the space, distributed according to \(\rho = |\psi|^2\). Do the same thing for the final moment \(t_1\) when the wave function is different. You will have two configurations of trillions of particles. It's not shocking that you may "connect the dots" from the initial state to the final state in some way.
A way that is simple enough, one based on the probability current and described above, gives you one of the solutions. But it's not the only solution. In reality, the "initial dots" could be connected with the "final dots" in infinitely many ways (well, a "mere" trillion factorial if you only have a trillion dots). In the continuous language, you could e.g. make the particles move along spirals inside the cylinders that surround the interference maxima. Is one way to connect the dots better than others?
Of course, it's not. All of them are equally good. Quantum mechanics commands you to learn something about the initial state – some wave function or density matrix that encode the initial probability distribution – and it allows you to predict the probabilities for the final state. But it doesn't tell you which of the initial particles is connected with which final particle, i.e. how to connect the dots. It doesn't inform you about any preferred classical trajectory that connects them (and Feynman's approach orders you to sum over all trajectories). If you could actually "measure" this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete.
However, it's totally obvious that there's no way to measure the trajectories or permutations inside. The particles just don't have well-defined, in principle measurable trajectories between the measurements for the usual Heisenberg uncertainty principle-based reasons. If you tried to measure the trajectory before the final measurement, you would change the experiment and destroy or damage the final interference pattern. So all the precise lines on the "caricature of the double-slit experiment"
are pure fantasy. They're just crutches for the people who need some specific picture of the intermediate states to be drawn. But the specific picture we drew is in no way better than infinitely many other pictures we could draw that would predict the same interference pattern, the same probability distributions for the final state. Everything we added because we wanted the physical system to have objective properties prior to the measurement – because we're bigots who can't accept the fact that classical physics has died – is unphysical. The added value is purely negative. Everything we added to get from proper quantum mechanics to Bohmian mechanics is rubbish. And many things we're forced to lose when we switch from quantum mechanics to Bohmian mechanics are essential.
Because the wave function has a probabilistic interpretation in proper quantum mechanics (it is a ready-to-cook meal from which one may quickly prepare various probability distributions by a calculation), it doesn't matter that it spreads. The spreading of the wave function doesn't make the world more fuzzy. It only makes our knowledge about the world more uncertain. But once we learn the answer to a question – e.g. about the position of a particle – the world fully regains its sharp character it boasted at the beginning. If you only know that the probability of 1,2,3,4,5,6 are 1/6 for some dice in Las Vegas, it doesn't mean that the dice became structureless balls or that the digits written on their sides have become fuzzy or mixed or smeared. It just means that we have one equally sharp cubic die but we just don't know its orientation in space. The uncertainty coming from quantum wave functions are analogous – they only differ from the "classical uncertainty" by their inevitability.
That's not the case of Bohmian mechanics. The wave function is interpreted as a classical field of a sort and it is objectively spreading. So something objective is being diluted all over the Universe. That's terrible because this objectively makes the Universe increasingly more fuzzy and bizarre. The useless parts of the guiding wave – the "classicalized" wave function – should be killed in some way because they became useless. But Bohmian mechanics doesn't imply anything of the sort. If you want to clean the garbage of the no-longer-needed branches of the wave function, you will have to add another independent contrived mechanism. Such a mechanism will be a new source of a violation of the Lorentz invariance.
(You also need a special mechanism that prepares the guiding wave in a certain initial state and one more mechanism that distributes the "actual particle" inside the appropriate distribution with the right odds because these two things don't follow from Bohmian mechanics as we have defined it above, either. Most of these things are ignored by the Bohmists. Note that with the right probabilistic interpretation – quantum mechanics directly connects the knowledge about the past with the knowledge about the future, without any new crutches in between – we don't need to invent any new mechanisms.)
I think that a sane, critically thinking person must be able to realize what he is doing if he is doing such things. He is drawing a ludicrous caricature of Nature – a physical system that is actually governed by the laws of proper quantum mechanics – that reproduces some properties of the correct, quantum theory. The project of drawing the caricature is motivated by the desire to defend a philosophical dogma that the world is fundamentally classical even though it is clearly not. If he has at least some conscience, he must feel analogously as if he were counterfeiting a $100 banknote. He must know that what he is producing isn't the "real thing"; it is just a forgery that can bring him greater personal benefits than the actual banknotes but that's where the advantages stop.
But every change from the proper quantum mechanics to the pilot-wave theory is clearly wrong – the "added value" is unquestionably negative. Because the Bohmists don't like the probabilistic character of the wave function, they turn it into a classical wave – the guiding wave. But a classical wave that spreads objectively makes the world ever more fuzzy. So one has to introduce new tricks to have a chance that this increasing fuzziness doesn't spoil the world. All these tricks – tricks that can't really ever be defined in such a way to imitate quantum mechanics completely accurately – have to be considered and added just in order to mask the fact that the wave function is simply not a classical field.
It's fair to say that the claim by quantum mechanics that the wave function is not an objectively real wave or field that can be in principle measured is something that we have proven by direct experiments. Attempts to pretend that the wave function is a classical wave are just attempts to mask the truth. I am confident that every Bohmist must ultimately realize it is so and he must be dishonest if he claims that his efforts are more justifiable than the efforts of creationists who are trying to obscure the explicit evidence in favor of evolution: they are exactly equally unjustifiable.
Moreover, it's sometimes being said or thought that the perfect emulation of quantum mechanics can be done. Because the invalidated dogma that Nature is fundamentally classical is holy for these bigots, they think that it should be done, too. But the truth is that it can't be done for a general physical system and for a general choice of observables we may measure in actual experiments described by general enough quantum theories.
Try to add the spin to a particle. If the logic of Bohmian mechanics – the wave function "is" a classical field and we should also add some classical values of a maximum set of commuting observables – were universally valid, it's clear that aside from the spinor-valued wave function \((c_{\rm up},c_{\rm down})\), we should also assume that Nature "objectively knows" about the classical bit of information that tells you whether the spin is "actually" up or down.
However, even the Bohmists realize that if every electron "objectively knew" whether its spin is up or down with respect to the \(z\)-axis, then the laws of physics would break the rotational symmetry because the \(z\)-axis would play a privileged role. Roughly speaking, the ferromagnets would always be oriented vertically, to mention an example. If the \(z\)-component of the classical angular momentum is quantized, it's totally obvious that the other components can't be quantized. A nonzero vector can't have integer (or half-integer) coordinates in each (rotated) coordinate system.
Because they sort of realize that the rotational symmetry holds exactly and the hypothesis that the classical value exists with respect to one axis would break the symmetry kind of maximally, they decide that the Bohmian rules must be "skipped" in the case of the spin – they just manually omit some degrees of freedom that should be there according to the general prescription of Bohmian mechanics and hope that the spin measurements are ultimately reduced to position measurements so that it doesn't hurt if some degrees of freedom are not doubled in the usual Bohmian way.
The reason why the case of the spin is obvious even to them is the fact that different components of the spin are non-commuting observables none of which is more "natural" than others. After all, they are exactly equally natural because they are related by the rotational symmetry.
While the spin is an obvious problem, the pathological character of Bohmian mechanics is much more general. Every (qubit-like) discrete information in quantum mechanics – information labeling a finite-dimensional Hilbert space – is incompatible with the Bohmian philosophy. Recall that Bohmian mechanics added "classical trajectories" \(\bold{\hat q}(t)\) and these coordinates were functions of time that evolved according to some differential equations. But that was only possible because the spectrum of the coordinates was continuous. If you think about observables with a discrete spectrum, it just doesn't work because they would have to "jump to a different, sharply separated discrete eigenvalue" at some points and there can't be any deterministic laws that would govern such jumps.
Quantum mechanics tells you that a quantum computer composed of a very large number of qubits may perfectly emulate any quantum system. But that's not the case in Bohmian mechanics. An arbitrarily large quantum computer is composed of qubits, e.g. many electron spins, and because the spin isn't accompanied by a classical bit, Bohmian mechanics is forced to say that an arbitrarily large quantum computer only contains the "classicalized" wave function but no additional classical information analogous to the classical trajectories. So for a quantum computer, the whole "redundant superstructure" (which is what Albert Einstein called these extra coordinates – he was a foe of the pilot-wave theory, despite his being a disbeliever in quantum mechanics) has to be omitted. This is quite an inconsistency in the Bohmian treatment of different quantum systems. The actual reason behind the inconsistency is clear, of course: some physical systems may be caricatured by the pilot-wave trick, others can't. But in Nature, there actually isn't any qualitative difference (in principle observable difference) between these two classes of situations.
I said that Bohmian mechanics doesn't allow you to consistently treat the particles' spin or any other discrete degrees of freedom, for that matter. But the inadequacy of Bohmian mechanics is much worse than that. It really doesn't allow you to correctly deal with most observables in general quantum systems, not even with observables with a continuous spectrum. I have discussed similar problems in Bohmists and the segregation of primitive and contextual observables four years ago.
The problem is that Bohmian mechanics forces you to choose some observables that "really exist" – are encoded in the objective extra coordinates that are supplemented to the "classicalized" wave function. However, quantum mechanics implies that other observables just can't have a well-defined value at the same moment – because they don't commute with the first ones, stupid. That also means that Bohmian mechanics can't have any answers to questions about the value of these observables.
The Bohmian trajectories in the picture above pretend that a particle has an objective position and an objective velocity. But what about the orbital angular momentum \(\bold{\hat L} = \bold{\hat q}\times \bold{\hat p}\)? A basic result of quantum mechanics is that the spectrum of \(\bold{\hat L}_z\) is discrete; the eigenvalues are integer multiples of \(\hbar\). Already this elementary fact in quantum mechanics – even non-relativistic quantum mechanics – is completely inaccessible to Bohmian mechanics. The cross product of the classical position and the classical momentum of the "added Bohmian trajectories" isn't quantized at all. It has really nothing to do with the angular momentum that can be measured.
And be sure that the measurement of the angular momentum is often – e.g. for electrons in atoms – much more natural and "fundamental" than the measurement of the particles' positions or momenta. It's because its eigenstates are much closer to the energy eigenstates and those are the most natural basis of a Hilbert space because they describe stationary – and therefore lasting – states. But such a direct measurement of the discrete orbital angular momentum can't be done in Bohmian mechanics. Instead, Bohmian mechanics tells you that you have to continue the evolution of the wave function according to the laws stolen from proper quantum mechanics up to the moment when you can actually convert the original measurement to a measurement of a location, and hope that Bohmian mechanics knows how to emulate the measurements of positions. It isn't quite the case, either, but even if it were the case, Bohmian mechanics is just bringing an amazing degree of inconsistency into the way different observables – different functions of the phase space – are treated. A sensible theory should treat all functions of the coordinates and momenta, i.e. all functions on the phase space, equally, following unified rules. Quantum mechanics obeys this criterion, Bohmian mechanics doesn't.
We could say that just like the solipsists say that their own mind is the only physical system that may be claimed to be self-aware, Bohmian mechanics remains silent and merely reproduces the (accurately emulated) quantum evolution up to the moment when macroscopic positions are apparently being measured (those are the "conscious events" that are supposed to replace quantum mechanics with something else). But in the real world, there's nothing special about the minds of the solipsists (except that they belong to the set of crazy people) and there's also nothing special about the positions of macroscopic objects in comparison with many other observables we may define.
In quantum mechanics, you may directly construct operators for the angular momenta and ask about their possible values, eigenvalues, and about the predicted probabilities that the measured value will be one or the other. It doesn't matter whether the angular momenta belong to large or small or conscious or unconscious objects. Quantum mechanics allows you to deal with all observables equally. In Bohmian mechanics, those things matter. Effectively, any measurement has to be continued up to the moment when it imprints itself into a position of a macroscopic object which Bohmian mechanics claims to reproduce correctly.
A totally new minefield for Bohmian mechanics is relativity. The minimum consistent relativistic theories of quantum particles are quantum field theories (QFTs). They include the spin; I have already discussed the Bohmian problems with the spin. But there are infinitely many similar problems. For example, you may choose many different bases of the QFT Hilbert space. They may be eigenstates of the occupation number operators; eigenstates of field operator distributions \(\hat \phi(\bold{x})\), and so on. It is not clear at all which of these observables are added as the "extra classical trajectories" to Bohmian mechanics. In fact, it is totally obvious that none of the choices will behave correctly in all the experiments that may test a quantum field theory. Also, you can't add many of them or all of them (e.g. both positions of particles and classical values of the fields) because it would be clearly undetermined which of these "added", mutually conflicting classical degrees of freedom defines the "actual reality" that decides about a measurement.
Sometimes, the value of the field at a given point may be measured, especially when the frequencies are low. So it would seem like you need to add a "preferred classical field configuration" to the Bohmian version of a QFT. However, especially for high frequencies, the quantum field manifests itself as a collection of particles so you may want to add the trajectories of the particles instead. Moreover, even if you represent a QFT as a system describing many particles, your Bohmian theory won't be able to deal with the basic and most universal processes that must exist in a QFT or any other relativistic quantum theory such as the pair creation of a particle and an antiparticle and their destruction.
If individual particles evolve according to the "guiding wave" equations we discussed at the beginning, it's simply infinitely unlikely (the probability refers to the selection of the initial positions from the distribution) that they will ever collide with one another. Two random lines in a 3D space simply don't intersect one another. But if they don't directly collide, it means that they can't annihilate! To allow the particles to annihilate (and be pair-created) with the (experimentally proven) nonzero probability, you would need to introduce a totally non-local extra dynamics that sometimes allows the particles to jump to a completely different place; or you would have to allow the annihilation of particle pairs that don't coincide in space. Any such extra mechanism would force you to change the original laws of physics in a way that would almost certainly contradict some other experiments because the unmodified quantum laws simply work and it was a healthy strategy for you to emulate them "perfectly" at the very beginning. Such modifications would especially contradict some experimental tests of relativity because these modifications are so horribly nonlocal.
So you have no chance to construct an operational Bohmian caricature of a quantum field theory. Needless to say, the problems become even more extreme once you switch to quantum gravity i.e. string theory because many more observables have a discrete spectrum, there are many more ways to choose the bases, the nonzero commutators of various observables are more important than ever before, and Bohmian mechanics just can't prosper in such general quantum situations. On one hand, quantum gravity i.e. string theory is just another quantum theory. On the other hand, it is "more quantum" than all the previous quantum theories simply because the quantum phenomena affect many more questions that could have been thought of in the classical way if you worked with simpler quantum mechanical theories (for example, the spacetime topology – especially the number of Einstein-Rosen bridges in the spacetime – can't even be assigned a linear operator in a quantum gravity theory, as Maldacena and Susskind argued).
The non-local fields, collapses, non-local jumps needed for particle annihilations, and other things represent an inevitable source of non-locality that can, in principle, send superluminal signals and that consequently contradicts the Lorentz symmetry of the special theory of relativity. There's no way out here. If you attempt to emulate a quantum field theory in this Bohmian way, you introduce lots of ludicrous gears and wheels – much like in the case of the luminiferous aether, they are gears and wheels that don't exist according to pretty much direct observations – and they must be finely adjusted to reproduce what quantum mechanics predicts (sometimes) without any adjustments whatsoever. Every new Bohmian gear or wheel you encounter generally breaks the Lorentz symmetry and makes the (wrong) prediction of a Lorentz violation and you will need to fine-tune infinitely many properties of these gears and wheels to restore the Lorentz invariance and other desirable properties of a physical theory (even a simple and fundamental thing such as the linearity of Schrödinger's equation is really totally unexplained in Bohmian mechanics and requires infinitely many adjustments to hold – while it may be derived from logical consistency in quantum mechanics). It's infinitely unlikely that they take the right values "naturally" so the theory is at least infinitely contrived. More likely, there's no way to adjust the gears and wheels to obtain relativistically invariant predictions at all.
I would say that we pretty much directly experimentally observe the fact that the observations obey the Lorentz symmetry; the wave function isn't an observable wave; and lots of other, totally universal and fundamental facts about the symmetries and the interpretation of the basic objects we use in physics. Bohmian mechanics is really trying to deny all these basic principles – it is trying to deny facts that may be pretty much directly extracted from experiments. It is in conflict with the most universal empirical data about reality collected in the 20th and 21st centuries. It wants to rape Nature.
A pilot-wave-like theory has to be extracted from a very large class of similar classical theories but infinitely many adjustments have to be made – a very special subclass has to be chosen – for the Bohmian theory to reproduce at least some predictions of quantum mechanics (to produce predictions that are at least approximately local, relativistic, rotationally invariant, unitary, linear etc.). But even if one succeeds and the Bohmian theory does reproduce the quantum predictions, we can't really say that it has made the correct predictions because it was sometimes infinitely fudged or adjusted to produce the predetermined goal. On the other hand, quantum mechanics in general and specific quantum mechanical theories in particular genuinely do predict certain facts, including some very general facts about Nature. If you search for theories within the rigid quantum mechanical framework, while obeying the general postulates, you may make many correct predictions or conclusions pretty much without any additional assumptions.
If you ask any of the hundreds of questions (Is the wave function in principle observable? Are observables with discrete spectra fundamentally less real than those with continuous spectra? Is there a way to send superluminal signals, at least in principle? And so on) in which proper quantum mechanics differs from Bohmian mechanics, the empirical evidence heavily favors quantum mechanics and Bohmian mechanics can only survive if you adjust tons of parameters to unnatural values (from the viewpoint of Bohmian-like theories) and hope that it's enough (which it's usually not).
In 2013, even more so than in 1927, the pilot-wave theory is as indefensible as a flat Earth theory, geocentrism, the phlogiston, the luminiferous aether, or creationism. In all these cases, people are led to defend such a thing because some irrational dogmas are more important for them than any amount of evidence. That's what we usually refer to as bigotry.
And that's the memo.
snail feedback (24):
reader Dilaton said...
I don't know why this is, but I just can't prevent my mind from thinking of Bohemian Rhapsody whenever I see the words Bohmian mechanics ... :-D
Going to read this later; from scrolling through I see that it obviously contains a lot of nice physics I like.
reader Luboš Motl said...
LOL, I live in Bohemia so I would be distracted all the time if I shared the distractions with you. ;-)
reader Peter F. said...
Freakishly good song, that one. Thanks Dilaton for associating and Lumo for linking! :-)
reader lucretius said...
I agree with almost everything (modulo the epithets). I have always been allergic to Bohm not so much because he was, as you say, a “media savvy commie”, but because of all the “Eastern mystical gibberish” that his ideas are associated with and which made them so cool with the hippie crowd. In fact last year during a trip to Santa Cruz, California, we visited an (actually quite nice) “natural food” restaurant, where the walls and the menu were covered with “quantum mystical” deep thoughts that sounded like stuff out of “Wholeness and the Implicate Order”.
Having lived for over 20 years in Japan I am quite familiar with Buddhist thought and it is indeed true that one can find some curious parallels with modern physics – but not really more so than one can find in “De Rerum Natura”, authored by my namesake. I would say that the similarities are about 90% coincidence and 10% due to some basic structure of human thought and logic, and, of course, to the fact that both ancient Indian (and Greek) thought and high energy physics are concerned with the same basic issue: the origin and nature of everything.
But Bohm’s contribution to the confusion does not endear him to my heart or mind.
Neither, of course, does his being a commie - but if you really view this as a serious charge there will be few of his contemporaries among physicists left unscathed. I am not quite sure if this has any relation to his views of physics and metaphysics. I note however that the strongest supporter of Bohm’s view that I have met, Jean Bricmont, is a leftist raving lunatic and an associate of Chomsky (although he has to be given some credit for co-authoring Fashionable Nonsense).
I am glad that Bohm lived in Israel for only 2 years - otherwise I would have felt compelled to like him more.
I have never quite understood why physicists, at least in the early post-war years, were so much more left-wing than mathematicians. I can name quite a few strongly anti-communist mathematicians (above all John von Neumann and Stanislaw Ulam – my favourite of that generation – and many others) but among physicists Teller is perhaps the only one who comes to my mind, and the situation does not seem to be very different today.
Finally, it seems to me that, apart from the mysticism, the main thing that attracts people to ideas like Bohm’s is the psychological difficulty many face with accepting probability as something that is part of physical reality rather than a human device invented to cope with ignorance. This is true even of mathematicians who work in probability. I have collaborated with “pure” probabilists and have got the impression that only a minority believe in randomness as a feature of the “real world”, although everyone has heard that most quantum physicists claim otherwise. In fact, I find myself frequently hesitating about this issue, depending on my mood. For a mathematician probability theory is just a branch of measure theory and all its interesting results involve limit theorems - whose relation to the physical world appears dubious. It seems to me that many people still feel queasy about randomness in physical laws (the way Einstein felt) and this suggests that attempts to find non-probabilistic interpretations of quantum phenomena will continue to find supporters.
reader Mephisto said...
I must admit that religious mysticism is something of a hobby for me. I studied quite a lot (Zen Buddhism - Hui Hai, Huang Po, Tibetan Buddhism, Taoism, Christian mysticism - Meister Eckhart, Ramana Maharshi, Jiddu Krishnamurti). Jiddu Krishnamurti was a personal friend of David Bohm and I believe he influenced his views a lot. It is fair to say that Krishnamurti was never interested in physics; he was interested in human consciousness, so the holomovement is the sole creation of Bohm himself.
Although I would describe myself as a mystic, I am against mixing mysticism with physics. I read the book by Fritjof Capra (The Tao of Physics) and disliked it. Modern variants are various kinds of Akashic fields and stuff like that.
It is impossible and unwise to mix religion with science. For mystics and pantheists like me, science is just a part of divine reality.
reader Justin Glick said...
You seem to think that a particle has a precise position at all times, but we just don't know what it is. QM does not say this.
reader serene deputy said...
I think Lubos just wanted to say that you are still describing the same system, say, one structureless point-like particle, not a field or a ghost (otherwise your starting Hamiltonian and Schrödinger equation would have changed), but whose position becomes fundamentally undetermined to some extent.
reader NumCracker said...
Dear Lubos, excuse this off-topic comment: would there be a way to experimentally test other interpretations (not formulations) of QM and QFT, such as the Many-Worlds one? Has this ever been done? Thanks
reader Dilaton said...
Yeah, it would be fun if Lumo could adapt the whole song text to this TRF article ... :-D
reader Stephen Paul King said...
QM, IMHO, demands that Nature does not have a preferred observable.
reader Diana Z. said...
Yay, finally a good explanation without too much technical stuff.
I also have an OT request. Can you explain, in this same understandable style, why time goes slower for objects closer to the source of gravity? I looked, and I couldn't find a proper explanation. I hate it when something starts promising and then you see several pages of formulas that will give anyone a headache. With an added "it all stems from relativity, go read it" at the end. Ugh! It almost makes me believe the authors themselves don't have a clue.
reader Luboš Motl said...
Right, lucretius. I remember the story from Feynman's book that said, among other things, that some promoters of paranormal phenomena convinced professor David Bohm that they had supernatural abilities...
reader Luboš Motl said...
Dear Numcracker, there doesn't exist any specific enough formulation of MWI that would give some predictions differing from proper QM (at least in principle) - except for versions that are immediately ruled out even by the simplest experiments.
Just like I said about Bohmian mechanics, the best thing that MWI proponents are hoping - and it's just a hope, and an unjustified one - is that they reproduce the predictions of QM exactly. And they're extremely far from it. But everyone knows that QM with a proper interpretation gives predictions for pretty much everything and pretty much every physicist knows that they're correct so to "emulate them exactly" is the ultimate dream of any other "interpretation". An unachievable dream.
So MWI isn't a real theory that would be used to really do active physics. It's a philosophical declaration that some physicists sometimes endorse at the level of words even though they don't exactly know what this theory is supposed to say.
reader Luboš Motl said...
One doesn't need to "demand" it. Nature obliged well before quantum mechanics - and humans - were born. The chronology is exactly the opposite of what you suggest. Nature created a world with many observables none of which is "preferred" and people constructed theories that were selected by the demand that they agree with Nature.
All modern, post-1925 fundamental theories of Nature are demanded to agree at the level of the atomic details, i.e. respect the general rules of quantum mechanics, which also means that they have to agree with the fact about Nature that it doesn't have preferred observables.
reader lucretius said...
This thread made me try to think of any Western physicist whom I knew to be definitely not left-wing, and then I remembered the following.
The first time I learned about the violations of Bell’s inequalities and their implications was in 1981, by reading an article by Bernard D’Espagnat in “Encounter”. I just searched the web and found it here:
I have not read it since it first appeared, that is, for over 30 years (it will be interesting to see how current it sounds today) but I always remembered its main point, namely, that the results of experiments showed that the concept of “independent reality” had to be abandoned. Trying to understand this induced me to learn some fairly technical physics, which I had not been interested in before.
I am not sure how many people reading this are old enough to grasp the significance of such an article appearing in “Encounter”. “Encounter” was then Europe’s leading intellectual publication, whose raison d'être was anti-communism. In fact, it was founded by the Paris-based Congress for Cultural Freedom, a center-left organization dedicated to opposing communist influence in the West. Its founders were the poet Stephen Spender and the “father” of American neo-conservatism, Irving Kristol. Among its leading contributors were major cultural figures such as Raymond Aron, Ignazio Silone, Arthur Koestler and lots of others. There is a good account of all of it on Wikipedia.
(I used to be a subscriber and still have all my old copies).
I don’t know anything about D’Espagnat’s politics but the fact that he chose to publish that article in “Encounter” means that at least he was definitely not a communist or a “fellow traveller”. In 1967 it had been discovered that the CIA had been secretly funding the magazine, which of course made it a taboo for anyone on the left. I suspect that publishing in “Encounter” must have ruined D’Espagnat’s reputation among leftist physicists.
This article was, if I remember correctly, the only article on physics ever to appear in “Encounter” - which shows how much importance was attached to this matter then. It was followed by a polemic between D’Espagnat and the well-known conservative philosopher Antony Flew. Flew was a clever man but, like many laymen, refused to accept that the idea of “non-existence of independent reality” made any sense.
reader Florin Moldoveanu said...
All known QM interpretations are faulty and the only correct interpretation can come from the project to reconstruct QM from natural principles. QM also goes beyond the usual C*-algebraic formulation into the non-commutative geometry formulation of the SM.
QM and classical mechanics are distinct "fixed points" in a category theory formulation and any attempt to derive one from the other is a fool's errand. Any non-unitary time evolution of QM (e.g. the collapse postulate) is incompatible with QM's framework, but MWI is not the answer. The answer is much more subtle and mathematically sophisticated, but for all practical purposes the collapse postulate does the job (using the collapse postulate is like using ict in relativity and ignoring additional mathematical structures).
The wavefunction is neither ontological nor epistemological in the usual sense.
reader Dimension10 said...
Great post...
I am quite confused about why Bohmian mechanics is often listed as an "interpretation" of QM, such as here: (and in that document, the authors are people like Becker, Styer, etc.).
As you say in the post, Bohmian mechanics can't describe all the QM phenomena exactly, so I don't see how it can be listed as an "interpretation" of Quantum Mechanics?
In my opinion, it should be listed as... an "alternative" non-mainstream theory to QM, like how MOND is a non-mainstream alternative to NG.
Also, finally, an off-topic question: how do you get inline MathJax on your post? The MathJax CDN doesn't seem to allow $...$ and ##...## but only $$...$$, which results in display math. Do you use [itex] ... [/itex] or something like that?
reader Luboš Motl said...
Dear Dimension10, exactly, Bohmian mechanics is an alternative theory - or wishful thinking about the existence of an alternative theory, if we want to go beyond the known toy examples - so it's demagogic to sell it as an "interpretation" of QM.
The sequences to write TeX via Mathjax are \(E=mc^2\) and \[ E = mc^2 \] but they don't work in DISQUS. Mathjax allows you to define other sequences that start the displayed and inline math modes, including $...$ and $$...$$. The latter actually does work here as well but I didn't allow the single dollar because I sometimes use the character as a unit of money.
reader Dimension10 said...
I just realised you have the "listen" feature enabled. It pronounces the equations very nicely : ) .
reader Lisa Korf said...
Hi Luboš, I am curious to know how you would reconcile weak measurement trajectories, as in the weak-measurement experiment (illustration of measured average trajectories from the article), with your suggestion that "If you could actually 'measure' this permutation that determines how the dots are connected, quantum mechanics would be shown incomplete.", since these, although averaged and still interpolated, are not arbitrary either, and an interference pattern can still be observed. Thanks. LK
reader Luboš Motl said...
Dear Dr Korf, thanks for your question which however shows that you are confused about the status of various claims and patterns here.
The picture you included is pretty much the very same picture that I included in this very blog post and it wasn't measured. The trajectory of a quantum particle can't be measured without affecting it.
The copy of the picture in the Science Magazine is a result of a "weak measurement" but a "weak measurement" isn't a measurement. What is critical is that a weak measurement doesn't measure a property of the measured object/system only. Instead, it determines some function of the properties of the system and numerous conventions and choices that were used to define a particular weak measurement procedure. See e.g.
So the weak measurement isn't unique in any way and the trajectories on the picture were obtained with one particular prescription for a weak measurement protocol. One could get pretty much any other permutation of the dots - any other picture like that with the same density of lines in each region - if we were drawing these pictures using other weak-measurement protocols. There is nothing physical about the randomly drawn trajectories - any choice is just a convention and all conventions are equally physical at the end.
reader lucretius said...
These sorts of things can be produced "ad infinitum", so debunking them one by one is no more productive a way to spend time than trying to work out precisely what is wrong with this:
reader Scott Lahti said...
I thank you for your kind words regarding my 20x expansion, from late 2012, of the Wikipedia entry for Encounter magazine, one of whose seeds and two of whose results appear below:
reader bustemup said...
Resonant States in Negative Ions
by Brandefelt, Nicklas
Abstract (Summary)
Resonant states are multiply excited states in atoms and ions that have enough energy to decay by emitting an electron. The ability to emit an electron and the strong electron correlation (which is extra strong in negative ions) makes these states both interesting and challenging from a theoretical point of view. The main contribution in this thesis is a method, which combines the use of B splines and complex rotation, to solve the three-electron Schrödinger equation treating all three electrons equally. It is used to calculate doubly excited and triply excited states of 4S symmetry with even parity in He-. For the doubly excited states there are experimental and theoretical data to compare with. For the triply excited states there is only theoretical data available and only for one of the resonances. The agreement is in general good. For the triply excited state there is a significant and interesting difference in the width between our calculation and another method. A cause for this deviation is suggested. The method is also used to find a resonant state of 4S symmetry with odd parity in H2-. This state, in this extremely negative system, has been predicted by two earlier calculations but is highly controversial.

Several other studies presented here focus on two-electron systems. In one, the effect of the splitting of the degenerate H(n=2) thresholds in H-, on the resonant states converging to this threshold, is studied. If a completely degenerate threshold is assumed, an infinite series of states is expected to converge to the threshold. Here states of 1P symmetry and odd parity are examined, and it is found that the relativistic and radiative splitting of the threshold causes the series to end after only three resonant states. Since the independent particle model completely fails for doubly excited states, several schemes of alternative quantum numbers have been suggested. We investigate the so-called DESB (Doubly Excited Symmetry Basis) quantum numbers in several calculations. For the doubly excited states of He- mentioned above we investigate one resonance and find that it cannot be assigned DESB quantum numbers unambiguously. We also investigate these quantum numbers for states of 1S even parity in He. We find two types of mixing of DESB states in the doubly excited states calculated. We also show that the amount of mixing of DESB quantum numbers can be inferred from the value of the cosine of the inter-electronic angle. In a study on Li- the calculated cosine values are used to identify doubly excited states measured in a photodetachment experiment. In particular a resonant state that violates a propensity rule is found.
Bibliographical Information:
School: Stockholms universitet
School Location: Sweden
Source Type: Doctoral Dissertation
Keywords: NATURAL SCIENCES; Physics; Atomic and molecular physics; Molecular physics; Atomic physics; Nuclear physics; Particle physics
Date of Publication: 01/01/2001
Nanoelectronic Modeling Lecture 20: NEGF in a Quasi-1D Formulation
By Gerhard Klimeck1, Samarth Agarwal2, Zhengping Jiang2
1. Purdue University 2. Electrical and Computer Engineering, Purdue University, West Lafayette, IN
Published on
This lecture will introduce a spatial discretization scheme of the Schrödinger equation which represents a 1D heterostructure, like a resonant tunneling diode, with spatially varying band edges and effective masses. Open boundary conditions are introduced with the Quantum Transmitting Boundary Method (QTBM). The QTBM is related to the NEGF-based self-energy treatment, and the complete list of NEGF equations that are typically solved is given. This lecture is not intended to truly teach the NEGF approach. We refer to Prof. Datta’s extensive lectures on nanoHUB for the formal introduction of the NEGF approach.
Learning Objectives:
1. Effective Mass Tight-Binding Hamiltonian in 1D discretized Schrödinger Eq.
2. Quantum Transmitting Boundary Method (QTBM): Open Boundary Conditions
3. Fundamental NEGF Equations
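The lecture's own equations are not reproduced on this page, so the following is only a rough companion sketch in Python, not code from the lecture: a single-band, quasi-1D NEGF transmission calculation for a discretized effective-mass Hamiltonian. The grid spacing, barrier profile, effective mass and probe energies are invented for illustration; a real heterostructure calculation would use position-dependent masses and the full set of NEGF equations listed above.

```python
import numpy as np

hbar = 1.0546e-34   # J*s
m0   = 9.1094e-31   # kg
q    = 1.6022e-19   # C, used here only to convert J to eV

a  = 2.5e-10                     # grid spacing in m (assumed)
N  = 120                         # number of grid points
Ec = np.zeros(N)                 # conduction band edge profile in eV
Ec[50:70] = 0.3                  # one rectangular barrier (assumed profile)
m  = 0.067 * m0 * np.ones(N)     # constant effective mass (assumed)

t = hbar**2 / (2.0 * m * a**2) / q   # nearest-neighbour hopping energies, eV

# Tridiagonal effective-mass Hamiltonian: onsite 2t + Ec, coupling -t
H = np.diag(2.0 * t + Ec) - np.diag(t[:-1], 1) - np.diag(t[:-1], -1)

def transmission(E):
    """NEGF transmission T(E) = Gamma_L * Gamma_R * |G_N1|^2 for a 1D chain."""
    def sigma(t_lead, E_band):
        # Retarded surface self-energy of a semi-infinite 1D lead: -t e^{ika}
        ka = np.arccos(np.clip(1.0 - (E - E_band) / (2.0 * t_lead), -1.0, 1.0))
        return -t_lead * np.exp(1j * ka)
    s_l, s_r = sigma(t[0], Ec[0]), sigma(t[-1], Ec[-1])
    A = (E + 1e-12j) * np.eye(N) - H          # build (E - H - Sigma)
    A[0, 0] -= s_l
    A[-1, -1] -= s_r
    G = np.linalg.inv(A)                      # retarded Green's function
    gamma_l, gamma_r = -2.0 * s_l.imag, -2.0 * s_r.imag
    return float(gamma_l * gamma_r * abs(G[-1, 0])**2)

for E in (0.05, 0.15, 0.25, 0.35, 0.45):      # probe energies in eV
    print(f"E = {E:.2f} eV   T = {transmission(E):.4f}")
```

Energies below the barrier height tunnel with small T, energies above transmit with T approaching one; the QTBM treatment in the lecture arrives at equivalent boundary terms from the wavefunction side rather than the Green's-function side.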
Cite this work
Researchers should cite this work as follows:
• Gerhard Klimeck; Samarth Agarwal; Zhengping Jiang (2010), "Nanoelectronic Modeling Lecture 20: NEGF in a Quasi-1D Formulation," http://nanohub.org/resources/8203.
We all learn in grade school that electrons are negatively-charged particles that inhabit the space around the nucleus of an atom, that protons are positively-charged and are embedded within the nucleus along with neutrons, which have no charge. I have read a little about electron orbitals and some of the quantum mechanics behind why electrons only occupy certain energy levels. However...
How does the electromagnetic force work in maintaining the positions of the electrons? Since positive and negative charges attract each other, why is it that the electrons don't collide with the protons in the nucleus? Are there ever instances where electrons and protons do collide, and, if so, what occurs?
Things don't collide with other things. Collision is due to Pauli exclusion, which only works with identical fermions. The only things that collide in the classical sense of bumping into each other when they are close are identical fermions; other particles just feel a repulsive/attractive force. – Ron Maimon Dec 18 '11 at 7:04
In fact the electrons (at least those in s-shells) do spend some non-trivial time inside the nucleus.
The reason they spend a lot of time outside the nucleus is essentially quantum mechanical. To use too simple an explanation: their momentum is restricted to a range consistent with being captured (not free to fly away), and as such there is a necessary uncertainty in their position.
An example of physics arising because they spend some time in the nucleus is the so-called "electron capture" radioactive decay, in which $$ e + p \to n + \nu $$ occurs within the nucleus. The reason this does not happen in most nuclei is also quantum mechanical and is related to energy levels and Fermi exclusion.
To expand on this picture a little bit, let's appeal to de Broglie and Bohr. Bohr's picture of the electron orbits being restricted to a set of finite energies $E_n \propto 1/n^2$ and frequencies can be given a reasonably natural explanation in terms of de Broglie's picture of all matter as being composed of waves of frequency $f = E/h$ by requiring that an integer number of waves fit into the circular orbit.
This leads to a picture of the atom in which all the electrons occupy neat circular orbits far away from the nucleus, and provides one explanation of why the electrons don't just fall into the nucleus under the electrostatic attraction.
But it's not the whole story for a number of reasons; for our purposes the most important one is that Bohr's model predicts a minimum angular momentum for the electrons of $\hbar$ when the experimental value is 0.
Pushing on, one can solve the Schrödinger equation in three dimensions for hydrogen-like atoms:
$$ \left( i\hbar\frac{\partial}{\partial t} - \hat{H} \right) \Psi = 0 $$
for electrons in a $1/r$ electrostatic potential to determine the wavefunction $\Psi$. The wave function is related to the probability $P(\vec{x})$ of finding an electron at a point $\vec{x}$ in space by
$$ P(\vec{x}) = \left| \Psi(\vec{x}) \right|^2 = \Psi^{*}(\vec{x}) \Psi(\vec{x}) $$
where $^{*}$ means the complex conjugate.
The solutions are usually written in the form
$$ \Psi(\vec{x}) = Y^m_l(\theta,\phi) \, L^{2l+1}_{n-l-1}(r) \, e^{-r/2} \times \text{normalizing factors} $$
Here the $Y$'s are the spherical harmonics and the $L$'s are the generalized Laguerre polynomials. But we don't care for the details. Suffice it to say that these solutions represent a probability density for the electrons that is smeared out over a wide region around the nucleus. Also of note, for $l=0$ states (also known as s orbitals) there is a non-zero probability at the center, which is to say in the nucleus (this fact arises because these orbitals have zero angular momentum, which you might recall was not a feature of the Bohr atom).
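A quick numerical check of that last claim, using the textbook closed forms of the hydrogen 1s, 2s and 2p_z wavefunctions in atomic units (this is only an illustrative sketch; the normalizations are the standard ones):

```python
import numpy as np

a0 = 1.0  # Bohr radius (atomic units)

def psi_1s(r):
    # Ground state: nonzero at the origin
    return np.exp(-r / a0) / np.sqrt(np.pi * a0**3)

def psi_2s(r):
    # Also l = 0, also nonzero at the origin
    return (2.0 - r / a0) * np.exp(-r / (2 * a0)) / (4.0 * np.sqrt(2 * np.pi * a0**3))

def psi_2pz(r, theta=0.0):
    # The factor of r comes from l = 1 and forces zero density at the origin
    return (r / a0) * np.cos(theta) * np.exp(-r / (2 * a0)) / (4.0 * np.sqrt(2 * np.pi * a0**3))

for r in (0.0, 0.5, 1.0, 2.0):
    print(f"r = {r}:  |1s|^2 = {psi_1s(r)**2:.4f}  "
          f"|2s|^2 = {psi_2s(r)**2:.4f}  |2p_z|^2 = {psi_2pz(r)**2:.4f}")
```

At r = 0 both s densities are finite while the 2p_z density vanishes, which is the electron-capture point made above.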
This seems to me still not enough for this Question, even though, or particularly because, the questioner is new to Physics SE, but this -1 wasn't me. I can't do justice to this Question, so perhaps it's just that I want a really Useful Answer. – Peter Morgan May 3 '11 at 17:08
@Peter: Agree that this is terse, and only minimally informative. Without knowing more about the questioners preparation, it was that or a very long and detailed answer. Maybe I'll have time for the latter later on. – dmckee May 3 '11 at 17:13
Rather extensive update. Without some feedback from voithos, I can't see where else to go with this. – dmckee May 3 '11 at 23:29
@dmckee Wow, that explanation is fantastic, thank you. Honestly, I didn't understand about 90% of it, and the other 10% I couldn't put in context. But I'll try to try to analyze, and perhaps look back on later when I hopefully will have a deeper understanding on this subject. – voithos May 5 '11 at 2:06
@voithos: If you don't get most of it, then I've pitched it at the wrong level. – dmckee May 6 '11 at 15:47
This was the basic reason for the invention of quantum mechanics.
Simple mechanics with electromagnetism does not work at atomic dimensions, particularly with the charged electrons. Classical electromagnetism would have the electrons radiate energy away because of the continuous acceleration along a circular path and finally fall into the nucleus.
Sorry, Anna, -1. For nearly the same reasons I gave above for downvoting sb1's Answer, though I suspect I would have left yours alone if sb1's were not there goading me. Indeed, after writing the previous sentence I decided to undo the -1, but it took me more than 5 minutes, so it won't let me until you edit your Answer. – Peter Morgan May 3 '11 at 17:04
I think "nature is quantized" is too strong! – user1355 May 3 '11 at 17:35
An intuitive way is to think of matter waves. If the electron were a point particle, it would have to start from a definite position, say somewhere on its orbit, and all of it would feel the electric attraction to the nucleus and it would start falling just like a stone. It could not find a stable orbit like the moon does, since it is charged and whenever it accelerates it gives off electromagnetic radiation, like in a radio antenna transmitting radio waves. But then it loses energy, and cannot maintain its orbit.
The only solution to this is if the electron can somehow stand still. (Or achieve escape velocity, but of course you are asking about the electrons in the atom, so by hypothesis, they have not got enough energy to achieve escape velocity.) But if it stands still and is a point particle, of course it will head straight to the nucleus because of the attraction.
Answer: matter is not made of point particles, but of matter waves. These matter waves obey a wave equation. The point of any wave equation, such as $${\partial^2f\over \partial t^2} = - k {\partial^2f\over \partial x^2}$$ (this, if $k$ is negative, is the wave equation for a stretched and vibrating string) is that the right hand side is the curvature of the wave at the spot $x$, and the equation says the greater the curvature, the greater is the rate of change of the wave at that spot (or, in this case, the acceleration, but Schrödinger used a slightly different wave equation than de Broglie or Fock), and hence the kinetic energy, too.
There are certain shapes which just balance everything out: for example, the lowest orbital is a humpy shape with centre at the centre of the nucleus, and thinning out in all directions like a bell curve or a hill. Although all the parts of the smeared-out electron might feel attracted to the nucleus, there is a sort of effect which is purely quantum mechanical, a consequence of this wave equation, which resists that: if all parts approached the nucleus, the hump becomes more acute, a sharper, higher peak, but this increases the left hand side of the equation (greater curvature). This would increase the magnitude of the right hand side, and that greater motion tends to disperse the peak again. So the electron wave, in this particular stationary state, stays where it is because this quantum mechanical resistance exactly balances out the Coulomb force.
This is why Quantum Mechanics is necessary in order to explain the stability of matter, something which cannot be understood if everything were made of mass as particles with definite locations.
Very interesting. On somewhat of a side note, since you mentioned the lowest orbital, what about the higher orbitals? The lowest orbital works to balance out the Coulomb force, but what causes the existence of the other orbitals? I am aware of the Pauli exclusion principle, but I don't have any intuition as to how it works. – voithos Dec 18 '11 at 6:59
Oh, it just gets more complicated even though the basic principle is the same. At that points, words are not precise enough anymore and one uses anna's approach... The Pauli exclusion principle has nothing to do with it. There are analogues to atoms with a proton as a nucleus, and a charged boson in various orbitals. Bosons do not obey the Pauli exclusion principle but they still obey a wave equation (that of Fock). It is the wave equation that is the whole point. – joseph f. johnson Dec 18 '11 at 7:10
Here's the Feynman response I read in the opening paragraphs of his Feynman Lectures:
The reason a proton and electron simply don't crash into each other is that if they did, we would know their position exactly (assuming one of them is fixed, which one is: the proton). If we knew their position, we would be highly uncertain of the momentum, meaning it could be very large, essentially giving the electron enough energy to escape.
I have noticed that this behavior is, in my opinion, similar to how a gas in a piston reacts to exerted pressure.
The Heisenberg uncertainty principle essentially pushes back on the Coulomb force, but only probabilistically, so perhaps they could get close, but not completely close. Otherwise, our uncertainty of momentum would be infinite. Here's the relation:
$$ \Delta p \, \Delta x \ge \hbar/2 $$
Less uncertainty in position = more uncertainty in momentum.
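One can push this heuristic to a rough number: treat the electron as confined within a radius $r$, so $\Delta p \sim \hbar/r$, and minimize $E(r) = \hbar^2/2mr^2 - e^2/4\pi\epsilon_0 r$. A minimal sketch of this standard back-of-the-envelope estimate (not a rigorous calculation) recovers the Bohr radius and the 13.6 eV binding energy:

```python
import numpy as np

hbar, m, e, eps0 = 1.0546e-34, 9.1094e-31, 1.6022e-19, 8.8542e-12

r = np.logspace(-12, -8, 4000)                       # trial radii in metres
E = hbar**2 / (2 * m * r**2) - e**2 / (4 * np.pi * eps0 * r)

i = E.argmin()
print(f"r_min ~ {r[i]:.2e} m   E_min ~ {E[i] / e:.2f} eV")
# prints roughly 5.3e-11 m and -13.6 eV: the Bohr radius and Rydberg energy
```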
I'm a mathematician with only a basic knowledge of physics, so my question may be trivial: in that case, mercy me. :-)
My questions are:
Many thanks in advance.
I do not dare to give the answer. But I can point out why $V$ is named like that. To me, it seems that (P) is the Dirichlet problem associated to the time-independent Schrödinger equation $H u=E u$. $H$ contains the kinetic part, $(\hat{p})^2/2m$, which is essentially the Laplacian you wrote in (P), and the potential part $V$; whence the notation and the name potential in your problem. – c.p. Jul 1 '12 at 21:32
Now, there is a non-linear version of the Schrödinger equation, but it does not contain any Laplacian operator other than the second-order one. This equation is referred to as Gross-Pitaevskii. – c.p. Jul 1 '12 at 21:35
(and, those equations being generalizations of Schrödinger's, $\lambda$ is a generalization of the energy) – c.p. Jul 1 '12 at 23:48
Thank you Jorge. Another answer can be found @math.SE: math.stackexchange.com/q/166057 – Pacciu Jul 4 '12 at 11:56
Thursday, 17 March 2016
Why biology is so hard! The 'peculiarly difficult position' of the Biologist: an analysis of Zinsser's view of the sciences.
This essay was submitted in partial fulfillment of my degree of Bachelor of Science (Honours), Department of Zoology, The University of Western Australia (June 2001). I was recently talking about this with a colleague and I thought I would share it. It's a long but hopefully interesting read about why being a biologist can be more difficult than being a chemist or a physicist.
Hans Zinsser
Hans Zinsser (1947) presents, in his book Rats, Lice and History, a biography of typhus fever. The first three chapters, however, present more of a protest against the "American attitude" which insists that a specialist should have no interests beyond his chosen field. In presenting this view Zinsser compares biologists to chemists or physicists and makes an interesting quote (Zinsser 1947, p. 13): "The biologist is in a peculiarly difficult position. He cannot isolate individual reactions and study them one by one, as a chemist can. He is deprived of the mathematical forecasts by which the physicist can so frequently guide his experimental efforts. Nature sets the conditions under which the biologist works, and he must accept her terms or give up the task altogether".
The ‘difficult position’ to which Zinsser (1947) refers, and also the subject of this essay, is that of accounting for the complexity in, and the inherent difficulty of studying, biology when compared to the other sciences such as chemistry or physics. This is not to say that the fields of chemistry or physics are simple, as these fields are anything but. However, in the above quote Zinsser (1947) suggests certain problems exist when studying biology compared to the other sciences, and these are the focus of this essay.
The first problem identified by Zinsser (1947) is that a biologist cannot isolate individual reactions. Chemistry involves processes whereby species react to form products, and these can be easily explained by individual reactions or, in more complicated chemical systems, by a series of reactions. Biological systems, such as an animal, may display two major problems when trying to isolate chemical reactions as a chemist does, although it is not clear to which one Zinsser (1947) referred within the quote. The first is the inherent difficulty in isolating specific reactions. Each animal is a complex system of inter-relating macromolecular structures, substrates and enzymes; isolating a single reaction within this system is not easy. However it is not impossible and some biologists, for example Krebs, have formulated complex detailed chemical reactions to explain biological processes.
This process can be related to the second problem in isolating chemical reactions within animals.
Duck of Vaucanson
This is the idea of reductionism versus holism and whether it is possible to reduce a complex organism down to chemical or even physical explanations. Although there are still reductionary biologists, there is much debate about whether the results obtained by such a study can ever really describe a complex biological system. Reductionism is the practice of explaining the properties of whole organisms entirely by the properties of the parts that compose them (Mohr 1989). This usually involves two steps: 'analysis' – the breaking down of complex systems into manageable components, and 'synthesis' – the relation of the parts, such as their spatial and temporal ordering and the way that they interact. For example a society or community of animals (such as ants) can be broken down into single organisms (such as a worker ant). The organism can then be further broken down into organs or appendages (eyes, brains, gut, reproductive system, Malpighian tubules etc) which are further broken down into cells, the cells into cell organelles, cell organelles into macromolecules, macromolecules into molecules, molecules into atoms, atoms into subatomic particles.
The reductionist approach implies that all complex systems consist of smaller and simpler parts. Moreover, it is assumed that complex systems originated from simpler systems in the course of universal evolution, where evolution is considered a deterministic process, governed by causal laws (Mohr 1989). It emphasizes whether and to what extent a proposition, a theory, or a whole branch of science can be reduced to another proposition, theory, or branch of science. Reduction is tempting since it satisfies one of the great desires of the scientist – to have unifying theories with a wide scope. However there are problems associated with reducing these complex systems, and dividing them up into simpler ones. The concept of 'emergences' arises since the whole is not always equal to the sum of its parts. In fact, very few systems can be thought of, or represented, as additive functions of the properties of their constituent parts; it is the functional relationship between these parts that matters (Medawar and Medawar 1983). 'Complexity' in biology is due to the particular interactions of the parts (such as the molecules inside the cell, or the cells within the organs etc). At higher levels of complexity there are properties that cannot be described, or predicted, at the lower levels. For example, at present analysing the organs of an ant could in no way predict the complex social system the ants portray; similarly doing so (analysing organs) in humans could not predict the possession of a conscious mind. These are called 'emergent' properties, and may well be considered a necessary property of the natural system in which they develop. However, for the biologist 'emergent properties' means limits to reductionism. To ignore emergent properties at the different levels of complexity to maintain maximum reducibility would mean to ignore the richness of the animal world. Thus a living cell cannot be explained in terms of its parts but only in terms of the organisation of those parts. Although the whole is nothing but the parts put together, it is the 'putting together' that makes the cell and this cannot be accounted for by the parts themselves (Mohr 1989).
Emergent properties of termites
For chemical and physical systems reductionism is also an important part of research. A typical example is the creation of theories of great generality such as quantum mechanics or the theory of relativity. Most (but not all) atoms seem to be ruled by known principles or equations such as Maxwell's equations or the Schrödinger equation. Equally, large chemical complexes can be reduced down to smaller complexes, smaller complexes to atoms. Whether emergence exists in chemistry is not clear: if one were to consider a large molecule, could its properties be predicted by studying the individual atoms? Take for example a typical hydrocarbon, consisting of carbon and hydrogen atoms. Adding a carboxyl group (-COOH) to the molecule will almost always make it behave like an acid (except in large hydrocarbons, where its properties will be governed by van der Waals forces). There are whole fields of chemical engineering that are based on the fact that the actions of molecules are the sum of their parts. But there are still examples that are much too complicated for computations from first principles. Physics can still not explain the behaviour of uranium or even oxygen. Is this an example of emergence within chemistry? It is not known whether this is an example of emergence, or an example of our lack of understanding of the atoms. For example, if more were known about the movement of electrons around the atom and the interactions they have with other atoms within the complex, then one may be able to predict the properties of the complex more precisely (although a similar argument could be used in biology).
However, a major difference between the sciences is the degree to which a system is reduced (ie from the highest (most complex) level to the lowest (least complex) level). Biological systems can be reduced many more times than can a chemical or a physical system. The difference in the amount each system is reduced for analysis may influence the number and effect of emergences. The number and extent of the examples of emergence in biology, coupled with the multitude of levels through which the complexity of biology is reduced when compared to the other sciences, may suggest that emergence will have a greater effect in biological systems. This may lead to doubt, and a decrease in the predictability of biological systems.
Hence many philosophers of biology e.g. Wuketits (1989) have concluded that biology requires an 'organism-centred' view of life. Thus, unlike chemical or physical systems, to examine biological systems a 'holistic' approach must be taken by biologists. As the zoologist Ritter put it, "the organism in its totality is as essential to an explanation of its elements as its elements are to an explanation of the organism" (Beckner 1967). Holism was greatly developed by Bertalanffy in his General Systems Theory (Bertalanffy 1968). Holism suggests that neither does the whole determine the parts nor do the parts determine the whole, but that a complex interaction between the parts and the whole is to be supposed. Bertalanffy's theory has influenced biology as well as other sciences, and it shows some of the differences between the sciences.
It can be summarized as follows:
(1) The whole (of an organism) is more than the sum of its parts.
(2) Living beings are open systems, ie non-equilibrium systems. Physics traditionally deals with closed systems, ie equilibrium systems.
(3) Living systems are not static systems; they are regarded as continuous processes.
(4) Organisms are homeostatic systems; any living system represents a dynamic interplay at all levels of its organization.
(5) Organisms are hierarchically organized systems. Any organism is structured in a way so that its individual members (organs, cells) are 'super-systems' of other elements or levels of organization.
So the biologist is in a difficult position where he must consider all levels of complexity of an organism. To study biology successfully he must examine the parts of an organism, the whole of the organism and the interaction of these two, to completely understand any biological system. This task is not easy and carries with it other difficulties, such as methodology and whether this is actually possible in some species. One tends to agree if one imagines a huge complex network: we can understand that isolating a pattern in this complex network by drawing a boundary around it and calling it an object will be somewhat arbitrary.
A second problem broached by Zinsser (1947 p. 13) is that "Nature sets the conditions under which the biologist works". This is similar to the above problem of reductionism: each organism is a complex interaction of the whole and the parts of an organism, and the question is whether an organism can be studied outside this network. Traditionally biology was more evidence based: observations were made in the field and inferences were made from these observations. However an aspect which renders biologists different from chemists or physicists is that we ask the question "what for"?
"What for" is not asked by chemists or physicists because there is no answer that makes sense, an electron spins around a ball of protons and neutrons, what for? But asking that question in biology is not so irrelevant. This question, however, inevitably leads to intervention in biological systems to answer it.
Intervention by biologists is typically performed in two scenarios: in the field and in the laboratory. Both carry their advantages, but unfortunately both also carry disadvantages. For the chemist or the physicist the decision is easy: the laboratory provides the adequate arena for scientific discovery. The lab provides an environment where physical conditions such as temperature and pressure can be controlled, and thus for the chemist experimentation couldn't be easier (although having had experience as a chemist this is somewhat of an understatement). The biologist faces other considerations, in that some laboratory experiments may produce results not representative of the organism in its natural environment. For example, when testing the physiological performance of lizards, sprint speeds and endurance are usually tested in the laboratory. Lizards are run along a motorised treadmill till exhaustion to measure endurance, or along a race track to measure maximal running speed. These data are then used to compare lizards, and the differences attributed to Darwinian selection on the animal, assuming that the animal runs that fast or can run for so long because it has been selected to. However, selection acts most directly on what an animal does in nature: its behaviour. Performance, on the other hand, as defined by laboratory measurements, generally exhibits an animal's ability to do something when pushed to its morphological, physiological or biochemical limits. Whether animals routinely behave at or near physiological limits under natural conditions is an important empirical issue. Some data (see Garland and Losos (1994 p. 24) for a comprehensive list) suggest that animals do not behave at their limits in nature, and "close encounters of the worst kind" between predators and prey, where an animal may be forced to behave at or near its physiological limits, are few and far between (Christian and Tracey 1981, Jayne and Bennett, 1990). This reflects the reductionism problem, where an animal is removed from its social and environmental surroundings and thus some aspects of its behaviour cannot be predicted accurately. Hence the need for a holistic view of the organism, which can be best achieved by performing experiments in the natural environment.
Red-necked pademelon
Environmental experiments are all but unknown to physicists and, to a lesser degree, to chemists. These are usually performed by intervening with animals in the natural environment; however, the problem that this creates is the lack of controls. For example, consider the hypothetical example of the red-necked pademelon (Thylogale thetis). Wahungu et al. (1999) examined the effects of browsing by the pademelons on shoots of rainforest plants. They tested this idea by planting four shoots from each of nine local rainforest plant species and four clover seedlings, in twelve quadrats along two transects. All the shoots in one of the transects were excluded from pademelon browsing by erecting 1.0 x 1.0 x 0.5m high cages of 20mm mesh over each of the quadrats in that transect. Shoots from the other transect were left exposed. Will this experiment test pademelon browsing? This method does not account for other species that may feed upon the shoots within the open transect – species which, like the pademelon, are also excluded from the caged quadrats. The feeding behaviour of the pademelons may also be altered by the presence of multiple shoots within a small area; the pademelon may feed on many more different species of shoots than it would normally, since they are now closely available in larger quantities than may normally be available in nature.
Similarly we may have approached this problem in many other ways. We might begin by looking at the plants that the pademelon primarily eats. But this may not account for indirect effects, ie the pademelon eats plant A, but plant B is aided/disadvantaged by the absence of plant A (plant A may compete with plant B for sunlight, so that the absence of plant A increases the abundance of plant B; or it may be symbiotic with it, where the presence of plant A aids the growth of plant B, and conversely a reduction in the abundance of plant A causes a similar reduction in the abundance of plant B), and thus while observations may suggest the pademelon affects only plant A, the full effect is not known. Another method might be to look at vegetation in areas inhabited by the pademelon and compare these to areas not inhabited by the pademelon, examining the differences. The problem this causes is that it does not account for differences in climate or other species at the different locations – even if these are known they could not be controlled for.
The third problem addressed by Zinsser (1947 p. 13) is presented by the quote "He is deprived of the mathematical forecasts by which the physicist can so frequently guide his experimental efforts". Newton showed that mathematical descriptions give us insight into the nature of things. However, our mathematics has been mostly limited to simple systems with linear interactions. This corresponds to systems with few pieces that do not interact strongly with each other. But the biological world, as we have seen above, consists of anything but: it is filled with systems that have many pieces that strongly interact with each other. These systems are usually described as fractals or chaotic systems.
Fractals are usually defined as objects or processes whose small pieces resemble the whole, while chaotic systems are those with output so complex it mimics random behaviour (Liebovitch 1998). Fractals have several properties that distinguish them: self-similarity, scaling, and certain statistical properties. Self-similarity (or more accurately statistical self-similarity) can occur in biological systems where little pieces of an object are similar to larger pieces. Many of these show self-similarity within space. For example, there are self-similar patterns in the branching of the arms (dendrites) of nerve cells. Larger arms break up to form smaller arms, which break up to form smaller arms and so on. At each stage the pattern resembles the one before it.
Other examples of self-similar patterns in space include the arteries and the veins of the retina, the tubes that bring air into the lungs, and the tubes (ducts) in the liver that bring bile to the gall bladder. Many surfaces in the body have self-similar undulations with ever finer pockets or fingers. These ever finer structures increase the area available for the exchange of nutrients, gasses, and ions. These surfaces include the lining of the intestine, the boundary of the placenta and the membranes of cells (Liebovitch 1998). Some biological systems can also be self-similar in time. Ion channels are proteins in the cell membrane with a central hole that allows ions passage in or out of the cells. These proteins can change in structure, closing the hole and blocking the flow of ions. The small electrical current due to these ions can be measured in an individual ion channel molecule, and is high when the channel is open and low when it is closed. When a recording of the current is played back at low time resolution, the times during which the channel was open and closed can be seen (see Figure 1). When one of these open or closed times is played back at higher time resolution, it can be seen to consist of many briefer open and closed times. The current through the channel is self-similar because the pattern of open and closed times found at low time resolution is repeated in the open and closed times found at higher time resolution (Liebovitch 1998). Other examples of temporal fractals may include the electrical signal generated by the contraction of the heart or even a cell multiplying over time.
Current through ion channels (From Liebovitch 1998)
The trouble this creates for the biologist is that there is no unique 'correct' value for a measurement. The value used to measure a property, such as length, area or volume, depends on the resolution used to make the measurement. Measurements made at different resolutions will yield different values. This means that the differences between the values measured by different people could be due to the fact that each person measured the property at a different resolution. Hence, the measurement of a value of a property at only one resolution is not useful to characterise fractal objects or processes. Instead we need to determine the scaling relationship. The scaling relationship shows how the values measured for a property depend on the resolution used to make the measurement. For example the surface area of a cell will increase as the magnification used to examine the cell increases. This now requires the biologist to measure many values at different resolutions.
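This resolution dependence is easy to reproduce numerically. In the sketch below, a toy example not taken from Liebovitch, a random (Brownian) path stands in for a fractal curve: its measured 'length' changes systematically with the measurement step, and the log-log slope of that relationship is the scaling exponent one would report instead of any single length value.

```python
import numpy as np

rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(size=2**14))   # a Brownian path as a toy fractal

steps, lengths = [], []
for step in (1, 4, 16, 64, 256):
    coarse = path[::step]                  # sample at a coarser resolution
    lengths.append(np.abs(np.diff(coarse)).sum())
    steps.append(step)
    print(f"step = {step:4d}   measured length = {lengths[-1]:10.1f}")

slope = np.polyfit(np.log(steps), np.log(lengths), 1)[0]
print(f"scaling exponent ~ {slope:.2f}")   # about -0.5 for a Brownian path
```

No single "length" is the correct one; the exponent, not the value, characterises the object.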
Fractals also present statistical problems for the biologist. The statistical knowledge of most scientists is limited to the statistical properties of Gaussian distributions. Fractals do not have the properties of Gaussian distributions. In order to understand the many fractal objects and processes in the natural world, the biologist is required to learn about the properties of stable distributions. The variance of fractals is also usually large, and increases as more data are analysed.
For example, Luria and Delbruck (1943) wanted to determine whether mutations were: (1) occurring all the time but only selected when there is a change in the environment, or (2) occurring only in response to a change in the environment. To test this they let a cell multiply many times and then challenged its descendants with a killer virus. If the mutations occur all the time, then by chance some cells will become resistant to the killer virus before it is given to them. This resistant cell will divide and give rise to resistant daughter cells in subsequent generations. If the resistant cell is produced early on, it will form many resistant daughter cells. If it is produced later on, it will not have time to produce many resistant daughter cells. Each time the experiment is run the mutations will occur at different times. The variation in the timing is amplified by the resistance found in the daughter cells. This results in a large variation in the final number of resistant daughter cells when the experiment is run many times. If the mutations occur only in response to the virus, then by chance some cells will become resistant to the killer virus when it is given. However, in this case they will not have time to produce many resistant daughter cells: thus there will only be a small variation in the final number of resistant cells when the experiment is completed. Luria and Delbruck (1943) found that there was a large variation in the number of resistant cells, thus they concluded that mutations occur all the time.
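Luria and Delbruck's argument is easy to caricature in a short simulation. The sketch below uses invented parameter values and a deliberately crude model of cell growth; it is meant only to show the qualitative signature: under spontaneous mutation the variance across runs dwarfs the mean (rare early mutants found huge clones), while under induced mutation the counts are Poisson and the variance roughly equals the mean.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, gens, runs = 1e-6, 20, 2000   # mutation rate and generations (assumed)

def spontaneous():
    # Mutations can occur at any division; each mutant clone keeps doubling
    resistant = 0
    for g in range(gens):
        mutants = rng.poisson(mu * 2**g)       # mutations arising now
        resistant += mutants * 2**(gens - g)   # their descendants at the end
    return resistant

def induced():
    # Mutations occur only when the virus is applied, one cell at a time
    return rng.poisson(mu * 2**gens)

for model in (spontaneous, induced):
    counts = np.array([model() for _ in range(runs)])
    print(f"{model.__name__:11s} mean = {counts.mean():7.2f}  "
          f"variance = {counts.var():12.2f}")
```

The enormous variance-to-mean ratio of the spontaneous model is exactly the "jackpot" distribution Luria and Delbruck observed.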
This example shows how the variance in a fractal system (the dividing of the cells) will be a large number. Knowing this, it is of interest to determine if the variance does or does not have a finite, limiting value. This can be done by measuring how the variance depends on the amount of data included. If the variance increases with the amount of data included, as it does in Luria and Delbruck's (1943) experiment, then the data have fractal properties and the variance does not exist. The trouble is that we do not know how to perform statistical tests to determine if the parameters of the mechanism that generated the data have changed from one time to another or between experiments run under different conditions. The statistical tests available are based on the assumption that the variance is finite. These tests are not valid to analyse fractal data where the variance is infinite (Liebovitch 1998).
Like fractals, chaotic systems are numerous in biology. Chaos is defined as complex output, mimicking random behaviour, that is generated by a simple, deterministic system (Liebovitch, 1998). The opening and closing of ion channels, the electrocardiogram (ECG) of heart beat pulses, the electroencephalogram (EEG) recording of the electrical activity of the brain, and even epidemics of measles are all examples of chaotic systems. We are used to thinking that the variability found in biological systems is due to mechanisms based on chance that reflect random processes, so attempting to classify systems as chaotic or random can be very difficult. Techniques developed by the mathematician Poincare around 1900, in which data measured in time are transformed into objects in a space called 'phase space' by a process called 'embedding', make such classification easier. The major problem is that data generated by a chaotic system, even if they can be identified as such, are so complex that analysing them with current mathematical methods is extremely difficult (Liebovitch, 1998).
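A minimal sketch of such an embedding in Matlab, using the chaotic logistic map as a stand-in for real data (an ECG or channel recording would be used in practice): plotting each value against the next reveals the hidden deterministic structure that a plot of the raw time series hides.

x = zeros(1, 2000); x(1) = 0.3;
for t = 1:1999
    x(t+1) = 4*x(t)*(1 - x(t));       % logistic map, chaotic at r = 4
end
tau = 1;                              % embedding delay
plot(x(1:end-tau), x(1+tau:end), '.') % 2-D delay embedding
xlabel('x(t)'); ylabel('x(t+tau)')
% Chaotic data trace out a curve in phase space; truly random data fill a cloud.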
Besides the complexity of the output from these systems, other problems also exist when dealing with them. If we re-run a non-chaotic system with almost the same starting values, we get almost the same values of the variables at the end. However, if we re-run a chaotic system with almost the same starting values, we get very different values of the variables at the end of the experiment. This is called sensitivity to initial conditions: chaotic systems amplify small differences in initial conditions into large differences. This makes it extremely difficult for the biologist to control an experiment. Even a small change in experimental method, such as the time of day or a slight variation in temperature or in the concentration of a substance, could lead to different results. This may explain the large variation found in the results of biological experiments, especially as the complexity of the system increases.
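This amplification is easy to demonstrate; a minimal Matlab sketch (again using the logistic map purely as an illustration):

n = 60;
a = zeros(1,n); b = zeros(1,n);
a(1) = 0.300000;                      % run 1
b(1) = 0.300001;                      % run 2: initial difference of only 1e-6
for t = 1:n-1
    a(t+1) = 4*a(t)*(1 - a(t));
    b(t+1) = 4*b(t)*(1 - b(t));
end
plot(1:n, a, 1:n, b)                  % identical at first, then completely unrelated
xlabel('iteration'); ylabel('value'); legend('run 1', 'run 2')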
Some chaotic systems also exhibit a property dubbed bifurcation. Bifurcation occurs when the value of a parameter (a certain property of the system) changes by a small amount but there is a large change in the behaviour of the system (Liebovitch, 1998). This can reduce the predictability of systems. For example, glycolysis, the process that transfers energy from sugar to ATP, exhibits bifurcation. There are numerous reactions in glycolysis, and the overall speed is set by two steps that involve enzymes. Each enzyme speeds up one important reaction, and the products produced by each of these reactions also affect the enzyme activity, so there is positive and negative feedback control in this reaction system. Markus and Hess (1985) studied what would happen if the input of sugar into these biochemical reactions happened in a periodic way. They found that for some frequencies the ATP concentration fluctuated in a periodic way, while for other frequencies it fluctuated in a chaotic way. Only a small change in input was required to produce a sudden change in behaviour from periodic to chaotic fluctuations; this sudden change of behaviour as a parameter is varied is termed a bifurcation. We are used to thinking that small changes in parameters must produce similarly small changes in the behaviour of the system. This intuition is based on our experience with linear systems (common in physics and chemistry) and is not necessarily true for non-linear systems (common in biology). The behaviour of a non-linear system can change dramatically when there is only a small change in the value of a parameter. Biological experiments with similar experimental parameters can sometimes produce markedly different results, and biological effects do not always depend smoothly on the values of the experimental parameters. For example, the biological effects of electromagnetic radiation occur within a set of distinct 'windows' in the amplitude and frequency parameters of the radiation supplied.
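The classic textbook illustration of bifurcation is the logistic map; here is a minimal Matlab sketch (not the glycolysis model itself, just the standard demonstration): sweeping the parameter r shows the long-term behaviour switching abruptly from steady, to periodic, to chaotic.

rs = 2.8:0.005:4.0;                   % range of parameter values to sweep
R = []; X = [];
for r = rs
    x = 0.5;
    for t = 1:300, x = r*x*(1 - x); end   % discard the transient
    for t = 1:100                         % record the long-term behaviour
        x = r*x*(1 - x);
        R(end+1) = r; X(end+1) = x;       %#ok<AGROW>
    end
end
plot(R, X, 'k.', 'MarkerSize', 1)
xlabel('parameter r'); ylabel('long-term values of x')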
It may be easy to think that much of what we study can be interpreted in different ways or can never be proven, but in this we are not alone; even Einstein was quoted as saying his theory of relativity could never be proven. This essay does not aim at decreasing or putting down the relative worth of biology as a science. Instead it aims at expressing the intelligence and achievements of biologists, who have managed to achieve so much with so many odds against them. Zinsser also strongly expresses the need for scientists in general to have a broad range of interests rather than being specialised in any one particular field.
Perhaps it was Darwin’s interest in geology, particularly in the works of Charles Lyell, who suggested the earth may have evolved to its present state, that smoothed the path for Darwin to accept evolution in animals (although Lyell did not at first accept Darwin’s views after publication of the Origin of Species). T.H. Huxley, a friend of Darwin, later wrote "I cannot believe that Lyell was for others, as for me, the chief agent in smoothing the road for Darwin".
It may be this ability of biologists to draw from different fields of art, science or even philosophy that makes biology such an exciting subject. As many biologists believe, and as Zinsser states, "whenever he (the biologist) attacks a problem... before he can advance toward his objective, he must first recede into analysis of the individual elements that compose the complex systems with which he is occupied". This is perhaps one of the fundamental differences between biology and the "exact" sciences.
Beckner, M.O. (1967) Organismic biology, in P. Edwards (ed.), The Encyclopedia of Philosophy, Vol 5, MacMillan, New York.
Bertalanffy, L.von (1968) General System Theory: Foundations Development, Applications, Braziller, New York.
Christian, K.A. and Tracy, C.R. (1981) The effect of the thermal environment on the ability of hatchling Galapagos land iguanas to avoid predation during dispersal. Oecologia 49: 218-223.
Garland, T. and Losos, J.B. (1994) Ecological morphology of locomotor performance in squamate reptiles. Pp 240-302. In: Ecological Morphology: Integrative Organismal Biology (eds P.C. Wainwright and S.M. Reilly). University of Chicago Press, Chicago.
Jayne, B.C. and Bennett, A.F. (1990) Selection on locomotor performance capacity in a natural population of garter snakes. Evolution 44: 1204-1229.
Liebovitch, L.S. (1998) Fractals and Chaos Simplified for the Life Sciences. Oxford University Press, New York.
Luria, S.E. and Delbruck, M. (1943) Mutations of bacteria from virus sensitivity to virus resistance. Genetics 28: 491-511.
Markus, M., Kuschmitz, D., and Hess, B. (1985) Properties of strange attractors in yeast glycolysis. Biophys. Chem. 22: 95-105.
Medawar, P.B. and Medawar, J.S. (1983) Reductionism. In: A Philosophical Dictionary of Biology, Harvard University Press, Cambridge, Mass.
Mohr, H. (1989) Is the program of molecular biology reductionist? In: Hoyningen-Huene, P. and Wuketits, F.M. (eds) Reductionism and Systems Theory in the Life Sciences. Kluwer Academic Publishers, London.
Wahungu, G.M., Catterall, C.P., and Olsen, M.F. (1999) Selective herbivory by red-necked pademelons Thylogale thetis at the rainforest margins: factors affecting predation rates. Australian Journal of Ecology 24: 577-586.
Wuketits, F.M. (1989) Organisms, vital forces, and machines: classical controversies and the contemporary discussion ‘Reductionism vs. Holism’. In: Hoyningen-Huene, P. and Wuketits, F.M. (eds) Reductionism and Systems Theory in the Life Sciences. Kluwer Academic Publishers, London.
Zinsser, H. (1947) Rats, Lice and History. Little, Brown and Company, Boston.
Tuesday, 29 December 2015
Making gifs from video files in Matlab
This is an update to a previous post. The previous post is below, edited to suit the new code, which runs in Matlab. There is a link here to the new code, which was co-written with David Solletti.
How to convert and resize an AVI to gif in Matlab
If you are giving a presentation and you don't have awesome videos in it, you are either a 60-year-old well-renowned expert in your field, or you are just plain mean. Videos are the highlight of any talk; the trouble is, they are often large, slow, and have unpredictable issues when it comes to finding the correct file path, since they always seem to be looking for some inaccessible folder on a different computer, often hundreds of miles away. Then there you are, onstage, the video doesn't work, and suddenly you have an angry crowd of caffeine-fuelled scientists wanting to open your skull to feast on the gooey stuff inside....
"Oh no, Dr Christofer" you say, "how do i overcome these problems?"
The answer is you use gifs! Gifs are without a doubt my new favourite format for showing moving images. So much better than videos, and second only to interpretive dance, which in fact I did see one jerboa expert perform in front of a crowd of scientists. But if, like me, your interpretive dance skills are somewhat lacking, you need the power of gif.
How can you harness this power? Well, gifs can be made quite simply using the freeware program GIMP, but I had a few issues with this, particularly when it came to resizing my images. Plus it also meant getting a second program to extract individual frames from the movie. A better way would be to do both steps in Matlab.
I looked on File Exchange and found some code called avi2gif.m, but this seemed to use the 'aviread' function, which didn't work in my version of Matlab (2012a). Plus it also didn't let you resize your image or make it faster (you can only increase the delay between frames). Both are important if you want to get the file size down really low (for example, some sites only take gifs < 2 MB).
The new code can be found here.
One important feature of the code is the ability to reduce the frame number. Below is an example of this process.
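For those who just want the gist, here is a minimal sketch of the approach (the real, fuller version is at the link above; 'myvideo.avi', the output name and the parameter values here are all placeholders):

vid = VideoReader('myvideo.avi');     % placeholder input file
nFrames = vid.NumberOfFrames;
frameskip = 2;                        % keep every 2nd frame (see below)
scale = 0.5;                          % shrink frames to cut file size
delay = 0.05;                         % seconds between gif frames

for k = 1:frameskip:nFrames
    frame = imresize(read(vid, k), scale);
    [ind, map] = rgb2ind(frame, 256); % gifs need indexed colour
    if k == 1
        imwrite(ind, map, 'out.gif', 'gif', 'LoopCount', Inf, 'DelayTime', delay);
    else
        imwrite(ind, map, 'out.gif', 'gif', 'WriteMode', 'append', 'DelayTime', delay);
    end
end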
Anyway, that is my cheap and nasty code, which worked for me. An important point to remember is that the line "k = 1:2:nFrames" cuts out every other frame. I found that if I did not do this, the gif would run really slowly. This might be an effect of using high speed film to make the videos. For example, this gif below was made using frameskip = 1
If it was any slower, we would have to use some sort of geological clock to time it by. Seriously, it seems like that slow kid at school who sat around eating glue all day.
So I used frameskip = 2
A bit quicker, but still it's gonna run for a painfully long time when you are standing in front of a crowd trying to imagine yourself in everyone's underwear.
frameskip = 4
Works much better - seems smoother.
Well, let me know if you have any improvements, comments or suggestions, and I look forward to seeing many gifs in the upcoming conference in Portland!
Sunday, 29 November 2015
Echidna Biomechanics
Ever wonder what an echidna does with all its time?
The echidna (Tachyglossus aculeatus) is a spiny ball of monotreme that looks something like this. Those spines are sharp, mind you, and often leave weird irritating/itchy marks on your skin after they stab into you. Given that its close relative the platypus (Ornithorhynchus anatinus) has poisonous spurs on its hind limbs, I wonder whether those spines aren't filled with something nasty. My point is that pretty much nothing is going to try to eat an echidna. Some Germans once wrote a paper on what happens when you do. They came across the body of one of Australia's top predators, and one of my favourite animals, the perentie (Varanus giganteus), which had the brave, yet somehow transparently stupid, idea of attempting to eat an echidna whole, and in doing so had met its own demise.
The photo is from Kirschner et al. (1996), via Darren Naish's blog.
The perentie had come across the echidna (dead or alive?) and had tried to eat it, but the spikes had pierced its throat, and it was able neither to swallow nor to eject the echidna from its mouth. It's not clear what actually killed the lizard in the end (starvation maybe?), but it likely had some time to reflect on this, and no doubt many other decisions it had made throughout its long life of being lizardy and awesome. I actually think this specimen is on display in the Queensland Museum.
So the echidna is pretty much invulnerable to predation, and presumably also to asteroid strikes, lasers, paper cuts and anything less than a direct thermonuclear attack. So the question then arises: if you have nothing to fear, what do you spend your afternoons doing? Hanging out by the local tree hollow? Finding only the finest and tastiest termites available? The question, I am sure, has sometimes kept you up at night. In fact the question is all the more important when you consider that these little critters, with an Australia-wide distribution, spend much of their time digging up the ground after termites, and in doing so move quite a bit of dirt. How much? No idea, but enough that these guys can start to change the profile of the landscape, putting them in that neat category of animals called ecosystem engineers. So if they are moving a bunch of dirt around, we ought to know about it.
To answer this question I teamed up with two echidna researchers, Christine Cooper (Curtin) and my old PhD supervisor Phil Withers (UWA). They were looking at thermoregulation in echidnas (which is interesting in itself, since they are half mammal, half lizard, white-hot balls of spikey terror), which was a great opportunity to test out some sensors I had been working on. When I say I was working on them, I mean another academic, Phil Terrill from the School of Engineering at UQ, was working on them. I had originally anticipated using these sensors on some large varanid lizards, but it turns out that lizards are a giant pain in the ass to work with, since they seem to travel forever in no apparent direction, and bury themselves under piles of trees, rocks and dirt, which makes retrieving the sensors a little harder. Echidnas would surely be easier.
So I set off back to WA, to a small patch of bush called Dryandra Woodland, which is known for two things: spikey trees called Dryandra, and spikey monotremes called echidnas. They certainly are easier to catch than lizards, and to the best of my knowledge nobody in the history of the universe has been bitten by one. That seemed all well and good, so we strapped the accelerometer sensors, some temperature sensors, a GPS and a radio tracker onto the backs of these echidnas, and let them go again, safe in the knowledge that we could retrieve them at our heart's desire, gather the data, replace any batteries, and set them off again.
It would be a week long peek into the private life of an echidna.
Turns out life is of course not that simple; our echidnas, likely pissed at our decorating them with sensors, antennas, and whatnot, retreated to the deepest and darkest caves humanly possible, only to emerge in the hours of the darkest nights, where they would attempt to evade capture by three sleepy and exhausted scientists.
They were very nearly successful, and on many nights we were forced to climb into their lairs to change the batteries on their backs, or change sensors over. But in the end we did get some data, and since this is a biomechanics blog, we are going to focus on the biomechanics data. The first and most important thing I did before releasing the echidnas was to perform the sacred ritual among biomechanists, the 'calibration dance'. This dance has many forms, each unique to the scientist who devises it. These dances have, in my observations as a biomechanist, two equally important and fundamental functions. Firstly, they must relate the position and movement reference frame to the recording cameras, which is also important for synchronising accelerometer signals to the cameras later. And secondly, some might argue more importantly, they must make the person performing the calibration dance look as ridiculous as possible. And so it was with great fortune that I was able to convince my co-researcher Christine Cooper to perform this sacred dance with the echidnas.
Click to make Christine bigger
From this I could synchronise the cameras to other activities, which we observed as the echidna performed its daring, yet slow and indecisive, attempts to escape from us once re-released. We got walking
And in rare cases climbing
We then divided the echidna's day into 30-second chunks, and used these signature accelerometer traces to assign each short interval of time to a particular activity, and in doing so work out exactly what the echidna was doing with its day.
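The windowing step is simple; here is a minimal Matlab sketch with simulated data (the sampling rate, threshold and signal are all made up, and a real classification would compare each window against the calibrated signature traces rather than a crude variance rule):

fs = 50;                              % sampling rate (Hz), assumed
acc = [0.02*randn(1, 8*30*fs), 0.5*randn(1, 2*30*fs)];  % fake trace: 8 quiet + 2 active windows
win = 30*fs;                          % samples per 30 s window
nWin = floor(numel(acc)/win);
active = false(1, nWin);
for w = 1:nWin
    chunk = acc((w-1)*win+1 : w*win);
    active(w) = var(chunk) > 0.01;    % crude placeholder rule: high variance = active
end
fprintf('active in %.0f%% of 30 s windows\n', 100*mean(active))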
And what does an echidna do with its day??
It turns out: not much. Many of our echidnas spent as much as 80% of their day hiding in rock caves, thinking about whatever it is that echidnas think about; ants, probably.
But from these short periods of activity we were able to get some interesting biomechanical data. Analysing the chunks of time when the echidna was moving, and running a simple Fourier analysis, allowed us to determine its stride frequency at different periods of the day. With some knowledge of the echidna's stride length, we should be able to work out things like speed during different activities, and more. This will all help us figure out the private life of echidnas.
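A minimal sketch of that Fourier step in Matlab (the chunk here is a synthetic walking bout at 1.5 Hz rather than real echidna data): the stride frequency is simply the dominant peak of the power spectrum.

fs = 50;                              % sampling rate (Hz), assumed
t = (0:30*fs-1)/fs;                   % one 30 s active chunk
chunk = sin(2*pi*1.5*t) + 0.3*randn(size(t));  % fake stride signal at 1.5 Hz

chunk = chunk - mean(chunk);          % remove the DC offset
n = numel(chunk);
P = abs(fft(chunk)).^2;               % power spectrum
f = (0:n-1)*fs/n;                     % frequency axis (Hz)
half = 2:floor(n/2);                  % positive frequencies, excluding DC
[~, i] = max(P(half));
fprintf('stride frequency ~ %.2f Hz\n', f(half(i)))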
Friday, 5 September 2014
Dissection of the hindlimb of monitor lizards: V. varius and V. komodoensis, Part 1 of 4, Superficial Dorsal aspect.
One of the major questions I am trying to answer in my research is how muscle and bone strains change with body size and habitat among Australia’s giant lizards, the varanids (aka monitor lizards aka goannas aka large uncooperative lizards).
When I first attempted to dissect the hindlimb muscles of monitor lizards, I was amazed at how little information there was on the topic. In the end the two most helpful bits of literature were the Snyder paper from 1954 and the book chapter 'The Appendicular Locomotor Apparatus of Lepidosaurs' by Russell and Bauer (2008), in Biology of the Reptilia, Vol 21.
Luckily I had a visit from muscle expert Taylor Dick, from Simon Fraser University Canada, and we were able to dissect some big lizards.
As a guide to help anyone else who might also be silly enough to want to follow this line of inquiry, we have made a guide below to help you identify some of the major muscles in the lizard hindlimb.
There will be 4 posts in total; this post will focus on the dorsal superficial aspect of the upper and lower hindlimb.
Varanus varius: This specimen was freshly sacrificed, and the muscles are very clear and easily defined. Click on the video below for a walk through.
Varanus komodoensis: Dissection of the Komodo dragon. This specimen had been frozen for 7 years, so the separation of the muscles is a little bit more difficult. Click on the video below for a walk through.
Monday, 4 August 2014
Evolution of bipedal running
Awesome lizard shot by Simon Pynt, which sadly the journal did not want on its cover.
This week my paper on the evolution of bipedalism came out in the journal Evolution. This work is part of a long, ongoing project to understand why these lizards sometimes run on two legs and sometimes on four, and why Australian agamid lizards in particular seem to be so very good at the former. But to understand this we need a little background on bipedalism and why lizards are so weird.
Awesome picture of a dinosaur I stole from the web, to make this blog post look cooler.
Bipedalism (running on two legs) evolved independently many times, for example in hopping marsupials (like kangaroos), hopping placentals (like kangaroo rats), primates (like us), birds, dinosaurs, lizards, insects, and this awesome octopus! In birds, primates and dinosaurs the forelimbs appear to be used for something else, so bipedalism makes sense, and hopping on two legs can save energy, but neither of these reasons seems to apply to lizards. Further, I showed in an earlier paper that bipedal lizards are not faster, nor can they run for longer. So why are they doing it?
Well, a Dutch researcher called Peter Aerts actually suggested a different reason. Perhaps, he suggested, lizards were not trying to run bipedally on purpose; rather they were trying to do something else, maybe trying to become more manoeuvrable. One way to become more manoeuvrable is to shift all your mass backwards (which makes it easier to turn corners) and then accelerate quickly. Unfortunately for the lizards, these things have a side effect: just as when a motorcycle accelerates too quickly, when a lizard shifts its mass backwards and accelerates too quickly, the front of the body can pop up, like it's popping a wheelie. Seen in this light, bipedalism in some lizards might have been an accident, just a consequence of accelerating too hard, and this seems to match some of the data I have collected before on lizards. There is certainly an acceleration threshold at which a lizard will pop up onto its two back legs, and a model produced by Peter Aerts even predicts when this should happen. And this model matches the data, for most lizards.
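A back-of-the-envelope version of this 'wheelie' condition, as a Matlab sketch (this is just the simplest possible moment balance with made-up numbers, not Aerts' actual model, which is considerably more sophisticated):

% The body pitches up about the hind foot contact when the inertial
% moment m*a*h exceeds the gravitational moment m*g*d, i.e. a > g*d/h.
g = 9.81;                             % gravity (m/s^2)
d = 0.04;                             % horizontal distance, hind contact to CoM (m), made up
h = 0.03;                             % height of CoM above the ground (m), made up
a_crit = g*d/h;                       % acceleration threshold for pitching up
fprintf('pitch-up above ~ %.1f m/s^2\n', a_crit)
% Shifting the CoM backwards shrinks d, lowering a_crit, so bipedal
% strides appear at gentler accelerations.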
Another awesome dinosaur photo I stole from the web. Geez, these dinosaurs would have been so bad ass!
The trouble is that some lizards seem to beat the model. Some lizards seem to be able to run bipedally for much longer, and at lower accelerations, than predicted by this accidental model. Are these lizards exploiting bipedalism, taking advantage of the accident? This is actually not so unheard of in nature; in fact it's common enough that scientists gave it a name: an exaptation (to differentiate it from the perhaps more familiar adaptation). Exaptations are exciting since they show us another way traits can evolve. In fact one of the most common examples of exaptation is the evolution of feathers in birds. Feathers, which we now commonly associate with flight in birds, did not originally evolve for this purpose. Lots of recent reports have shown us that the origin of feathers predates that of birds, and that feathers are present in dinosaurs, meaning they probably evolved for another reason, like keeping dinosaurs warm. It was only later that birds exploited these feathers to make their remarkable flying wings. So could bipedalism, like the feathers of birds, be an exaptation? This is what I set out to find.
Figure showing the evolution of the mass distribution among lizards. Red means a backwards shift, blue forwards. The bipedal lizards are marked with a red box.
First I had to know: do lizards (and their ancestors) which run bipedally have their body mass pushed backwards, perhaps to try to become more manoeuvrable? I looked at this across 124 species of lizards, basically any lizard I could get my hands on from the Queensland Museum, with the help of my student at the time, Nicolas Wu, and I found that yes indeed, the lineages leading to bipedalism had shifted their body centre of mass backwards.
Next I tested the model: I calculated (based on Aerts' model) when lizards should go bipedal, and then ran these lizards a bunch of times to measure the exact acceleration at which they switched from four legs to two. As predicted, some lizards matched the model quite well, but others were able to beat it, running bipedally sooner than expected.
This is one of the rare videos I got of a lizard transitioning from 4 legs to 2!
Finally we looked at where these differences are greatest in the family tree of Australian agamids. We found that the species that matched the model best occurred very early in the evolutionary tree, but as the tree branched out, the differences became greater: new species were beating the model more and more. All this adds up to one thing, an exaptation. Bipedalism first appeared on the scene a long time ago, and those that ran bipedally did so only by accident. But at some point some lizards started exploiting this, running bipedally further and more often than expected, taking advantage of the consequence. This is exciting since not only are we seeing an exaptation happen, but it means that running on two legs actually conveys an advantage to these lizards. Just what this advantage is, though, I have no idea... yet.
Tuesday, 10 June 2014
Notes on running large lizards over forceplates
Taylor Dick, SFU (probably didn't expect to be holding a lizard that big anytime during her stay).
Anyone who has had the misfortune of stumbling upon this blog, and particularly those who have suffered through many of its posts, might have noticed that one of the main themes is determining how muscle and bone strains change with body size and habitat among Australia’s giant lizards, the varanids (aka monitor lizards aka goannas aka large uncooperative lizards). Recently I convinced muscle expert Taylor Dick from SFU to come to Australia to study these questions, with the eventual goal of building a musculoskeletal model of these lizards in the open-source biomechanics software OpenSim. She had already endured one trip out to the Australian desert in order to catch these beasts, but more was yet to come.
Example output of the force plate from a dragon lizard, A. gilberti.
Force plate design, shown here without a plate on the top. Photo probably taken during one of its many repair attempts.
The second part of this project was to simultaneously measure forces and kinematics of these lizards, which would act as valuable input parameters for this model. To do this I had constructed a 15-metre-long racetrack at the university, along with a custom-made force plate, buried in the ground, for the lizards to run over. I will add more details on the force plate later, but basically it consists of 4 octagonal rings arranged at the corners of a metal plate. Each is capable of measuring forces in two directions, vertical and horizontal, and by positioning the octagons in adjacent corners at 90-degree angles to one another I was able to measure fore-aft, lateral and vertical forces. An added bonus of having a vertical force sensor in each corner is that I could accurately estimate the centre of pressure of the foot during the stride.
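The centre-of-pressure estimate from the four corner channels is just a force-weighted average of the corner positions; a minimal Matlab sketch (the plate dimensions and force values here are made up):

% Corner positions (m), origin at the plate centre.
cx = [-0.30  0.30  0.30 -0.30];       % corner x coordinates, made up
cy = [-0.20 -0.20  0.20  0.20];       % corner y coordinates, made up
Fz = [2.1 3.4 1.2 0.8];               % vertical force at each corner (N), made up

Ftot = sum(Fz);                       % total vertical force
xcop = sum(Fz.*cx)/Ftot;              % centre of pressure, x
ycop = sum(Fz.*cy)/Ftot;              % centre of pressure, y
fprintf('CoP at (%.3f, %.3f) m, total Fz = %.1f N\n', xcop, ycop, Ftot)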
Taylor with probably not enough laptops
One of our uncooperative lizards
My thought was that building the force plate would be the most difficult part of this project, and having already done so, I was under the ill-begotten conclusion that the hardest part was over. But yet again I had forgotten the type of lizards I was dealing with, and the kind of misfortune which befalls a scientist. We lost several days battling noisy electrical systems in the animal yards, and were forced to take several trips to and from the lab to repair the force plate, which I had so lovingly crafted for lab work, but which was no match for the real world with its various sharp protruding objects. When we finally had a working system, two high-speed cameras on a custom-built scaffold combined with the force plate, it seemed again the worst was over.
trying to lead via example.
We began running the lizards - and it was time for the lizards to shine. Yet by the end of several hours of work we had only a handful of useful trials. It seems the lizards, for reasons I can only assume are nefariously motivated, refused to step wholly on the platform, instead preferring the mind-blowingly frustrating alternative of stepping neatly to one side of the force plate, or on the edge of the platform, such that while data temptingly appeared upon the screen, it was utterly useless, since the proportion of the force directed onto the force plate could not be known.
We attempted everything under the sun to encourage the lizards to step onto the force plate, including doubling the size of the plate, halving the width of the racetrack, and painting it wholly black, such that it matched the surrounding carpet and could not be mistaken for the presumably treacherous hazard it appeared to be, to be avoided at all costs. Yet none of these appeared to increase our success rate, which was as low as 1 in 30 runs, as can be seen in the video below.
However, perseverance paid off, and by the end of the week we had collected over 60 successful trials from these giant, largely uncooperative beasts. Below is a video of the rare and elusive "successful trial".
Thursday, June 30, 2016
1- Change is not merely necessary to life - it is life. ( Alvin Toffler)
2- Change is the process by which the future invades our lives. (Alvin Toffler)
3- Man has a limited biological capacity for change. When this capacity is overwhelmed, the capacity is in future shock. (Alvin Toffler)
4- The illiterate of the 21st Century are not those who cannot read and write but those who cannot learn, unlearn and relearn. (Alvin Toffler)
5- The future always comes too fast and in the wrong order. (Alvin Toffler)
6- Knowledge is the most democratic source of power. (Alvin Toffler)
7- One of the definitions of sanity is the ability to tell real from unreal. Soon we'll need a new definition. (Alvin Toffler)
8- The great growling engine of change - technology. (Alvin Toffler)
9- Our technological powers increase, but the side effects and potential hazards also escalate. (Alvin Toffler)
10- Technology feeds on itself. Technology makes more technology possible. (Alvin Toffler)
11- It is better to err on the side of daring than the side of caution. (Alvin Toffler)
12- Rational behavior ... depends upon a ceaseless flow of data from the environment. It depends upon the power of the individual to predict, with at least a fair success, the outcome of his own actions. To do this, he must be able to predict how the environment will respond to his acts. Sanity, itself, thus hinges on man's ability to predict his immediate, personal future on the basis of information fed him by the environment. (Alvin Toffler)
13- Change is the only constant (Heidi Toffler)
a) In memoriam Alvin Toffler
So Alvin Toffler died last Monday, and I remembered with deep nostalgia reading first "Future Shock" and later "The Third Wave", both important and influential. Toffler's role is explained in the following sentence:
"His insights about how society behaves when too much change happens too quickly helps to guide our new direction for The World Future Society."
We (I am speaking about my wife and me, plus our closer circles) liked these writings very much; Toffler had a huge talent for literature and persuasion. However, back then we were living in an anomalous society (here the term is perfect; for LENR it must be avoided!) - see my Septoe:
42. The Future Shock was amortized by irrationality.
Then, after 1990, our world became a bit more rational politically, socially and economically speaking, and the Future has arrived, is accelerating, and we can participate more actively in it. It did not happen exactly as Toffler predicted, but we have understood that in predictions there are things fundamentally more important than inerrancy, such as catching the spirit of the time, and Toffler has done this masterfully. Did he predict the Internet/Web? It seems yes, in a way!
Perhaps Toffler exaggerated a bit with the SHOCK; I asked my 0-year-old grand-daughter Nora if it was difficult to advance from Mama's laptop to her own tablet, and then to the smartphone she received on her birthday. Not at all; it was much more painful to learn to read, write and do the basics of math. IT is more human and rational.
b) LENR's specific shock(s)
The case of Cold Fusion/LENR: its past was shocking enough, its present is a real shock, and its future has to be made one too, but in the best sense!
I was repeatedly shocked by the slow development of the LENR field, in contrast with the Tofflerian future in action. Now, when this has started to change, dark forces conspire to kill the LENR technology dream.
Please complete the details yourself.
Of Mice, Materials and Men
Who is talking about LENR on social media forums?
A poem for IH and their silence at the death of their LENR dreams
Do not go gentle into that good night
Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Though wise men at their end know dark is right,
Because their words had forked no lightning they
Do not go gentle into that good night.
Good men, the last wave by, crying how bright
Their frail deeds might have danced in a green bay,
Rage, rage against the dying of the light.
Wild men who caught and sang the sun in flight,
And learn, too late, they grieved it on its way,
Do not go gentle into that good night.
Grave men, near death, who see with blinding sight
Blind eyes could blaze like meteors and be gay,
Rage, rage against the dying of the light.
And you, my father, there on the sad height,
Curse, bless, me now with your fierce tears, I pray.
Do not go gentle into that good night.
Rage, rage against the dying of the light.
Space team discovers universe is self-cleaning
it is also about a source of cosmic energy
A Good slogan: "Face your fears"
Fear comes from a deep-seated place of feeling rather than thinking.
(Jeff Wise)
tensions of modern learning
Wednesday, June 29, 2016
The Rhino Principle.
A rhino is not a particularly subtle or intelligent creature, yet it has managed to dominate the savanna through sheer determination and aim. It takes initiative when it sees something it wants and puts everything into what it does best: charge!
I've always been suspicious of collective truths. (Eugene Ionesco)
a) Why is IH fabricating so many memes?
More readers have asked why the supporters of IH try to create so many anti-Rossi and, again, so many pro-IH memes. A first aspect showing they are not good in the art of memetics is the 'optimal density' of memes. Exactly as in agriculture, where there is an optimal density of plants in a crop, memes that are too dense start to mutually annihilate each other; just remember the incompatible 'the plant is impossible' (people would have been killed by heat that could not be removed) and the other, 'the plant does not work at all'. One wonders if a list of memes has been made and used according to a plan, or whether an 'anything goes' mentality rules without control.
Why do they do this? Because, very probably, they do not have anything better! If they did, they would not have made another motion to dismiss. The situation is: Rossi wants to go to Court, IH wants to escape. You can guess three reasons why this is so... please!
b) The stoppable subtleties of Jed Rothwell's personal memes
Admirably hyperactive for his followers, Jed Rothwell has created his own discussion thread, serving simultaneously as a meme generator, a guillotine for everything Andrea Rossi, and a training camp for his very specific subtleties.
Here it is:
I was wrong about Rossi, but what I fear most is that I might be partly right
Subtle as a rhinoceros, Jed writes here about the very document that has irreversibly convinced him that there was no excess heat in the 1MW experiment - he has received this from Andrea Rossi.
Subtle as a rhinoceros, he seemingly suggests here: "Andrea Rossi is a collection of sins and evilness; however, even he is able to appreciate a genius, a genuine high-class expert, and this must be the reason that this otherwise very secret document has found me. Unfortunately, contrary to Rossi's expectation, the paper has convinced me quasi-instantly, but beyond any doubt, that the plant does not work at all, and the Test is worse than a disaster."
The text is imagined, but this must be the idea. A possible - but morally impossible! - alternative is that Jed has invented the ghost-document story as, again, a subtle trap for Rossi, who, desperate at seeing that Jed is convincing everybody that there was no excess heat, will come out with his data, and Jed will officially massacre and annihilate them - an easy prey for him! Rossi did not want to comment on Rothwell's calorimetric genius; he knows why!
I have to confess that I have not searched the rothwellisms thoroughly - surely I will not be one of his biographers - however, the following statement, a strong pro-IH meme, has an even higher degree of rhinocerian subtlety; it is text and context, not an extract:
without which this field would be dead, dead, dead."
Some critics have found it needs a few definitions; then again, Jed has never spoken about the grace of God, as far as I remember. The message is crystal clear: "IH is the savior of LENR!" Isn't it too much to say that without money from IH, LENR would be three-times dead; is simply dead not sufficient? Would money from SKINR, possibly from the ARMY, and money outside the US as in Japan, India, Russia, China, the EU etc. not be able to keep LENR in at least a half-dead state? For Jed, is the idea that LENR needs new ideas and young researchers even more than (micro-)funding too subtle? He says directly to all LENR researchers: "Be against Rossi, be with IH - otherwise your research will die, die, die!" Further - no comment!
c) Re-read Eugene Ionesco's play- we LENR fighters have to avoid rhinocerization!
There is plenty of absurdity in this process of creating the memes of IH. I well remember this play about people losing their humanity.
I am terrified and will not tell more; my readers can decide if the meme factory has something rhinocerian in it or if, on the contrary, it is a congregation of angels!
However, I will finish in Jed's NEW style, asking:
"For God's sake, IH please go boldly and openly to the Trial!"
1) Is Clueless Jed Rothwell Paid or Played to Slander Penon and the ERV Reports on the MW COP~50 E-Cat Plant?
2) Andrea Rossi answers
Gerard McEk
June 28, 2016 at 8:11 AM
Dear Andrea,
You recently said that the light of the QuarkX has given you an idea how the Rossi-effect may work. (In other words: you may have seen the light).
1. Do you make any progress with the theory and
2. Do you expect it to lead to new patents?
In the past you said that you were preparing many patents.
3. Do you expect some of these to be published soon?
4. Is there any progress in the domestic QuarkX or
5. Do you expect the lower temperature Ecat to be the most suitable solution?
Thank you for answering our questions.
Kind regards, Gerard
Andrea Rossi
June 28, 2016 at 3:55 PM
Gerard McEk:
1. yes
2. yes
3. no
4. yes
5. I do not know yet
Thank you for your attention,
Warm Regards,
3) Andrea Rossi does not answer and does not comment:
June 28, 2016 at 1:52 PM
Dear Dr Andrea Rossi:
sifferkoll link given here
My comment: IH again tries to escape from the litigation. If 1/100 of the slanders and lies deposited in the blogs by the mad dogs of IH were true, IH would be eager to go to court… the fact that they are trying to delay and suffocate the litigation makes clear that they are afraid of it.
Evidently they know that you have evidence that will defeat them in Court, where what counts is not the chattering of the mad dogs, but the real evidence.
In fact it appears that you are fighting to go to Court, while they are trying to run away.
4) Russian language video: News re LENR and CNF philosophical storm:
Seminar: "Philosophical Storm" June 28, 2016 presentation of Igor Iurievich Danilov
First part: QuarkX of Andrea Rossi
Second part: Microbes of Tamara Vladimirovna Sahno and Viktor Mihailovich Kurashov
5) Did Jed Rothwell Admit Being an IH Contracted Spin Doctor with a Freudian Slip?
The correct link to the Calaon paper is this:
Understanding of molecular hydrogen has implications from industry to medicine
Tuesday, June 28, 2016
Memecracy: rule or domination by a meme or memes, which are cultural practices or ideas that are transmitted verbally or by repeated actions from one person's mind to the minds of other people.
My Septoe: "20. We live in memecracies, ideas dominate us."
In the original introduction to the word meme in the last chapter of 'The Selfish Gene,' I did actually use the metaphor of a 'virus.' So when anybody talks about something going viral on the Internet, that is exactly what a meme is, and it looks as though the word has been appropriated for a subset of that. (Richard Dawkins)
a) IH's plan seems to be based on memes - killers for Rossi and friendly for them
'Meme', the cultural equivalent of 'gene', is a concept and word of vital importance; however, paradoxically, it is not a strong meme itself - it is a bit too intellectual. Still, you cannot think well if you do not consider the existence of memes. I have written a lot about them, including in this Blog.
If you are not familiar with memes please read at least:
It is my pleasure to announce to you that once again memes have helped me to solve a problem I at first found very difficult - in retrospect I was slow, non-creative and rigid in thinking.
It is about the enigma of the furious and seemingly senseless character-, plant- and technology-assassination campaign of the IH propagandists led by Jed Rothwell (see a new opus by him below).
Why, for Hermes's sake, if they are right and can automatically win the Trial?
Why, for Minerva's sake, if they are wrong, does it help when facts speak at the Trial?
First, it is obvious that IH manifests a totally negative enthusiasm toward the Trial and tries very hard to escape from it - see the papers at 4) Legal battle. There are no traces of the noble spirit of "Fiat justitia, et pereat mundus" - justice at any price; but perhaps the cost is too high and the chances to win not so very high.
So what they actually do is clear:
Stay calm but angry, inventive, efficient, and make MEMES- two types:
A- Killer anti-Rossi and anti- what belongs to Rossi memes;
B- Friendly nice pro-IH memes
A-memes are cheap, free, but B-memes have a cost and need more fantasy... and money. For that, please read the opinions of Doug Marker below.
The plan is to disseminate these memes on the Web and make them contagious; the Press, public opinion and perhaps even the jurors of the Court will be memefied, so the 'obviously good' side will tremendously increase its chances of defeating the 'evidently malefic' one.
We live in memecracies. Indeed?
b) Jed Rothwell's new opus
"Did Rossi and IH have a valid contract that states that if the general performance test were successful . . ."
I have not read the contract carefully, and I know little about contracts. Here is what I know: the performance was not successful. The data from Rossi proves that the machine did not work.
"IH has not paid and said the Test was not good; where is the first written document with serious warnings from IH to Rossi saying this; was it after the 1st, 2nd or 3rd ERV report?"
This is not a technical question. This has no bearing on calorimetry or science. This question illustrates how you have missed the point. You cannot judge a technical question by looking at people's behavior, or by examining business contracts. This question is fluff.
The dispute between Rossi and I.H. is about calorimetry. It is about flow rates, temperatures, steam quality and instruments. Your questions are irrelevant. Even if you knew the answers, they would not bring you one millimeter closer to knowing whether the machine worked or not.
Instead of waiting to learn the technical details, you obsess over these unrelated, non-scientific questions and gossip that has no bearing on the technical issues. I do not understand how a person with a technical background could make such a mistake.
So, dear Jed, there are only technical questions; yet later you say we have to apply the Scientific Method. Please apply it to Rossi's question regarding the persuasion of the investors, OK?
1) Ok, So What Did Really Happen When Industrial Heat F*cked Up the Deal with Leonardo/Rossi? And Why?
2) Jones Day Lawyer Drones on Repeat in Another MTD. However, again Showing the Malicious Intent of IH!
6) The mystery of the irrational withdrawal of the ECAT support
7) TheNewFire - LENR News
8) Yet Another LENR Theory: Electron-mediated Nuclear Reactions (EMNR)
9) Andrea Calaon∗ Independent Researcher, Monza, Italy
An attempt is made to build an LENR theory that does not contradict any basic principle of physics and gives a relatively simple explanation to the plethora of experimental results. A single unconventional assumption is made, namely that nuclei are kept together by a magnetic attraction mechanism, as proposed in the 1980s of the past century by Valerio Dallacasa and Norman Cook. This assumption contradicts a non-proven detail of the standard model, which instead attributes the nuclear force to a residual effect of the strong interaction. The theory is based also on a property of the electron which has been known for long, but has rarely been used: the Zitterbewegung (ZB). This property should allow the magnetic attraction mechanism that binds nucleons together, to manifest also between the electron and any isotope of hydrogen, leading to the formation of three neutral pseudo-particles (the component particles remain separate entities), collectively named here Hydronions (or Hyd). These pseudo-particles can then couple with other nuclei and lead to a fusion reaction “inside” the electron. The Coulomb barrier is not overcome kinetically, but through what could be interpreted as a range extension of the nuclear force itself, realized by the electron when some specific conditions are satisfied. The most important of these necessary conditions is that the electron has to “orbit” the hydrogen nucleus at a frequency of 2.055 × 10^16 Hz. This frequency corresponds to photons with an energy of about 85 eV or equivalently a wavelength of 14.6 nm in the Extreme Ultra Violet (EUV). So the large quanta of nuclear energy fractionate into EUV photons during the formation of the Hydronions and during the coupling of Hydronions to other nuclei. The formation of Hydronions requires the so called Nuclear Active Environment (NAE), which is what makes LENR so rare and difficult to reproduce. The numbers suggest that the NAE forms when an unshielded atomic core electron orbital that has an “orbital frequency” near to the coupling frequency is stricken by a naked Hydrogen Nucleus (HNu). This theory therefore implies that the NAE is not inside the metal matrix, but in its immediate neighbourhood. The best candidate atoms for a NAE are listed, based on the energy of their ionization energies. The coincidence with the most common LENR materials appears noteworthy. The Electron Mediated Nuclear Reactions (EMNR) theory can explain also very rapid runaway conditions, radio emissions, biological NAE, and the so called “strange radiation”.
© 2016 ISCMNS. All rights reserved. ISSN 2227-3123. Keywords: EMNR theory, Extreme ultra violet, Hydronion,
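As a quick sanity check of the numbers quoted in the abstract, E = hf and lambda = c/f can be verified in a couple of Matlab lines (a trivial sketch; constants rounded):

h = 4.1357e-15;                       % Planck constant (eV*s)
c = 2.9979e8;                         % speed of light (m/s)
f = 2.055e16;                         % quoted electron 'orbital' frequency (Hz)
E = h*f                               % ~ 85 eV, as stated
lambda = c/f*1e9                      % ~ 14.6 nm, in the EUV, as stated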
10) Electron Deep Orbits of the Hydrogen Atom
J. L. Paillet-1 , A. Meulenberg- 2
1 Aix-Marseille University, France,
2 Science for Humanity Trust, Inc., USA,
This work continues our previous work [1] (and, in a more developed form, [2]) on electron deep orbits of the hydrogen atom. An introduction shows the importance of the deep orbits of hydrogen (H or D) for research in the LENR domain, and gives some general considerations on the EDO (Electron Deep Orbits) and on other works about deep orbits. A first part recalls the known criticism against the EDO and how we face it. On this occasion we highlight the difference in the resolution of these problems between the relativistic Schrödinger equation and the Dirac equation, which leads, for the latter, to considering a modified Coulomb potential with finite value inside the nucleus. In the second part, we consider the specific work of Maly and Va’vra [3], [4] on deep orbits as solutions of the Dirac equation, the so-called Deep Dirac Levels (DDLs). As a result of some criticism about the matching conditions at the boundary, we verified their computation, but by using a more complete ansatz for the “inside” solution. We can confirm the approximate size of the mean radii of DDL orbits and that it decreases when the Dirac angular quantum number k increases. This latter finding is a self-consistent result since (as distinct from the atomic-electron orbitals) the binding energy of the DDL electron increases (in absolute value) with k. We observe that the essential element for obtaining deep orbit solutions is special relativity.
Some thoughts on why Jed Rothwell is so surprisingly persistent with his comments and claims that the entire Rossi 12 month test was a failure (and, by extension, his blunt claims that Andrea Rossi and his claims are dishonest).
Firstly, those of us who made published comments about IH's highly questionable behavior (such as my own comments published here) homed in on these aspects ...
1) That IH had paid 2 lots of money to Rossi for eCat tech
2) That IH have been in a relationship with Rossi for close on 3 years and not given any prior clear indication before the 12 month test of any issues with that relationship.
3) That Rossi claimed that IH used the early phases of the 12 month test to do fundraising (this was very damning to IH's position).
4) Especially, that when this all exploded, the loudest anti-Rossi voices were almost all known anti-LENR people as well, and thus were clearly biased, opportunistically leaping in to exploit the Rossi-IH rift while still using the situation to attack LENR in general.
What I am seeing is that IH have adjusted their tactics in their publicity battle with Andrea Rossi by privately enlisting the support of people we all know are pro LENR (such as Jed Rothwell - and 1 or 2 others). Jed is currently, and without doubt, a champion for IH's position. The issue you (Peter) raise regarding this is how Jed is able to be so certain when few others have access to whatever it was/is that he has been given access to.
The outcome: Jed is proclaiming (probably with some sense of justification) that we now need to accept 'Jed says' vs 'Rossi says'.
IMHO, it is clear that IH have set out to enlist the support of recognized pro LENR identities to counter the quite effective anti IH messages that Andrea Rossi put out around the time he filed his lawsuit.
But the questions you (Peter) raise are valid questions and deserve to be answered. Clearly there will be no answers from Jed, who argues he doesn't know anything other than the material passed to him to enlist his active anti-Rossi remarks.
So, by enlisting people who are known to be pro LENR, IH are effectively countering the harsh criticism some of us directed at them, and also those people who we know are both anti-LENR and anti-Rossi and who leaped on the IH bandwagon as champions of their battle with Rossi.
It seems to me IH saw itself in difficulty in the word war until it was able to associate its position with pro-LENR people and break away from only being supported by the opportunity grabbing anti-LENR voices we all know so well.
In regard to IH actively enlisting pro-LENR people to publicly join in their defense, if I were in their shoes (they are clearly in a difficult position) I would do what they are doing too.
But, it does raise the question as to what kind of inducements or assistance that IH might be offering these people to go public on their behalf and in their defense against Andrea Rossi.
I know that all LENR researchers need and seek support so when pro-LENR people clearly become vociferous supporters of the IH position against Andrea Rossi, it justifiably raises the question as to what 'rewards' tangible / intangible, these pro IH voices are being offered.
All such questions are valid and deserve answers.
Doug Marker
Monday, June 27, 2016
What is impossible in LENR? That Andrea Rossi will give it up before everybody is able to get energy from it.
That is the only impossibility I can be sure of. (Andrea Rossi)
“Citius, Altius, Fortius.
Faster, Higher, Stronger.” (Olympic quote)
In soccer, LENR and Life, Citius always wins!
Yesterday my wife and I were watching soccer, UEFA EURO 2016, Belgium vs. Hungary. After a few minutes, remembering my career as an apprentice chronometrist for athletics in the 1950s, my father acting then as coach, I told my wife: "See, the Belgians are running so much faster, under 10.5 seconds per 100 meters, while the Hungarians are slower than 11 seconds per 100 meters. It will be a catastrophe."
Something similar happened a few days ago when the Albanians "outran" the Romanian team. Fastness is so important in other sports too. I remember that as a kid I was sent to maestro Pellegrini's fencing school, and he was contented with me - clumsy, but with very fast-moving hands. It was in our period of reading Dumas and Michel Zevaco etc., so there was a cult of fencing and duelling; that is over, but now I have some duels with LENR people, as below.
It is not easy to speak about fastness in classic LENR. However, Rossi obviously loves speed, including in development.
b) Facts can be understood only in their context.
Jed Rothwell:
You have not seen the data, so you have no basis to be convinced. Or not convinced. This is a technical issue. Opinions don't count. Everything hinges on flow rates, temperatures, instrument specifications, and so on. Based on these factors, experts at I.H. concluded that the reactor is not producing any excess heat. I am far less capable than those experts, but to the best of my ability, looking at a sample of that data, I too reached that conclusion.
You, Peter Gluck and everyone else will have to wait to see the data, and also the analysis of it from Rossi and from I.H. You cannot decide anything until then. You cannot even have an opinion. The rules of engineering and science say that every judgement must be grounded in facts, and you have no facts.
I think it is a grave mistake for Peter to assume he knows what is going on, and to assume that Rossi is right in this dispute, and that I.H. and I are lying. Since he has no facts, this reaction is purely emotional. It is irrational. Since he has no engineering details, he trots out all kinds of half-baked notions about business contracts, or the timing of announcements, or he quotes lies spread by Rossi -- as if you can draw a technical conclusion from such fluff! It is pathetic.
Peter is wrong. He will regret it if the facts are ever revealed. In science, you must never let your emotions or wishful thinking overrule rational, objective, fact-based analysis.
Jed continues stubbornly not answering my 5 'stupid, nosy and irrelevant' questions and, as a symptom of something I still do not want to define exactly, he answers an imaginary question I have never put.
This question, his, not mine, can be formulated as:
"Rossi says the test was good, IH says the test was bad. Being a Rossi fan and having ab ovo great prejudices against IH, I believe Rossi. Why, on what basis, are you, Jed, certain that IH is right?" IT IS A NONEXISTENT QUESTION!
I will repeat and explain my questions in a form a bit more accessible for you, supposing you are 100% right: Rossi wrong, IH right. We will then establish together whether this manoeuvre contributes to the missing IQ of the questions, makes them less impertinent, and gives them a minimum of sense and relevance.
NOTE. I see that logical, rational, straight thinking and discussion are not on your list of strengths, so I must have more patience with you; but first I want to tell you about FACTS, which are your privilege and not given to ordinary people:
Facts have significance only in context.
A first fast example. You read:
"Edmond Dantes has mercilessly ruined the lives of three rich and happy men"
What a sadistic rascal is the natural reaction to this but if you put the fact in proper context - the story of Count Monte Cristo by Alexandre Dumas= changes the understanding of the facts completely, isn't? Now your facts being OK let's return, for the lasttime to the lowly evaluated questions
1- Did Rossi and IH have a valid contract that states that if the general performance test were successful, they should pay a great sum to Rossi? Possible answers: Yes, and No - the contract was broken by IH.
Seemingly, facts missing, it was not, and this has opened Rossi's way to a Trial. Jed, please do not tell me that IH is happy with the Trial; I am stupid, but not sooo stupid!
2- It means, if Rossi's results were indeed such a total and continuous catastrophe, why maintain the contract... for whose sake (the Greek god of greed)?
What could be confidential or secret in such a document of angelic honesty - "you are in trouble, we do not see excess heat even with the magnifying glass!"? Harmony between thoughts, words and actions is essential even for a company. It did not happen even at the end of the test or at the receipt of ERV report no. 4; it happened when the Trial started. What is your fact, and in what context?
3- IH employees participated in the test in parallel with Rossi's men; is there a written document showing they were in any way discontented with the test, the test being “a disaster”?
Is this a toxic question? Rossi says they were there - what is the fact that you know and that those who ask do not?
4- When was the total incompetence of the ERV discovered, i.e. the inadequacy of the measuring instruments, and when was it stated that the measurements are fatally flawed? (Is there a document dated in 2015?)
As far as I can understand the methods of measurement were the same- dreadful from start o finish, they were never good then. This is a sad but explosive fact in the context of a vlid 94 milliion contract.
for a successful test.
5- Rossi claims: "All I know is that Darden and JT Vaughn collected $150 million after the test of the 1 MW E-Cat began, using the first and second reports of the ERV as a tool to get the money; then after the 4th report (equal to the former ones) they said what they said and did not pay." Is this slander or a false accusation?
This question deserves its color; it could be an infamous accusation, but it comes from Rossi, and who knows the facts related to the 1MW plant better? Is it a stupid question? Not at all, because it is disturbing. It is nosy only if it is completely false. It is not relevant for Jed, but it can be relevant for many people, some of them quite influential, due to its deeper significance.
So, Jed, I ask you not to invent my questions, to retract at least "nosy", and to feel free to play with your facts,
which are flawed like ultraviolet unicorns - invisible, intangible, unverifiable, with missing birth certificates -
and... prepare to get facts from the trial. My own sources say - but I cannot reveal their identity - that
the trial will take place in the first 5 days of September. A new rule: all the witnesses will be obliged to take an IQ test before testifying. Have you arranged this?
1) Excess Heat Generation in Ni + LiAlH4 System (New Report by I.N. Stepanov and V.A. Panchelyuga)
2) LENR afternoon with Ubaldo Mastromatteo- more videos
Pomeriggio Lenr Ubaldo Mastromatteo (5)
Claudio Pace has to ask Ubaldo to send us the text! It is not yet clear to what extent it is about LENR in the frame of rational mysticism.
3) An interesting paper signalled by EGO OUT on June 25, is discussed here:
[Vo]:Ukrainian Paper on the active particle of LENR
4) A cold fusion paper in Dutch:
5) Andrey Illich Fursov (Russian historian, sociologist, politologist and publicist): About the Nuclear Cold Fusion of Ivan Stepanovich Filimonenko
6) On June 21, 2016, in Geneva, Switzerland, there was a press conference about an epochal discovery of transmutation of chemical elements by a biochemical method.
The press conference was attended by Tamar Sahno and Viktor Kutashov, the scientists
who made this discovery, and Vladislav Karabanov, administrator and leader of this project.
Link to the patent for this invention
Very interesting; I have started to discuss this with Vladimir Vysotskii - he is the greatest specialist in biochemical transmutations.
7) Also see the above info, here:
Russian Team “Actinides” Announces Discovery of Industrial Biochemical Method of Elemental Transmutation (Press Conference and Press Release)
8 ) Greg Goble
Energy 54+ Black Swans listed by Paul Maher
Umair Haque: "The Art of Awakening"
It is time for a LENR awakening!
Why rudeness at work is contagious and difficult to stop |
fff02574615626ce | Wave function
From Wikipedia, the free encyclopedia
In quantum mechanics, the wave function, usually represented by Ψ or ψ, describes the probability of finding an electron somewhere in its matter wave. More precisely, the square of the absolute value of the wave function gives the probability of finding the electron in a given region, since the value of the wave function itself is in general a complex number. The wave function concept was first introduced in the legendary Schrödinger equation.
Mathematical Interpretation
The formula for finding the wave function (i.e., the probability wave) is the time-dependent Schrödinger equation:
$$i\hbar\,\frac{\partial}{\partial t}\,\psi(x,t) = \hat{H}\,\psi(x,t)$$
where $i$ is the imaginary unit, $\psi(x,t)$ is the wave function, $\hbar$ is the reduced Planck constant, $t$ is time, $x$ is position in space, and $\hat{H}$ is a mathematical object known as the Hamiltonian operator. The symbol $\partial/\partial t$ indicates that the partial derivative of the wave function with respect to time is being taken.
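As a quick illustration of the probability rule above, here is a minimal numerical sketch (not part of the original article; the Gaussian packet and the interval are arbitrary choices made here):

```python
import numpy as np

# Born rule: the probability of finding the particle between a and b is
# the integral of |psi|^2 over [a, b], once psi is normalized.
x = np.linspace(-10.0, 10.0, 4001)
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 1.5 * x)  # Gaussian envelope with a plane-wave phase
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))       # normalize so total probability is 1

a, b = 0.0, 1.0
mask = (x >= a) & (x <= b)
p_ab = np.trapz(np.abs(psi[mask])**2, x[mask])
print(f"P({a} < x < {b}) = {p_ab:.4f}")           # about 0.42 for this packet
```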
Related pages |
fdca52e96357439a | Laser stripping of hydrogen atoms by direct ionization
Brunetti, E and Becker, W and Bryant, H C and Jaroszynski, D A and Chou, W (2015) Laser stripping of hydrogen atoms by direct ionization. New Journal of Physics, 17. ISSN 1367-2630
Final Published Version. License: Creative Commons Attribution 4.0
Direct ionization of hydrogen atoms by laser irradiation is investigated as a potential new scheme to generate proton beams without stripping foils. The time-dependent Schrödinger equation describing the atom-radiation interaction is numerically solved obtaining accurate ionization cross-sections for a broad range of laser wavelengths, durations and energies. Parameters are identified where the Doppler frequency up-shift of radiation colliding with relativistic particles can lead to efficient ionization over large volumes and broad bandwidths using currently available lasers. |
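The numerical strategy summarized in the abstract can be sketched in a few lines. The following is an illustration only, not the authors' code: the soft-core potential, grid, pulse parameters, and the crude bound/ionized split are all assumptions made here (atomic units, no absorbing boundary).

```python
import numpy as np

# Split-step Fourier propagation of a 1D time-dependent Schroedinger
# equation for a soft-core "hydrogen" atom in a laser pulse (atomic units).
N, L = 2048, 200.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V0 = -1.0 / np.sqrt(x**2 + 2.0)              # soft-core Coulomb potential (assumed)
E0, omega, T = 0.05, 0.057, 800.0            # field amplitude, frequency, duration (assumed)

# Crude ground state via imaginary-time relaxation.
tau = 0.05
psi = np.exp(-x**2).astype(complex)
for _ in range(2000):
    psi *= np.exp(-0.5 * tau * V0)
    psi = np.fft.ifft(np.exp(-tau * k**2 / 2) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * tau * V0)
    psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

# Real-time propagation in the length gauge: V(x, t) = V0 + E(t) * x.
dt = 0.05
steps = int(T / dt)
for n in range(steps):
    E_t = E0 * np.sin(np.pi * n * dt / T)**2 * np.cos(omega * n * dt)
    Vt = V0 + E_t * x
    psi *= np.exp(-0.5j * dt * Vt)
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * dt * Vt)

bound = np.abs(x) < 20.0                     # crude bound/ionized separation
print("survival probability:", np.trapz(np.abs(psi[bound])**2, x[bound]))
```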
1b1f284efd332669 |
Order this book, take two weeks off work and enjoy: amazon.co.uk/Emperors-New-Mind-Concerning-Computers/dp/… – MoonKnight May 8 '13 at 14:27
Possible duplicate: physics.stackexchange.com/q/7/2451 – Qmechanic May 8 '13 at 15:00
"Not only does God play dice, but... he sometimes throws them where they cannot be seen." Stephen Hawking – Dan Neely May 8 '13 at 17:28
Does this necessarily mean that the universe isn't deterministic though? Doesn't this just affect what we can determine based on what we can observe? – mowwwalker May 8 '13 at 16:21
@Walkerneo: According to the Copenhagen interpretation of QM, it does mean the universe is non-deterministic. There are other interpretations of QM which allow for determinism though. Which is the correct interpretation (if any)? Currently, no one knows. – BlueRaja - Danny Pflughoeft May 8 '13 at 19:14
Other good references for understanding what decoherence can and cannot say about the measurement problem: Schlosshauer, Zurek. (That second one was my advisor.) – Jess Riedel Nov 14 '13 at 10:19
It doesn't matter whether we talk about the Schrodinger equation or any other wave equation such as the Dirac equation. They all give deterministic evolution of the wavefunction. – Ben Crowell May 10 '13 at 14:43
Fair enough. Anyway, Alex gave a much better answer so I should just have kept my keypad shut. – Groda.eu May 10 '13 at 14:56
Schrödinger equation works just fine in relativity. It takes on the form $i\hbar \partial_t \Psi[\phi] = H \Psi[\phi]$, where $\Psi[\phi]$ is a wave-functional over the configuration space of field configurations $\phi$. It's very much fundamental to quantum mechanics, it seems. It's just not a field equation any more. – lionelbrits Nov 15 '13 at 0:21
Lastly, can you please elaborate the last part of your answer? I don't see how QM would be contradictory. – Alex A May 10 '13 at 16:47
And now the questions:
If effects preceded causes, would the universe be deterministic?
Well yes, one can always make an appeal to the idea that everything we experience is part of a larger thing which is unlike everything we experience. But in what sense is our understanding advanced by such claims? What test would show that they are wrong if they are wrong? – Andrew Steane Jun 16 at 13:46
The whole question of a totally deterministic universe has other connotations.
However, I will throw another answer here.
Physics is not competent to provide a definitive answer to this question, but we can approach it in a reasonable way and point to evidence one way or the other.
The physics that we call 'fundamental' is, at the moment, general relativity and quantum field theory, and combinations thereof. The equation of motion in such theories is deterministic. It is difficult to propose non-deterministic equations of motion without thereby proposing unlikely things such as faster-than-light signalling. So this part of the evidence points to determinism.
Non-determinism is indicated by chaos theory in classical physics, by the unresolved issues surrounding the quantum measurement problem, and chiefly by a third property which I will come to in a moment.
In chaos theory trajectories diverge exponentially, but numbers in science are never precise. This makes it debatable whether or not classical physics is truly deterministic, because the idea of an exact real number in the physical world is sort of a fantasy. As soon as there is a limit to precision, it can get magnified by this exponential sensitivity and soon come to influence macroscopic phenomena. So, within the domain of classical physics, determinism relies on a degree of precision in the physical quantities which may be an unphysical requirement for all we know.
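A toy numerical illustration of this sensitivity (my example, not part of the original answer): two logistic-map trajectories whose initial conditions differ by one part in $10^{12}$ separate at an exponential rate.

```python
# Two logistic-map trajectories, initial conditions differing by 1e-12.
r, x1, x2 = 4.0, 0.300000000000, 0.300000000001
for n in range(60):
    x1, x2 = r * x1 * (1 - x1), r * x2 * (1 - x2)
    if n % 10 == 9:
        print(f"step {n+1:2d}: |x1 - x2| = {abs(x1 - x2):.3e}")
# The separation grows roughly exponentially until it saturates at O(1):
# any finite precision in the initial data is eventually amplified until
# it dominates the state.
```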
So far I have written enough to show that we do not know whether physical behaviour is deterministic. Now I will put forward what I think is strong evidence that it is not. This is the behaviour of our human bodies and brains which enables us to be reasonable---that is, to engage in reasoned argument and come to understand mathematics and other things on the basis of their reasonableness. If our brains were totally deterministic then we would think the things we do because the motions of our atoms so dictated. It is hard to see what reasonableness would have to do with it. This is not any kind of proof, but it is a suggestion that our thought processes are able to be influenced by something other than a combination of mere determinism and randomness. If so, then the basic physics of the world is of a kind which would support this. This suggests the world is not deterministic. This, to me, is a reasonable conclusion and it is the one I draw.
Note, this is not about adding some mysterious extra dimension or field or anything like that. It is simply to admit that we have not understood the world in full and a more complete understanding will likely show us that such familiar things as electrons and quarks are following patterns as subtly yet profoundly different from quantum theory as quantum theory is from classical theory.
The connection between free will and the ability to understand is, I should add, not one which all philosophers accept, but it is a philosophically respectable position. Roger Penrose, among others, has supported it by means of a careful presentation which can be found, if I recall rightly, in his book Shadows of the Mind (this is independent of what he or anyone else may think about wavefunction collapse).
I would suggest you have made a category mistake in your reasoning here. Read Dennett not Penrose for this stuff. – Bruce Greetham Jun 16 at 10:04
@BruceGreetham Thanks; could you say in a few words what kind of category mistake you mean? I have read Dennett's book on consciousness and have a low opinion of it because it merely asserts when it should argue. – Andrew Steane Jun 16 at 13:40
I can but only briefly as it is somewhat off topic for Physics SE: "If our brains were totally deterministic then we would think the things we do because the motions of our atoms so dictated. It is hard to see what reasonableness would have to do with it." Dennett's key point is that "reasonableness" is a description from the intentional stance which is an independent but compatible description to the physical stance. But if you don't buy into that whole approach I probably cant convince you. – Bruce Greetham Jun 16 at 15:36
@BruceGreetham Yes I find that position unconvincing. It relies on or appeals to a massive coincidence in which unlike categories align with one another for no convincing reason. I agree this is not the place for a lengthy discussion, but this much may at least help other visitors to this question on this site. – Andrew Steane Jun 16 at 17:57
|
7807014919f6624f | For an observable $A$ and a Hamiltonian $H$, Wikipedia gives the time evolution equation for $A(t) = e^{iHt/\hbar} A e^{-iHt/\hbar}$ in the Heisenberg picture as
$$\frac{d}{dt} A(t) = \frac{i}{\hbar} [H, A] + \frac{\partial A}{\partial t}.$$
From their derivation it sure looks like $\frac{\partial A}{\partial t}$ is supposed to be the derivative of the original operator $A$ with respect to $t$ and $\frac{d}{dt} A(t)$ is the derivative of the transformed operator. However, the Wikipedia derivation then goes on to say that $\frac{\partial A}{\partial t}$ is the derivative with respect to time of the transformed operator. But if that's true, then what does $\frac{d}{dt} A(t)$ mean? Or is that just a mistake?
(I need to know which term to get rid of if $A$ is time-independent in the Schrodinger picture. I think it's $\frac{\partial A}{\partial t}$ but you can never be too sure of these things.)
There is no mistake on the Wikipedia page and all the equations and statements are consistent with each other. In $$A_{\rm Heis.}(t) = e^{iHt/\hbar} A e^{-iHt/\hbar}$$ the letter $A$ in the middle of the product represents the Schrödinger picture operator $A = A_{\rm Schr.}$ that is not evolving with time because in the Schrödinger picture, the dynamical evolution is guaranteed by the evolution of the state vector $|\psi\rangle$.
However, this doesn't mean that the time derivative $dA_{\rm Schr.}/dt=0$. Instead, we have $$ \frac{dA_{\rm Schr.}}{dt} = \frac{\partial A_{\rm Schr.}}{\partial t} $$ Here, $A_{\rm Schr.}$ is meant to be a function of $x_i, p_j$, and $t$. In most cases, there is no dependence of the Schrödinger picture operators on $t$ - which we call an "explicit dependence" - but it is possible to consider a more general case in which this explicit dependence does exist (some terms in the energy, e.g. the electrostatic energy in an external field, may be naturally time-dependent).
In Schrödinger's picture, $dx_{i,\rm Schr.}/dt=0$ and $dp_{j,\rm Schr.}/dt=0$, which is why the total derivative of $A_{\rm Schr.}$ with respect to time is given just by the partial derivative with respect to time. Imagine, for example, $$ A_{\rm Schr.}(t) = c_1 x^2 + c_2 p^2 + c_3 (t) (xp+px) $$ We would have $$ \frac{dA_{\rm Schr.}(t)}{dt} = \frac{\partial c_3(t)}{\partial t} (xp+px).$$

These Schrödinger's picture operators are called "untransformed" on that Wikipedia page. The transformed ones are the Heisenberg picture operators given by $$A_{\rm Heis.}(t) = e^{iHt/\hbar} A_{\rm Schr.}(t) e^{-iHt/\hbar}$$ Their time derivative, $dA_{\rm Heis.}(t)/dt$, is more complicated. An easy differentiation gives exactly the formula involving $[H,A_{\rm Heis.}]$ that you quoted as well: $$\frac{d}{dt} A_{\rm Heis.}(t) = \frac{i}{\hbar} [H, A_{\rm Heis.}(t)] + \frac{\partial A_{\rm Heis.}(t)}{\partial t}.$$

The two terms in the commutator arise from the $t$-derivatives of the two exponentials in the formula for the Heisenberg $A_{\rm Heis.}(t)$, while the partial derivative arises from the $dA_{\rm Schr.}/dt$ we have always had. (These simple equations remain this simple even for a time-dependent $A_{\rm Schr.}$; however, we have to assume that the total $H$ is time-independent, otherwise all the equations would get more complicated.) The two exponentials on both sides never disappear by any kind of derivative, so obviously, all the appearances of $A$ in the differential equation above are $A_{\rm Heis.}$. The displayed equation above is the (only) dynamical equation for the Heisenberg picture, so it is self-contained and doesn't include any objects from other pictures.
In the Heisenberg picture, it is no longer the case that $dx_{\rm Heis.}(t)/dt=0$ (not!) and the similar identity fails for $p_{\rm Heis.}(t)$ as well. $A_{\rm Heis.}(t)$ is a general function of all the basic operators $x_{i,\rm Heis.}(t)$ and $p_{j,\rm Heis.}(t)$, as well as time $t$.
The Heisenberg picture is defined as
$$A_{\mathrm{H}}(t) = e^{iHt/\hbar} A_{\mathrm{S}}(t) e^{-iHt/\hbar}$$
differentiating both sides we obtain
$$i\hbar \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{H}}(t) = [ A_{\mathrm{H}}(t), H] + i\hbar \left( \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{S}}(t) \right)_{\mathrm{H}} \tag{1}$$
Some textbooks rewrite the last term using the notation [*]
$$\frac{\partial}{\partial t} A_{\mathrm{H}}(t) \equiv \left( \frac{\mathrm{d}}{\mathrm{d} t} A_{\mathrm{S}}(t) \right)_{\mathrm{H}}$$
[*] I agree that this notation is awkward for mathematicians (it is not a true partial derivative), and the more rigorous physics textbooks use (1) with the total time derivative.
It's easiest to derive this from the Schrödinger picture:
Let $B(t)$ be a time-dependent operator in the Schrödinger picture. The corresponding operator in the Heisenberg picture is $A(t) = e^{iHt/\hbar} B(t) e^{-iHt/\hbar}$. Differentiation with respect to $t$ gives
$$ \frac{d}{dt} A(t) = e^{iHt/\hbar} \left(\frac{i}{\hbar} H B(t) + \frac{\partial}{\partial t}B(t) - \frac{i}{\hbar} B(t) H \right) e^{-iHt/\hbar} $$ $$ = e^{iHt/\hbar} \left(\frac{i}{\hbar} [H,B(t)] + \frac{\partial}{\partial t}B(t)\right) e^{-iHt/\hbar} = \frac{i}{\hbar} [H,A(t)] + \frac{\partial A}{\partial t} $$
In other words, the last partial derivative is to be understood in the sense that you take the operator $\frac{\partial B}{\partial t}$ and "evolve it in time" via the Schrödinger equation.
Useful non-example: the velocity operator $\vec v$. The velocity operator is the derivative of the position operator, but it's the total derivative as the system evolves. Hence,
$$ \vec v = \frac{i}{\hbar} [H,\vec r] .$$
In the Schrödinger picture, the position operator is, of course, time independent. Since $H$ is time independent as well, this is also the right velocity operator in the Schrödinger picture.
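This commutator identity is easy to check numerically on a truncated harmonic-oscillator basis; the sketch below is an illustration with assumed units $\hbar = m = \omega = 1$ (truncation corrupts the last rows and columns, so only the interior block is compared).

```python
import numpy as np

# Check v = (i/hbar)[H, x] = p/m for H = p^2/2 + x^2/2 on a truncated basis.
N = 30
a = np.diag(np.sqrt(np.arange(1.0, N)), 1)      # annihilation operator
x = (a + a.T) / np.sqrt(2)                      # position operator
p = (a - a.T) / (1j * np.sqrt(2))               # momentum operator
H = p @ p / 2 + x @ x / 2                       # harmonic-oscillator Hamiltonian

v = 1j * (H @ x - x @ H)                        # velocity operator (i/hbar)[H, x]
print(np.allclose(v[:-2, :-2], p[:-2, :-2]))    # True: v equals p on the interior
```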
As always in the Hamiltonian formulation of mechanics, whether classical or quantum, $$\partial A\over\partial t$$ means the way $A$ varies explicitly in time simply from the occurrence of $t$ explicitly in its formula.
But some of the other parts of the formula of $A$ might change with time also, thus contributing something to the total change in $A$ as time goes by, notated $$dA\over dt.$$
This is the same as the notation in the chain rule in several variables where $df=\frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial t}dt$. The differential on the Left Hand Side is the « total differential » $df$ but it is the sum of two terms, only one of which is the explicit dependence of $f$ on $t$.
For those doing homework who - like me - have their heads spinning about the Schrodinger vs. Heisenberg pictures, and who consequently have lost sight of the basic principles that got them here, recall that there is a difference between the evolution of expectation values and the evolution of operators.
$$\frac{\partial x}{\partial t} = 0$$
The operator $x$ does not evolve with time in the Schrodinger picture. As it relates to the original question which started this thread: in the Heisenberg picture, however, $x$ is represented with evolution operators tacked on either side of it, each of which has explicit dependence on time. Hence, what I believe Wikipedia meant by "transformed operator" relates to the "picture" the operator is represented in.
|
c65df020773b0109 | PhD Paul Tiwald
Local Electronic Excitations in Extended Systems: A Quantum-Chemistry Approach
Real solids and surfaces are not "perfect". Crystals inevitably contain various defects and surfaces are subject to interactions with ambient particles leading, for example, to oxidation and adsorption. Since such effects are present in everyday devices and applications a deep understanding of the underlying physics is of great importance. In this thesis we study the properties of two very localized imperfections: the F-type color center in alkali-halide crystals and the charge transfer during scattering of an ion from an insulator surface. Both effects have been studied for a long time but a detailed theoretical understanding on the ab-initio level seems to be missing.
This thesis provides state-of-the-art ab-initio calculations and addresses open questions. In particular, we present an ab-initio study of the physics underlying the so-called Mollwo-Ivey relation. This relation connects the F-center absorption energies with the crystal lattice constants and has not been fully understood so far. Second, we present the first ab-initio results on the charge-transfer probability during scattering of a proton from a lithium-fluoride surface. This study is based on a non-adiabatic molecular dynamics approach that provides microscopic insight into the charge-transfer process. Both the light absorption by the color center and the charge transfer represent local electronic excitations: the F-type color center consists of an electron strongly localized at an anionic vacancy, and the transferred electron is strongly localized in the close vicinity of the proton. This localization allows for the application of the so-called embedded cluster approach, in which the extended system is approximated by an embedded finite-sized active cluster. To study the properties of the active clusters, we apply high-level quantum chemistry methods solving the electronic Schrödinger equation. |
05526fb6c6342a6f | Application of Scale Relativity to the Problem of a Particle in a Simple Harmonic Oscillator Potential
Journal of Quantum Information Science
Vol.07 No.03(2017), Article ID:79131,12 pages
Saeed N. T. Al-Rashid1, Mohammed A. Z. Habeeb2, Khalid A. Ahmed3
1Physics Department, College of Pure Science, University of Anbar, Anbar, Iraq
2Physics Department, College of Science, Al-Nahrain University, Baghdad, Iraq
3Physics Department, College of Science, Al-Mustansiriyah University, Baghdad, Iraq
Copyright © 2017 by authors and Scientific Research Publishing Inc.
Received: July 22, 2017; Accepted: September 11, 2017; Published: September 18, 2017
In the present work, Scale Relativity (SR) is applied to a particle in a simple harmonic oscillator (SHO) potential. This is done by utilizing a novel mathematical connection between the SR approach to quantum mechanics and the well-known Riccati equation. Computer programs were then written using standard MATLAB 7 code to numerically simulate the behavior of the quantum particle utilizing the solutions of the fractal equations of motion obtained from the SR method. The results are shown to be in very precise agreement with the conventional quantum mechanics probability density. This agreement was improved further for some cases by utilizing the idea of thermalization of the initial particle state and by optimizing the parameters used in the numerical simulations, such as the time step and the number of coordinate divisions. It is concluded from the present work that the SR method can be used as a basis for describing quantum behavior without reference to the conventional formulation of quantum mechanics. Hence, it can also be concluded that the fractal nature of space-time implied by SR is at the origin of the quantum behavior observed in these problems. The novel mathematical connection between SR and the Riccati equation, which was previously used in quantum mechanics without reference to SR, needs further investigation in future work.
Simple Harmonic Oscillator, Scale Relativity, Numerical Simulations, Fractal Space-Time
1. Introduction
Scale relativity (SR) was developed by Nottale based on the extension of the principle of relativity as follows: "the fundamental laws of nature apply whatever the state of scale of the coordinate system" [1] [2] [3] [4] [5] . The observational resolutions now characterize the reference system and can be defined only in a relative way. This major concept of SR leads to giving up the hypothesis of differentiability of space-time. Quantum mechanics can then be reformulated from this basic principle of SR, in the form of covariance and geodesic equations, by considering a particle as following geodesics in a now-fractal space-time. There are at least three major fields of application for the SR method: microphysics, complex systems, and cosmology [6] [7] [8] [9] [10] .
As far as quantum mechanics is concerned, Nottale and co-workers were able to apply the theory to solve many problems, especially those related to the conceptual and interpretational aspects. The derivation of the postulates of quantum mechanics from the basic principles of SR [11] is the basis of the present work. It shows that quantum mechanical behavior appears without any use of the Schrodinger equation, as a consequence of the fractality of space-time. The extension of the SR theory to the derivation of the main equations of relativistic quantum mechanics [12] and the relationship between the classical and quantum regimes [13] have also been discussed on the basis of SR, among other important consequences and implications. With all these far-reaching aspects of the theory, direct investigations which would shed light on the basic workings of the SR method as formulated by Nottale seem to be warranted.
The fractal equations of motion obtained from the application of SR were applied directly by Hermann [14], in terms of a large number of explicitly numerically simulated trajectories for a free particle in an infinite one-dimensional box [15] [16] [17]. Similarly, Al-Rashid [18] [19] [20] applied SR to the finite one-dimensional square well potential and to a special case of the double oscillator problem.
The validity of SR is not restricted to the cases treated by Hermann [14] and Al-Rashid [18] [19] [20]. Besides, such applications are expected to reveal some novel concepts, such as the connection between SR and the Riccati equation [21] [22] [23], as revealed in the present work.
In this paper, the problem of a particle moving in a one-dimensional SHO will be treated by applying the principle of SR along the lines of Hermann. To the best of our knowledge, this problem has not been treated along Hermann's lines elsewhere [14] [24].
2. Equation of Motion
One may start with the complex Newton equation [14] [18]:
$$-\nabla u = m\,\frac{\hat{d}}{dt}\mathcal{V} \tag{1}$$
where $u$ is a scalar potential and $\mathcal{V} = V - iU$ is the complex velocity. Separating this equation into real and imaginary parts gives:
$$m\left(\frac{\partial V}{\partial t} - D\,\Delta U + (V\cdot\nabla)V - (U\cdot\nabla)U\right) = -\nabla u \quad\text{and}\quad m\left(\frac{\partial U}{\partial t} + D\,\Delta V + (V\cdot\nabla)U + (U\cdot\nabla)V\right) = 0 \tag{2}$$
Here, the average classical velocity $V$ is expected to be zero because the simple harmonic oscillator is a symmetric system. In the force-free case, the equations of motion then reduce to [14] [18]:
$$\frac{\partial}{\partial x}\left(D\,\frac{\partial U(x)}{\partial x} + \frac{1}{2}U^2(x)\right) = 0 \tag{3}$$
$$\frac{\partial U(x)}{\partial t} = 0 \tag{4}$$
where $U$ is the imaginary part of the complex velocity and $D$ is the diffusion coefficient. Equation (4) shows that $U$ is a function of $x$ alone. The potential of the one-dimensional SHO can be written as $\frac{1}{2}m\omega^2 x^2$, where $\omega$ is the angular frequency. With this potential on the right-hand side, Equation (3) becomes:
$$\frac{\partial}{\partial x}\left(D\,\frac{\partial U(x)}{\partial x} + \frac{1}{2}U^2(x)\right) = \frac{1}{2}\,\omega^2\,\frac{\partial}{\partial x}x^2 \tag{5}$$
Integrating and rearranging terms in the resulting equation, one obtains:
$$\frac{dU(x)}{dx} + \frac{1}{2D}U^2(x) - \frac{1}{2D}\,\omega^2 x^2 + \frac{c_1}{D} = 0 \tag{6}$$
where $c_1$ is a constant of integration. Letting $c_1 = E/m$ (as in Hermann's work) [14], and using $D = \frac{\hbar}{2m}$, Equation (6) becomes:
$$\frac{dU(x)}{dx} + \frac{m}{\hbar}U^2(x) - \frac{m\omega^2 x^2}{\hbar} + \frac{2E}{\hbar} = 0 \tag{7}$$
The last equation has the form of a Riccati equation [19] [20]
[21] [22] [23]. To solve this equation, one may transform it into a second-order differential equation of the form [19] [20] [21] [22] [23]:
$$y''(x) + r\,q(x)\,y(x) = 0 \tag{8}$$
$$U(x) = \frac{1}{r}\,\frac{y'(x)}{y(x)} \tag{9}$$
where $y(x)$ is an arbitrary function of $x$. From Equation (7), it follows that:
$$r = \frac{m}{\hbar}\,;\qquad q(x) = -\frac{2}{\hbar}\left(\frac{1}{2}m\omega^2 x^2 - E\right) \tag{10}$$
Then, Equation (7) becomes:
$$y''(x) + \frac{2m}{\hbar^2}\left(E - \frac{1}{2}m\omega^2 x^2\right)y(x) = 0 \tag{11}$$
Its solution is:
$$y_n(x) = A_n \exp\!\left(-\frac{x^2}{2}\right)H_n(x) \tag{12}$$
where $A_n$ is a constant and $H_n$ is a Hermite polynomial of order $n$, with $n = 0, 1, 2, \dots$ Then, $U_n(x)$ is given by:
$$U_n(x) = \frac{\hbar}{m}\left(-x + \frac{H_n'(x)}{H_n(x)}\right) \tag{13}$$
Using the identity $H_n'(x) = 2n\,H_{n-1}(x)$, Equation (13) becomes:
$$U_n(x) = \frac{\hbar}{m}\left(-x + 2n\,\frac{H_{n-1}(x)}{H_n(x)}\right) \tag{14}$$
As in Hermann's work [14], $U(x)$ is treated as a difference of velocities, i.e., it is a kind of acceleration. Thus, the equation for the position coordinate has the following form, which is a stochastic process:
$$dx(t) = \frac{\hbar}{m}\left(-x + 2n\,\frac{H_{n-1}(x)}{H_n(x)}\right)dt + d\xi_+(t) \tag{15}$$
where $d\xi_+(t)$ is a Gaussian random variable of standard deviation $\sqrt{2D\,dt}$.
3. Numerical Simulations
Equation (15) represents a stochastic process [14]. Here, in the problem of a one-dimensional SHO, it was found that the assumption $2D\,dt = 1$ is not useful for the present simulations, since it gives bad results for this application. One therefore adjusts the value of $dt$ until a specific value is approached for which meaningful results are obtained. It was found that a value of $dt = 10^{-3}$ (in the units used here) is suitable for the present simulations. It seems that this value of $dt$ is related to the period of the motion in the SHO potential. It is expected that a suitable value, giving meaningful numerical simulation results, is one that leads to a sufficient number of time steps during one period so as to give meaningful counts. This is a consequence of the statistical nature of these simulations, which requires good statistics to be meaningful. Then, Equation (15) becomes:
$$dx(t) = 10^{-3}\left(-x + 2n\,\frac{H_{n-1}(x)}{H_n(x)}\right) + \sqrt{10^{-3}}\,N(0,1) \tag{16}$$
where the choice of units was made such that $\hbar = m = 1$.
A computer program was written (see Appendix A), following Hermann's procedure [14], to perform numerical simulations for the SHO problem. Numerical simulations are performed using Equation (16), which represents the trajectory equation of the particle, for different values of the quantum number $n$ ($n$ = 0, 1, 2, 3, 4 and 5). The output of these simulations gives the probability density $f(x)$ of the particle in the simple harmonic oscillator potential. To construct it, one may divide the region into 601 pieces (bins), which gives the best results; numbers of time steps (cc) of $10^8$ and $5\times10^8$ were used, as in Hermann's work.
The results of the present numerical simulations are compared with the probability density of conventional quantum mechanics, that is, $P(x) = N_n^2 H_n^2(x)\,e^{-x^2}$, where $N_n = 1/\sqrt{2^n\,n!\,\sqrt{\pi}}$ is the normalization constant [15] [16] [17]. The continuous curves indicate the results of the present simulations and the dashed curves the results of conventional quantum mechanics, with the same normalization as the numerical results. The comparison between the present results and the results of conventional quantum mechanics is further facilitated by calculating the standard deviation $\sigma$ and the correlation coefficient $\rho$, which are given by [14]:
$$\sigma = \sqrt{\frac{\sum_{i=1}^{N}\left(P(i) - f(i)\right)^2}{N}} \tag{17}$$
$$\rho = \frac{\sum_{i=1}^{N}\left(P(i) - \langle P\rangle\right)\left(f(i) - \langle f\rangle\right)}{\sqrt{\sum_{i=1}^{N}\left(P(i) - \langle P\rangle\right)^2}\,\sqrt{\sum_{i=1}^{N}\left(f(i) - \langle f\rangle\right)^2}} \tag{18}$$
where $N$ is the number of pieces, $P(i) \equiv P(x_i)$, $f(i) \equiv f(x_i)$, and $\langle P\rangle$, $\langle f\rangle$ denote their means.
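A minimal Python transcription of this procedure for the ground state $n = 0$, where Equation (14) gives $U_0(x) = -x$ (in units $\hbar = m = 1$, so $D = 1/2$), may help readers without the authors' MATLAB program. The step count, histogram range and random seed below are choices made here, not the paper's:

```python
import numpy as np

# Stochastic process (16) for n = 0: dx = -x dt + sqrt(2 D dt) N(0,1).
rng = np.random.default_rng(0)
dt, steps, nbins = 1e-3, 10**6, 601       # 10^6 steps here; the paper uses 10^8
x, xs = 2.0, np.empty(10**6)              # arbitrary starting point x = 2
for i in range(steps):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal()
    xs[i] = x

# Simulated density f and quantum density P = |psi_0|^2 on the same bins.
f, edges = np.histogram(xs, bins=nbins, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
P = np.exp(-centers**2) / np.sqrt(np.pi)

sigma = np.sqrt(np.mean((P - f)**2))      # Equation (17)
rho = np.corrcoef(P, f)[0, 1]             # Equation (18)
print(f"sigma = {sigma:.4f}, rho = {rho:.4f}")
```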
Figures 1-3 show the results of numerical simulations for n = 0, 1, 2, 3, 4, and 5 with $10^8$ time steps (cc). These numerical simulations started with the particle at the arbitrary position $x = 2$. Also, the output of the simulations was normalized by multiplying it by a constant q whose value depends on the number of divisions of the region (here, q = 50).
Here, it was found, after some numerical tests, that the thermalization process [14] is useful for improving the present results. Figure 4 shows the results of such numerical tests for n = 2, 3 and 5, which have starting points ss = 100 and 200. These starting points were chosen after many attempts and were found to give better results than other choices. The improvement is clear from the values of σ
Figure 1. Probability density for a particle in a SHO potential (a) n = 0 and (b) n = 1, without thermalization process.
Figure 2. Probability density for a particle in a SHO potential (a) n = 2 and (b) n = 3, without thermalization process.
Figure 3. Probability density for a particle in a SHO potential (a) n = 4 and (b) n = 5, without thermalization process.
and ρ compared with Figure 2 and Figure 3. The present results can also be improved to increase convergence between them and the results of quantum mechanics by using more time steps. Figure 5 shows the results obtained this way, for n = 3. It appears that there is a better agreement with the results of conventional quantum mechanics compared with the results from a thermalization process for n = 3 (see Figure 4).
It was also found that, in the present problem, convergence between the results of numerical simulations and those of conventional quantum mechanics can be improved by increasing the number of boxes. This is clear in Figure 6, where it appears that there is better agreement between the two results for n = 3 when the number of boxes was increased to 1201.
4. Conclusion
The quantitative prediction of the behavior of a quantum particle in a simple harmonic oscillator potential can be correctly obtained without explicitly writing the Schrödinger equation or using any other of the conventional quantum axioms. This leads one to conclude from the present work that SR is a well-founded approach for deriving quantum mechanics from the concept of fractal space-time, a consequence of the extension of the relativity principle to resolutions.
Figure 4. Probability density for a particle in a SHO potential (a) n = 2, (b) n = 3 and (c) n = 5, with thermalization process.
Figure 5. Probability density for a particle in a SHO potential with n = 3 for longer time steps (cc = $5\times10^8$).
Figure 6. Probability density for a particle in a SHO potential with n = 3 after increasing the number of boxes.
Successful applications were not achievable without, among other things, a new adjustment of the time step dt after some deeper understanding of the underlying particle motion in some problems. It is expected that this understanding is necessary when attempts are made to solve other quantum mechanical problems. The appearance of the Riccati equation in connection with SR theory in the present work, and the use of this equation in conventional quantum mechanics in previous works [22] [23], lead one to conclude that this equation is deeply rooted in quantum mechanical behavior. It is also concluded from the attempts made in the present work that it is possible to improve the numerical simulation results by parameter optimization, and that further improvement is possible but requires more computer time. SR is not a particularly advantageous approach for solving quantum mechanical problems directly. Rather, it reveals the relationship between the quantum behavior and the fractality of space-time.
We would like to deeply thank Prof. Dr. L. Nottale (Director of Research, CNRS, Paris, France) for clarifying some points regarding his theory of scale relativity, Dr. R. Hermann (Dept. of Physics, Univ. de Liege, Belgium) for his suggestions concerning further applications of scale relativity method and Dr. Stephan LeBohec (Dept. Physics and Astronomy, Univ. of Utah, Salt Lake City, Utah, USA) for his continuous encouragement.
Cite this paper
Al-Rashid, S.N.T., Habeeb, M.A.Z. and Ahmed, K.A. (2017) Application of Scale Relativity to the Problem of a Particle in a Simple Harmonic Oscillator Potential. Journal of Quantum Information Science, 7, 77-88. https://doi.org/10.4236/jqis.2017.73008
1. 1. Nottale, L. (1998) Fractal Space-Time and Microphysics: Towards a Theory of Scale Relativity. World Scientific (First Reprint).
2. 2. Nottale, L. (1994) The Scale Relativity Program. Chaos, Solitons and Fractals, 10, 459-468. https://doi.org/10.1016/S0960-0779(98)00195-7
3. 3. Nottale, L. (1992) The Theory of Scale Relativity. International Journal of Modern Physics A, 7, 4899-4936. https://doi.org/10.1142/S0217751X92002222
4. 4. Nottale, L. (2004) The Theory of Scale Relativity: Non-Differentiable Geometry and Fractal Space-Time. AIP Conference Proceedings, 718, 68-95. https://doi.org/10.1063/1.1787313
5. 5. Nottale, L. (1996) Scale Relativity and Fractal Space-Time: Application to Quantum Physics, Cosmology and Chaotic Systems. Chaos, Solitons and Fractals, 7, 877-938. https://doi.org/10.1016/0960-0779(96)00002-1
6. 6. Nottale, L. (1997) Scale Relativity and Quantization of the Universe-I, Theoretical Framework. Astronomy & Astrophysics, 327, 867-889.
7. 7. Nottale, L., Schumacher, G. and Gray, J. (1997) Scale Relativity and Quantization of the Solar System. Astronomy & Astrophysics, 322, 1018-1025.
8. 8. Nottale, L. (1996) Scale Relativity and Quantization of Extra-Solar Planetary Systems. Astronomy & Astrophysics, 315, L9-L12.
9. 9. Nottale, L. (1998) Scale Relativity and Quantization of the Planetary Systems around the Pulsar PSR B1257+12. Chaos, Solitons and Fractals, 9, 1043-1050. https://doi.org/10.1016/S0960-0779(97)00079-9
10. 10. Nottale, L. (1998) Scale Relativity and Quantization of Planet Obliquities. Chaos, Solitons and Fractals, 9, 1035-1041. https://doi.org/10.1016/S0960-0779(97)00078-7
11. 11. Nottale, L. and Célérier, M.N. (2007) Derivation of the Postulates of Quantum Mechanics from the First Principles of Scale Relativity. Journal of Physics A: Mathematical and Theoretical, 40, 14471-14498. https://doi.org/10.1088/1751-8113/40/48/012
12. 12. Célérier, M.N. and Nottale, L. (2010) Electromagnetic Klein-Gordon and Dirac Equations in Scale Relativity. International Journal of Modern Physics A, 25, 4239-4253. https://doi.org/10.1142/S0217751X10050615
13. 13. Nottale, L. (2005) On the Transition from the Classical to the Quantum Regime in Fractal Space-Time Theory. Chaos, Solitons and Fractals, 25, 797-803.
14. 14. Hermann, R.P. (1997) Numerical Simulation of a Quantum Particle in a Box. Journal of Physics A: Mathematical and General, 30, 3967-3975. https://doi.org/10.1088/0305-4470/30/11/023
15. 15. Schiff, L.I. (1969) Quantum Mechanics. 3rd Edition, Int. Student, McGraw-Hill.
16. 16. Gasiorowicz, S. (1974) Quantum Physics. John Wiley and Sons, Inc., New York.
17. 17. Powell, J.L. and Crasemann, B. (1961) Quantum Mechanics. Addison-Wesley Publishing Co., Inc.
18. 18. Alrashid, S.N.T., Habeeb, M.Z.A. and Ahmed, K.A. (2011) Application of Scale Relativity (ScR) Theory to the Problem of a Particle in a Finite One-Dimensional Square Well (FODSW) Potential. Journal of Quantum Information Science, 1, 7-17. https://doi.org/10.4236/jqis.2011.11002
19. 19. Al-Rashid, S.N.T. (2006) Some Applications of Scale Relativity Theory in Quantum Physics. PhD Thesis, Al-Mustansiriyah University.
20. 20. Al-Rashid, S.N.T. (2007) Numerical Simulations of Particle in a Double Oscillators. Journal of University of Anbar for Pure Science, 1.
21. 21. Charlton, F. (1998) Integrating Factor for First-Order Differential Equations. Classroom Notes, Aston University.
22. 22. Bessis, N. and Bessis, G. (1997) Open Perturbation and Riccati Equation: Algebraic Determination of Quartic Anharmonic Oscillator Energies and Eigenfunction. Journal of Mathematical Physics, 38, 5483-5492. https://doi.org/10.1063/1.532147
23. 23. Rogers, G.W. (1985) Riccati Equation and Perturbation Expansion in Quantum Mechanics. Journal of Mathematical Physics, 26, 567-575. https://doi.org/10.1063/1.526592
24. 24. Nottale, L. and Hermann, R.P. (2003-2017) Private Correspondence.
Appendix A
Chart 1. A schematic illustration of the different parts of the program to calculate the probability density of a particle in the SHO potential. |
1d1c78be08d1d189 | One photon stored in four places at once
The MIT Faculty has made this article openly available.
Citation: Vuletic, Vladan. "Quantum physics: Entangled quartet." Nature 468.7322 (2010): 384-385.
Publisher: Nature Publishing Group. Version: Author's final manuscript.
Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike 3.0
One photon stored in four places at once
Quantum mechanics allows a particle to exist in different places at the same time. Now a single
photon has been stored simultaneously in four locations while maintaining its wave character.
When light passes through two slits to hit a distant screen, a periodic light pattern emerges that is
associated with the interference of the waves emanating from the two sources. Some of quantum
physics’ deepest mysteries – or, according to the iconic Richard Feynman, its only mystery – arise when
that observation is made with a single particle that, although indivisible, must have passed
simultaneously through both slits. Recent advances in the storage of single photons in atomic gases [1]
have now enabled a tour-de-force experiment that investigates interference with light stored
simultaneously in four spatially distinct atom clouds, as reported on p. xxx of this issue. Chou et al.
demonstrate stronger-than-classical correlations (entanglement) in this composite matter-light system,
and study how the entanglement gives way to weaker and weaker, and ultimately only classical, correlations.
Classical correlations can arise in situations of limited knowledge about a system. For instance, if we
know that one coin (or photon) has been hidden in one of four boxes, then discovering the coin in one
box would instantaneously tell us that the other three boxes are empty, even if they were separated
from each other by light years. It is not surprising that “particle-type” detection (Fig. 1a) reveals
correlations in the number of coins found in the different boxes if the total number of coins is known.
If, on the other hand, identical light has been stored simultaneously in all four boxes – or, for that
matter, coins sufficiently small to display quantum wave character – then in an alternative measurement
where the light is combined in an interferometer before detection (Fig. 1b), we would find that the
detection probabilities vary periodically with the interferometer path length differences. This "wave-type" detection reveals correlations in the phases of the stored waves, and full correlations require all boxes to initially contain light.
In a classical world the system can be initially prepared to exhibit either particle-type or wave-type
correlations, but not both. The system will exhibit particle-type correlations if a single photon is placed
in one and only one of the boxes, in which case there will be no interference in Fig. 1b. Alternatively, if
all boxes are filled simultaneously with identical classical fields, the system will exhibit full wave-type
correlations. However, these classical fields necessarily contain many photons, in which case one would
expect no correlations in the particle-type detection setup of Fig. 1a. Nevertheless, in the quantum
world we live in, it is possible to prepare a non-locally stored single photon such that full correlations are
observed, no matter which detection setup (Fig. 1a or Fig. 1b) is chosen. Quantum correlations are thus
stronger than classical correlations in that different types of correlations can coexist in one and the
same initial state.
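This coexistence can be made concrete with a small numerical sketch (an illustration constructed here, not the experiment's analysis). Take a single photon shared over four modes with amplitudes $e^{i\phi_k}/2$ and combine the modes on an ideal four-port interferometer, modelled as a discrete-Fourier-transform unitary:

```python
import numpy as np

# Wave-type measurement (cf. Fig. 1b) for a single photon in four boxes.
# Detector probabilities are |sum_k U_jk a_k|^2 and oscillate with the
# stored phases; full fringes need all four boxes to hold light.
U = np.array([[np.exp(2j * np.pi * j * k / 4) for k in range(4)]
              for j in range(4)]) / 2.0          # unitary 4-mode DFT combiner

for phi in np.linspace(0.0, 2 * np.pi, 5):
    amps = np.exp(1j * phi * np.arange(4)) / 2.0  # phase ramp across the boxes
    probs = np.abs(U @ amps)**2
    print(f"phi = {phi:4.2f}:", np.round(probs, 3))

# A particle-type state (photon in one box only), e.g. amps = [1, 0, 0, 0],
# gives flat probabilities of 1/4 at every port: no fringes.
```

For phi = 0 all the probability exits a single port (perfect constructive interference), while intermediate phases redistribute it periodically; this is the wave-type correlation described above.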
Chou et al. use four atomic ensembles as the storage boxes. Such systems not only hold the photon, but
also act as highly directional emitters that can be triggered on demand by the application of a laser pulse
[1,2]. The Caltech team then measures correlations between the different boxes either in the particle-
type detection setup (Fig. 1a) or in the wave-type setup (Fig. 1b), and from the combination of those
measurements extracts the degree of entanglement. Using a method previously developed for a single
photon traveling simultaneously along four possible paths [3], they identify boundaries between
entanglement that necessarily involves all four boxes, three, or just two of the boxes, and observe the
gradual transition from fourfold to no entanglement in the presence of noise and other imperfections.
While quantum-correlated states with more parts have been observed (the current records stand at
observed entanglement for 14 ions [4], and inferred entanglement of over 100 atoms [5]), the present
system is special in that the entanglement can be efficiently mapped on demand from a material system
onto a light field. Such systems, that have already reached light storage times measured in milliseconds
[6,7], have a variety of potential applications in quantum-protected communication over long distances
[1] or overcoming quantum limits in precision measurements.
P.S. The astute reader may wonder how it is that quantum correlations can be tested with a single
photon as any correlation requires more than one system. The controversy about this issue can be
resolved [8] by viewing the four boxes as the systems that exhibit correlations (in photon number),
rather than considering a single photon with qualms about its parent box.
Figure caption
Quantum mechanics allows a single particle to exist simultaneously in multiple locations. Here a single
photon is simultaneously stored in four boxes (atomic ensembles). The single-photon character of the
stored light can be detected in a particle-type measurement (a), where only one of the four detectors
D1, D2, D3, D4 will register a photon. Alternatively, the wave character of the single photon can be tested
via its ability to interfere with itself in a wave-type measurement setup (b). In a classical world,
correlations in the detection setups (a), (b) are mutually exclusive, and hence the combination of both
measurements can probe quantum correlations between the four boxes.
1. L.-M. Duan, M. D. Lukin, J. I. Cirac, and P. Zoller, "Long-distance quantum communication with atomic ensembles and linear optics", Nature 414, 413 (2001).
2. V. Vuletic, “When Superatoms Talk Photons”, Nature Physics News & Views 2, 801 (2006).
3. S. B. Papp, K.S. Choi, H. Deng, P. Lougovski, S. J. van Enk, and H. J. Kimble, "Characterization of
Multipartite Entanglement for One Photon Shared Among Four Optical Modes", Science 324,
764 (2009).
4. Thomas Monz et al., “Coherence of large-scale entanglement”, arXiv:1009.6126 (2010).
5. C. Gross, T. Zibold, E. Nicklas, J. Estève, M. K. Oberthaler, “Nonlinear atom interferometer
surpasses classical precision limit”, Nature 464, 1165 (2010).
6. R. Zhao et al., "Long-Lived Quantum Memory", Nature Physics 5, 100 (2009).
7. B. Zhao et al, “A Millisecond Quantum Memory for Scalable Quantum Networks”, Nature Physics
5, 95-99 (2009).
8. S. J. van Enk, “Single-particle entanglement”, Phys. Rev. A 72, 064306 (2005), and references |
5d46d62b9bb97e25 | Figure 2. Transparent boundary conditions.
Gauge invariance was discovered in the development of classical electromagnetism and was required when the latter was formulated in terms of the scalar and vector potentials. It is now considered to be a fundamental principle of nature, stating that different forms of these potentials yield the same physical description: they describe the same electromagnetic field as long as they are related to each other by gauge transformations. Gauge invariance can also be included into the quantum description of matter interacting with an electromagnetic field by assuming that the wavefunction transforms under a given local unitary transformation. The result of this procedure is a quantum theory describing the coupling of electrons, nuclei and photons. Therefore, it is a very important concept: it is used in almost every field of physics and it has been generalized to describe electroweak and strong interactions in the standard model of particles. A review of quantum mechanical gauge invariance and general unitary transformations is presented for atoms and molecules in interaction with intense short laser pulses, spanning the perturbative to highly nonlinear non-perturbative interaction regimes. Various unitary transformations for a single spinless particle time-dependent Schrödinger equation (TDSE) are shown to correspond to different time-dependent Hamiltonians and wavefunctions. The accuracy of approximation methods involved in solutions of TDSEs, such as perturbation theory and popular numerical methods, depends on gauge or representation choices, which can be more convenient due to faster convergence criteria. We focus on three main representations: length and velocity gauges, in addition to the acceleration form which is not a gauge, to describe perturbative and non-perturbative radiative interactions. Numerical schemes for solving TDSEs in different representations are also discussed. A final brief discussion of these issues for the relativistic time-dependent Dirac equation for future super-intense laser field problems is presented. |
91f5327f2f7fe068 |
Why do we consider the evolution of a wave function, and why is the evolution parameter taken to be time, in QM?
If we look at a simple wave function $\psi(x,t) = e^{i(kx - \omega t)}$, $x$ is a point in configuration space and $t$ is the evolution parameter. They both look the same in the equation, so why consider one as an evolution parameter and the other as the configuration of the system?
My question is why should we even consider the evolution of the wave function in some parameter (it is usually time)?. Why can't we just deal with $\psi(\boldsymbol{x})$, where $\boldsymbol{x}$ is the configuration of the system and that $|\psi(\boldsymbol{x})|^2$ gives the probability of finding the system in the configuration $\boldsymbol{x}$?
(Added) (I had drafted but missed while copy pasting)
One may say, "How to deal with systems that vary with time?", and the answer could be, "consider time also as a part of the configuration space". I wonder why this could not be possible.
Clarification (after answer by Alfred Centauri)
My question is why consider the evolution at all (What ever the case may be and what ever the parameter may be, time or proper time or whatever).
My motivation here is to study the nature of the theory of quantum mechanics as a statistical model. I am looking at it from that angle.
So do I understand you to be asking for a block world formulation of quantum theory? For which you could use the Wightman axioms (albeit they're not close to the successes of Lagrangian QFT). They introduce a single Hilbert space that supports a representation of the Poincaré group, and time is not privileged over space (except for the 1+3 signature). Lagrangian QFT somewhat obscures a block world perspective, insofar as it focuses on a Hilbert space at a single time, corresponding to phase space observables, however a block world perspective of Lagrangian QFT is possible. – Peter Morgan Jul 19 '12 at 20:25
@RajeshD: The Heisenberg formulation takes your point of view, the wavefunction is time independent, but the observables depend on time. This just means that the interaction with the particle at different times is by different operators. – Ron Maimon Jul 20 '12 at 5:24
I think the main reason is practical, but it might be related to a theoretical reason.
The main reason is that we almost never use the time-dependent Schroedinger equation because if the state wasn't stationary, its rate of change would be, at the usual atomic scales, so fast that we couldn't measure it or study it empirically with laboratory-sized apparatus. Similarly, what governs the observable properties of macroscopic bodies, such as their chemical bonds and colours, involves stationary states. If the states weren't stationary, the body would not persist long enough for us to consider it as having a property. It is striking how little direct empirical support the time-dependent Schroedinger equation has, and how little use it finds. We don't even use it to study scattering events (which, admittedly, for a very brief time occur very rapidly).
This might be related to a deeper theoretical reason one finds in statistical mechanics. In statistical mechanics, it is often pointed out that measurements made with laboratory-sized equipment necessarily involve a practically infinite time average such as $$\lim_{T\rightarrow\infty}\frac1T\int_0^T f(t)g(t)dt.$$ Well, in Quantum Mechanics, measurement has something similar about it, in that it always involves amplification of something microscopic up to the macroscopic scale so we can observe it (an observation made by many, including Feynman), and the main way to do this seems to be to let the microscopic event trigger the change from a meta-stable state to a stable equilibrium state of the laboratory-sized apparatus (H.S. Green, Observation in Quantum Mechanics, Nuovo Cimento vol. 9, pp. 880--889, and many others since). Once again, this involves a long-time, stable equilibrium as in Statistical Mechanics. But the relation to the practical reason is not completely clear.
That said, in theory it is sometimes possible to rephrase the time-dependent Schroedinger evolution equation as a space-evolution equation, even though no one ever does this since it has no earthly use. Consider the Klein--Gordon equation (which is the relativistic version of Schroedinger's equation), $$\left(\frac{\partial^2}{\partial x^2}-\frac{\partial^2}{\partial t^2} + V \right)\psi = 0.$$ Obviously, we can isolate either $x$ or $t$, and under certain conditions take the square root of the operator to get $$ \frac{\partial}{\partial x} \psi = \sqrt{ \frac{\partial^2}{\partial t^2} - V }\,\psi .$$
Under the usual physical assumptions of flat space--time and no field-theoretic effects, one could do this to isolate $t$ and get the time evolution because we assume that energy is always positive, so we can indeed take the square root (all the eigenvalues of the Hamiltonian are positive). This may not always be true when, as here, we try to isolate $x$ and get the space-evolution.
Now, as to the question of why consider any evolution at all, why not just consider $\psi(x,y,z,t)$ in a relativistically timeless fashion: the main answer is that it wreaks havoc with the ideas of measurement and observable, and with the justification of the Born interpretation. Dirac tried to write a Quantum Mechanics textbook your way, but gave up after the fifth chapter, where he remarks that the notion of observable is not relativistic, and for the rest of the book he proceeds non-relativistically (until he gets to the Dirac Equation at the end). The second edition abandons the attempt to be relativistic, is more traditional, and uses the time-evolution point of view from the start. He remarked, famously,
The main change has been brought about by the use of the word «state» in a three-dimensional non-relativistic sense. It would seem at first sight a pity to build up the theory largely on the basis of nonrelativistic concepts. The use of the non-relativistic meaning of «state», however, contributes so essentially to the possibilities of clear exposition as to lead one to suspect that the fundamental ideas of the present quantum mechanics are in need of serious alteration at just this point, and that an improved theory would agree more closely with the development here given than with a development which aims at preserving the relativistic meaning of «state» throughout.
And in fact Relativistic Quantum Mechanics, as opposed to field theory, is, like many-particle relativistic (classical) mechanics, not theoretically very well developed. There seem to be so many problems that people prefer to jump right to Quantum Field Theory in spite of the divergences and the need for renormalisation. Furthermore, relativistic QM is restricted to the low-energy regime: at high energies particle pair production is possible, yet the equations of QM hold the number of particles fixed and do not allow for pair production.
Thanks for the nice answer. It was a joy reading it. You really got the spirit of the question. – Rajesh Dachiraju Jun 7 '13 at 22:39
(1) In the Heisenberg picture, the wavefunction does not evolve with time, the operators do.
(2) For relativistic covariance, $t$ ought to be a coordinate with proper time $\tau$ as the evolution parameter.
(3) In QFT, which is relativistically covariant, $t$ is a coordinate.
If these don't begin to address your question, please re-edit your question to clarify.
I have edited with a clarification in view of your answer. – Rajesh Dachiraju Jul 19 '12 at 13:10
It's an empirical fact that time exists, and states evolve in time. Or is that really the case, or does it just seem so? Interesting question. Anyway, with Feynman path integrals there is no such problem.
Sorry I missed a crucial part of the question while copy pasting the draft. Now I have added it. I hope you excuse this. – Rajesh Dachiraju Jul 19 '12 at 12:39
You can, sort of. You can take $\psi(x)$ to satisfy the time-independent Schrödinger equation, for some eigenvalue $E_n$ of the Hamiltonian operator that appears in the time-dependent Schrödinger equation. However, I would take that to make the time-independent formalism the less fundamental one. It's also possible for the time-dependent state to be in a superposition of different energy states, which doesn't play well with the time-independent formalism.
I think you have digressed a bit from what I had in mind. I do not suggest considering the time-independent Schrodinger equation; I am not interested in that, and it is not the only choice. My question is just: why consider evolution of the wave function at all? – Rajesh Dachiraju Jul 19 '12 at 13:00
Suppose I have a wave function $\Psi$ (which is not an eigenfunction) and a time-independent Hamiltonian $\hat{\mathcal{H}}$. Now, if I take the classical limit by letting $\hbar \to 0$, what will happen to the expectation value $\langle\Psi |\hat{\mathcal{H}}|\Psi\rangle$? Will it remain the same (as at $\hbar = 1.0$) or will it change as $\hbar\to 0$? According to the correspondence principle this should equal the classical energy in the classical limit.
What do you think about this? Your answers will be highly appreciated.
For a particle in the ground state of a box, the expectation value of the Hamiltonian is $\frac{\pi^2\hbar^2}{2mL^2}$, which tends to zero as $\hbar \rightarrow 0$. – richard Oct 31 '13 at 17:34
The above posters seem to have missed the fact that $\Psi$ is not an eigenfunction but an arbitrary wavefunction. The types of wavefunctions we normally see when we calculate things are usually expressed in terms of eigenfunctions of operators such as energy or momentum, and have little to do, if anything, with classical behaviour (e.g. look at the probability density of the energy eigenstates of the quantum harmonic oscillator and try to imagine it as describing a mass connected to a spring).
What you might want to do is construct coherent states, which are states where position and momentum are treated democratically (the uncertainty is shared equally between position and momentum).
Then the quantum number that labels your state can be thought of as the level of excitation of the state. For the harmonic oscillator, this is roughly the magnitude of the energy in the state, in that $E = \langle n \rangle \hbar\omega = |\alpha|^2 \hbar\omega$. If you naively take $\hbar \to 0$ then everything vanishes. But if you keep, say, the energy finite while taking $\hbar \to 0$, then you can recover meaningful, classical answers (that don't depend on $\alpha$ or $\hbar$).
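A small numerical check of this limit (an illustration added here, with the oscillator frequency and the target classical energy both set to 1 as assumptions): $\langle\alpha|\hat H|\alpha\rangle = \hbar\omega(|\alpha|^2 + \tfrac12)$, so holding $E = |\alpha|^2\hbar\omega$ fixed while $\hbar\to0$ recovers the classical energy.

```python
# Coherent-state energy of a harmonic oscillator: <H> = hbar*w*(|alpha|^2 + 1/2).
# Naively sending hbar -> 0 at fixed alpha kills everything; scaling |alpha|^2
# so the classical energy E stays fixed gives <H> -> E instead.
w, E = 1.0, 1.0                       # assumed frequency and target energy
for hbar in [1.0, 0.1, 0.01, 0.001]:
    alpha2 = E / (hbar * w)           # |alpha|^2 chosen to hold E fixed
    print(hbar, hbar * w * (alpha2 + 0.5))   # 1.5, 1.05, 1.005, 1.0005 -> E
```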
First of all I would like to thank all of you for sharing your valuable ideas. I asked this question in connection with coherent states. If you have pure classical phase-space variables $(p,q)$ then you can find a coherent state with $\alpha=\frac{1}{2}(p+i q)$. The classical energy is given by the classical Hamiltonian $H(p,q)$ and the quantum energy can be computed from $\langle\alpha|H|\alpha \rangle$. These classical and quantum energy values will differ due to zero-point energy. In this case as $\hbar \to 0$ it seems that the quantum energy decreases. – Sijo Joseph Nov 8 '13 at 1:31
For the case of a particle in a potential, $\hat{\mathcal H} = \frac{\hat{p}^2}{2m}+V({\mathbf x})$, let an arbitrary wavefunction be written in the form $$\Psi({\mathbf{x}},t) = \sqrt{\rho({\mathbf x},t)}\exp\left(i\frac{S({\mathbf x},t)}{\hbar}\right)\text{,}$$ where $\rho \geq 0$. Then it becomes a simple calculus exercise to derive: $$\Psi^*\hat{\mathcal{H}}\Psi = \rho\left[\frac{1}{2m}\left|\nabla S({\mathbf x},t)\right|^2 + V(\mathbf{x})\right] + \mathcal{O}(\hbar)\text{,}$$ where I'm omitting terms that have at least one power of $\hbar$. Since $\langle\Psi|\hat{\mathcal H}|\Psi\rangle$ is the spatial integral of this quantity, its integral gives the $\hbar\to 0$ limit of the energy.
[Edit] As @Ruslan says, the wavefunction has to oscillate faster and faster to retain a kinetic term. In the above, keeping $S$ independent of $\hbar$ means increasing the phase in the same proportion as $\hbar$ is lowered.
Additionally, substituting this form for $\Psi$ into the Schrödinger equation gives, after similarly dropping $\mathcal{O}(\hbar)$ terms, $$\underbrace{\frac{1}{2m}\left|\nabla S({\mathbf x},t)\right|^2 + V(\mathbf{x})}_{{\mathcal H}_\text{classical}} + \frac{\partial S({\mathbf x},t)}{\partial t} = 0\text{,}$$ which is the classical Hamilton-Jacobi equation, with $S$ taking the role of Hamilton's principal function.
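For readers who want to verify the calculus exercise, here is a sympy sketch (my own, restricted to one dimension and assuming real $S$ and positive $\rho$) confirming that the $\mathcal{O}(\hbar^0)$ part of $\Psi^*\hat{\mathcal H}\Psi$ is exactly $\rho[(S')^2/2m + V]$:

```python
# Madelung substitution psi = sqrt(rho) exp(i S / hbar) in 1D, then drop O(hbar).
import sympy as sp

x, hbar, m = sp.symbols('x hbar m', positive=True)
rho = sp.Function('rho', positive=True)(x)
S = sp.Function('S', real=True)(x)
V = sp.Function('V', real=True)(x)

psi = sp.sqrt(rho) * sp.exp(sp.I * S / hbar)
Hpsi = -hbar**2 / (2 * m) * sp.diff(psi, x, 2) + V * psi

expr = sp.powsimp(sp.expand(sp.conjugate(psi) * Hpsi))
leading = sp.limit(expr, hbar, 0)            # discard all O(hbar) terms
classical = rho * (sp.diff(S, x)**2 / (2 * m) + V)
print(sp.simplify(leading - classical))      # -> 0
```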
I like this, thank you. I partially agree with you. Normally the quantum energy of a coherent state will be higher than the energy of the corresponding classical counterpart due to zero-point energy. As $\hbar\to0$ the zero-point energy vanishes, hence the quantum energy should depend on the value of $\hbar$. That gives me a contradiction. – Sijo Joseph Nov 8 '13 at 1:51
A typical time-independent Hamiltonian looks like $\hat H=\hat T+\hat V$, where $\hat T=-\frac{\hbar^2}{2m}\nabla^2$ is the kinetic energy operator and $\hat V=V(\hat x)$ is the potential energy operator. As seen from these expressions, only the kinetic energy operator changes with $\hbar$.
Now we can see that
1. The quantum-mechanical expectation value of the particle's total energy is the sum of the expectation values of its kinetic and potential energies: $$\langle\Psi\left|\hat H\right|\Psi\rangle=\langle\Psi\left|\hat T\right|\Psi\rangle+\langle\Psi\left|\hat V\right|\Psi\rangle$$
2. Taking $\hbar\to0$, we get $\hat T\to \hat 0\equiv0$. The expectation value of the particle's total energy then becomes equal to the expectation value of its potential energy: $$\langle\Psi\left|\hat H_{\hbar=0}\right|\Psi\rangle=\langle\Psi\left|\hat V\right|\Psi\rangle$$
From this the answer follows immediately: no, the expectation value will not remain the same. An interesting result is that for any smooth wavefunction the expectation value of the kinetic energy is zero when $\hbar$ is zero.
This implies that in the classical limit the wavefunction must oscillate infinitely fast (i.e. have zero wavelength) to remain at the same total energy. As you make $\hbar$ smaller, the state with a given total energy gets a larger quantum number, i.e. it becomes more excited.
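A quick numerical illustration of this point (added here; a unit-mass Gaussian wavepacket on a grid is assumed): for a fixed smooth $\Psi$, $\langle\hat T\rangle = \frac{\hbar^2}{2m}\int|\Psi'|^2\,dx$ scales as $\hbar^2$.

```python
# <T> for a fixed real wavefunction vanishes like hbar^2 as hbar -> 0 (m = 1).
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) / np.pi**0.25        # normalized Gaussian
dpsi = np.gradient(psi, dx)
for hbar in [1.0, 0.5, 0.1]:
    T = 0.5 * hbar**2 * np.trapz(dpsi**2, x)
    print(hbar, T)                           # ~ 0.25 * hbar**2
```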
Yes, this can be answered using a classical perspective. We all know the electromagnetic or optical equation: $$ E =\nu h = \omega \hbar \longrightarrow 0 = \omega \cdot 0 $$ As Richard has indicated, the answer can be produced from a visit to the wiki: "the Hamiltonian is commonly expressed as the sum of operators corresponding to the kinetic and potential energies" $$ \hat{\mathcal H} =\hat T + \hat V = \frac{\hat p^2}{2 m}+V = V - \frac{\hbar^2 \nabla^2}{2m} $$ For this case: $$\hat{\mathcal H} \rightarrow \hat V = V=0.$$ Here $V$ is just the potential the system is placed at, and for our universe we can assume $V=0$. $$\Psi=\Psi(\vec{r}) \quad\text{and thus:}\quad \hat{\mathcal H} \mid \Psi \rangle = i \hbar \frac{\partial \Psi}{\partial \vec{r}},$$ $$ \langle \Psi \mid \hat{\mathcal H} \mid \Psi \rangle =\int \Psi^* \hat{\mathcal H} (\Psi)\, d\vec{r} = \int \Psi^* i \hbar\, \Psi'\, d\vec{r}. $$ So it does not matter what $\Psi$ is, what its derivative over some dimension is, what dimensions $\Psi$ exists in, what its complex conjugate is, or what limits we integrate over: the solution is a multiple of $\hbar$.
Maximal theorems and Calderón-Zygmund type decompositions for the fractional maximal function
by Kuznetsov, Evgeny, PhD
Abstract (Summary)
A very significant role in the estimation of different operators in analysis is played by the Hardy-Littlewood maximal function. There are many papers dedicated to the study of its properties, its variants, and their applications. One of the important variants of the Hardy-Littlewood maximal function is the so-called fractional maximal function, which is deeply connected to the Riesz potential operator. The main goal of the thesis is to establish analogues of some important properties of the Hardy-Littlewood maximal function for the fractional maximal function. In 1930 Hardy and Littlewood proved a remarkable result, known as the Hardy-Littlewood maximal theorem. A problem therefore naturally arose: what is the analogue of the Hardy-Littlewood maximal theorem for the fractional maximal function? In the thesis we give an answer to this problem. In particular, we show that the so-called Hausdorff capacity and the Morrey spaces, introduced by C. Morrey in 1938 in connection with some problems in elliptic partial differential equations and the calculus of variations, appear naturally here. Moreover, Morrey spaces have recently found important applications in connection with the Navier-Stokes and Schrödinger equations, elliptic problems with discontinuous coefficients, and potential theory. The Hardy-Littlewood maximal theorem is deeply connected with the Stein-Wiener and Riesz-Herz equivalences. Analogues of these equivalences for the fractional maximal function are also given. In 1971 C. Fefferman and E. Stein, by using the Calderón-Zygmund decomposition, obtained a generalization of the Hardy-Littlewood maximal theorem for a sequence of functions. This result of Fefferman and Stein has found many important applications in Harmonic Analysis and its applications, e.g. in Signal Processing. In the thesis we give an analogue of one part of the Fefferman-Stein maximal theorem for the fractional maximal operator. In 1952 A. Calderón and A. Zygmund published the paper "On the Existence of Certain Singular Integrals", which has had a significant influence on the analysis of the last 50 years. One of the main new tools used by A. Calderón and A. Zygmund was a special family of decompositions of a given function into its "good" and "bad" parts. This decomposition provides a multidimensional substitute for the famous "sunrise" lemma of F. Riesz, and it was used to prove a weak-type estimate for singular integrals. Furthermore, we want to emphasize that Calderón-Zygmund type decompositions have played an important and sometimes crucial role in the proofs of many fundamental results, such as the John-Nirenberg inequality, the theory of Ap-weights, the Fefferman-Stein maximal theorem, etc. In the thesis it is shown that it is possible to construct an analogue of the Calderón-Zygmund decomposition for the Morrey spaces.
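For orientation, here is an illustrative discrete sketch (not from the thesis) of the one-dimensional fractional maximal function in one common normalization, $M_\alpha f(x) = \sup_{r>0}(2r)^{\alpha-1}\int_{x-r}^{x+r}|f|$; setting $\alpha = 0$ recovers the Hardy-Littlewood maximal function.

```python
# Discrete 1D fractional maximal function on a uniform grid (brute force).
import numpy as np

def frac_maximal(f, dx, alpha):
    n = len(f)
    out = np.zeros(n)
    for i in range(n):
        for k in range(1, n):                       # radius r = k*dx
            lo, hi = max(0, i - k), min(n, i + k + 1)
            val = (2 * k * dx) ** (alpha - 1) * np.abs(f[lo:hi]).sum() * dx
            out[i] = max(out[i], val)
    return out

x = np.linspace(-5, 5, 201)
f = np.exp(-x**2)
print(frac_maximal(f, x[1] - x[0], alpha=0.5).max())
```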
Bibliographical Information:
School:Luleå tekniska universitet
School Location:Sweden
Source Type:Doctoral Dissertation
Date of Publication:01/01/2005
52457ec2466ec530 | RRI Digital Repository >
07. Theoretical Physics >
Research Papers (TP) >
Title: Dirac quantization of parametrized field theory
Authors: Varadarajan, Madhavan
Issue Date: 16-Feb-2007
Publisher: The American Physical Society
Citation: Physical Review D, 2007, Vol.75, 044018
Abstract: Parametrized field theory (PFT) is free field theory on flat spacetime in a diffeomorphism invariant disguise. It describes field evolution on arbitrary (and in general, curved) foliations of the flat spacetime instead of only the usual flat foliations, by treating the “embedding variables” which describe the foliation as dynamical variables to be varied in the action in addition to the scalar field. A formal Dirac quantization turns the constraints of PFT into functional Schrödinger equations which describe evolution of quantum states from an arbitrary Cauchy slice to an infinitesimally nearby one. This formal Schrödinger picture-based quantization is unitarily equivalent to the standard Heisenberg picture-based Fock quantization of the free scalar field if scalar field evolution along arbitrary foliations is unitarily implemented on the Fock space. Torre and Varadarajan (TV) showed that for generic foliations emanating from a flat initial slice in spacetimes of dimension greater than 2, evolution is not unitarily implemented, thus implying an obstruction to Dirac quantization. We construct a Dirac quantization of PFT, unitarily equivalent to the standard Fock quantization, using techniques from loop quantum gravity (LQG) which are powerful enough to supersede the no-go implications of the TV results. The key features of our quantization include an LQG type representation for the embedding variables, embedding-dependent Fock spaces for the scalar field, an anomaly free representation of (a generalization of) the finite transformations generated by the constraints, and group averaging techniques. The difference between the 1+1-dimensional case and the case of higher spacetime dimensions is that for the latter, only finite gauge transformations are defined in quantum theory, not the infinitesimal ones.
Description: Open Access.
URI: http://hdl.handle.net/2289/2269
ISSN: 1550-7998
1550-2368 (online)
Alternative Location: http://link.aps.org/abstract/PRD/v75/e044018
Copyright: 2007 The American Physical Society
Appears in Collections:Research Papers (TP)
Files in This Item:
File: 2007 PR-D Vol.75 p044018.pdf | Description: Open Access | Size: 324.64 kB | Format: Adobe PDF
Wolfram Blog
S M Blinder
Centenary of Bohr’s Atomic Theory (1913–2013)
December 30, 2013 — S M Blinder, Wolfram Demonstrations Project
I had intended to write a treatise describing the history of the hydrogen atom over the last 100 years. Unfortunately, my time is running out this year, so I will content myself instead with this much briefer blog post outlining the major events associated with Niels Bohr’s three epochal papers in 1913.
The hydrogen atom has been the most fundamental application at each level in the advancement of quantum theory. It is the only real physical system that can be solved exactly (although some might argue that this is also true for the radiation field, as an assembly of harmonic oscillators).
To set the stage for Bohr's contributions, we need to recount a couple of items in hydrogen's "prehistory". Following the discoveries of Kirchhoff and Bunsen, it was known that the emission spectrum of atomic hydrogen contains four lines in the visible region: a red line at 656.21 nm (15239 cm^-1), a blue-green line at 486.07 nm (20573 cm^-1), and two violet lines at 434.01 nm (23041 cm^-1) and 410.12 nm (24383 cm^-1). The spectrum is obtained in the laboratory by passing a 5000 V electrical discharge through gaseous hydrogen. The hydrogen spectrum is simulated in the following figure:
The hydrogen spectrum
Johann Jacob Balmer, a Swiss high-school mathematics teacher, found that the wavenumbers (1/λ) of the four spectral lines could be fitted to a simple formula
The wavenumbers of the four spectral lines: $$\frac{1}{\lambda} = R\left(\frac{1}{2^2}-\frac{1}{n^2}\right),\qquad n = 3, 4, 5, 6$$
After more lines in the infrared and ultraviolet spectrum of hydrogen were found, the Swedish physicist Johannes Rydberg proposed a generalization of Balmer’s formula:
A generalization of Balmer's formula: $$\frac{1}{\lambda} = R\left(\frac{1}{n_1^2}-\frac{1}{n_2^2}\right)$$
with n1 = 1, 2, 3, …, n2 = n1 + 1, n1 + 2, … and R ≈ 109678 cm^-1 (current experimental value), known as the Rydberg constant. The series of lines beginning with n1 = 2 reduces to Balmer's formula. This is now known as the Balmer series. The more intense series of lines with n1 = 1, discovered in 1906, lie in the ultraviolet spectrum and are known as the Lyman series. The Lyman-alpha line, with n1 = 1, n2 = 2, at 121.57 nm, is now an important marker in astronomical observations of distant galaxies and quasars. A collection of such lines, from varying environments around a high-redshift quasar, has been called a "Lyman-alpha forest".
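As a quick check (an added illustration; the experimental value of R quoted above is assumed), the Rydberg formula with n1 = 2 reproduces the four visible Balmer wavelengths:

```python
# Balmer series from the Rydberg formula: 1/lambda = R(1/2^2 - 1/n2^2).
R = 109678.0                        # Rydberg constant for hydrogen, cm^-1
for n2 in [3, 4, 5, 6]:
    wavenumber = R * (1.0 / 2**2 - 1.0 / n2**2)    # cm^-1
    print(n2, 1e7 / wavenumber)     # nm: ~656.5, 486.3, 434.2, 410.3
```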
Other atomic species produce characteristic line spectra, which can serve as “fingerprints” to identify the element, particularly in distant stars. But no atom other than hydrogen has a simple relationship for its spectral frequencies, nothing analogous to Rydberg’s formula.
J. J. Thomson discovered in 1897 that electrons were a component of all atoms. For several years thereafter, the prevalent picture of the atom was the "plum pudding model", in which the negatively charged electrons were pictured as plums embedded throughout a positive sphere—the pudding. If the plum pudding model were correct, alpha particles scattered through a metallic foil should be deflected by, at most, small angles. The positive-charge distribution would then show a diameter of the order of 10^-8 cm, the square root of the scattering cross section.
Experiments in Ernest Rutherford's laboratory, carried out by Hans Geiger and Ernest Marsden in 1909, directed a beam of 6 MeV alpha particles, produced by radioactive disintegration, at a very thin gold foil, just the thickness of several atoms. The surprising result was that a small fraction of the alpha particles was scattered at large angles. Rutherford estimated that the scattering centers were very small, something less than 10^-14 cm in diameter. It is now known that nuclear radii are typically of the order of several times 10^-13 cm. The unit 10^-13 cm is known as 1 fermi (fm), which can also be interpreted as 1 femtometer. The following CDF is an idealized representation of the scattering of alpha particles from a gold nucleus, based on the Demonstration "Rutherford Scattering" by Enrique Zeleny:
In 1911, Rutherford proposed the “nuclear model of the atom”. As we now understand it, a neutral atom of atomic number Z consists of a compact, nearly point-like, positively charged nucleus +Z e surrounded by a cloud of Z negatively charged electrons, each with charge -e.
This is where Niels Bohr now enters the picture.
Niels Bohr
From the American Institute of Physics photo archives: Bohr c. 1913; Danish commemorative stamp 1963; Bohr with Einstein.
Niels Henrik David Bohr (1885–1962) was born in Copenhagen, Denmark. His parents were Christian Bohr, a professor of physiology at the University of Copenhagen, and Ellen Adler Bohr, who came from a wealthy Danish Jewish family prominent in banking and parliamentary politics. His brother Harald Bohr became a prominent mathematician, noted for his work on Dirichlet series and the distribution of zeros in the zeta function. He introduced the concept of almost periodic functions. Both Harald and Niels were enthusiastic football (meaning soccer) players. Harald played for the Danish national team in the 1908 Olympics. Niels’ son, Aage Bohr, also became a physicist, who shared the 1975 Nobel Prize for his discoveries on nuclear structure. Bohr founded the Institute for Theoretical Physics at the University of Copenhagen, which was visited by all the major figures in the development of quantum mechanics. The Niels Bohr Institute, as it is now known, is still going strong.
Bohr spent a year as a postdoctoral student in Rutherford’s laboratory in Manchester. No doubt influenced by Rutherford’s nuclear model, he adapted the quantum concepts introduced by Planck and Einstein to propose a model of the atom as a miniature Solar System with electrons orbiting the nucleus. Three classic papers, published in Philosophical Magazine, summarize Bohr’s atomic theory:
[1] N. Bohr, “On the Constitution of Atoms and Molecules, Part I,” Philosophical Magazine, 26, 1913 pp. 1–25.
[2] N. Bohr, “On the Constitution of Atoms and Molecules, Part II: Systems Containing Only a Single Nucleus,” Philosophical Magazine, 26, 1913 pp. 476–502.
[3] N. Bohr, “On the Constitution of Atoms and Molecules, Part III: Systems Containing Several Nuclei,” Philosophical Magazine, 26, 1913 pp. 857–875.
We will next present the content of Bohr’s atomic theory. His original arguments have been modified to take advantage of a century of pedagogical experience and also some applications of Mathematica.
Bohr proposed that spectral lines arise from transitions between discrete energy levels of atoms. Building on the ideas of Max Planck on black body radiation and Albert Einstein on the photoelectric effect, a transition between two atomic energy levels E1 and E2 is associated with the absorption or emission of a photon of frequency ν (wavelength λ=c/ν, where c=2.9979 x 10^10 cm/sec, the speed of light) according to the relation
$$E_2-E_1=h\nu = \frac{hc}{\lambda}$$
where h is Planck’s constant, 6.626 x 10-27erg sec. (We will use cgs and Gaussian electromagnetic units in this article for historical continuity.) For E2 > E1, this represents the emission of a photon as the energy of the atom decreases from E2 to E1 or the absorption of a photon of the same frequency as the energy increases from E1 to E2. This is also in accord with the Rydberg–Ritz combination principle (1906), which states that the spectral lines of any element will exhibit frequencies that are either the sum or the difference of the frequencies of two other lines. Applied to the Rydberg formula for hydrogen, it now follows that the discrete—we can now call them quantized—energy levels of a hydrogen atom have the form
$$E_n=-\frac{Rhc}{n^2},$$
which are negative values since they correspond to bound states of the atom. The integers labeling the energy states (n = 1, 2, 3, …) are called quantum numbers.
Bohr turned next to the specific case of the hydrogen atom, consisting of a single electron in interaction with a single proton whose mass is approximately 1836 times greater. The attractive Coulomb force between the two particles, varying as the inverse square of their separation, is exactly analogous to the gravitational attraction between a planet and the Sun. Bohr exploited this analogy with the Kepler problem to picture the hydrogen atom as a miniature version of the Solar System. In bound states, the electron should orbit the proton in an elliptical trajectory, with the proton at one focus. Bohr considered the simplest case of a circular orbit with the coordinate origin fixed at the position of the proton.
According to the virial theorem for an inverse-square force law, such as the Kepler problem and its electrical analog, the proton-electron system, we have T = -½V, where T is the kinetic energy and V the potential energy. The kinetic energy can be expressed in a form applicable to a circular rotating system as $T = L^2/(2mr^2)$, where L is the angular momentum of the electron, m its mass, and r the radius of the orbit. The potential energy is, by Coulomb's law in Gaussian units, $V = -e^2/r$, where ±e are the charges of the proton and electron, respectively. By the virial theorem, the total energy E = T+V can be expressed either as E = ½ V or E = -T. In terms of the Rydberg energy formula, we can then write
$$\frac{hcR}{n^2}=\frac{e^2}{2r},\qquad\text{and also}\qquad \frac{hcR}{n^2}=\frac{L^2}{2mr^2}$$
To obtain a solution, we need a third relation. This can be provided by Bohr’s correspondence principle, which states that in the limit of large values of the quantum numbers, a quantum system will approach classical behavior. In this application, Bohr reasoned that, for large values of n, the frequency associated with a transition n –> n+1 will approach the classical frequency of the radiation emitted by an electron in a circular orbit. It is convenient to refer to the radian frequency
$$\omega=2\pi\nu=\frac{2\pi c}{\lambda}.$$ From mechanics, $\omega = L/(mr^2)$. Noting that $1/n^2-1/(n+1)^2 \to 2/n^3$ for large values of n, we can substitute in the Rydberg formula to obtain $$\omega=\frac{4\pi cR}{n^3}=\frac{L}{mr^2}.$$
Now we can use Mathematica‘s equation solver to determine R, L, and r in terms of fundamental constants and quantum numbers:
Mathematica's equation solver
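For readers without Mathematica, here is a sympy rendering of the same step (a sketch approximating the computation in the post, not the original code), solving the three relations above for R, L, and r:

```python
# Solve the two virial-theorem relations plus the correspondence-principle
# relation for the Rydberg constant R, angular momentum L, and orbit radius r.
import sympy as sp

h, c, m, e, n = sp.symbols('h c m e n', positive=True)
R, L, r = sp.symbols('R L r', positive=True)

sol = sp.solve(
    [sp.Eq(h * c * R / n**2, e**2 / (2 * r)),
     sp.Eq(h * c * R / n**2, L**2 / (2 * m * r**2)),
     sp.Eq(4 * sp.pi * c * R / n**3, L / (m * r**2))],
    [R, L, r], dict=True)[0]

print(sol[R])    # 2*pi**2*e**4*m/(c*h**3)
print(sol[L])    # h*n/(2*pi), i.e. n*hbar
print(sol[r])    # h**2*n**2/(4*pi**2*e**2*m), i.e. n**2 * a0
```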
The predicted value of the Rydberg constant is R = 109737 cm^-1. The slight discrepancy with the experimental value for hydrogen can be corrected by replacing m = me with the reduced mass of the electron in hydrogen: me mp/(me + mp). This gives, to high precision, RH = 109677.5810 cm^-1. The value found above pertains to infinite nuclear mass. Using the best current values of the fundamental physical constants,
$$R_\infty=\frac{2\pi^2 m e^4}{h^3 c}= 109737.3157\ \mathrm{cm}^{-1}.$$
The result determining L actually introduces the profound concept of angular-momentum quantization. Dirac later wrote the constant h/2π, which occurs frequently in quantum-theory formulas, as a single symbol ħ, pronounced "h-bar". The quantum condition for the component of orbital angular momentum in any direction can then be written L = n ħ.
The third output of the Solve command above gives the radius of the electron’s orbit around the proton. For n = 1, we have the radius of the lowest energy state—the ground state—which is known as the Bohr radius and designated a0.
$$a_0=\frac{\hbar^2}{me^2} = 0.529177\times10^{-8}\ \mathrm{cm} = 0.529177\ \text{Å}$$
This was actually the first theoretical determination of the approximate magnitude of atomic dimensions. More generally, the electron orbital radius is given by rn = n2 a0, for n = 1, 2, 3, …. Following is a diagram showing the first few orbits. The proton is represented by a blue point, the electron by a red point. The radius of the innermost circle equals a0.
A diagram showing the first few orbits
In contrast to a classical system, which possesses a continuum of allowed energy levels, a bound quantum system can exist only in one of a discrete (or quantized) set of energy levels. In a transition between the levels n and n', the electron somehow makes a quantum jump, accompanied by the absorption or emission of a photon of frequency ν = (En − En')/h. The precise nature and mechanism of the "quantum jump" has been a controversial subject of philosophical speculation for an entire century.
Mathematica diagram showing a quantum jump.
The speed of the electron in the first Bohr orbit can be calculated from the angular momentum L = m v r. With L = ħ and r = a0 = ħ2 / m e2, we find v = α c, where α = e2 / ħ c ≈ 1/137, the fine-structure constant. This is comfortably within the non-relativistic domain (until we consider nuclei with large atomic number). Still, v is in excess of 2000 km/sec, which is a subtle hint that life on an atomic scale might be much different from what it is in the macroscopic world.
Hartree introduced a system of atomic units in 1928, in which ħ = m = e ≅ 1. This has the advantage of making atomic parameter values of the order of 1, rather than 10 to some large negative power. The unit of length accordingly equals a0, called 1 bohr. The unit of energy, called 1 hartree, can be constructed from e2/a0, equivalent to 27.212 eV. The energy levels of the hydrogen atom are then given by $E_n = -\frac{1}{2n^2}$ hartrees. The ground-state energy, with n = 1, equals -13.5984 eV. Thus the ionization potential of atomic hydrogen is 13.5984 volts.
The Bohr model applies more generally to one-electron ions, with a nucleus of atomic number Z: Z = 1 for H, Z = 2 for He+, Z = 3 for Li2+, and so on. The potential energy is now generalized to $V = -Ze^2/r$, which leads to an energy of $E_n = -\frac{Z^2}{2n^2}$ hartrees and an orbital radius of $n^2 a_0/Z$.
The Bohr atom represents a fundamental departure from the well-established classical theories of mechanics and electrodynamics. Most spectacularly, an electron in a Bohr orbit ought to radiate away its energy in about 10^-11 seconds and experience a death spiral into the nucleus. I like to call this the "atomic Hindenburg disaster". (The Hindenburg, a hydrogen-filled dirigible, crashed and burned in a famous disaster in 1937.) Nonetheless, the extraordinary agreement with the spectrum of atomic hydrogen must somehow justify this radical departure from classical physics.
Sommerfeld and Wilson in 1916 generalized Bohr's formula for the allowed orbits to a set of quantum conditions on action integrals, of the form $$\oint p\,dq = nh.$$
The Sommerfeld–Wilson quantum conditions reduce to Bohr’s results in the case of circular orbits, but now elliptical Kepler orbits are allowed as well, with the nucleus at one focus. In elliptical orbits, the momentum p can vary as a function of position.
For the hydrogen atom treated as a three-dimensional problem in spherical polar coordinates, there is actually a set of three quantum conditions:
A set of three quantum conditions
where n, ℓ, m are integers with the ranges n = 1, 2, 3, …, ℓ = 0, 1, …, n – 1, m = -ℓ, -ℓ + 1, …, +ℓ (2ℓ + 1 values). The energy En is determined by n, which is called the principal quantum number. The value of ℓ, known as the azimuthal quantum number, determines the orbital angular momentum, but the precise connection must await the modern quantum-mechanical theory. The different values of m, called the magnetic quantum number, exhibit the phenomenon of space quantization, in which an elliptical orbit can be oriented in 2ℓ + 1 possible directions in three-dimensional space. (Full details on these results can be found in Appendix XIV of M. Born, Atomic Physics, 5th ed., New York: Hafner, 1935. The quantum number κ corresponds to what we designate as ℓ+1.)
The n = 1 ground state is still a circular orbit, but the n = 2 level allows an elliptical orbit in addition to the circular one, with possible values ℓ = 0, 1. The n = 3 level has three allowed orbits, with ℓ = 0, 1, 2, and so on. The semi-major axis of the ellipse has the same value as the Bohr orbit, a = n2 a0. The semi-minor axis is then given by b = n(ℓ + 1)a0. The orbits for n = 1, 2, 3 are shown in the following Mathematica Demonstration:
A Mathematica Demonstration
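A short tabulation of these semi-axes (an added illustration, in units of a0):

```python
# Bohr-Sommerfeld orbit semi-axes: a = n^2, b = n*(l + 1), in units of a0.
for n in [1, 2, 3]:
    for l in range(n):
        print(f"n={n} l={l}: a={n**2} b={n * (l + 1)}")
```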
For a given n, the multiplicity of values of ℓ and m implies that there are, in total, n2 allowed orbits. The energy level En is said to be n2-fold degenerate.
The atomic theory of Bohr, generalized by Sommerfeld and Wilson, is known today as the Old Quantum Theory, to distinguish it from modern quantum mechanics, which dates from 1925.
Circulating electric charges give rise to magnetic moments, in this case obeying the laws of classical electrodynamics. The general relation is μ = e/(2m c) L. Thus an orbiting electron with one unit of angular momentum has a magnetic moment equal to μB = e ħ/2m c ≈ 9.274 x 10^-21 ergs/gauss, which is known as a Bohr magneton.
Many atomic spectral lines appear, under sufficiently high resolution, to be closely spaced doublets, a prime example being the yellow sodium D-lines, a doublet at 588.995 and 589.592 nm. Uhlenbeck and Goudsmit proposed in 1925 that this was due to an intrinsic angular momentum possessed by the electron (in addition to its orbital angular momentum) which could have just two possible orientations. This property, known as spin, occurs as well in other elementary particles. Spin and orbital angular momenta are roughly analogous to the daily and annual motions, respectively, of the Earth around the Sun. To distinguish spin from orbital angular momentum, we designate the corresponding quantum numbers as s and ms, instead of ℓ and m, which we now modify to mℓ. For electrons, s always has the value ½, meaning that its intrinsic angular momentum equals ½ħ. Correspondingly, ms has two possible values, ±½. The electron is said to be a "spin-½ particle". Particles with spin equal to an odd half-integer are known as fermions. By contrast, particles with integer values of spin, such as the photon with spin 1, are called bosons.
A charged particle with spin also exhibits a magnetic moment, given by the slightly generalized relation
$$\mu=g\,\frac{e}{2mc}\,S,$$
where S here means the intrinsic angular momentum, equal to ½ħ for the electron. It turns out that the g-factor for electron spin is equal to 2, so that the spin magnetic moment is also equal to 1 Bohr magneton, μB. Interactions between the orbital and spin angular momenta give rise to fine structure splittings of spectral lines, which are smaller in magnitude than the energies of orbiting electrons by a factor of the order of α ≈ 1/137, the fine structure constant.
Wolfgang Pauli proposed his exclusion principle in 1925. For fermions in a quantum system, such as the electrons in an atom or molecule, each individual quantum state can be occupied by at most one particle. No such restriction applies to bosons; any number can occupy each quantum state. An example is a Bose–Einstein condensate, in which almost all the particles in the system occupy the ground state.
When applied to the electrons in an atom, the exclusion principle requires that every electron be described by a unique set of quantum numbers: n, ℓ, mℓ, ms. No two electrons can have the same set. The values of ℓ are usually designated by a code (originating in the classification of spectral lines, but no longer so limited): ℓ = 0 is designated s (not to be confused with the spin angular momentum), ℓ = 1 is designated p, ℓ = 2 is d, ℓ = 3 is f (this continues for higher angular momenta, but we will not need them here).
A complex atom can thereby be represented as a central nucleus surrounded by its electrons moving in Bohr–Sommerfeld orbits. The logos of the International Atomic Energy Agency and the former U. S. Atomic Energy Commission are both idealized versions of such pictures:
The structure of the periodic system of the elements could be rationally explained for the first time on the basis of the Old Quantum Theory. In accordance with the Aufbau principle, as the atomic number is increased, electrons successively occupy the lowest available Bohr–Sommerfeld orbits, taking account of the Pauli exclusion principle, which restricts each set of quantum numbers n, ℓ, mℓ, ms to, at most, a single electron. This produces the familiar shell structure of atoms. Complete shells, in which all values of the quantum numbers up to certain combinations of n and ℓ are occupied, appear to confer an enhanced stability on the set of atoms with the "magic numbers" Z = 2, 10, 18, 36, 54, 86, and 118. These are the so-called noble gases: helium, neon, argon, krypton, xenon, radon, and the synthetic transuranium element 118, discovered in 2006.
The following “staircase form” of the periodic table was proposed by Bohr in 1921:
Periodic table
Bohr supported Mendeleev’s original prediction that there was a missing element with Z = 72, chemically similar to zirconium. It was first isolated in 1923 as an impurity in zircon by Danish chemists in Copenhagen. This led to the element being named hafnium (Hf), which was the Latin name for Copenhagen. The element with Z = 107 was synthesized in 1981, and named bohrium (Bh) in honor of Niels Bohr.
The picture of the atom based on the Sommerfeld–Wilson quantum conditions and Bohr–Sommerfeld orbits of electrons is now known as the Old Quantum Theory. Despite its number of successful applications, the theory suffers from some serious flaws. To cite one, angular momenta are usually too large by one unit of ħ. For example, the angular momentum of the hydrogen atom's ground state is known to be zero rather than ħ. Zero angular momentum might be accomplished with a "pendulum orbit", in which the electron oscillates linearly through the nucleus. This was, however, usually ruled out in the Old Quantum Theory because it involves collisions of the electron with the nucleus. Also, the theory is inconsistent with the known behavior of atoms as nearly spherical particles. Although the Bohr model might have been able to sidestep the "Hindenburg disaster", it cannot avoid what might be called the "Heisenberg disaster". By this we mean that the presumption of well-defined orbits is completely contrary to modern quantum theory, in particular the Heisenberg uncertainty principle, which states that the position and momentum of a particle cannot simultaneously be known exactly.
In addition, the Bohr theory was unable to produce any quantitatively valid results for atoms and ions containing more than one electron, most notably the helium atom. And it was a complete disaster in attempted applications to molecules. We should, however, mention the heroic effort of J. H. Van Vleck in his doctoral thesis (1922) to construct a model for helium, in which two Bohr orbits move in planes crossing at an angle of 88°. This gave a calculated ionization energy in approximate agreement with the experiment.
Despite its failings, the Old Quantum Theory was an important transitional step in the development of the modern picture of the atom. The reigning theory is now quantum mechanics, which emerged, beginning in 1925–1926, with major initial contributors being Werner Heisenberg, Erwin Schrödinger, and P. A. M. Dirac.
This pretty much concludes our account of Niels Bohr’s contributions to atomic structure. To his everlasting credit, he enthusiastically embraced the new quantum mechanics and his Theoretical Physics Institute became a shrine for the “Copenhagen interpretation of quantum mechanics”.
Bohr, largely in collaboration with Heisenberg, developed a viewpoint whose principal tenet was the hypothesis that quantum mechanics can usually predict only probabilities of atomic events. Variables thus do not have definite values until they are actually measured. In mathematical terms, a measurement results in “collapse of the wave function”. Einstein, for one, could never accept this rejection of objective reality (“Do you think the Moon isn’t there when you’re not looking?”). This led to an extended series of arguments between Bohr and Einstein, which actually produced many fruitful insights into some finer points of quantum theory and relativity. The controversy on the metaphysical foundations of the quantum theory continues even to this day, with concepts such as entanglement and decoherence becoming notable topics for consideration. Related to these is the possibility of quantum computation, in which two-valued bits are replaced by complex continuously variable qubits. Still, most practicing physicists go about their quantum-mechanical computations, blissfully unconcerned about the underlying metaphysics, as recommended by the advice, “Just shut up and calculate!”
The subsequent theoretical treatment of the hydrogen atom by successively advanced versions of quantum theory is well covered in the existing literature. (See my Demonstration “Relativistic Energy Levels for Hydrogen Atom” embedded below.)
The most important modification was that the solutions of the Schrödinger equation for the hydrogen atom replaced the Bohr–Sommerfeld orbits by atomic orbitals, three-dimensional probability amplitudes for the distributions of electrons in space. (Explore the following pair of Demonstrations: “Visualizing Atomic Orbitals” and “Hydrogen Orbitals“, by Guenther Gsaller and Michael Trott, respectively.)
The energy levels of hydrogen-like systems are given by En = -Z2/2n2 hartrees, agreeing exactly with the Bohr model, but now the degeneracies of these energy levels are correctly accounted for. The solutions, depending on the three quantum numbers n, ℓ, m, are usually designated 1s, 2s, 2 p0, 2 p±1, and so on. Electron spin must still be put in "by hand". Even though only the one-electron problem can still be solved exactly, the Schrödinger equation can be written down for any complex atom or molecule, and approximation techniques such as the variational method and perturbation theory can be applied to obtain highly accurate solutions. The concept of atomic and molecular orbitals is central to modern theoretical chemistry.
Dirac’s relativistic quantum theory of the electron automatically accounts for electron spin, with its g-factor of 2, spin-orbit coupling, the fine-structure of atomic energy levels (some of the degeneracies predicted by both Bohr and Schrödinger theories being thereby partially resolved), and, most dramatically, the existence of antimatter. Solving the Dirac equation gives the energies of a hydrogen-like system:
Solution of the Dirac equation: $$E_{nj} = mc^2\left[1 + \left(\frac{Z\alpha}{n - j - \tfrac12 + \sqrt{(j+\tfrac12)^2 - Z^2\alpha^2}}\right)^{2}\right]^{-1/2}$$
The energy now depends, in addition to the principal quantum number n, on the total angular momentum j = ℓ ± ½, the vector sum of orbital and spin contributions. (Remarkably, the same formula was obtained by Sommerfeld, in a relativistic generalization of the Old Quantum Theory, with κ instead of j + ½; explore my Demonstration "The Fine-Structure Constant from the Old Quantum Theory" embedded below this paragraph.) The ground state is now designated 1 s½. The degeneracy of the n = 2 states is now partially broken, with 2 p3/2 higher in energy than 2 p½ and 2 s½. This constitutes fine-structure splitting, largely due to spin-orbit coupling. The 2 p½ and 2 s½ levels remain degenerate in Dirac theory but, as we will see, they are ultimately split by the Lamb shift of quantum electrodynamics.
It is interesting to expand the above energy formula in powers of the fine-structure constant α:
Expanding the energy formula: $$E = mc^2 - \frac{Z^2\alpha^2}{2n^2}\,mc^2 - \frac{Z^4\alpha^4}{2n^4}\left(\frac{n}{j+\tfrac12}-\frac34\right)mc^2 + \cdots$$
The first term gives the relativistic rest energy of the electron, m c2. The second term simplifies, in atomic units, with m = 1 and c2 α2 = 1, to -Z2/2 n2, reproducing the Bohr–Schrödinger formula. The complicated third term then accounts for relativistic corrections that cause the fine-structure splittings. It can be written more compactly as
$$-\frac{\alpha^2 Z^4}{2n^4}\left(\frac{n}{j+1/2}-\frac{3}{4}\right).$$
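A numerical sketch of the Dirac formula (added here as an illustration; CODATA-style values of α and mc² are assumed) shows the n = 2 fine structure directly:

```python
# Dirac energies of hydrogen-like atoms; 2p_{3/2} - 2p_{1/2} splitting.
import math

alpha = 7.2973525693e-3        # fine-structure constant
mc2_eV = 510998.95             # electron rest energy, eV
h_eVs = 4.135667696e-15        # Planck constant, eV*s

def E_dirac(n, j, Z=1):
    gamma = math.sqrt((j + 0.5)**2 - (Z * alpha)**2)
    return mc2_eV / math.sqrt(1.0 + (Z * alpha / (n - j - 0.5 + gamma))**2)

# The energy depends only on n and j, so 2s_1/2 and 2p_1/2 are degenerate;
# the 2p_3/2 - 2p_1/2 splitting comes out near 1.1e4 MHz.
split = E_dirac(2, 1.5) - E_dirac(2, 0.5)
print(split, split / h_eVs / 1e6)    # ~4.5e-5 eV, ~1.1e4 MHz
```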
Next came quantum electrodynamics (QED), which was also first pioneered by Dirac. It was later turned into the most spectacularly accurate physical theory in existence, with computational methods developed by Schwinger, Feynman, Tomonaga, and Dyson in the 1950s. In 1947, Lamb and Retherford measured the energy difference between the 2 s½ and 2 p½ states predicted to be degenerate by the Dirac theory. The Lamb shift, of the order of 1057 MHz, is due to the interaction of the electron with the vacuum state of the radiation field. QED is able to reproduce its magnitude to better than one part in 10^8. Also QED was able to account for the "anomaly" in the electron-spin g-factor. Its observed value is actually g ≈ 2.00232. Schwinger derived the first-order correction, giving
$$g=2\left(1+\frac{\alpha}{2\pi}+\cdots\right).$$
A more complete computation gives a value agreeing with experiment to 1 part in 10^12.
Finally, we describe the hyperfine structure of the hydrogen atom. The proton, like the electron, has an intrinsic angular momentum of ½ħ but a magnetic moment about 1000 times smaller than the electron's. In the ground electronic state, a transition between the antiparallel and parallel electron-nuclear spin orientations produces a photon of 1420 MHz radiation, which falls in the microwave region, equivalent to a wavelength of 21 cm. The transition occurs in an isolated hydrogen atom approximately once every 10^7 years. Still, there are enough hydrogen atoms in a galaxy, including the Milky Way, for this transition to be detectable on Earth. It was first observed by Ewen and Purcell in 1951. In radio astronomy, radiation from hydrogen atoms is an important tool in the study of the structure and dynamics of distant galaxies.
In the words of Arthur Schawlow, “The spectrum of the hydrogen atom has proved to be the Rosetta stone of modern physics: once this pattern of lines had been deciphered much else could also be understood.” We have followed this process, beginning with the primitive four-line visible spectrum of hydrogen, and culminating in measurements sensitive enough to detect the Lamb shift.
Posted in: Mathematics
Posted by Frank December 30, 2013 at 11:53 pm
A very, very finely modeled history of the evolution of a scientific theory. Thanks and regards.
Posted by Luke January 4, 2014 at 7:37 am
S. M. Blinder
This comment is a reply to several questions and suggestions on my blog commemorating the 100th anniversary of Niels Bohr's atomic theory.
Michael Frayn's award-winning play Copenhagen, produced in 1998, was an attempt to reconstruct what actually took place during Werner Heisenberg's visit to Niels Bohr in 1941, during the Nazi occupation of Denmark in World War II. Heisenberg had worked at Bohr's Institute for Theoretical Physics for several years beginning in 1924 and, by all accounts, they had a very warm personal friendship. Heisenberg and Bohr were the principal creators of the Copenhagen Interpretation of quantum mechanics. Bohr's half-Jewish ancestry placed him in a rather delicate position in the wartime political environment. There is no general agreement on what was actually discussed between Heisenberg and Bohr (another manifestation of the Heisenberg uncertainty principle?). Heisenberg, the head of the German atomic bomb project, hinted at the technical difficulties of the project. He may have judged the possibility of a uranium fission weapon as unattainable, or thought that it might succeed if the war lasted long enough. He tried to engage Bohr on the morality of physicists devoting themselves during wartime to weapons such as the atomic bomb. Some speculated that the purpose of Heisenberg's visit was intelligence gathering: an attempt to determine whether Bohr had any knowledge of the Allies' progress on uranium fission.
Bohr and his family escaped from Denmark in 1943, narrowly avoiding arrest by the Nazi occupation forces. He worked for a time on the Manhattan Project at Los Alamos, under the alias “Nicholas Baker.” After the War, Bohr was a vigorous proponent for international cooperation on nuclear energy. He was influential in the creation of the International Atomic Energy Agency and received the first Atoms for Peace Award in 1957.
Bohr, in collaboration with John Wheeler, derived a simple relation predicting which nuclei are likely to undergo spontaneous fission: Z^2/A > 2(a_S/a_C), where a_S and a_C are the surface and Coulomb energy factors, respectively. With a_S ≈ 18 MeV and a_C ≈ 0.72 MeV, Z^2/A equals about 50. See:
N. Bohr and J. A. Wheeler, “The Mechanism of Nuclear Fission,” Phys. Rev. 56, 426-450 (1939).
Another subject of comments I received concerned the relation between Bohr orbits and de Broglie matter waves. It turns out that the allowed orbits contain exactly an integer number of de Broglie wavelengths. Refer to the two Demonstrations: http://demonstrations.wolfram.com/BohrsOrbits/ and http://demonstrations.wolfram.com/ElectronWavesInBohrAtom/
Posted by S. M. Blinder January 9, 2014 at 3:00 pm
Dileep V. Sathe
On Bohr’s first postulate
In the last fifty years, educationists have observed a few common misunderstandings among students, young and grown-up. They seem to be global and to have an adverse effect on students' liking for physics. I have therefore been working on the conceptual aspects of Newtonian mechanics for nearly 35 years, with a focus on related topics like planetary motion and Bohr's theory of the hydrogen atom. I would like to draw the attention of readers to his first postulate and to reconsider it in view of my following statement.
As per the first postulate, the centripetal force for the motion of the electron around the nucleus of hydrogen is provided by the electrostatic force of attraction between the electron and the positively charged nucleus. For 100 years, we have been using it without any real difficulty. But I noticed a conceptual difficulty, describing it first in a quarterly of UNESCO in Jan-Mar 1980. Let me raise it again here.
We use the electrostatic force in the concept of electrical potential energy also, where the motion of the charged particle is *along the line of action of the force*. Thus we use one and the same force for two different modes of motion: i) motion along the line of action, in the potential energy, and ii) motion perpendicular to the line of action, in Bohr's theory. The question that frequently comes to mind is: if one and the same force is used in two totally different modes of motion, WHY are there no conditions that allow only the required mode to take place and not the other? I think this question requires the serious attention of educationists as well as physicists. Feel free to discuss more on this point.
Posted by Dileep V. Sathe January 16, 2014 at 1:03 pm
Guido Fano
I read your beautiful post on the history of the hydrogen atom; I like it, and it is a pity that you gave up (if I understand correctly) the idea of writing a larger treatise. Of the two main components of physics, i.e. experiments and mathematics, I am more inclined to the second. I studied most of what you write in the post many years ago. Since then I have never used or taught Bohr-Sommerfeld theory, only Quantum Mechanics. But it was nice to see again in your post (I had forgotten) that Nature started to reveal its secrets with the four lines in the visible region of the hydrogen spectrum; of course it is amazing, and gives us an almost religious feeling, that in galaxies ten billion light years away from us, infinitesimal spin-flips of electrons of H atoms produce photons that reach us and inform us about their origin. I am sure that you agree with this feeling.
Posted by Guido Fano January 25, 2014 at 4:26 pm
Chemical Sciences: A Manual for CSIR-UGC National Eligibility Test for Lectureship and JRF/Eckart conditions
From Wikibooks, open books for an open world
The Eckart conditions[1] simplify the molecular Schrödinger equation that arises in the second step of the Born-Oppenheimer approximation. They allow the separation of the nuclear center of mass motion and (partly) the rotational motion from the internal vibrational motions. Although the rotational and vibrational motions of the nuclei in a molecule cannot be fully separated, the Eckart conditions minimize the (Coriolis) coupling between these motions.
Definition of the Eckart conditions
Let $\vec{R}_A$ be the coordinate vector of nucleus A ($A = 1, \ldots, N$) with respect to a special body-fixed frame with origin in the center of mass of the molecule, a so-called Eckart frame.[2] The mass of nucleus A is $M_A$. We suppose that the potential energy surface has a (deep) minimum for $\vec{R}_A = \vec{R}_A^0$ ($A = 1, \ldots, N$); this means that the molecule is assumed to have a well-defined geometry (a semi-rigid molecule). The equilibrium coordinates $\vec{R}_A^0$ are expressed with respect to the Eckart frame.
We define displacement coordinates $\vec{d}_A \equiv \vec{R}_A - \vec{R}_A^0$. The displacement coordinates satisfy the translational Eckart conditions if: $$\sum_{A=1}^N M_A\, \vec{d}_A = 0.$$
The rotational Eckart conditions for the displacements are: $$\sum_{A=1}^N M_A\, \vec{R}_A^0 \times \vec{d}_A = 0,$$
where $\times$ indicates a vector product.
Separation of external and internal coordinates
Displacement coordinates that satisfy the Eckart conditions are usually referred to as internal molecular coordinates. To clarify this nomenclature we define first external molecular coordinates.
The nuclear configuration space R3N arising in the second step of the Born-Oppenheimer approximation is a linear space of dimension 3N. One can renormalize its basis and obtain so-called mass-weighted coordinates, or one can introduce a generalized inner product with the diagonal mass matrix $\mathbf{M} \equiv \operatorname{diag}(M_1, M_1, M_1, \ldots, M_N, M_N, M_N)$ playing the role of a positive definite metric. That is, an inner product in the space is defined as $$\langle \mathbf{d} \,|\, \mathbf{d}' \rangle \equiv \sum_{A=1}^N M_A\, \vec{d}_A \cdot \vec{d}'_A.$$ We will follow the second route.
The following 3N-dimensional vectors are external coordinates: $$\vec{T}(\mathbf{t}) \equiv (\mathbf{t}, \mathbf{t}, \ldots, \mathbf{t}) \quad\text{and}\quad \vec{S}(\mathbf{s}) \equiv (\mathbf{s}\times\vec{R}_1^0, \ldots, \mathbf{s}\times\vec{R}_N^0),$$ where t and s are arbitrary triplets of real numbers (3-dimensional column vectors). The subspace of R3N of all vectors T and S is of dimension 6, for it is easy to show that the 6 vectors obtained from unit vectors t and s [t=(1,0,0), t=(0,1,0), s = (1,0,0), etc.] are orthogonal and hence linearly independent. We write this 6-dimensional subspace as Rext and the total configuration space becomes $$\mathbf{R}^{3N} = \mathbf{R}_\text{ext} \oplus \mathbf{R}_\text{int},$$
where Rint is the 3N - 6 dimensional orthogonal complement of Rext. The elements of Rint are internal coordinates. We show that they obey the Eckart conditions, and also that, conversely, vectors obeying the Eckart conditions are in the space Rint. For $\mathbf{d}$ in Rint, $$0 = \langle \vec{T}(\mathbf{t}) \,|\, \mathbf{d} \rangle = \sum_{A=1}^N M_A\, \mathbf{t}\cdot\vec{d}_A = \mathbf{t}\cdot\sum_{A=1}^N M_A\, \vec{d}_A,$$ because $\mathbf{d}$ is in the orthogonal complement of T. Now, since t is arbitrary, $$\sum_{A=1}^N M_A\, \vec{d}_A = 0,$$ which are the translational Eckart conditions. In a very similar manner, $$0 = \langle \vec{S}(\mathbf{s}) \,|\, \mathbf{d} \rangle = \sum_{A=1}^N M_A\, (\mathbf{s}\times\vec{R}_A^0)\cdot\vec{d}_A = \mathbf{s}\cdot\sum_{A=1}^N M_A\, \vec{R}_A^0\times\vec{d}_A,$$ because $\mathbf{d}$ is in the orthogonal complement of S. Finally, since s is arbitrary, $$\sum_{A=1}^N M_A\, \vec{R}_A^0\times\vec{d}_A = 0,$$ which are the rotational Eckart conditions.
Relation to the harmonic approximation
In the harmonic approximation to the nuclear vibrational problem, expressed in displacement coordinates, one must solve the generalized eigenvalue problem $$\mathbf{H}\,\mathbf{C} = \mathbf{M}\,\mathbf{C}\,\boldsymbol{\Phi},$$ where H is a 3N x 3N symmetric matrix of second derivatives of the potential V; H is the Hessian matrix of V in the equilibrium $\vec{R}_A^0$. The diagonal matrix M contains the masses on the diagonal. The diagonal matrix $\boldsymbol{\Phi}$ contains the eigenvalues, while the columns of C contain the eigenvectors.
It can be shown that the invariance of V under simultaneous translation over t of all nuclei implies that the vectors T, introduced above, are eigenvectors of eigenvalue zero. From the invariance of V under an infinitesimal rotation of all nuclei around s it can be shown that the vectors S are also eigenvectors of eigenvalue zero. Thus, the 6 columns of C corresponding to eigenvalue zero are determined algebraically. (If the generalized eigenvalue problem is solved numerically, one will in general find 6 linearly independent linear combinations of S and T.) The eigenspace corresponding to eigenvalue zero is at least of dimension 6 (often it is exactly of dimension 6, since the other eigenvalues, which are force constants, are never zero for molecules in their ground state). Thus, T and S correspond to the overall (external) motions: translation and rotation, respectively. They are zero-energy modes because space is homogeneous (force-free) and isotropic (torque-free).
By the definition in this article the non-zero frequency modes are internal modes, since they lie within the orthogonal complement of Rext. The generalized orthogonality relations $\mathbf{C}^{\mathrm{T}}\mathbf{M}\,\mathbf{C} = \mathbf{I}$, applied to the "internal" and "external" (zero-eigenvalue) columns of C, are in fact the Eckart conditions.
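As an illustration (my own numpy sketch; the bent-triatomic geometry and masses are assumptions, not taken from the text), one can check numerically that projecting an arbitrary displacement onto the M-orthogonal complement of the six external vectors T(t) and S(s) enforces both Eckart conditions:

```python
# Verify the Eckart conditions for the internal part of a random displacement.
import numpy as np

R0 = np.array([[0.0, 0.0, 0.0],          # assumed equilibrium geometry
               [0.96, 0.0, 0.0],
               [-0.24, 0.93, 0.0]])
M = np.array([16.0, 1.0, 1.0])           # assumed nuclear masses

def external_basis(R0):
    cols = []
    for u in np.eye(3):
        cols.append(np.tile(u, len(R0)))          # T(t): uniform translations
        cols.append(np.cross(u, R0).ravel())      # S(s): infinitesimal rotations
    return np.array(cols).T                       # shape (3N, 6)

E = external_basis(R0)
W = np.repeat(M, 3)                               # diagonal of the mass metric
d = np.random.default_rng(0).normal(size=9)       # arbitrary displacement

G = (E.T * W) @ E                                 # Gram matrix <E_i, E_j>_M
d_int = d - E @ np.linalg.solve(G, E.T @ (W * d)) # project out external part

D = d_int.reshape(-1, 3)
print(np.allclose((M[:, None] * D).sum(axis=0), 0))                 # translational
print(np.allclose((M[:, None] * np.cross(R0, D)).sum(axis=0), 0))   # rotational
```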
1. C. Eckart, Some studies concerning rotating axes and polyatomic molecules, Physical Review, vol. 47, pp. 552-558 (1935).
2. L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, Addison-Wesley, Reading (1981) p. 535.
The classic work is:
• E. Bright Wilson, Jr., J. C. Decius and Paul C. Cross, Molecular Vibrations, Mc-Graw-Hill (1955). Reprinted by Dover (1980).
More advanced books are:
• D. Papoušek and M. R. Aliev, Molecular Vibrational-Rotational Spectra, Elsevier (1982).
• S. Califano, Vibrational States, Wiley, New York-London (1976). |
After reading about the hydrogen atom and understanding how Schrodinger's equation explains most of its atomic spectrum, I have also come to know that it explains most chemical reactions and is a huge tool in chemistry.
I am now almost convinced that it is wise to accept the Schrodinger equation as a law that governs the motion of subatomic particles like electrons at quantum scales. Now I am a little curious about one problem: how does an electron (a distribution of charge) move under the influence of its own electrostatic Coulomb field? I am interested mainly in the strictly theoretical sense, but would also like to know if there is any practical importance to it.
I'd like to consider this problem first in a 1-D setup, purely due to my lack of acquaintance with partial differential equations. So let's consider a 1-D electron, as a linear charge distribution of constant density $\rho$ distributed over a length $2r_e$.
Now I am interested in setting up the Schrödinger equation for it, in the case where there is no external field. I'd appreciate some help/comments on setting it up, its solution, and the analysis/interpretation of the resulting wave function and what it actually means at different energy levels (very high, very low, etc.).
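Not an answer to whether such a self-interaction term belongs in the equation at all (see the comments below, which argue it does not in standard QM), but one possible formalization that is sometimes used is a self-consistent, Hartree-type (Schrödinger–Poisson-like) iteration. Here is a minimal sketch, in arbitrary units with ħ = m = 1 and the 1-D electrostatic kernel |x − x′|; the grid sizes, coupling, and the crude undamped fixed-point loop are all my own illustrative choices:
import numpy as np
n, L = 400, 20.0
x = np.linspace(-L/2, L/2, n); dx = x[1] - x[0]
# kinetic energy via the second-difference Laplacian
T = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n-1), 1) - np.diag(np.ones(n-1), -1)) / (2*dx**2)
K = np.abs(x[:, None] - x[None, :])        # 1-D "Coulomb" kernel |x - x'|
psi = np.exp(-x**2); psi /= np.sqrt(np.sum(psi**2)*dx)
for _ in range(50):                        # self-consistency loop
    rho = np.abs(psi)**2
    V = K @ rho * dx                       # potential generated by the electron's own density
    E, U = np.linalg.eigh(T + np.diag(V))
    psi = U[:, 0] / np.sqrt(dx)            # normalized ground state
print("ground-state energy:", E[0])
Note the 1-D peculiarity: the |x − x′| kernel grows with distance, so this mean-field potential is confining, which is one reason the 1-D toy model should not be over-interpreted physically.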
Comment to the question (v1): Note that standard 1D/2D/3D quantum mechanical treatment of the hydrogen atom considers the electron in the proton's electrostatic field rather than the electron's own electrostatic field. – Qmechanic Jul 11 '13 at 13:28
Electrons are considered point particles - though this statement is sidenoted by this answer (does anyone care to expand on this?) - so the radius $r_e$ would be zero. But that's actually just a pedantic point in this context. The electric field around an electron (on its own in a vacuum) is spherically symmetric (as far as we can tell), so this would not influence the electron's movement. – Wouter Jul 11 '13 at 13:30
@Qmechanic : I know this is a different problem from the Hydrogen atom. I am just curious about this special problem. – Rajesh D Jul 11 '13 at 13:33
@RajeshD Yes and so the only term in the Schrödinger equation for a free electron would be the kinetic energy one, since it is not influenced by its own (spherically symmetric) electric field. – Wouter Jul 11 '13 at 13:44
Well there is such a thing as self-interaction in QFT, but not (as far as I know) in the context of regular QM - which is what I think you're currently looking into? And I'm not even sure this self-interaction is possible for a free electron. Could be wrong though. – Wouter Jul 11 '13 at 14:00
|
7f06daead7b1a7b9 |
The Equations: Icons of Knowledge
Sander Bais
Harvard University Press
Reviewed by David M. Bressoud
The world is awash in popularizations of modern physics. Here is one more. The author writes clearly and incisively about the physical ideas, but so have other good popularizers. Bais has chosen an original means of setting himself apart. Most books of this genre take it as axiomatic that they must avoid equations. Bais embraces them, building his book around the equations of physics and using them as the vehicle for describing the counter-intuitive world of relativity theory, quantum mechanics, and string theory.
This book is structured around a comprehensive collection of important equations, starting with the logistic equation and continuing through the equations of Newtonian mechanics, electricity and magnetism, the Korteweg-De Vries and Navier-Stokes equations, the Boltzmann equation, equations of special and general relativity, the Schrödinger and Dirac equations, and ending with the equations of quantum chromodynamics, the Glashow-Weinberg-Salam model for electro-weak interactions, and super-string action. Quite an impressive list, especially when one considers that his audience really is the neophyte, witnessed by the need to begin this book with explanations of the concepts of function, vector, and derivative.
The equations provide useful hooks on which to hang a discussion of the big ideas of physics. Each equation (or collection of equations) gets its own page in which it is displayed in all its incomprehensible glory. In that sense, the equations serve their role as icons, mystical pointers to a meaning beyond what is visible to the uninitiated.
Bais is trying to do more than this. He would like the reader to gain an appreciation for the meaning of these equations. He does attempt an explanation of the significance of each of the symbols and operators, more diligently and successfully for the earlier equations than for the later ones. Ultimately, he fails. To someone familiar with the language of mathematics and with some of the physics, there is too little here. To the person who really is unfamiliar with such equations, too much comes too quickly. Once Bais has moved to partial differential equations and Lagrangian operators, the mathematics is treated perfunctorily and cryptically. I cannot fault him for this. Bais is trying to communicate an important physical principle in a few pages. There is neither time nor space for the mathematics.
The result is that the title of this book is something of a cheat. This is really about the physics, not the equations. The best that might be hoped for the general reader is an increased appreciation that equations play an important role in our understanding of the physical universe.
Yet when all is said and done, this is a pretty book that is well-written, and I would not hesitate to give it as a stocking-stuffer to a young person who is just beginning to discover the wonder of our physical universe and to appreciate the power of mathematics as the language in which we express our understanding of this universe.
Rise and fall
The logistic equation
Mechanics and gravity
Newton's dynamical equations and universal law of gravity
The electromagnetic force on a charge
The Lorentz force law
A local conservation law
The continuity equation
The Maxwell equations
Electromagnetic waves
The wave equations
Solitary waves
The Korteweg–De Vries equation
The three laws of thermodynamics
Kinetic theory
The Boltzmann equation
The Navier-Stokes equations
Special relativity
Relativistic kinematics
General relativity
The Einstein equations
Quantum mechanics
The Schrödinger equation
The relativistic electron
The Dirac equation
The strong force
Quantum chromodynamics
Electro-weak Interactions
The Glashow-Weinberg-Salam model
String theory
The superstring action
Back to the future
A final perspective |
70443a2dd5d7d871 |
To explain the inverse theorem, let me first discuss the bilinear estimate that it inverts. Define a wave to be a solution to the free wave equation -\phi_{tt} + \Delta \phi = 0. If the wave has a finite amount of energy, then one expects the wave to disperse as time goes to infinity; this is captured by the Strichartz estimates, which establish various spacetime L^p bounds on such waves in terms of the energy (or related quantities, such as Sobolev norms of the initial data). These estimates are fundamental to the local and global theory of nonlinear wave equations, as they can be used to control the effect of the nonlinearity.
In some cases (especially in low dimensions and/or low regularities, and with equations whose nonlinear terms contain derivatives), Strichartz estimates are too weak to control nonlinearities; roughly speaking, this is because waves decay too slowly in low dimensions. (For instance, one-dimensional waves \phi(t,x) = f(x+t)+g(x-t) do not decay at all.) However, it has been understood for some time that if the nonlinearity has a special null structure, which roughly means that it consists only of interactions between transverse waves rather than parallel waves, then there is more decay that one can exploit. For instance, while one-dimensional waves do not decay in time, the product between a left-propagating wave f(x+t) and a right-propagating wave g(x-t) does decay in time. In particular, if f and g are bounded in L^2({\Bbb R}), then this product is bounded in spacetime L^2_{t,x}({\Bbb R}), thanks to the Fubini-Tonelli theorem.
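To spell out the one-dimensional computation behind this claim (a standard change of variables; the derivation is mine, not quoted from the post): substituting u = x+t, v = x-t, which has Jacobian 1/2, one gets
\displaystyle \int_{\Bbb R} \int_{\Bbb R} |f(x+t) g(x-t)|^2\ dx\ dt = \frac{1}{2} \int_{\Bbb R} \int_{\Bbb R} |f(u)|^2 |g(v)|^2\ du\ dv = \frac{1}{2} \|f\|_{L^2}^2 \|g\|_{L^2}^2,
so the spacetime L^2_{t,x} norm of the product is indeed controlled by the product of the L^2 norms of the two transverse waves.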
There is a similar “bilinear L^2” estimate for products of transverse waves in higher dimensions. This estimate is the basic building block for the bilinear X^{s,b} estimates and their variants as developed by Bourgain, Klainerman-Machedon, Kenig-Ponce-Vega, Tataru, and others, and which are the tool of choice for establishing local and global control on nonlinear wave equations, particularly at low dimensions and at critical regularities. In particular, these estimates (or more precisely, a complicated variant of these estimates in sophisticated function spaces, due to Tataru and myself), are used in the theory of the energy-critical wave map equation. [These bilinear (and trilinear) estimates are not, by themselves, enough to handle this equation; one also needs an additional gauge fixing procedure before the equation is sufficiently close to linear in behaviour that these estimates become effective. But I do not wish to discuss the (significant) gauge fixing issue here.]
To cut a (very) long story short, these estimates, when combined with a suitable perturbative theory, allow one to control energy-critical wave maps as long as the energy is small. However, the whole point of the “heatwave” project is to control the non-perturbative setting when the energy is large (but finite), and one wants to control the solution for long periods of time.
In my previous “heatwave” paper, in which I established large data local well-posedness for this equation, I finessed this issue by localising time to very short intervals, which made certain spacetime norms small enough for the perturbation theory to apply. This sufficed for the local well-posedness theory, but is not good enough for the global perturbative theory, because the number of very short intervals needed to cover the entire time axis becomes unbounded. For that, one needs the ability to make certain norms or estimates “small” by only chopping up time into a bounded number of intervals. I refer to this property as divisibility (I used to refer to it, somewhat incorrectly, as fungibility).
In the case of semilinear wave (or Schrödinger) equations, in which Strichartz estimates are already sufficient to obtain a satisfactory perturbative theory, divisibility is well-understood, and boils down to the following simple observation: if a function \phi: {\Bbb R} \times {\Bbb R}^n \to {\Bbb C} obeys a global spacetime integrability bound such as
\int_{\Bbb R} \int_{{\Bbb R}^n} |\phi(t,x)|^p\ dx dt \leq M
for some finite exponent p and some finite bound M, then one can partition {\Bbb R} into intervals I on which
\int_I \int_{{\Bbb R}^n} |\phi(t,x)|^p\ dx dt \leq \varepsilon
for some \varepsilon > 0 at one’s disposal to select. Indeed the number of such intervals is bounded by M/\varepsilon, and the intervals can be selected by a simple “greedy algorithm” argument. This divisibility property of L^p-type spacetime norms allows one to easily generalise the small-data perturbation theory to the large-data setting, and is relied upon heavily in the modern theory of the critical nonlinear wave and Schrödinger equations; see for instance this survey of Killip and Visan.
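As a concrete illustration of that greedy selection (a sketch of mine, not taken from any of the cited papers; the function name and grid are illustrative, and with a sampled density the \varepsilon bound holds only up to grid resolution):
import numpy as np
def divide_time_axis(t, density, eps):
    # t: increasing time grid; density: the spatial integral of |phi(t,x)|^p at each time.
    # Greedily cut the time axis so each interval carries spacetime integral <= eps.
    cumulative = np.concatenate([[0.0], np.cumsum(0.5*(density[1:] + density[:-1])*np.diff(t))])
    endpoints, target = [t[0]], eps
    for i in range(len(t)):
        if cumulative[i] >= target:
            endpoints.append(t[i])
            target += eps
    endpoints.append(t[-1])
    return endpoints          # number of intervals is at most M/eps + 1
t = np.linspace(-10.0, 10.0, 4001)
density = np.exp(-t**2)/np.sqrt(np.pi)    # total spacetime "mass" M = 1
print(divide_time_axis(t, density, 0.1))  # ~10 intervals, each of mass <= 0.1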
Unfortunately, the function spaces used in wave maps are not easily divisible in this manner (very roughly speaking, this is because the function space norms contain too many L^\infty_t type norms within them). So one cannot rely purely on refining the function space; one must also work on refining the bilinear (and trilinear) estimates on these spaces. The standard way to do this is to strengthen the L^p exponents in these estimates, and for the basic bilinear L^2 estimate this has indeed been done (in work of Wolff and myself). This suffices for “equal-frequency” interactions, in which one is multiplying two transverse waves of the same frequency, but turns out to be inadequate for “imbalanced-frequency” interactions, when one is multiplying a low-frequency wave by a high-frequency transverse wave. For this, I rely instead on establishing an inverse theorem for the estimate.
Generally speaking, whenever one is faced with an estimate, e.g. a linear estimate
\| Tf \|_Y \leq C \|f\|_X,
one can pose the inverse problem of trying to classify the functions f for which the estimate is tight in the sense that
\| Tf \|_Y \geq \delta \|f\|_X
for some \delta > 0 which is not too small. Such inverse theorems are a current area of study in additive combinatorics, and have recently begun making an appearance in PDE as well. For instance:
• Young’s inequality \|f*g\|_{L^r} \leq \|f\|_{L^p} \|g\|_{L^r} or the Hausdorff-Young inequality \|\hat f\|_{L^{p'}} \leq \|f\|_{L^p}, is only tight (for non-endpoint p,q,r) when f, g are concentrated on balls, arithmetic progressions, or Bohr sets (this is a consequence of several basic theorems in additive combinatorics, including Freiman’s theorem and the Balog-Szemeredi-Gowers theorem);
• The trivial inequality \|f\|_{U^k} \leq \|f\|_{L^\infty} for the Gowers uniformity norms is only expected to be tight when f correlates with a highly algebraic object, such as a polynomial phase or nilsequence (this is the inverse conjecture for the Gowers norm, which is partially proven so far);
• The Sobolev embedding \| f \|_{L^q} \leq C \|f\|_{W^{s,r}} is only tight when f is concentrated on a unit ball (for non-endpoint estimates) or a ball of arbitrary radius (for endpoint estimates);
• Strichartz estimates are only tight when f is concentrated on a ball (for non-endpoint estimates) or a tube (for endpoint estimates).
Inverse theorems for such estimates as Sobolev inequalities and Strichartz estimates are also closely related to the theory of concentration compactness and profile decompositions; see this previous blog post of mine for a discussion.
I can now state, informally, the main result of this paper:
Theorem 1 (informal statement). A bilinear L^2 estimate between two waves of different frequency is only tight when the waves are concentrated on a small number of light rays. Outside of these rays, the L^2 norm is small.
This leads to a corollary which will be used in my final heatwave paper:
Corollary 2 (informal statement). Any large-energy wave \phi can have its time axis subdivided into a bounded number of intervals, such that on each interval the bilinear estimates for that wave (when interacted against any high-frequency transverse wave) behave “as if” \phi was small-energy rather than large energy.
The method of proof relies on a paper of mine from several years ago on bilinear L^p estimates for the wave equation, which in turn is based on a celebrated paper of Wolff. Roughly speaking, the idea is to use wave packet decompositions and the combinatorics of light rays to isolate the regions of spacetime where the waves are concentrating, cover these regions by tubular neighbourhoods of light rays, then remove the light rays to reduce the energy (or mass) of the solution and iterate. The wave packet analysis is moderately complicated, but fortunately I can use a proposition on this topic from my paper as a black box, leaving only the other components of the argument to write out in detail. |
06bc3c526bb0d004 | General relativity
From Wikipedia, the free encyclopedia
A simulated black hole of 10 solar masses within the Milky Way, seen from a distance of 600 kilometers.
The Einstein field equations are nonlinear and very difficult to solve. Einstein used approximation methods in working out initial predictions of the theory. But as early as 1916, the astrophysicist Karl Schwarzschild found the first non-trivial exact solution to the Einstein field equations, the so-called Schwarzschild metric. This solution laid the groundwork for the description of the final stages of gravitational collapse, and the objects known today as black holes. In the same year, the first steps towards generalizing Schwarzschild's solution to electrically charged objects were taken, which eventually resulted in the Reissner–Nordström solution, now associated with electrically charged black holes.[3] In 1917, Einstein applied his theory to the universe as a whole, initiating the field of relativistic cosmology. In line with contemporary thinking, he assumed a static universe, adding a new parameter to his original field equations—the cosmological constant—to match that observational presumption.[4] By 1929, however, the work of Hubble and others had shown that our universe is expanding. This is readily described by the expanding cosmological solutions found by Friedmann in 1922, which do not require a cosmological constant. Lemaître used these solutions to formulate the earliest version of the Big Bang models, in which our universe has evolved from an extremely hot and dense earlier state.[5] Einstein later declared the cosmological constant the biggest blunder of his life.[6]
During that period, general relativity remained something of a curiosity among physical theories. It was clearly superior to Newtonian gravity, being consistent with special relativity and accounting for several effects unexplained by the Newtonian theory. Einstein himself had shown in 1915 how his theory explained the anomalous perihelion advance of the planet Mercury without any arbitrary parameters ("fudge factors").[7] Similarly, a 1919 expedition led by Eddington confirmed general relativity's prediction for the deflection of starlight by the Sun during the total solar eclipse of May 29, 1919,[8] making Einstein instantly famous.[9] Yet the theory entered the mainstream of theoretical physics and astrophysics only with the developments between approximately 1960 and 1975, now known as the golden age of general relativity.[10] Physicists began to understand the concept of a black hole, and to identify quasars as one of these objects' astrophysical manifestations.[11] Ever more precise solar system tests confirmed the theory's predictive power,[12] and relativistic cosmology, too, became amenable to direct observational tests.[13]
From classical mechanics to general relativity
Geometry of Newtonian gravity
Relativistic generalization
As intriguing as geometric Newtonian gravity may be, its basis, classical mechanics, is merely a limiting case of (special) relativistic mechanics.[21] In the language of symmetry: where gravity can be neglected, physics is Lorentz invariant as in special relativity rather than Galilei invariant as in classical mechanics. (The defining symmetry of special relativity is the Poincaré group which also includes translations and rotations.) The differences between the two become significant when we are dealing with speeds approaching the speed of light, and with high-energy phenomena.[22]
With Lorentz symmetry, additional structures come into play. They are defined by the set of light cones (see the image on the left). The light-cones define a causal structure: for each event A, there is a set of events that can, in principle, either influence or be influenced by A via signals or interactions that do not need to travel faster than light (such as event B in the image), and a set of events for which such an influence is impossible (such as event C in the image). These sets are observer-independent.[23] In conjunction with the world-lines of freely falling particles, the light-cones can be used to reconstruct the space–time's semi-Riemannian metric, at least up to a positive scalar factor. In mathematical terms, this defines a conformal structure.[24]
Einstein's equations
Einstein's field equations
G_{\mu\nu}\equiv R_{\mu\nu} - {\textstyle 1 \over 2}R\,g_{\mu\nu} = {8 \pi G \over c^4} T_{\mu\nu}\,
On the left-hand side is the Einstein tensor G_{\mu\nu}, a specific divergence-free combination of the Ricci tensor R_{\mu\nu} and the metric; it is symmetric. In particular, R = g^{\mu\nu}R_{\mu\nu} is the curvature scalar, obtained by contracting the Ricci tensor with the metric.
On the right-hand side, T_{\mu\nu} is the energy–momentum tensor. All tensors are written in abstract index notation.[30] Matching the theory's prediction to observational results for planetary orbits (or, equivalently, assuring that the weak-gravity, low-speed limit is Newtonian mechanics), the proportionality constant can be fixed as κ = 8πG/c^4, with G the gravitational constant and c the speed of light.[31] When there is no matter present, so that the energy–momentum tensor vanishes, the results are the vacuum Einstein equations, R_{\mu\nu} = 0.
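As an illustration (my own check, not part of this article), the Schwarzschild solution mentioned above can be verified symbolically to satisfy the vacuum equations R_{\mu\nu} = 0 in a few lines of Python with sympy; this sketch should print True:
import sympy as sp
t, r, th, ph = sp.symbols('t r theta phi')
rs = sp.symbols('r_s', positive=True)          # Schwarzschild radius 2GM/c^2
X = [t, r, th, ph]
g = sp.diag(-(1 - rs/r), 1/(1 - rs/r), r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
def Gamma(a, b, c):                            # Christoffel symbols of g
    return sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b]) - sp.diff(g[b, c], X[d])) for d in range(4))/2)
Gam = [[[Gamma(a, b, c) for c in range(4)] for b in range(4)] for a in range(4)]
def Ricci(b, c):                               # R_bc from the Riemann contraction
    return sp.simplify(
        sum(sp.diff(Gam[a][b][c], X[a]) for a in range(4))
        - sum(sp.diff(Gam[a][b][a], X[c]) for a in range(4))
        + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a] for a in range(4) for d in range(4)))
print(all(Ricci(b, c) == 0 for b in range(4) for c in range(4)))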
There are alternatives to general relativity built upon the same premises, which include additional rules and/or constraints, leading to different field equations. Examples are Brans–Dicke theory, teleparallelism, and Einstein–Cartan theory.[32]
Definition and basic applications
Definition and basic properties
As it is constructed using tensors, general relativity exhibits general covariance: its laws—and further laws formulated within the general relativistic framework—take on the same form in all coordinate systems.[37] Furthermore, the theory does not contain any invariant geometric background structures, i.e. it is background independent. It thus satisfies a more stringent general principle of relativity, namely that the laws of physics are the same for all observers.[38] Locally, as expressed in the equivalence principle, spacetime is Minkowskian, and the laws of physics exhibit local Lorentz invariance.[39]
Einstein's equations are nonlinear partial differential equations and, as such, difficult to solve exactly.[41] Nevertheless, a number of exact solutions are known, although only a few have direct physical applications.[42] The best-known exact solutions, and also those most interesting from a physics point of view, are the Schwarzschild solution, the Reissner–Nordström solution and the Kerr metric, each corresponding to a certain type of black hole in an otherwise empty universe,[43] and the Friedmann–Lemaître–Robertson–Walker and de Sitter universes, each describing an expanding cosmos.[44] Exact solutions of great theoretical interest include the Gödel universe (which opens up the intriguing possibility of time travel in curved spacetimes), the Taub-NUT solution (a model universe that is homogeneous, but anisotropic), and anti-de Sitter space (which has recently come to prominence in the context of what is called the Maldacena conjecture).[45]
Consequences of Einstein's theory
General relativity has a number of physical consequences. Some follow directly from the theory's axioms, whereas others have become clear only in the course of the ninety years of research that followed Einstein's initial publication.
Gravitational time dilation and frequency shift
Gravitational redshift has been measured in the laboratory[52] and using astronomical observations.[53] Gravitational time dilation in the Earth's gravitational field has been measured numerous times using atomic clocks,[54] while ongoing validation is provided as a side effect of the operation of the Global Positioning System (GPS).[55] Tests in stronger gravitational fields are provided by the observation of binary pulsars.[56] All results are in agreement with general relativity.[57] However, at the current level of accuracy, these observations cannot distinguish between general relativity and other theories in which the equivalence principle is valid.[58]
Light deflection and gravitational time delay
This and related predictions follow from the fact that light follows what is called a light-like or null geodesic—a generalization of the straight lines along which light travels in classical physics. Such geodesics are the generalization of the invariance of lightspeed in special relativity.[60] As one examines suitable model spacetimes (either the exterior Schwarzschild solution or, for more than a single mass, the post-Newtonian expansion),[61] several effects of gravity on light propagation emerge. Although the bending of light can also be derived by extending the universality of free fall to light,[62] the angle of deflection resulting from such calculations is only half the value given by general relativity.[63]
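For scale, the "half the value" remark can be made quantitative with the standard grazing-ray deflection formula α = 4GM/(c²b); plugging in solar values (my own arithmetic, not from this article) recovers Eddington's famous 1.75 arcseconds, twice the Newtonian prediction:
import math
G, c = 6.674e-11, 2.998e8
M_sun, R_sun = 1.989e30, 6.963e8
alpha = 4*G*M_sun/(c**2 * R_sun)          # deflection of a ray grazing the Sun, in radians
print(math.degrees(alpha)*3600)           # ~1.75 arcseconds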
Gravitational waves
Main article: Gravitational wave
Ring of test particles influenced by gravitational wave
One of several analogies between weak-field gravity and electromagnetism is that, analogous to electromagnetic waves, there are gravitational waves: ripples in the metric of spacetime that propagate at the speed of light.[66] The simplest type of such a wave can be visualized by its action on a ring of freely floating particles. A sine wave propagating through such a ring towards the reader distorts the ring in a characteristic, rhythmic fashion (animated image to the right).[67] Since Einstein's equations are non-linear, arbitrarily strong gravitational waves do not obey linear superposition, making their description difficult. However, for weak fields, a linear approximation can be made. Such linearized gravitational waves are sufficiently accurate to describe the exceedingly weak waves that are expected to arrive here on Earth from far-off cosmic events, which typically result in relative distances increasing and decreasing by 10^{-21} or less. Data analysis methods routinely make use of the fact that these linearized waves can be Fourier decomposed.[68]
Some exact solutions describe gravitational waves without any approximation, e.g., a wave train traveling through empty space[69] or so-called Gowdy universes, varieties of an expanding cosmos filled with gravitational waves.[70] But for gravitational waves produced in astrophysically relevant situations, such as the merger of two black holes, numerical methods are presently the only way to construct appropriate models.[71]
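A small illustration of the linearized description (mine, not from the article): to first order in the strain h, a plus-polarized wave travelling perpendicular to a ring of free test particles stretches proper separations along one axis while squeezing them along the other, in antiphase; the illustrative strain and frequency below are my assumptions.
import numpy as np
h, omega = 1e-21, 2*np.pi*100.0           # strain and a 100 Hz angular frequency
angles = np.linspace(0, 2*np.pi, 8, endpoint=False)
x0, y0 = np.cos(angles), np.sin(angles)   # unit ring of test particles
def ring_at(t):
    # first-order (TT gauge) response to a plus-polarized wave
    return x0*(1 + 0.5*h*np.cos(omega*t)), y0*(1 - 0.5*h*np.cos(omega*t))
x, y = ring_at(0.0)
print(np.max(np.abs(x - x0)))             # ~0.5e-21: why direct detection is so hard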
Orbital effects and the relativity of direction
Precession of apsides
The effect can also be derived by using either the exact Schwarzschild metric (describing spacetime around a spherical mass)[73] or the much more general post-Newtonian formalism.[74] It is due to the influence of gravity on the geometry of space and to the contribution of self-energy to a body's gravity (encoded in the nonlinearity of Einstein's equations).[75] Relativistic precession has been observed for all planets that allow for accurate precession measurements (Mercury, Venus, and Earth),[76] as well as in binary pulsar systems, where it is larger by five orders of magnitude.[77]
Orbital decay
Orbital decay for PSR1913+16: time shift in seconds, tracked over three decades.[78]
Geodetic precession and frame-dragging
Several relativistic effects are directly related to the relativity of direction.[82] One is geodetic precession: the axis direction of a gyroscope in free fall in curved spacetime will change when compared, for instance, with the direction of light received from distant stars—even though such a gyroscope represents the way of keeping a direction as stable as possible ("parallel transport").[83] For the Moon–Earth system, this effect has been measured with the help of lunar laser ranging.[84] More recently, it has been measured for test masses aboard the satellite Gravity Probe B to a precision of better than 0.3%.[85][86]
Near a rotating mass, there are so-called gravitomagnetic or frame-dragging effects. A distant observer will determine that objects close to the mass get "dragged around". This is most extreme for rotating black holes where, for any object entering a zone known as the ergosphere, rotation is inevitable.[87] Such effects can again be tested through their influence on the orientation of gyroscopes in free fall.[88] Somewhat controversial tests have been performed using the LAGEOS satellites, confirming the relativistic prediction.[89] Also the Mars Global Surveyor probe around Mars has been used.[90][91]
Astrophysical applications
Gravitational lensing
Main article: Gravitational lensing
The deflection of light by gravity is responsible for a new class of astronomical phenomena. If a massive object is situated between the astronomer and a distant target object with appropriate mass and relative distances, the astronomer will see multiple distorted images of the target. Such effects are known as gravitational lensing.[92] Depending on the configuration, scale, and mass distribution, there can be two or more images, a bright ring known as an Einstein ring, or partial rings called arcs.[93] The earliest example was discovered in 1979;[94] since then, more than a hundred gravitational lenses have been observed.[95] Even if the multiple images are too close to each other to be resolved, the effect can still be measured, e.g., as an overall brightening of the target object; a number of such "microlensing events" have been observed.[96]
Gravitational wave astronomy
Artist's impression of the space-borne gravitational wave detector LISA
Observations of binary pulsars provide strong indirect evidence for the existence of gravitational waves (see Orbital decay, above). However, gravitational waves reaching us from the depths of the cosmos have not been detected directly. Such detection is a major goal of current relativity-related research.[98] Several land-based gravitational wave detectors are currently in operation, most notably the interferometric detectors GEO 600, LIGO (two detectors), TAMA 300 and VIRGO.[99] Various pulsar timing arrays are using millisecond pulsars to detect gravitational waves in the 10−9 to 10−6 Hertz frequency range, which originate from binary supermassive black holes.[100] The European space-based detector eLISA/NGO is currently under development,[101] with a precursor mission (LISA Pathfinder) due for launch in 2015.[102]
Observations of gravitational waves promise to complement observations in the electromagnetic spectrum.[103] They are expected to yield information about black holes and other dense objects such as neutron stars and white dwarfs, about certain kinds of supernova implosions, and about processes in the very early universe, including the signature of certain types of hypothetical cosmic string.[104]
Black holes and other compact objects
Main article: Black hole
Whenever the ratio of an object's mass to its radius becomes sufficiently large, general relativity predicts the formation of a black hole, a region of space from which nothing, not even light, can escape. In the currently accepted models of stellar evolution, neutron stars of around 1.4 solar masses, and stellar black holes with a few to a few dozen solar masses, are thought to be the final state for the evolution of massive stars.[105] Usually a galaxy has one supermassive black hole with a few million to a few billion solar masses in its center,[106] and its presence is thought to have played an important role in the formation of the galaxy and larger cosmic structures.[107]
Astronomically, the most important property of compact objects is that they provide a supremely efficient mechanism for converting gravitational energy into electromagnetic radiation.[108] Accretion, the falling of dust or gaseous matter onto stellar or supermassive black holes, is thought to be responsible for some spectacularly luminous astronomical objects, notably diverse kinds of active galactic nuclei on galactic scales and stellar-size objects such as microquasars.[109] In particular, accretion can lead to relativistic jets, focused beams of highly energetic particles that are being flung into space at almost light speed.[110] General relativity plays a central role in modelling all these phenomena,[111] and observations provide strong evidence for the existence of black holes with the properties predicted by the theory.[112]
Computer model of a supermassive black hole, made from the equations of general relativity for the film Interstellar; the simulation approximates what an observer would actually see. All of the light shown here comes from the horizontal accretion disc. Gravity bends light from the back of the black hole to form the apparent vertical ring.
Main article: Physical cosmology
R_{\mu\nu} - {\textstyle 1 \over 2}R\,g_{\mu\nu} + \Lambda\ g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}
where g_{\mu\nu} is the spacetime metric.[115] Isotropic and homogeneous solutions of these enhanced equations, the Friedmann–Lemaître–Robertson–Walker solutions,[116] allow physicists to model a universe that has evolved over the past 14 billion years from a hot, early Big Bang phase.[117] Once a small number of parameters (for example the universe's mean matter density) have been fixed by astronomical observation,[118] further observational data can be used to put the models to the test.[119] Predictions, all successful, include the initial abundance of chemical elements formed in a period of primordial nucleosynthesis,[120] the large-scale structure of the universe,[121] and the existence and properties of a "thermal echo" from the early cosmos, the cosmic background radiation.[122]
Astronomical observations of the cosmological expansion rate allow the total amount of matter in the universe to be estimated, although the nature of that matter remains mysterious in part. About 90% of all matter appears to be so-called dark matter, which has mass (or, equivalently, gravitational influence), but does not interact electromagnetically and, hence, cannot be observed directly.[123] There is no generally accepted description of this new kind of matter, within the framework of known particle physics[124] or otherwise.[125] Observational evidence from redshift surveys of distant supernovae and measurements of the cosmic background radiation also show that the evolution of our universe is significantly influenced by a cosmological constant resulting in an acceleration of cosmic expansion or, equivalently, by a form of energy with an unusual equation of state, known as dark energy, the nature of which remains unclear.[126]
A so-called inflationary phase,[127] an additional phase of strongly accelerated expansion at cosmic times of around 10^{-33} seconds, was hypothesized in 1980 to account for several puzzling observations that were unexplained by classical cosmological models, such as the nearly perfect homogeneity of the cosmic background radiation.[128] Recent measurements of the cosmic background radiation have resulted in the first evidence for this scenario.[129] However, there is a bewildering variety of possible inflationary scenarios, which cannot be restricted by current observations.[130] An even larger question is the physics of the earliest universe, prior to the inflationary phase and close to where the classical models predict the big bang singularity. An authoritative answer would require a complete theory of quantum gravity, which has not yet been developed[131] (cf. the section on quantum gravity, below).
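To make the "small number of parameters" point concrete, here is a standard textbook computation (my own sketch, with illustrative parameter values rather than values asserted by this article): integrating the Friedmann equation for a flat universe with Ω_m = 0.3, Ω_Λ = 0.7 and H0 = 70 km/s/Mpc gives an age of roughly 13.5 billion years.
import numpy as np
from scipy.integrate import quad
H0 = 70.0 * 1.0e3 / 3.0857e22             # km/s/Mpc converted to 1/s
Om, OL = 0.3, 0.7
def H(a):                                  # Friedmann: H(a) = H0 sqrt(Om a^-3 + OL)
    return H0 * np.sqrt(Om / a**3 + OL)
age, _ = quad(lambda a: 1.0 / (a * H(a)), 0.0, 1.0)   # t0 = int_0^1 da / (a H(a))
print(age / (3.156e7 * 1e9))               # ~13.5 billion years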
Time travel
Kurt Gödel showed that closed timelike curve solutions to Einstein's equations exist which allow for loops in time. The solutions require extreme physical conditions unlikely ever to occur in practice, and it remains an open question whether further laws of physics will eliminate them completely. Since then other—similarly impractical—GR solutions containing CTCs have been found, such as the Tipler cylinder and traversable wormholes.
Advanced concepts
Causal structure and global geometry
Main article: Causal structure
Penrose–Carter diagram of an infinite Minkowski universe
Aware of the importance of causal structure, Roger Penrose and others developed what is known as global geometry. In global geometry, the object of study is not one particular solution (or family of solutions) to Einstein's equations. Rather, relations that hold true for all geodesics, such as the Raychaudhuri equation, and additional non-specific assumptions about the nature of matter (usually in the form of so-called energy conditions) are used to derive general results.[133]
There are other types of horizons. In an expanding universe, an observer may find that some regions of the past cannot be observed ("particle horizon"), and some regions of the future cannot be influenced (event horizon).[140] Even in flat Minkowski space, when described by an accelerated observer (Rindler space), there will be horizons associated with a semi-classical radiation known as Unruh radiation.[141]
Main article: Spacetime singularity
Another general feature of general relativity is the appearance of spacetime boundaries known as singularities. Spacetime can be explored by following up on timelike and lightlike geodesics—all possible ways that light and particles in free fall can travel. But some solutions of Einstein's equations have "ragged edges"—regions known as spacetime singularities, where the paths of light and falling particles come to an abrupt end, and geometry becomes ill-defined. In the more interesting cases, these are "curvature singularities", where geometrical quantities characterizing spacetime curvature, such as the Ricci scalar, take on infinite values.[142] Well-known examples of spacetimes with future singularities—where worldlines end—are the Schwarzschild solution, which describes a singularity inside an eternal static black hole,[143] or the Kerr solution with its ring-shaped singularity inside an eternal rotating black hole.[144] The Friedmann–Lemaître–Robertson–Walker solutions and other spacetimes describing universes have past singularities on which worldlines begin, namely Big Bang singularities, and some have future singularities (Big Crunch) as well.[145]
Given that these examples are all highly symmetric—and thus simplified—it is tempting to conclude that the occurrence of singularities is an artifact of idealization.[146] The famous singularity theorems, proved using the methods of global geometry, say otherwise: singularities are a generic feature of general relativity, and unavoidable once the collapse of an object with realistic matter properties has proceeded beyond a certain stage[147] and also at the beginning of a wide class of expanding universes.[148] However, the theorems say little about the properties of singularities, and much of current research is devoted to characterizing these entities' generic structure (hypothesized e.g. by the so-called BKL conjecture).[149] The cosmic censorship hypothesis states that all realistic future singularities (no perfect symmetries, matter with realistic properties) are safely hidden away behind a horizon, and thus invisible to all distant observers. While no formal proof yet exists, numerical simulations offer supporting evidence of its validity.[150]
Evolution equations
To understand Einstein's equations as partial differential equations, it is helpful to formulate them in a way that describes the evolution of the universe over time. This is done in so-called "3+1" formulations, where spacetime is split into three space dimensions and one time dimension. The best-known example is the ADM formalism.[152] These decompositions show that the spacetime evolution equations of general relativity are well-behaved: solutions always exist, and are uniquely defined, once suitable initial conditions have been specified.[153] Such formulations of Einstein's field equations are the basis of numerical relativity.[154]
Global and quasi-local quantities
Nevertheless, there are possibilities to define a system's total mass, either using a hypothetical "infinitely distant observer" (ADM mass)[156] or suitable symmetries (Komar mass).[157] If one excludes from the system's total mass the energy being carried away to infinity by gravitational waves, the result is the so-called Bondi mass at null infinity.[158] Just as in classical physics, it can be shown that these masses are positive.[159] Corresponding global definitions exist for momentum and angular momentum.[160] There have also been a number of attempts to define quasi-local quantities, such as the mass of an isolated system formulated using only quantities defined within a finite region of space containing that system. The hope is to obtain a quantity useful for general statements about isolated systems, such as a more precise formulation of the hoop conjecture.[161]
Relationship with quantum theory
If general relativity were considered to be one of the two pillars of modern physics, then quantum theory, the basis of understanding matter from elementary particles to solid state physics, would be the other.[162] However, how to reconcile quantum theory with general relativity is still an open question.
Quantum field theory in curved spacetime
Ordinary quantum field theories, which form the basis of modern elementary particle physics, are defined in flat Minkowski space, which is an excellent approximation when it comes to describing the behavior of microscopic particles in weak gravitational fields like those found on Earth.[163] In order to describe situations in which gravity is strong enough to influence (quantum) matter, yet not strong enough to require quantization itself, physicists have formulated quantum field theories in curved spacetime. These theories rely on general relativity to describe a curved background spacetime, and define a generalized quantum field theory to describe the behavior of quantum matter within that spacetime.[164] Using this formalism, it can be shown that black holes emit a blackbody spectrum of particles known as Hawking radiation, leading to the possibility that they evaporate over time.[165] As briefly mentioned above, this radiation plays an important role for the thermodynamics of black holes.[166]
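For scale (my own plug-in of the standard Hawking temperature formula T = ħc³/(8πGMk_B), with SI constants; not a computation taken from this article):
import math
hbar, c, G, kB = 1.0546e-34, 2.998e8, 6.674e-11, 1.381e-23
M_sun = 1.989e30
T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)   # Hawking temperature of a solar-mass hole
print(T)   # ~6e-8 K: far below the cosmic microwave background, so evaporation is negligible today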
Quantum gravity
Main article: Quantum gravity
The demand for consistency between a quantum description of matter and a geometric description of spacetime,[167] as well as the appearance of singularities (where curvature length scales become microscopic), indicate the need for a full theory of quantum gravity: for an adequate description of the interior of black holes, and of the very early universe, a theory is required in which gravity and the associated geometry of spacetime are described in the language of quantum physics.[168] Despite major efforts, no complete and consistent theory of quantum gravity is currently known, even though a number of promising candidates exist.[169]
Attempts to generalize ordinary quantum field theories, used in elementary particle physics to describe fundamental interactions, so as to include gravity have led to serious problems. At low energies, this approach proves successful, in that it results in an acceptable effective (quantum) field theory of gravity.[170] At very high energies, however, the resulting models are devoid of all predictive power ("non-renormalizability").[171]
Simple spin network of the type used in loop quantum gravity
One attempt to overcome these limitations is string theory, a quantum theory not of point particles, but of minute one-dimensional extended objects.[172] The theory promises to be a unified description of all particles and interactions, including gravity;[173] the price to pay is unusual features such as six extra dimensions of space in addition to the usual three.[174] In what is called the second superstring revolution, it was conjectured that both string theory and a unification of general relativity and supersymmetry known as supergravity[175] form part of a hypothesized eleven-dimensional model known as M-theory, which would constitute a uniquely defined and consistent theory of quantum gravity.[176]
Another approach starts with the canonical quantization procedures of quantum theory. Using the initial-value-formulation of general relativity (cf. evolution equations above), the result is the Wheeler–DeWitt equation (an analogue of the Schrödinger equation) which, regrettably, turns out to be ill-defined.[177] However, with the introduction of what are now known as Ashtekar variables,[178] this leads to a promising model known as loop quantum gravity. Space is represented by a web-like structure called a spin network, evolving over time in discrete steps.[179]
Depending on which features of general relativity and quantum theory are accepted unchanged, and on what level changes are introduced,[180] there are numerous other attempts to arrive at a viable theory of quantum gravity, some examples being dynamical triangulations,[181] causal sets,[182] twistor models[183] or the path-integral based models of quantum cosmology.[184]
Current status
General relativity has emerged as a highly successful model of gravitation and cosmology, which has so far passed many unambiguous observational and experimental tests. However, there are strong indications the theory is incomplete.[186] The problem of quantum gravity and the question of the reality of spacetime singularities remain open.[187] Observational data that is taken as evidence for dark energy and dark matter could indicate the need for new physics.[188] Even taken as is, general relativity is rich with possibilities for further exploration. Mathematical relativists seek to understand the nature of singularities and the fundamental properties of Einstein's equations,[189] and increasingly powerful computer simulations (such as those describing merging black holes) are run.[190] The race for the first direct detection of gravitational waves continues,[191] in the hope of creating opportunities to test the theory's validity for much stronger gravitational fields than has been possible to date.[192] Almost a hundred years after its publication, general relativity remains a highly active area of research.[193]
See also
4. ^ Einstein 1917, cf. Pais 1982, ch. 15e
7. ^ Pais 1982, pp. 253–254
8. ^ Kennefick 2005, Kennefick 2007
9. ^ Pais 1982, ch. 16
13. ^ Section Cosmology and references therein; the historical development is in Overbye 1999
14. ^ The following exposition re-traces that of Ehlers 1973, sec. 1
15. ^ Arnold 1989, ch. 1
16. ^ Ehlers 1973, pp. 5f
17. ^ Will 1993, sec. 2.4, Will 2006, sec. 2
18. ^ Wheeler 1990, ch. 2
20. ^ Ehlers 1973, pp. 10f
24. ^ Ehlers 1973, sec. 2.3
25. ^ Ehlers 1973, sec. 1.4, Schutz 1985, sec. 5.1
31. ^ Kenyon 1990, sec. 7.4
34. ^ At least approximately, cf. Poisson 2004
35. ^ Wheeler 1990, p. xi
36. ^ Wald 1984, sec. 4.4
37. ^ Wald 1984, sec. 4.1
39. ^ section 5 in ch. 12 of Weinberg 1972
40. ^ Introductory chapters of Stephani et al. 2003
43. ^ Chandrasekhar 1983, ch. 3,5,6
44. ^ Narlikar 1993, ch. 4, sec. 3.3
46. ^ Lehner 2002
47. ^ For instance Wald 1984, sec. 4.4
48. ^ Will 1993, sec. 4.1 and 4.2
49. ^ Will 2006, sec. 3.2, Will 1993, ch. 4
56. ^ Stairs 2003 and Kramer 2004
58. ^ Ohanian & Ruffini 1994, pp. 164–172
61. ^ Blanchet 2006, sec. 1.3
65. ^ Will 1993, sec. 7.1 and 7.2
66. ^ These have been indirectly observed through the loss of energy in binary pulsar systems such as the Hulse–Taylor binary, the subject of the 1993 Nobel Prize in physics. A number of projects are underway to attempt to observe directly the effects of gravitational waves. For an overview, see Misner, Thorne & Wheeler 1973, part VIII. Unlike electromagnetic waves, the dominant contribution for gravitational waves is not the dipole, but the quadrupole; see Schutz 2001
68. ^ For example Jaranowski & Królak 2005
69. ^ Rindler 2001, ch. 13
70. ^ Gowdy 1971, Gowdy 1974
73. ^ Rindler 2001, sec. 11.9
74. ^ Will 1993, pp. 177–181
77. ^ Kramer et al. 2006
81. ^ Kramer 2004
84. ^ Bertotti, Ciufolini & Bender 1987, Nordtvedt 2003
85. ^ Kahn 2007
89. ^ Ciufolini & Pavlis 2004, Ciufolini, Pavlis & Peron 2006, Iorio 2009
90. ^ Iorio L. (August 2006), "COMMENTS, REPLIES AND NOTES: A note on the evidence of the gravitomagnetic field of Mars", Classical Quantum Gravity 23 (17): 5451–5454, arXiv:gr-qc/0606092, Bibcode:2006CQGra..23.5451I, doi:10.1088/0264-9381/23/17/N01
91. ^ Iorio L. (June 2010), "On the Lense–Thirring test with the Mars Global Surveyor in the gravitational field of Mars", Central European Journal of Physics 8 (3): 509–513, arXiv:gr-qc/0701146, Bibcode:2010CEJPh...8..509I, doi:10.2478/s11534-009-0117-6
94. ^ Walsh, Carswell & Weymann 1979
96. ^ Roulet & Mollerach 1997
97. ^ Narayan & Bartelmann 1997, sec. 3.7
98. ^ Barish 2005, Bartusiak 2000, Blair & McNamara 1997
99. ^ Hough & Rowan 2000
100. ^ Hobbs, George; Archibald, A.; Arzoumanian, Z.; Backer, D.; Bailes, M.; Bhat, N. D. R.; Burgay, M.; Burke-Spolaor, S. et al. (2010), "The international pulsar timing array project: using pulsars as a gravitational wave detector", Classical and Quantum Gravity 27 (8): 084013, arXiv:0911.5206, Bibcode:2010CQGra..27h4013H, doi:10.1088/0264-9381/27/8/084013
101. ^ Danzmann & Rüdiger 2003
103. ^ Thorne 1995
104. ^ Cutler & Thorne 2002
105. ^ Miller 2002, lectures 19 and 21
106. ^ Celotti, Miller & Sciama 1999, sec. 3
107. ^ Springel et al. 2005 and the accompanying summary Gnedin 2005
108. ^ Blandford 1987, sec. 8.2.4
113. ^ Dalal et al. 2006
114. ^ Barack & Cutler 2004
115. ^ Originally Einstein 1917; cf. Pais 1982, pp. 285–288
116. ^ Carroll 2001, ch. 2
118. ^ E.g. with WMAP data, see Spergel et al. 2003
121. ^ Lahav & Suto 2004, Bertschinger 1998, Springel et al. 2005
129. ^ Spergel et al. 2007, sec. 5,6
131. ^ Brandenberger 2007, sec. 2
138. ^ Bekenstein 1973, Bekenstein 1974
140. ^ Narlikar 1993, sec. 4.4.4, 4.4.5
147. ^ Namely when there are trapped null surfaces, cf. Penrose 1965
148. ^ Hawking 1966
151. ^ Hawking & Ellis 1973, sec. 7.1
155. ^ Misner, Thorne & Wheeler 1973, §20.4
156. ^ Arnowitt, Deser & Misner 1962
158. ^ For a pedagogical introduction, see Wald 1984, sec. 11.2
160. ^ Townsend 1997, ch. 5
164. ^ Wald 1994, Birrell & Davies 1984
166. ^ Wald 2001, ch. 3
168. ^ Schutz 2003, p. 407
169. ^ A timeline and overview can be found in Rovelli 2000
170. ^ Donoghue 1995
171. ^ In particular, a technique known as renormalization, an integral part of deriving predictions which take into account higher-energy contributions, cf. Weinberg 1996, ch. 17, 18, fails in this case; cf. Goroff & Sagnotti 1985
174. ^ Green, Schwarz & Witten 1987, sec. 4.2
175. ^ Weinberg 2000, ch. 31
176. ^ Townsend 1996, Duff 1996
177. ^ Kuchař 1973, sec. 3
180. ^ Isham 1994, Sorkin 1997
181. ^ Loll 1998
182. ^ Sorkin 2005
183. ^ Penrose 2004, ch. 33 and refs therein
184. ^ Hawking 1987
185. ^ Ashtekar 2007, Schwarz 2007
187. ^ section Quantum gravity, above
188. ^ section Cosmology, above
189. ^ Friedrich 2005
193. ^ See, e.g., the electronic review journal Living Reviews in Relativity
Further reading
Popular books
Beginning undergraduate textbooks
Advanced undergraduate textbooks
• Ludyk, Günter (2013). Einstein in Matrix Form (1st ed.). Berlin: Springer. ISBN 9783642357978.
Graduate-level textbooks
External links
• Courses
• Lectures
• Tutorials |
5921bd54c5dff5d7 | Superficially Dirac spinor resp. Dirac gamma matrices and quaternions and bicomplex numbers seems to be very similar objects.
• all can be expressed by unitary 4x4 matrices, so they seem to represent a kind of rotation in 4D space.
• all can be expressed as 2x2 matrices of complex numbers
• So are Dirac spinors just a subset of quaternions or not?
• Or what is the relation, and what are the distinctions?
• And what are the physical consequences?
I wonder why this question is not discussed anywhere in relation to the Dirac equation, since this is the first question I would have. Since the Schrödinger equation is expressed using complex numbers, and the Dirac equation is its 4D version, I would naturally think first about quaternions as the 4D analog of complex numbers.
This also leads to the natural question of whether a physically meaningful Dirac equation can be expressed using quaternions instead of Dirac spinors. And what physical consequences (like alternative physics) would that lead to?
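One way to make the "2x2 complex matrix" similarity concrete (my own check, not from the thread): the quaternion units can be realized with the Pauli matrices as q = a + bi + cj + dk ↔ a·I − i(b·σx + c·σy + d·σz), and the defining relations i² = j² = k² = ijk = −1 can be verified numerically.
import numpy as np
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
i, j, k = -1j*sx, -1j*sy, -1j*sz          # quaternion units as 2x2 complex matrices
for q in (i, j, k):
    assert np.allclose(q @ q, -I2)        # i^2 = j^2 = k^2 = -1
assert np.allclose(i @ j, k)              # ij = k
assert np.allclose(i @ j @ k, -I2)        # ijk = -1
print("quaternion relations hold")
This shows the algebras are related but not identical: quaternions form a 4-real-dimensional division algebra, while the Dirac gamma matrices generate a 4x4 Clifford algebra, which is strictly larger.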
$\begingroup$ You might want to read Quaternionic Quantum Mechanics and Quantum Fields by Stephen L. Adler: books.google.co.uk/books?id=irt7nOFaR3sC $\endgroup$ – J.G. Jul 26 '18 at 12:45
• $\begingroup$ There's an easily accessible discussion in arxiv.org/pdf/1003.0075.pdf which touches briefly on your question: "In Dirac formalism, an electron, for instance, is described by a four component wavefunction (spinors). In the present formalism the particle is described by a quaternion wavefunction having also four components. Here, one component is a scalar and the three components are vector components. We may however ask the question that: are the two formalisms equivalent to each other?" $\endgroup$ – iSeeker Jul 29 '18 at 18:35
• $\begingroup$ Further to previous comment, I've just come across "Spinors in Spacetime Algebra and Euclidean 4-Space", available via researchgate.net/profile/Garret_Sobczyk which has sections on "Geometric spinors", "Quaternion spinors" and "Classical Dirac spinors" from a mathematical/geometric algebraic perspective. $\endgroup$ – iSeeker Aug 1 '18 at 17:47
|
77ef36b9db08772f | The Principle of Least Action re-visited
As I was posting some remarks on the Exercises that come with Feynman’s Lectures, I was thinking I should do another post on the Principle of Least Action, and how it is used in quantum mechanics. It is an interesting matter, because the Principle of Least Action sort of connects classical and quantum mechanics.
Let us first re-visit the Principle in classical mechanics. The illustrations which Feynman uses in his iconic exposé on it are copied below. You know what they depict: some object that goes up in the air, and then comes back down because of… Well… Gravity. Hence, we have a force field and, therefore, some potential which gives our object some potential energy. The illustration is nice because we can apply it to any (uniform) force field, so let's analyze it a bit more in depth.
We know the actual trajectory – which Feynman writes as x̲(t), so as to distinguish it from some other nearby path x(t) = x̲(t) + η(t) – will minimize the value of the following integral: S = ∫ (KE − PE) dt.
In the mentioned post, I try to explain what the formula actually means by breaking it up into two separate integrals: one with the kinetic energy in the integrand and – you guessed it 🙂 – one with the potential energy. We can choose any reference point for our potential energy, of course, but to better reflect the energy conservation principle, we assume PE = 0 at the highest point. This ensures that the sum of the kinetic and the potential energy is zero. For a mass of 5 kg (think of the ubiquitous cannon ball), and a (maximum) height of 50 m, we got the following graph.
[Graph: KE and PE of the 5 kg object as functions of time]
Just to make sure, here is how we calculate KE and PE as a function of time:
KE(t) = m·(v0 − g·t)²/2 and PE(t) = m·g·(h(t) − 50), with h(t) = v0·t − g·t²/2.
We can, of course, also calculate the action as a function of time:
S(t) = ∫ m·v² dt = m·v0·t·(v0 − g·t) + m·g²·t³/3. Note the integrand: KE − PE = m·v². Strange, isn't it? It's like E = m·c², right? We get a weird cubic function, which I plotted below (blue). I added the function for the height (but in millimeter) because of the different scales. [Graph: the action S(t) in blue, with the height function overlaid]
So what’s going on? The action concept is interesting. As the product of force, distance and time, it makes intuitive sense: it’s force over distance over time. To cover some distance in some force field, energy will be used or spent but, clearly, the time that is needed should matter as well, right? Yes. But the question is: how, exactly? Let’s analyze what happens from = 0 to = 3.2 seconds, so that’s the trajectory from = 0 to the highest point (= 50 m). The action that is required to bring our 5 kg object there would be equal to F·h·t = m·g·h·t = 5×9.8×50×3.2 = 7828.9 J·s. [I use non-rounded values in my calculations.] However, our action integral tells us it’s only 5219.6 J·s. The difference (2609.3 J·s) is explained by the initial velocity and, hence, the initial kinetic energy, which we got for free, so to speak, and which, over the time interval, is spent as action. So our action integral gives us a net value, so to speak.
To be precise, we can calculate the time rate of change of the kinetic energy as d(KE)/dt = −1533.7 + 480.2·t, so that's a linear function of time. The graph below shows how it works. The time rate of change is initially negative, as kinetic energy gets spent and increases the potential energy of our object. At the maximum height, the time rate of change is zero. The object then starts falling, and the time rate of change becomes positive, as the velocity of our object goes from zero to… Well… The velocity is a linear function of time as well: v0 − g·t, remember? Hence, at t = v0/g = 31.3/9.8 = 3.2 s, the velocity becomes negative so our cannon ball is, effectively, falling down. Of course, as it falls down and gains speed, it covers more and more distance per second and, therefore, the associated action also goes up rapidly (cubically, in fact). Just re-define our starting point at t = 3.2 s. The m·v0·t·(v0 − g·t) term is zero at that point, and so then it's only the m·g²·t³/3 term that counts.
[Graph: d(KE)/dt as a function of time]
So… Yes. That's clear enough. But it still doesn't answer the fundamental question: how does that minimization of S (or the maximization of −S) work, exactly? Well… It's not like Nature knows it wants to go from point a to point b, and then sort of works out some least action algorithm. No. The true path is given by the force law which, at every point in spacetime, will accelerate, or decelerate, our object at a rate that is equal to the ratio of the force and the mass of our object. In this case, we write: a = F/m = m·g/m = g, so that's the acceleration of gravity. That's the only real thing: all of the above is just math, some mental construct, so to speak.
Of course, this acceleration, or deceleration, then gives the velocity and the kinetic energy. Hence, once again, it's not like we're choosing some average for our kinetic energy: the force (gravity, in this particular case) just gives us that average. Likewise, the potential energy depends on the position of our object, which we get from… Well… Where it starts and where it goes, so it also depends on the velocity and, hence, the acceleration or deceleration from the force field. So there is no optimization. No teleology. Newton's force law gives us the true path. If we drop something down, it will go down in a straight line, because any deviation from it would add to the distance. A more complicated illustration is Fermat's Principle of Least Time, which combines distance and time. But we won't go into any further detail here. Just note that, in classical mechanics, the true path can, effectively, be associated with a minimum value for that action integral: any other path will be associated with a higher S. So we're done with classical mechanics here. What about the Principle of Least Action in quantum mechanics?
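To see the minimization claim in action (pun intended), here is a small numerical experiment of my own: discretize the action for paths with fixed endpoints, and check that perturbing the true parabola always increases S. I take PE = m·g·x here; changing the PE reference point only shifts S by a constant, so it does not affect which path minimizes it.
import numpy as np
m, g = 5.0, 9.8
v0 = np.sqrt(2*g*50.0)
T = 2*v0/g                                # up and back down
t = np.linspace(0.0, T, 2001); dt = t[1] - t[0]
x_true = v0*t - 0.5*g*t**2                # the true trajectory
def action(x):
    v = np.gradient(x, dt)                # KE = m v^2 / 2, PE = m g x
    return np.trapz(0.5*m*v**2 - m*g*x, t)
eta = np.sin(np.pi*t/T)                   # perturbation vanishing at the endpoints
for eps in (0.0, 0.1, 1.0, 5.0):
    print(eps, action(x_true + eps*eta))  # S is smallest at eps = 0
The increase is quadratic in the size of the perturbation, which is exactly the statement that the true path is a stationary (here, minimum) point of S.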
The Principle of Least Action in quantum mechanics
We have the uncertainty in quantum mechanics: there is no unique path. However, we can, effectively, associate each possible path with a definite amount of action, which we will also write as S. However, instead of talking about velocities, we’ll usually want to talk about momentum. Photons have no rest mass (m0 = 0), but they do have momentum because of their energy: for a photon, the E = m·c² equation can be rewritten as E = p·c, and the Einstein-Planck relation for photons tells us the photon energy (E) is related to the frequency (f): E = h·f. Now, for a photon, the frequency is given by f = c/λ. Hence, p = E/c = h·f/c = h/λ = ħ·k.
OK. What’s the action integral? What’s the kinetic and potential energy? Let’s just try the energy: E = m·c². It reflects the KE − PE = m·v² formula we used above. Of course, the energy of a photon does not vary, so the value of our integral is just the energy times the travel time, right? What is the travel time? Let’s do things properly by using vector notations here, so we will have two position vectors r1 and r2 for point a and b respectively. We can then define a vector pointing from r1 to r2, which we will write as r12. The distance between the two points is then, obviously, equal to |r12| = √(r12²) = r12. Our photon travels at the speed of light, so the time interval will be equal to t = r12/c. So we get a very simple formula for the action: S = E·t = p·c·t = p·c·r12/c = p·r12. Now, it may or may not make sense to assume that the direction of the momentum of our photon and the direction of r12 are somewhat different, so we’ll want to re-write this as a vector dot product: S = p∙r12. [Of course, you know the p∙r12 dot product equals |p|∙|r12|·cosθ = p∙r12·cosθ, with θ the angle between p and r12. If the two vectors point in the same direction (θ = 0), then cosθ is equal to 1. If the angle is ±π/2, then it’s 0.]
So now we minimize the action so as to determine the actual path? No. We have this weird stopwatch stuff in quantum mechanics. We’ll use this S = p·r12 value to calculate a probability amplitude. So we’ll associate trajectories with amplitudes, and we just use the action values to do so. This is how it works (don’t ask me why – not now, at least):
1. We measure action in units of ħ, because… Well… Planck’s constant is a pretty fundamental unit of action, right? 🙂 So we write θ = S/ħ = p∙r12/ħ.
2. θ usually denotes an angle, right? Right. θ = p·r12/ħ is the so-called phase of… Well… A proper wavefunction:
ψ(p, r12) = a·e^(i·θ) = (1/r12)·e^(i·p∙r12/ħ)
Wow! I realize you may never have seen this… Well… It’s my derivation of what physicists refer to as the propagator function for a photon. If you google it, you may see it written like this (most probably not, however, as it’s usually couched in more abstract math):

[the propagator formula]

This formulation looks slightly better because it uses Dirac’s bra-ket notation: the initial state of our photon is written as the ket |r1〉 and its final state is, accordingly, the bra 〈r2|. But it’s the same: it’s the amplitude for our photon to go from point a to point b. In case you wonder, the 1/r12 coefficient is there to take care of the inverse square law. I’ll let you think about that for yourself. It’s just like any other physical quantity (or intensity, if you want): they get diluted as the distance increases. [Note that we get the inverse square (1/r12²) when calculating a probability, which we do by taking the absolute square of our amplitude: |(1/r12)·e^(i·p∙r12/ħ)|² = (1/r12²)·|e^(i·p∙r12/ħ)|² = 1/r12².]
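And just to make sure the inverse square law does come out of the math, a two-line check – natural units for ħ, and an arbitrary illustrative value for the momentum:

```python
import numpy as np

# The absolute square of e^(i*p*r12/hbar)/r12 is 1/r12^2, whatever the phase.
hbar, p = 1.0, 2.5                        # natural units; p is arbitrary here
for r12 in (1.0, 2.0, 10.0):
    amp = np.exp(1j * p * r12 / hbar) / r12
    print(r12, abs(amp)**2, 1 / r12**2)   # the last two columns coincide
```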
So… Well… Now we are ready to understand Feynman’s own summary of his path integral formulation of quantum mechanics, in his own words:
“Here is how it works: Suppose that for all paths, S is very large compared to ħ. One path contributes a certain amplitude. For a nearby path, the phase is quite different, because with an enormous S even a small change in S means a completely different phase—because ħ is so tiny. So nearby paths will normally cancel their effects out in taking the sum—except for one region, and that is when a path and a nearby path all give the same phase in the first approximation (more precisely, the same action within ħ). Only those paths will be the important ones.”
You are now, finally, ready to understand that wonderful animation that’s part of the Wikipedia article on Feynman’s path integral formulation of quantum mechanics. Check it out, and let the author (not me, but a guy who identifies himself as Juan David) know if you like it. I think it’s great! 🙂
Explaining diffraction
All of the above is nice, but how does it work? What’s the geometry? Let me be somewhat more adventurous here. So we have our formula for the amplitude of a photon to go from one point to another:

[the propagator formula]

The formula is far too simple, if only because it assumes photons always travel at the speed of light. As explained in an older post of mine, a photon also has an amplitude to travel slower or faster than c (I know that sounds crazy, but it is what it is) and a more sophisticated propagator function will acknowledge that and, unsurprisingly, ensure the spacetime intervals that are more light-like make greater contributions to the ‘final arrow’, as Feynman (or his student, Ralph Leighton, I should say) put it in his Strange Theory of Light and Matter. However, then we’d need to use four-vector notation and we don’t want to do that here. The simplified formula above serves the purpose. We can re-write it as:
ψ(p, r12) = a·e^(i·θ) = (1/r12)·e^(i·S/ħ) = e^(i·p∙r12/ħ)/r12
Again, S = p∙r12 is just the amount of action we calculate for the path. Action is energy times time (1 J·s = 1 N·m·s), or momentum times distance (1 kg·(m/s)·m = 1 (N·s²/m)·(m/s)·m = 1 N·m·s). For a photon traveling at the speed of light, we have E = p·c, and t = r12/c, so we get a very simple formula for the action: S = E·t = p·c·r12/c = p·r12. Now, we know that, in quantum mechanics, we have to add the amplitudes for the various paths between r1 and r2 so we get a ‘final arrow’ whose absolute square gives us the probability of… Well… Our photon going from r1 to r2. You also know that we don’t really know what actually happens in-between: we know amplitudes interfere, but that’s what we’re modeling when adding the arrows. Let me copy one of Feynman’s famous drawings so we’re sure we know what we’re talking about.

[Feynman’s drawing: adding arrows]

Our simplified approach (the assumption of light traveling at the speed of light) reduces our least action principle to a least time principle: it’s the arrows associated with the path of least time, and the paths immediately left and right of it, that make the biggest contribution to the final arrow. Why? Think of the stopwatch metaphor: these stopwatches arrive around the same time and, hence, their hands point more or less in the same direction. It doesn’t matter what direction – as long as it’s more or less the same.
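You can see that stopwatch logic at work in a toy calculation – a sketch, assuming a made-up quadratic action S(d) = S0 + a·d² for paths labeled by a detour parameter d (so d = 0 is the least-time path):

```python
import numpy as np

# Toy 'adding arrows': unit arrows e^(i*S/hbar) for paths labeled by a detour
# parameter d, with a made-up action S(d) = S0 + a*d^2, stationary at d = 0.
hbar, S0, a = 1.0, 12.0, 40.0
d = np.linspace(-1.0, 1.0, 2001)
arrows = np.exp(1j * (S0 + a * d**2) / hbar)

near = np.abs(d) < 0.1                    # the central 10% of the paths
print(abs(arrows.sum()))                  # final arrow from all 2001 paths
print(abs(arrows[near].sum()))            # ~201 aligned arrows: most of the length
print(abs(arrows[~near].sum()))           # ~1800 arrows that largely cancel out
```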
Now let me copy the illustrations he uses to explain diffraction. Look at them carefully, and read the explanation below.
When the slit is large, our photon is likely to travel in a straight line. There are many other possible paths – crooked paths – but the amplitudes that are associated with those other paths cancel each other out. In contrast, the straight-line path and, importantly, the nearby paths, are associated with amplitudes that have the same phase, more or less.
However, when the slit is very narrow, there is a problem. As Feynman puts it, “there are not enough arrows to cancel each other out” and, therefore, the crooked paths are also associated with sizable probabilities. Now how does that work, exactly? Not enough arrows? Why? Let’s have a look at it.
The phase (θ) of our amplitudes a·e^(i·θ) = (1/r12)·e^(i·S/ħ) is measured in units of ħ: θ = S/ħ. Hence, we should measure the variation in S in units of ħ. Consider two paths, for example: one for which the action is equal to S, and one for which the action is equal to S + δS = S + π·ħ, so δS = π·ħ. They will cancel each other out:
e^(i·S/ħ)/r12 + e^(i·(S + δS)/ħ)/r12 = (1/r12)·(e^(i·S/ħ) + e^(i·(S + π·ħ)/ħ))
= (1/r12)·(e^(i·S/ħ) + e^(i·S/ħ)·e^(i·π)) = (1/r12)·(e^(i·S/ħ) − e^(i·S/ħ)) = 0
Conversely, nearby paths will interfere constructively, so to speak, by making the final arrow larger – provided their actions are close enough: δS should be smaller than 2πħ/3 ≈ 2ħ, as shown below.
[Illustration: alternative paths]
Why? That’s just the way the addition of arrows works. Look at the illustration below: if the red arrow is the amplitude to which we are adding another, any amplitude whose phase differs by less than 2π/3 – i.e. whose action differs by less than 2πħ/3 ≈ 2ħ – will add something to its length. That’s what the geometry of the situation tells us. [If you have time, you can perhaps find some algebraic proof: let me know the result!]
[Illustration: angles]

We need to note a few things here. First, unlike what you might think, the amplitudes of the higher and lower path in the drawing do not cancel. On the contrary, the action is the same, so their magnitudes just add up. Second, if this logic is correct, we will have alternating zones with paths that interfere positively and negatively, as shown below.

[Illustration: alternating zones of positive and negative interference]
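As for that little algebra challenge above, here is one answer, for unit arrows: |1 + e^(i·θ)|² = 2 + 2·cosθ, which is larger than 1 exactly when cosθ > −1/2, i.e. when |θ| < 2π/3. A quick numerical confirmation:

```python
import numpy as np

# Adding a unit arrow at phase angle theta to a unit arrow lengthens it
# if and only if |theta| < 2*pi/3 (equivalently, cos(theta) > -1/2).
for theta in (0.0, np.pi / 3, np.pi / 2, 3 * np.pi / 4, np.pi):
    length = abs(1 + np.exp(1j * theta))
    print(round(theta, 3), round(length, 3), length > 1.0)
```

Now, back to those zones of positive and negative interference.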
Interesting geometry. How relevant are these zones as we move out from the center, steadily increasing δS? I am not quite sure. I’d have to get into the math of it all, which I don’t want to do in a blog like this. What I do want to do is re-examine Feynman’s intuitive explanation of diffraction: when the slit is very narrow, “there are not enough arrows to cancel each other out.”
Huh? What’s that? Can’t we add more paths? It’s a tricky question. We are measuring action in units of ħ, but do we actually think action comes in units of ħ? I am not sure. It would make sense, intuitively, but… Well… There’s uncertainty on the energy (E) and the momentum (p) of our photon, right? And how accurately can we measure the distance? So there’s some randomness everywhere. Having said that, the whole argument does require us to assume action effectively comes in units of ħ: ħ is, effectively, the scaling factor here.
So how can we have more paths? More arrows? I don’t think so. We measure S as energy times some time, or as momentum times some distance, and we express all these quantities in old-fashioned SI units: newton for the force, meter for the distance, and second for the time. If we want smaller arrows, we’ll have to use other units, but then the numerical value for ħ will change too! So… Well… No. I don’t think so. And it’s not because of the normalization rule (all probabilities have to add up to one, so we do have some re-scaling for that). That doesn’t matter, really. What matters is the physics behind the formula, and the formula tells us the physical reality is ħ. So the geometry of the situation is what it is.
Hmm… I guess that, at this point, we should wrap up our rather intuitive discussion here, and resort to the mathematical formalism of Feynman’s path integral formulation, but you can find that elsewhere.
Post scriptum: I said I would show how the Principle of Least Action is relevant to both classical as well as quantum mechanics. Well… Let me quote the Master once more:
“So in the limiting case in which Planck’s constant ħ goes to zero, the correct quantum-mechanical laws can be summarized by simply saying: ‘Forget about all these probability amplitudes. The particle does go on a special path, namely, that one for which S does not vary in the first approximation.’”
So that’s how the Principle of Least Action sort of unifies quantum mechanics as well as classical mechanics. 🙂
Post scriptum 2: In my next post, I’ll be doing some calculations. They will answer the question as to how relevant those zones of positive and negative interference further away from the straight-line path are. I’ll give a numerical example which shows the 1/r12 factor does its job. 🙂 Just have a look at it. 🙂
Quantum math: the rules – all of them! :-)
In my previous post, I made no compromise, and used all of the rules one needs to calculate quantum-mechanical stuff:

[the three ‘Laws’ of quantum math: (I) 〈 i | j 〉 = δij, (II) 〈 χ | φ 〉 = ∑ 〈 χ | i 〉〈 i | φ 〉, and (III) 〈 χ | φ 〉 = 〈 φ | χ 〉*]
However, I didn’t explain them. These rules look simple enough, but let’s analyze them now. They’re simple and not so simple at the same time, indeed.
[I] The first equation uses the Kronecker delta, which sounds fancy but it’s just a simple shorthand: δij = δji is equal to 1 if i = j, and zero if i ≠ j, with i and j representing base states. Equation (I) basically says that base states are all different. For example, the angular momentum in the x-direction of a spin-1/2 particle – think of an electron or a proton – is either +ħ/2 or −ħ/2, not something in-between, or some mixture. So 〈 +x | +x 〉 = 〈 −x | −x 〉 = 1 and 〈 +x | −x 〉 = 〈 −x | +x 〉 = 0.
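If you want to check rule [I] numerically, here’s a minimal sketch – assuming the usual representation of the ±x states as two-component column vectors:

```python
import numpy as np

# Rule [I] for a spin-1/2 particle: base states are orthonormal.
# In the standard z-representation, the x-direction states can be written as:
plus_x = np.array([1, 1]) / np.sqrt(2)
minus_x = np.array([1, -1]) / np.sqrt(2)

print(np.vdot(plus_x, plus_x))    # <+x|+x> = 1
print(np.vdot(minus_x, minus_x))  # <-x|-x> = 1
print(np.vdot(plus_x, minus_x))   # <+x|-x> = 0
print(np.vdot(minus_x, plus_x))   # <-x|+x> = 0
```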
We’re talking base states here, of course. Base states are like a coordinate system: we settle on an x-, y- and z-axis, and a unit, and any point is defined in terms of an x-, y- and z-number. It’s the same here, except we’re talking ‘points’ in four-dimensional spacetime. To be precise, we’re talking constructs evolving in spacetime. To be even more precise, we’re talking amplitudes with a temporal as well as a spatial frequency, which we’ll often represent as:
a·e^(i·θ) = a·e^(i·(ω·t − k∙x)) = a·e^((i/ħ)·(E·t − p∙x))
The coefficient in front (a) is just a normalization constant, ensuring all probabilities add up to one. It may not be a constant, actually: perhaps it just ensures our amplitude stays within some kind of envelope, as illustrated below.
[Illustration: photon wave packet]
As for the ω = E/ħ and k = p/ħ identities, these are the de Broglie equations for a matter-wave, which the young Louis de Broglie jotted down as part of his 1924 PhD thesis. He was inspired by the fact that the E·t − p∙x factor is an invariant four-vector product (E·t − p∙x = pμxμ) in relativity theory, and noted the striking similarity with the argument of any wave function in space and time (ω·t − k∙x) and, hence, couldn’t resist equating both. De Broglie was inspired, of course, by the solution to the blackbody radiation problem, which Max Planck and Einstein had convincingly solved by accepting that the ω = E/ħ equation holds for photons. As he wrote it:
“When I conceived the first basic ideas of wave mechanics in 1923–24, I was guided by the aim to perform a real physical synthesis, valid for all particles, of the coexistence of the wave and of the corpuscular aspects that Einstein had introduced for photons in his theory of light quanta in 1905.” (Louis de Broglie, quoted in Wikipedia)
Looking back, you’d of course want the phase of a wavefunction to be some invariant quantity, and the examples we gave in our previous post illustrate how one would expect energy and momentum to impact its temporal and spatial frequency. But I am digressing. Let’s look at the second equation. However, before we move on, note the minus sign in the exponent of our wavefunction: a·e^(i·(ω·t − k∙x)). The phase turns counter-clockwise. That’s just the way it is. I’ll come back to this.
[II] The φ and χ symbols do not necessarily represent base states. In fact, Feynman illustrates this law using a variety of examples including both polarized as well as unpolarized beams, or ‘filtered’ as well as ‘unfiltered’ states, as he calls it in the context of the Stern-Gerlach apparatuses he uses to explain what’s going on. Let me summarize his argument here.
I discussed the Stern-Gerlach experiment in my post on spin and angular momentum, but the Wikipedia article on it is very good too. The principle is illustrated below: an inhomogeneous magnetic field – note the direction of the gradient ∇B = (∂B/∂x, ∂B/∂y, ∂B/∂z) – will split a beam of spin-one particles into three beams. [Matter-particles with spin one are rather rare (Lithium-6 is an example), but three states (rather than two only, as we’d have when analyzing spin-1/2 particles, such as electrons or protons) allow for more play in the analysis. 🙂 In any case, the analysis is easily generalized.]
[Illustration: a simple Stern-Gerlach apparatus]
The splitting of the beam is based, of course, on the quantized angular momentum in the z-direction (i.e. the direction of the gradient): its value is either ħ, 0, or −ħ. We’ll denote these base states as +, 0 or −, and we should note they are defined in regard to an apparatus with a specific orientation. If we call this apparatus S, then we can denote these base states as +S, 0S and −S respectively.
The interesting thing in Feynman’s analysis is the imagined modified Stern-Gerlach apparatus, which – I am using Feynman’s words here 🙂 – “puts Humpty Dumpty back together.” It looks a bit monstrous, but it’s easy enough to understand. Quoting Feynman once more: “It consists of a sequence of three high-gradient magnets. The first one (on the left) is just the usual Stern-Gerlach magnet and splits the incoming beam of spin-one particles into three separate beams. The second magnet has the same cross section as the first, but is twice as long and the polarity of its magnetic field is opposite the field in magnet 1. The second magnet pushes in the opposite direction on the atomic magnets and bends their paths back toward the axis, as shown in the trajectories drawn in the lower part of the figure. The third magnet is just like the first, and brings the three beams back together again, so that leaves the exit hole along the axis.”
[Illustration: the modified Stern-Gerlach apparatus]
Now, we can use this apparatus as a filter by inserting blocking masks, as illustrated below.

[Illustration: the modified Stern-Gerlach apparatus with blocking masks]
But let’s get back to the lesson. What about the second ‘Law’ of quantum math? Well… You need to be able to imagine all kinds of situations now. The rather simple set-up below is one of them: we’ve got two of these apparatuses in series now, S and T, with T tilted at the angle α with respect to the first.
I know: you’re getting impatient. What about it? Well… We’re finally ready now. Let’s suppose we’ve got three apparatuses in series, with the first and the last one having the very same orientation, and the one in the middle being tilted. We’ll denote them by S, T and S’ respectively. We’ll also use masks: we’ll block the 0 and − state in the S-filter, like in that illustration above. In addition, we’ll block the + and − state in the T apparatus and, finally, the 0 and − state in the S’ apparatus. Now try to imagine what happens: how many particles will get through?
Just try to think about it. Make some drawing or something. Please!
OK… The answer is shown below. Despite the filtering in S, the +S particles that come out do have an amplitude to go through the 0T-filter, and so the number of atoms that come out will be some fraction (α) of the number of atoms (N) that came out of the +S-filter. Likewise, some other fraction (β) will make it through the +S’-filter, so we end up with βαN particles.
[Illustration: βαN particles come through]
Now, I am sure that, if you’d tried to guess the answer yourself, you’d have said zero rather than βαN but, thinking about it, it makes sense: it’s not because we’ve got some angular momentum in one direction that we have none in the other. When everything is said and done, we’re talking components of the total angular momentum here, aren’t we? Well… Yes and no. Let’s remove the masks from T. What do we get?
Come on: what’s your guess? N?
[…] You’re right. It’s N. Perfect. It’s what’s shown below.
[Illustration: N particles come through]
Now, that should boost your confidence. Let’s try the next scenario. We block the 0 and − state in the S-filter once again, and the + and − state in the T apparatus, so the first two apparatuses are the same as in our first example. But let’s change the S’ apparatus: let’s close the + and − state there now. Now try to imagine what happens: how many particles will get through?
Come on! You think it’s a trap, don’t you? It’s not. It’s perfectly similar: we’ve got some other fraction here, which we’ll write as γαN, as shown below.
[Illustration: γαN particles come through]

Next scenario: S has the 0 and − gate closed once more, and T is fully open, so it has no masks. But, this time, we set S’ so it filters the 0-state (with respect to its own orientation, that is). What do we get? Come on! Think! Please!
The answer is zero, as shown below.
[Illustration: no particles come through]
Does that make sense to you? Yes? Great! Because many think it’s weird: they think the T apparatus must ‘re-orient’ the angular momentum of the particles. It doesn’t: if the filter is wide open, then “no information is lost”, as Feynman puts it. Still… Have a look at it. It looks like we’re opening ‘more channels’ in the last example: the S and S’ filter are the same, indeed, and T is fully open, while it selected for 0-state particles before. But no particles come through now, while with the 0-channel, we had γαN.
Hmm… It actually is kinda weird, won’t you agree? Sorry I had to talk about this, but it will make you appreciate that second ‘Law’ now: we can always insert a ‘wide-open’ filter and, hence, split the beams into a complete set of base states − with respect to the filter, that is − and bring them back together provided our filter does not produce any unequal disturbances on the three beams. In short, the passage through the wide-open filter should not result in a change of the amplitudes. Again, as Feynman puts it: the wide-open filter should really put Humpty-Dumpty back together again. If it does, we can effectively apply our ‘Law’:
[The second ‘Law’: 〈 χ | φ 〉 = ∑ 〈 χ | i 〉〈 i | φ 〉, summing over all base states i]
For an example, I’ll refer you to my previous post. This brings me to the third and final ‘Law’.
[III] The amplitude to go from state φ to state χ is the complex conjugate of the amplitude to go from state χ to state φ:
〈 χ | φ 〉 = 〈 φ | χ 〉*
This is probably the weirdest ‘Law’ of all, even if I should say, straight from the start, we can actually derive it from the second ‘Law’, and the fact that all probabilities have to add up to one. Indeed, a probability is the absolute square of an amplitude and, as we know, the absolute square of a complex number is also equal to the product of itself and its complex conjugate:
|z|² = |z|·|z| = z·z*
[You should go through the trouble of reviewing the difference between the square and the absolute square of a complex number. Just write z as a + ib and calculate (a + ib)² = a² + 2i·a·b − b², as opposed to |z|² = a² + b². Also check what it means when writing z as r·e^(iθ) = r·(cosθ + i·sinθ).]
Let’s apply the probability rule to a two-filter set-up, i.e. the situation with the S and the tilted T filter which we described above, and let’s assume we’ve got a pure beam of +S particles entering the wide-open T filter, so our particles can come out in either of the three base states with respect to T. We can then write:
|〈 +T | +S 〉|² + |〈 0T | +S 〉|² + |〈 −T | +S 〉|² = 1
⇔ 〈 +T | +S 〉〈 +T | +S 〉* + 〈 0T | +S 〉〈 0T | +S 〉* + 〈 −T | +S 〉〈 −T | +S 〉* = 1
Of course, we’ve got two other such equations if we start with a 0S or a −S state. Now, we take the 〈 χ | φ 〉 = ∑ 〈 χ | i 〉〈 i | φ 〉 ‘Law’, and substitute χ and φ for +S, and all states for the base states with regard to T. We get:
〈 +S | +S 〉 = 1 = 〈 +S | +T 〉〈 +T | +S 〉 + 〈 +S | 0T 〉〈 0T | +S 〉 + 〈 +S | −T 〉〈 −T | +S 〉
These equations are consistent only if:
〈 +S | +T 〉 = 〈 +T | +S 〉*,
〈 +S | 0T 〉 = 〈 0T | +S 〉*,
〈 +S | −T 〉 = 〈 −T | +S 〉*,
which is what we wanted to prove. One can then generalize to any state φ and χ. However, proving the result is one thing. Understanding it is something else. One can write down a number of strange consequences, which all point to Feynman‘s rather enigmatic comment on this ‘Law’: “If this Law were not true, probability would not be ‘conserved’, and particles would get ‘lost’.” So what does that mean? Well… You may want to think about the following, perhaps. It’s obvious that we can write:
|〈 φ | χ 〉|² = 〈 φ | χ 〉〈 φ | χ 〉* = 〈 χ | φ 〉*〈 χ | φ 〉 = |〈 χ | φ 〉|²
This says that the probability to go from the φ-state to the χ-state is the same as the probability to go from the χ-state to the φ-state.
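Before we think about what that means, here’s a small numerical sketch of how this hangs together with probability conservation – assuming, as rule [II] suggests, that the table of amplitudes between two complete sets of base states is a unitary matrix:

```python
import numpy as np

# Take a random 3x3 unitary U as the table of amplitudes U[i, j] = <iT|jS>.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)                    # QR factorization yields a unitary Q

probs = np.abs(U[:, 0])**2                # |<iT|+S>|^2 for a pure +S beam
print(probs, probs.sum())                 # the probabilities add up to 1

# Going the other way is the conjugate transpose: <jS|iT> = <iT|jS>*
print(np.allclose(np.linalg.inv(U), U.conj().T))   # True
```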
Now, when we’re talking base states, that’s rather obvious, because the probabilities involved are either 0 or 1. However, if we substitute φ and χ for +S and −T, or some more complicated states, then it’s a different thing. My gut instinct tells me this third ‘Law’ – which, as mentioned, can be derived from the other ‘Laws’ – reflects the principle of reversibility in spacetime, which you may also interpret as a causality principle, in the sense that, in theory at least (i.e. not thinking about entropy and/or statistical mechanics), we can reverse what’s happening: we can go back in spacetime.
In this regard, we should also remember that the complex conjugate of a complex number in polar form, i.e. a complex number written as r·e^(iθ), is equal to r·e^(−iθ), so the argument in the exponent gets a minus sign. Think about what this means for our a·e^(i·θ) = a·e^(i·(ω·t − k∙x)) = a·e^((i/ħ)·(E·t − p∙x)) function. Taking the complex conjugate of this function amounts to reversing the direction of t and x which, once again, evokes that idea of going back in spacetime.
I feel there’s some more fundamental principle here at work, on which I’ll try to reflect a bit more. Perhaps we can also do something with that relationship between the multiplicative inverse of a complex number and its complex conjugate, i.e. z⁻¹ = z*/|z|². I’ll check it out. As for now, however, I’ll leave you to do that, and please let me know if you’ve got any inspirational ideas on this. 🙂
So… Well… Goodbye as for now. I’ll probably talk about the Hamiltonian in my next post. I think we really did a good job in laying the groundwork for the really hardcore stuff, so let’s go for that now. 🙂
Post Scriptum: On the Uncertainty Principle and other rules
After writing all of the above, I realized I should add some remarks to make this post somewhat more readable. First thing: not all of the rules are there—obviously! Most notably, I didn’t say anything about the rules for adding or multiplying amplitudes, but that’s because I wrote extensively about that already, and so I assume you’re familiar with that. [If not, see my page on the essentials.]
Second, I didn’t talk about the Uncertainty Principle. That’s because I didn’t have to. In fact, we don’t need it here. In general, all popular accounts of quantum mechanics have an excessive focus on the position and momentum of a particle, while the approach in this and my previous post is quite different. Of course, it’s Feynman’s approach to QM really. Not ‘mine’. 🙂 All of the examples and all of the theory he presents in his introductory chapters in the Third Volume of Lectures, i.e. the volume on QM, are related to things like:
• What is the amplitude for a particle to go from spin state +S to spin state −T?
• What is the amplitude for a particle to be scattered, by a crystal, or from some collision with another particle, in the θ direction?
• What is the amplitude for two identical particles to be scattered in the same direction?
• What is the amplitude for an atom to absorb or emit a photon? [See, for example, Feynman’s approach to the blackbody radiation problem.]
• What is the amplitude to go from one place to another?
In short, you read Feynman, and it’s only at the very end of his exposé that he starts talking about the things popular books start with, such as the amplitude of a particle to be at point (x, t) in spacetime, or the Schrödinger equation, which describes the orbital of an electron in an atom. That’s where the Uncertainty Principle comes in and, hence, one can really avoid it for quite a while. In fact, one should avoid it for quite a while, because it’s now become clear to me that simply presenting the Uncertainty Principle doesn’t help all that much to truly understand quantum mechanics.
Truly understanding quantum mechanics involves understanding all of these weird rules above. To some extent, that involves dissociating the idea of the wavefunction from our conventional ideas of time and position. From the questions above, it should be obvious that ‘the’ wavefunction does actually not exist: we’ve got a wavefunction for anything we can and possibly want to measure. That brings us to the question of the base states: what are they?
Feynman addresses this question in a rather verbose section of his Lectures titled: What are the base states of the world? I won’t copy it here, but I strongly recommend you have a look at it. 🙂
I’ll end here with a final equation that we’ll need frequently: the amplitude for a particle to go from one place (r1) to another (r2). It’s referred to as a propagator function, for obvious reasons—one of them being that physicists like fancy terminology!—and it looks like this:

[the propagator formula: ψ = e^((i/ħ)·p∙r12)/r12]
The shape of the e^((i/ħ)·p∙r12) function is now familiar to you. Note the r12 in the argument, i.e. the vector pointing from r1 to r2. The p∙r12 dot product equals |p|∙|r12|·cosθ = p∙r12·cosθ, with θ the angle between p and r12. If the two vectors point in the same direction (θ = 0), then cosθ is equal to 1. If the angle is π/2, then it’s 0, and the function reduces to 1/r12. So the angle θ, through the cosθ factor, sort of scales the spatial frequency. Let me try to give you some idea of how this looks like by assuming p and r12 point in the same direction, so we’re looking at the space in the direction of the momentum only and |p|∙|r12|·cosθ = p∙r12. Now, we can look at the p/ħ factor as a scaling factor, and measure the distance x in units defined by that scale, so we write: x = p∙r12/ħ. The function then reduces to (p/ħ)·e^(i·x)/x = (p/ħ)·cos(x)/x + i·(p/ħ)·sin(x)/x, and we just need to square this to get the probability. All of the graphs are drawn hereunder: I’ll let you analyze them. [Note that the graphs do not include the p/ħ factor, which you may look at as yet another scaling factor.] You’ll see – I hope! – that it all makes perfect sense: the probability quickly drops off with distance, both in the positive as well as in the negative x-direction, while it’s going to infinity when very near. [Note that the absolute square, using cos(x)/x and sin(x)/x, yields the same graph as squaring 1/x—obviously!]

[Graphs: cos(x)/x, sin(x)/x, and the absolute square 1/x²]
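If you want to reproduce those graphs, a few lines will do – a minimal sketch, leaving out the p/ħ scaling factor, as noted:

```python
import numpy as np
import matplotlib.pyplot as plt

# The propagator along the direction of the momentum, in scaled units:
# e^(i*x)/x = cos(x)/x + i*sin(x)/x, with probability |e^(i*x)/x|^2 = 1/x^2.
x = np.linspace(0.1, 20.0, 1000)
amp = np.exp(1j * x) / x

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(x, amp.real, label='cos(x)/x')
ax1.plot(x, amp.imag, label='sin(x)/x')
ax1.legend()
ax2.plot(x, np.abs(amp)**2, label='1/x^2')
ax2.legend()
plt.show()
```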
Light and matter
In my previous post, I discussed the de Broglie wave of a photon. It’s usually referred to as ‘the’ wave function (or the psi function) but, as I explained, for every psi – i.e. the position-space wave function Ψ(x, t) – there is also a phi – i.e. the momentum-space wave function Φ(p, t).
In that post, I also compared it – without much formalism – to the de Broglie wave of ‘matter particles’. Indeed, in physics, we look at ‘stuff’ as being made of particles and, while the taxonomy of the particle zoo of the Standard Model of physics is rather complicated, one ‘taxonomic’ principle stands out: particles are either matter particles (known as fermions) or force carriers (known as bosons). It’s a strict separation: either/or. No split personalities.
A quick overview before we start…
Wikipedia’s overview of particles in the Standard Model (including the latest addition: the Higgs boson) illustrates this fundamental dichotomy in nature: we have the matter particles (quarks and leptons) on one side, and the bosons (i.e. the force carriers) on the other side.
Don’t be put off by my remark on the particle zoo: it’s a term coined in the 1960s, when the situation was quite confusing indeed (like more than 400 ‘particles’). However, the picture is quite orderly now. In fact, the Standard Model put an end to the discovery of ‘new’ particles, and it’s been stable since the 1970s, as experiments confirmed the reality of quarks. Indeed, all resistance to Gell-Mann’s quarks and his flavor and color concepts (which are just words to describe new types of ‘charge’, similar to electric charge but with more variety) ended when experiments at the Stanford Linear Accelerator Center (SLAC) in November 1974 confirmed the existence of the (second-generation and, hence, heavy and unstable) ‘charm’ quark (again, the names suggest some frivolity but it’s serious physical research).
As for the Higgs boson, its existence had also been predicted, since 1964 to be precise, but it took fifty years to confirm it experimentally because only something like the Large Hadron Collider could produce the required energy to find it in these particle smashing experiments – a rather crude way of analyzing matter, you may think, but so be it. [In case you harbor doubts on the Higgs particle, please note that, while CERN is the first to admit further confirmation is needed, the Nobel Prize Committee apparently found the evidence convincing enough to finally award Higgs and others a Nobel Prize for their ‘discovery’ fifty years ago – and, as you know, the Nobel Prize committee members are usually rather conservative in their judgment. So you would have to come up with a rather complex conspiracy theory to deny its existence.]
Also note that the particle zoo is actually less complicated than it looks at first sight: the (composite) particles that are stable in our world – this world – consist of three quarks only: a proton consists of two up quarks and one down quark and, hence, is written as uud, and a neutron is two down quarks and one up quark: udd. Hence, for all practical purposes (i.e. for our discussion of how light interacts with matter), only the so-called first generation of matter-particles – so that’s the first column in the overview above – is relevant.
All the particles in the second and third column are unstable. That being said, they survive long enough – a muon disintegrates after 2.2 millionths of a second (on average) – to deserve the ‘particle’ title, as opposed to a ‘resonance’, whose lifetime can be as short as a billionth of a trillionth of a second – but we’ve gone through these numbers before and so I won’t repeat that here. Why do we need them? Well… We don’t, but they are a by-product of our world view (i.e. the Standard Model) and, for some reason, we find everything that this Standard Model says should exist, even if most of the stuff (all second- and third-generation matter particles, and all these resonances) vanishes rather quickly – but so that also seems to be consistent with the model. [As for a possible fourth (or higher) generation, Feynman didn’t exclude it when he wrote his 1985 Lectures on quantum electrodynamics, but, checking on Wikipedia, I find the following: “According to the results of the statistical analysis by researchers from CERN and the Humboldt University of Berlin, the existence of further fermions can be excluded with a probability of 99.99999% (5.3 sigma).” If you want to know why… Well… Read the rest of the Wikipedia article. It’s got to do with the Higgs particle.]
As for the (first-generation) neutrino in the table – the only one which you may not be familiar with – these are very spooky things but – I don’t want to scare you – relatively high-energy neutrinos are going through your and my body, right now and here, at a rate of some hundred trillion per second. They are produced by stars (stars are huge nuclear fusion reactors, remember?), and also as a by-product of these high-energy collisions in particle accelerators of course. But they are very hard to detect: the first trace of their existence was found in 1956 only – 26 years after their existence had been postulated: the fact that Wolfgang Pauli proposed their existence in 1930 to explain how beta decay could conserve energy, momentum and spin (angular momentum) demonstrates not only the genius but also the confidence of these early theoretical quantum physicists. Most neutrinos passing through Earth are produced by our Sun. Now they are being analyzed more routinely. The largest neutrino detector on Earth is called IceCube. It sits on the South Pole – or under it, as it’s suspended under the Antarctic ice – and it regularly captures high-energy neutrinos in the range of 1 to 10 TeV.
Let me – to conclude this introduction – just quickly list and explain the bosons (i.e. the force carriers) in the table above:
1. Of all of the bosons, the photon (i.e. the topic of this post) is the most straightforward: there is only one type of photon, even if it comes in different possible states of polarization.
I should probably do a quick note on polarization here – even if all of the stuff that follows will make abstraction of it. Indeed, the discussion on photons that follows (largely adapted from Feynman’s 1985 Lectures on Quantum Electrodynamics) assumes that there is no such thing as polarization – because it would make everything even more complicated. The concept of polarization (linear, circular or elliptical) has a direct physical interpretation in classical mechanics (i.e. light as an electromagnetic wave). In quantum mechanics, however, polarization becomes a so-called qubit (quantum bit): leaving aside so-called virtual photons (these are short-range disturbances going between a proton and an electron in an atom – effectively mediating the electromagnetic force between them), the property of polarization comes in two basis states (0 and 1, or left and right), but these two basis states can be superposed. In ket notation: if ¦0〉 and ¦1〉 are the basis states, then any linear combination α·¦0〉 + β·¦1〉 is also a valid state provided |α|² + |β|² = 1, in line with the need to get probabilities that add up to one.
In case you wonder why I am introducing these kets, there is no reason for it, except that I will be introducing some other tools in this post – such as Feynman diagrams – and so that’s all. In order to wrap this up, I need to note that kets are used in conjunction with bras. So we have a bra-ket notation: the ket gives the starting condition, and the bra – denoted as 〈 ¦ – gives the final condition. They are combined in statements such as 〈 particle arrives at x ¦ particle leaves from s 〉 or – in short – 〈 x ¦ s 〉 and, while x and s would have some real-number value, 〈 x ¦ s 〉 would denote the (complex-valued) probability amplitude associated with the event consisting of these two conditions (i.e. the starting and final condition).
But don’t worry about it. This digression is just what it is: a digression. Oh… Just make a mental note that the so-called virtual photons (the mediators that are supposed to keep the electron in touch with the proton) have four possible states of polarization – instead of two. They are related to the four directions of space (x, y and z) and time (t). 🙂
2. Gluons, the exchange particles for the strong force, are more complicated: they come in eight so-called colors. In practice, one should think of these colors as different charges, but so we have more elementary charges in this case than just plus or minus one (±1) – as we have for the electric charge. So it’s just another type of qubit in quantum mechanics.
[Note that the so-called elementary ±1 values for electric charge are not really elementary: it’s –1/3 (for the down quark, and for the second- and third-generation strange and bottom quarks as well) and +2/3 (for the up quark as well as for the second- and third-generation charm and top quarks). That being said, electric charge takes two values only, and the ±1 value is easily found from a linear combination of the –1/3 and +2/3 values.]
3. Z and W bosons carry the so-called weak force, aka Fermi’s interaction: they explain how one type of quark can change into another, thereby explaining phenomena such as beta decay. Beta decay explains why carbon-14 will, after a very long time (as compared to the ‘unstable’ particles mentioned above), spontaneously decay into nitrogen-14. Indeed, carbon-12 is the (very) stable isotope, while carbon-14 has a life-time of 5,730 ± 40 years ‘only’ (so one can hardly call carbon-14 ‘unstable’: perhaps ‘less stable’ will do) and, hence, measuring how much carbon-14 is left in some organic substance allows us to date it (that’s what (radio)carbon-dating is about). As for the name, a beta particle can refer to an electron or a positron, so we can have β− decay (e.g. the above-mentioned carbon-14 decay) as well as β+ decay (e.g. magnesium-23 into sodium-23). There’s also alpha and gamma decay but that involves different things.
As you can see from the table, W± and Z bosons are very heavy (157,000 and 178,000 times heavier than an electron!), and the W± carry the (positive or negative) electric charge. So why don’t we see them? Well… They are so short-lived that we can only see a tiny decay width, just a very tiny little trace, so they resemble resonances in experiments. That’s also the reason why we see little or nothing of the weak force in real-life: the force-carrying particles mediating this force don’t get anywhere.
4. Finally, as mentioned above, the existence of the Higgs particle – and, hence, of the associated Higgs field – had been predicted since 1964 already, but it was only (tentatively) experimentally confirmed last year. The Higgs field gives fermions, and also the W and Z bosons, mass (but not photons and gluons), and – as mentioned above – that’s why the weak force has such short range as compared to the electromagnetic and strong forces. Note, however, that the Higgs particle does actually not explain the gravitational force, so it’s not the (theoretical) graviton, and there is no quantum field theory for the gravitational force as yet. Just Google it and you’ll quickly find out why: there are theoretical as well as practical (experimental) reasons for that.
The Higgs field stands out from the other force fields because it’s a scalar field (as opposed to a vector field). However, I have no idea how this so-called Higgs mechanism (i.e. the interaction with matter particles (i.e. with the quarks and leptons, but not directly with neutrinos it would seem from the diagram below), with W and Z bosons, and with itself – but not with the massless photons and gluons) actually works. But then I still have a very long way to go on this Road to Reality.
In any case… The topic of this post is to discuss light and its interaction with matter – not the weak or strong force, nor the Higgs field.
Let’s go for it.
Amplitudes, probabilities and observable properties
Being born a boson or a fermion makes a big difference. That being said, both fermions and bosons are wavicles described by a complex-valued psi function, colloquially known as the wave function. To be precise, there will be several wave functions, and the square of their modulus (sorry for the jargon) will give you the probability of some observable property having a value in some relevant range, usually denoted by Δ. [I also explained (in my post on Bose and Fermi) how the rules for combining amplitudes differ for bosons versus fermions, and how that explains why they are what they are: matter particles occupy space, while photons not only can but also like to crowd together in, for example, a powerful laser beam. I’ll come back to that.]
For all practical purposes, relevant usually means ‘small enough to be meaningful’. For example, we may want to calculate the probability of detecting an electron in some tiny spacetime interval (Δx, Δt). [Again, ‘tiny’ in this context means small enough to be relevant: if we are looking at a hydrogen atom (whose size is a few nanometer), then Δx is likely to be a cube or a sphere with an edge or a radius of a few picometer only (a picometer is a thousandth of a nanometer, so it’s a millionth of a millionth of a meter); and, noting that the electron’s speed is approximately 2200 km per second… Well… I will let you calculate a relevant Δt. :-)]
If we want to do that, then we will need to square the modulus of the corresponding wave function Ψ(x, t). To be precise, we will have to do a summation of all the values │Ψ(x, t)│² over the interval and, because x and t are real (and, hence, continuous) numbers, that means doing some integral (because an integral is the continuous version of a sum).
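Numerically, that sum-becomes-integral point looks as follows – a sketch, using a made-up normalized Gaussian wave packet for Ψ at some fixed time:

```python
import numpy as np

# Probability = integral of |Psi|^2 over the interval Delta-x.
sigma = 1e-12                                   # ~1 pm spread (illustrative)
x = np.linspace(-10 * sigma, 10 * sigma, 200001)
psi = (2 * np.pi * sigma**2)**-0.25 * np.exp(-x**2 / (4 * sigma**2))

density = np.abs(psi)**2
dx = x[1] - x[0]
print(density.sum() * dx)                       # ~1: total probability
inside = np.abs(x) < sigma                      # a tiny Delta-x around x = 0
print(density[inside].sum() * dx)               # ~0.68: probability in Delta-x
```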
But that’s only one example of an observable property: position. There are others. For example, we may not be interested in the particle’s exact position but only in its momentum or energy. Well, we have another wave function for that: the momentum wave function Φ(p, t). In fact, if you looked at my previous posts, you’ll remember the two are related because they are conjugate variables: the two functions are Fourier transforms of one another. A less formal way of expressing that is to refer to the uncertainty principle. But this is not the time to repeat things.
The bottom line is that all particles travel through spacetime with a backpack full of complex-valued wave functions. We don’t know who and where these particles are exactly, and so we can’t talk to them – but we can e-mail God and He’ll send us the wave function that we need to calculate some probability we are interested in because we want to check – in all kinds of experiments designed to fool them – if it matches with reality.
As mentioned above, I highlighted the main difference between bosons and fermions in my Bose and Fermi post, so I won’t repeat that here. Just note that, when it comes to working with those probability amplitudes (that’s just another word for these psi and phi functions), it makes a huge difference: fermions and bosons interact very differently. Bosons are party particles: they like to crowd and will always welcome an extra one. Fermions, on the other hand, will exclude each other: that’s why there’s something referred to as the Pauli exclusion principle in quantum mechanics. That’s why fermions make matter (matter needs space) and bosons are force carriers (they’ll just call friends to help when the load gets heavier).
Light versus matter: Quantum Electrodynamics
OK. Let’s get down to business. This post is about light, or about light-matter interaction. Indeed, in my previous post (on Light), I promised to say something about the amplitude of a photon to go from point A to B (because – as I wrote in my previous post – that’s more ‘relevant’, when it comes to explaining stuff, than the amplitude of a photon to actually be at point x at time t), and so that’s what I will do now.
In his 1985 Lectures on Quantum Electrodynamics (which are lectures for the lay audience), Feynman writes the amplitude of a photon to go from point A to B as P(A to B) – and the P stands for photon obviously, not for probability. [I am tired of repeating that you need to square the modulus of an amplitude to get a probability but – here you are – I have said it once more.] That’s in line with the other fundamental wave function in quantum electrodynamics (QED): the amplitude of an electron to go from A to B, which is written as E(A to B). [You got it: E just stands for electron, not for our electric field vector.]
I also talked about the third fundamental amplitude in my previous post: the amplitude of an electron to absorb or emit a photon. So let’s have a look at these three. As Feynman says: “Out of these three amplitudes, we can make the whole world, aside from what goes on in nuclei, and gravitation, as always!”
Well… Thank you, Mr Feynman: I’ve always wanted to understand the World (especially if you made it).
The photon-electron coupling constant j
Let’s start with the last of those three amplitudes (or wave functions): the amplitude of an electron to absorb or emit a photon. Indeed, absorbing or emitting makes no difference: we have the same complex number for both. It’s a constant – denoted by j (for junction number) – equal to –0.1 (a bit less actually but it’s good enough as an approximation in the context of this blog).
Huh? Minus 0.1? That’s not a complex number, is it? It is. Real numbers are complex numbers too: –0.1 is 0.1·e^(i·π) in polar coordinates. As Feynman puts it: it’s “a shrink to about one-tenth, and half a turn.” The ‘shrink’ is the 0.1 magnitude of this vector (or arrow), and the ‘half-turn’ is the angle of π (i.e. 180 degrees). He obviously refers to multiplying (no adding here) j with other amplitudes, e.g. P(A, C) and E(B, C) if the coupling is to happen at or near C. And, as you’ll remember, multiplying complex numbers amounts to adding their phases, and multiplying their moduli (so that’s adding the angles and multiplying lengths).
Let’s introduce a Feynman diagram at this point – drawn by Feynman himself – which shows three possible ways of two electrons exchanging a photon. We actually have two couplings here, and so the combined amplitude will involve two j’s. In fact, if we label the starting points of the two lines representing our electrons as 1 and 2 respectively, their end points as 3 and 4, and the two coupling points as 5 and 6, then the amplitude for these events will be given by:
E(1 to 5)·j·E(5 to 3)·E(2 to 6)·j·E(6 to 4)·P(5 to 6)
As for how that j factor works, please do read the caption of the illustration below: the same j describes both emission as well as absorption. It’s just that we have both an emission as well as an absorption here, so we have a j² factor here, which is less than 0.1·0.1 = 0.01. At this point, it’s worth noting that it’s obvious that the amplitudes we’re talking about here – i.e. for one possible way of an exchange like the one below happening – are very tiny. They only become significant when we add many of these amplitudes, which – as explained below – is what has to happen: one has to consider all possible paths, calculate the amplitudes for them (through multiplication), and then add all these amplitudes, to then – finally – square the modulus of the combined ‘arrow’ (or amplitude) to get some probability of something actually happening. [Again, that’s the best we can do: calculate probabilities that correspond to experimentally measured occurrences. We cannot predict anything in the classical sense of the word.]
[Feynman diagram: photon-electron coupling]
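Here is what that bookkeeping looks like in code – a toy sketch: the E(…) and P(5 to 6) values below are invented (we’d need the actual propagator functions to compute them), and only the structure of the multiplication matters:

```python
import numpy as np

# One way the exchange can happen: multiply the amplitudes along the diagram.
# All E and P values are hypothetical stand-ins; j ~= -0.1 is the coupling.
j = -0.1
E_15, E_53 = 0.8 * np.exp(0.3j), 0.7 * np.exp(-0.5j)
E_26, E_64 = 0.9 * np.exp(0.1j), 0.6 * np.exp(0.7j)
P_56 = 0.5 * np.exp(0.2j)                 # the photon's hop from 5 to 6

amp = E_15 * j * E_53 * E_26 * j * E_64 * P_56
print(amp, abs(amp))                      # tiny: the j^2 factor alone gives ~0.01
```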
A Feynman diagram is not just some sketchy drawing. For example, we have to care about scales: the distance and time units are equivalent (so distance would be measured in light-seconds or, else, time would be measured in units equivalent to the time needed for light to travel one meter). Hence, particles traveling through time (and space) – from the bottom of the graph to the top – will usually not be traveling at an angle of more than 45 degrees (as measured from the time axis) but, from the graph above, it is clear that photons do. [Note that electrons moving through spacetime are represented by plain straight lines, while photons are represented by wavy lines. It’s just a matter of convention.]
More importantly, a Feynman diagram is a pictorial device showing what needs to be calculated and how. Indeed, with all the complexities involved, it is easy to lose track of what should be added and what should be multiplied, especially when it comes to much more complicated situations like the one described above (e.g. making sense of a scattering event). So, while the coupling constant j (aka the ‘charge’ of a particle – but it’s obviously not the electric charge) is just a number, calculating an actual E(A to B) amplitude is not easy – not only because there are many different possible routes (paths) but because (almost) anything can happen. Let’s have a closer look at it.
E(A to B)
As Feynman explains in his 1985 QED Lectures: “E(A to B) can be represented as a giant sum of a lot of different ways an electron can go from point A to B in spacetime: the electron can take a ‘one-hop flight’, going directly from point A to B; it could take a ‘two-hop flight’, stopping at an intermediate point C; it could take a ‘three-hop flight’ stopping at points D and E, and so on.”
Fortunately, the calculation re-uses known values: the amplitude for each ‘hop’ – from F to G, for example – is P(F to G) – so that’s the amplitude of a photon (!) to go from F to G – even if we are talking an electron here. But there’s a difference: we also have to multiply the amplitudes for each ‘hop’ with the amplitude for each ‘stop’, and that’s represented by another number – not j but n². So we have an infinite series of terms for E(A to B): P(A to B) + P(A to C)·n²·P(C to B) + P(A to D)·n²·P(D to E)·n²·P(E to B) + … for all possible intermediate points C, D, E, and so on, as per the illustration below.
[Illustration: the ‘hops’ and ‘stops’ making up E(A to B)]
You’ll immediately ask: what’s the value of n? It’s quite important to know it, because we want to know how big these n², n⁴, etcetera terms are. I’ll be honest: I have not come to terms with that yet. According to Feynman (QED, p. 125), it is the ‘rest mass’ of an ‘ideal’ electron: an ‘ideal’ electron is an electron that doesn’t know Feynman’s amplitude theory and just goes from point to point in spacetime using only the direct path. 🙂 Hence, it’s not a probability amplitude like j: a proper probability amplitude will always have a modulus less than 1, and so when we see exponential terms like j², j⁴, … we know we should not be all that worried – because these sort of vanish (go to zero) for sufficiently large exponents. For E(A to B), we do not have such vanishing terms. I will not dwell on this right here, but I promise to discuss it in the Post Scriptum of this post. The frightening possibility is that n might be a number larger than one.
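Just to fix ideas on the structure of that ‘giant sum’, here’s a toy truncation – all numbers invented; note that the terms only shrink if |n²·P| stays below one, which is exactly the worry about n:

```python
# Toy 'hops and stops' for E(A to B): a P amplitude per hop, an n^2 per stop.
n2 = 0.2            # hypothetical 'stop' factor
P_hop = 0.5j        # hypothetical 'hop' amplitude

terms = [P_hop * (n2 * P_hop)**k for k in range(4)]   # 1, 2, 3, 4 hops
E_AB = sum(terms)                                     # truncated giant sum
print([abs(term) for term in terms])                  # shrinks only if |n2*P_hop| < 1
print(abs(E_AB))
```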
[As we’re freewheeling a bit anyway here, just a quick note on conventions: I should not be writing j in bold-face, because it’s a (complex- or real-valued) number and symbols representing numbers are usually not written in bold-face: vectors are written in bold-face. So, while you can look at a complex number as a vector, well… It’s just one of these inconsistencies I guess. The problem with using bold-face letters to represent complex numbers (like amplitudes) is that they suggest that the ‘dot’ in a product (e.g. j·j) is an actual dot product (aka a scalar product or an inner product) of two vectors. That’s not the case. We’re multiplying complex numbers here, and so we’re just using the standard definition of a product of complex numbers. This subtlety probably explains why Feynman prefers to write the above product as P(A to B) + P(A to C)*n²*P(C to B) + P(A to D)*n²*P(D to E)*n²*P(E to B) + … But then I find that using that asterisk to represent multiplication is a bit funny (although it’s a pretty common thing in complex math) and so I am not using it. Just be aware that a dot in a product may not always mean the same type of multiplication: multiplying complex numbers and multiplying vectors is not the same. […] And I won’t write j in bold-face anymore.]
P(A to B)
Regardless of the value for n, it’s obvious we need a functional form for P(A to B), because that’s the other thing (other than n) that we need to calculate E(A to B). So what’s the amplitude of a photon to go from point A to B?
Well… The function describing P(A to B) is obviously some wave function – so that’s a complex-valued function of x and t. It’s referred to as a (Feynman) propagator: a propagator function gives the probability amplitude for a particle to travel from one place to another in a given time, or to travel with a certain energy and momentum. [So our function for E(A to B) will be a propagator as well.] You can check out the details on it on Wikipedia. Indeed, I could insert the formula here, but believe me if I say it would only confuse you. The points to note are that:
1. The propagator is also derived from the wave equation describing the system, so that’s some kind of differential equation which incorporates the relevant rules and constraints that apply to the system. For electrons, that’s the Schrödinger equation I presented in my previous post. For photons… Well… As I mentioned in my previous post, there is ‘something similar’ for photons – there must be – but I have not seen anything that’s equally ‘simple’ as the Schrödinger equation for photons. [I have Googled a bit but it’s obvious we’re talking pretty advanced quantum mechanics here – so it’s not the QM-101 course that I am currently trying to make sense of.]
2. The most important thing (in this context at least) is that the key variable in this propagator (i.e. the Feynman propagator for the photon) is I: that spacetime interval which I mentioned in my previous post already:
I = Δr² − Δt² = (z2 − z1)² + (y2 − y1)² + (x2 − x1)² − (t2 − t1)²
In this equation, we need to measure the time and spatial distance between two points in spacetime in equivalent units (these ‘points’ are usually referred to as four-vectors), so we’d use light-seconds for the unit of distance or, for the unit of time, the time it takes for light to travel one meter. [If we don’t want to transform time or distance scales, then we have to write I as I = Δr² − c²·Δt².] Now, there are three types of intervals:
1. For time-like intervals, we have a negative value for I, so Δt² > Δr². For two events separated by a time-like interval, enough time passes between them so there could be a cause–effect relationship between the two events. In a Feynman diagram, the angle between the time axis and the line between the two events will be less than 45 degrees from the vertical axis. The traveling electrons in the Feynman diagrams above are an example.
2. For space-like intervals, we have a positive value for I, so Δt² < Δr². Events separated by space-like intervals cannot possibly be causally connected. The photons traveling between points 5 and 6 in the first Feynman diagram are an example, but then photons do have amplitudes to travel faster than light.
3. Finally, for light-like intervals, I = 0, or Δt² = Δr². The points connected by the 45-degree lines in the illustration below (which Feynman uses to introduce his Feynman diagrams) are an example of points connected by light-like intervals.
[Note that we are using the so-called space-like convention (+++–) for I here. There’s also a time-like convention, with +––– as the signs: I = Δt² – Δr². So just check which convention is being used whenever you consult other sources on this (which I recommend), in case you’d feel I am not getting the signs right.]
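[If you’d want to play with this, here’s a quick sketch – my own, not anything from the book – that computes I in the space-like convention used above, with time and distance in equivalent units (c = 1), and classifies the interval:]

```python
def interval(e1, e2):
    """Spacetime interval I = dr^2 - dt^2 (space-like convention, c = 1).
    Events are (t, x, y, z) tuples in equivalent units."""
    (t1, x1, y1, z1), (t2, x2, y2, z2) = e1, e2
    dr2 = (x2 - x1)**2 + (y2 - y1)**2 + (z2 - z1)**2
    return dr2 - (t2 - t1)**2

def classify(I):
    if I < 0: return "time-like: a cause-effect link is possible"
    if I > 0: return "space-like: no causal connection possible"
    return "light-like: on the light cone"

A, B = (0, 0, 0, 0), (2, 1, 0, 0)   # two time units apart, one space unit
print(classify(interval(A, B)))      # time-like
```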
Spacetime intervals
Now, what’s the relevance of this? To calculate P(A to B), we have to add the amplitudes for all possible paths that the photon can take – not just in space, but in spacetime. So we should add all these vectors (or ‘arrows’ as Feynman calls them) – an infinite number of them, really. In the meanwhile, you know that this amounts to adding complex numbers, and that infinite sums are done with integrals. But let’s take a step back: how are vectors added?
Well… That’s easy, you’ll say… It’s the parallelogram rule… Well… Yes. And no. Let me take a step back here to show how adding a whole range of similar amplitudes works.
The illustration below shows a bunch of photons – real or imagined – from a source above a water surface (the sun, for example), all taking different paths to arrive at a detector under the water (say, some fish looking at the sky from under the water). In this case, we ignore the fact that photons leave at different times, and only look at a bunch that leaves at the same point in time. In other words, their stopwatches will be synchronized (i.e. there is no phase shift term in the phase of their wave function) – let’s say at 12 o’clock when they leave the source. [If you think this simplification is not acceptable, well… Think again.]
When these photons hit the retina of our poor fish’s eye (I feel we should put a detector there, instead of a fish), their stopwatches will stop, and the hand of each stopwatch represents an amplitude: it has a modulus (its length) – which is assumed to be the same for all of them, because all paths are equally likely (this is one of the first principles of QED) – but their directions are very different. However, by now we are quite familiar with these operations: we add all the ‘arrows’ indeed (or vectors or amplitudes or complex numbers or whatever you want to call them) and get one big final arrow, shown at the bottom – just above the caption. Look at it very carefully.
adding arrows
If you look at the so-called contribution made by each of the individual arrows, you can see that it’s the arrows associated with the path of least time – and with the paths immediately left and right of it – that make the biggest contribution to the final arrow. Why? Because these photons arrive at around the same time and, hence, the hands of their stopwatches point more or less in the same direction. It doesn’t matter what direction – as long as it’s more or less the same.
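[If you’d like to see those arrows add up numerically – this is just a toy of mine, with made-up travel times, not anything from the book – the following sketch shows how the arrows near the least-time path dominate the sum:]

```python
import numpy as np

# One unit 'arrow' per path; the travel time t(x) is minimal at the
# least-time crossing point x = 0 (the toy numbers are arbitrary).
x = np.linspace(-1.0, 1.0, 2001)       # where each path crosses the surface
t = 1.0 + 5.0 * x**2                   # toy travel times, minimum at x = 0
arrows = np.exp(1j * 50.0 * t)         # the stopwatch hand for each path

total = arrows.sum()
fringe = arrows[np.abs(x) > 0.5].sum()  # paths far from the least-time one

print(abs(total))    # large: arrows near x = 0 all point the same way
print(abs(fringe))   # small: the far-out arrows turn fast and cancel in pairs
```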
[As for the calculation of the path of least time: that has to do with the fact that light is slowed down in water. Feynman shows why in his 1985 Lectures on QED, but I cannot possibly copy the whole book here! The principle is illustrated below.]
Least time principle
So, where are we? These digressions go on and on, don’t they? Let’s go back to the main story: we want to calculate P(A to B), remember?
As mentioned above, one of the first principles in QED is that all paths – in spacetime – are equally likely. So we need to add the amplitudes for every possible path in spacetime using that Feynman propagator function. You can imagine that will be some kind of integral which you’ll never want to solve. Fortunately, Feynman’s disciples have done that for you already. The result is quite predictable: light has a tendency to travel in straight lines and at the speed of light.
WHAT!? Did Feynman get a Nobel prize for trivial stuff like that?
Yes. The math involved in adding amplitudes over all possible paths, not only in space but also in time, uses the so-called path integral formulation of quantum mechanics, and that’s got Feynman’s signature on it. That’s the main reason why he got the award – together with Julian Schwinger and Sin-Itiro Tomonaga, both much less well known than Feynman, but they shared the burden. Don’t complain about it. Just take a look at the ‘mechanics’ of it.
We already mentioned that the propagator has the spacetime interval I in its denominator. Now, the way it works is that, for values of I equal or close to zero – so for the paths associated with light-like intervals – our propagator function will yield large contributions in the ‘same’ direction (wherever that direction is). But for spacetime intervals that are very much time- or space-like, the magnitude of our amplitude will be smaller and – worse – our arrow will point in the ‘wrong’ direction. In short, the arrows associated with the very time- and space-like intervals don’t add up to much, especially over longer distances. [When distances are short, there are (relatively) few arrows to add, and so the probability distribution will be flatter: in short, the likelihood of the actual photon traveling faster or slower than the speed of light is higher.]
Contribution interval
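[Again, just a cartoon of mine – the actual Feynman propagator is more subtle than a bare 1/I – but it shows why the near-light-like intervals dominate:]

```python
import numpy as np

dt = 1.0                                 # fixed time separation (c = 1)
dr = np.linspace(0.0, 2.0, 401)          # range of spatial separations
I = dr**2 - dt**2                        # space-like (+++-) convention
contribution = 1.0 / (I + 1j * 1e-3)     # a 1/I-style weight (cartoon only;
                                         # the small imaginary bit keeps it finite)

# The magnitude peaks sharply where the interval is light-like (dr = dt):
print(dr[np.argmax(np.abs(contribution))])   # 1.0
```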
Does this make sense? I am not sure, but I did what I promised to do: I told you how P(A to B) gets calculated, and from the formula for E(A to B) it is obvious that we can then also calculate E(A to B), provided we have a value for n. However, that value of n is determined experimentally, just like the value of j, in order to ensure this amplitude theory yields probabilities that match the probabilities we observe in all kinds of crazy experiments that try to prove or disprove the theory. And then we can use these three amplitude formulas “to make the whole world”, as Feynman calls it – except for the stuff that goes on inside of nuclei (because that’s the domain of the weak and strong nuclear forces) and gravitation, for which we have a law (Newton’s Law) but no real ‘explanation’. [Now, you may wonder if this QED explanation of light is really all that good, but Mr Feynman thinks it is, and so I have no reason to doubt that – especially because, as far as I know, there’s surely nothing more convincing lying around.]
So what remains to be told? Lots of things, even within the realm of quantum electrodynamics proper. Indeed, Feynman applies the basics described above to a number of real-life phenomena – all of it quite interesting! – but, once again, it’s not my goal to copy all of his Lectures here. [I am only hoping to offer some good summaries of key points, in an attempt to convince myself that I am getting at least some of it.] And then there is the strong force, and the weak force, and the Higgs field, and so on and so forth. But that’s all very strange new territory which I haven’t even started to explore. I’ll keep you posted as I make my way towards it.
Post scriptum: On the values of j and n
In this post, I promised I would write something about how we can find j and n. I won’t really do that, because it would just amount to copying three or four pages out of that book I mentioned above, which inspired most of this post. Let me just say something more about that remarkable book, and then quote a few lines on what its author – the great Mr Feynman! – thinks of the math behind calculating these two constants (the coupling constant j, and the ‘rest mass’ of an ‘ideal’ electron). Before I do that, I should repeat that he actually invented that math (it makes use of a mathematical approximation method called perturbation theory) and that he got a Nobel Prize for it.
First, about the book. Feynman’s 1985 Lectures on Quantum Electrodynamics are not like his 1965 Lectures on Physics. The Lectures on Physics are proper courses for undergraduate and even graduate students in physics. This little 1985 book on QED is just a series of four lectures for a lay audience, conceived in honor of Alix G. Mautner. She was a friend of Mr Feynman’s who died a few years before he gave and wrote these ‘lectures’ on QED. She had a degree in English literature and would ask Mr Feynman regularly to explain quantum mechanics and quantum electrodynamics in a way she would understand. While they had known each other for about 22 years, he had apparently never taken enough time to do so, as he writes in his Introduction to these Alix G. Mautner Memorial Lectures: “So here are the lectures I really [should have] prepared for Alix, but unfortunately I can’t tell them to her directly, now.”
The great Richard Phillips Feynman himself died only three years later, in February 1988 – not of one but of two rare forms of cancer. He was only 69 years old when he died. I don’t know if he was aware of the cancer(s) that would kill him, but I find his fourth and last lecture in the book, Loose Ends, just fascinating. Here we have a brilliant mind deprecating the math that earned him a Nobel Prize and without which the Standard Model would be unintelligible. I won’t try to paraphrase him. Let me just quote him. [If you want to check the quotes, the relevant pages are 125 to 131.]
[The math behind calculating these constants] is a “dippy process” and “having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent”. He adds: “It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization [“the shell game that we play to find n and j”, as he calls it] is not mathematically legitimate.” […] Now, Mr Feynman writes this about quantum electrodynamics only, not about “the rest of physics” (that’s quantum chromodynamics (QCD) – the theory of the strong interactions – and quantum flavordynamics (QFD) – the theory of the weak interactions), which, he adds, “has not been checked anywhere near as well as electrodynamics.”
That’s a pretty damning statement, isn’t it? In one of my other posts (see: The End of the Road to Reality?), I explore these comments a bit further. However, I have to admit I feel I really need to get back to math in order to appreciate these remarks. I’ve written way too much about physics now anyway (as opposed to my first dozen posts – which were much more math-oriented). So I’ll just have a look at some more of that stuff (such as perturbation theory), and then I’ll get back to blogging. Indeed, I’ve written some 20 posts in a few months only – so I guess I should shut up for a while now!
In the meanwhile, you’re more than welcome to comment, of course! |
31e7cdc749ccf429 | Astrophysics Source Code Library
Making codes discoverable since 1999
Browsing Codes
Results 1-2030 of 2030 (2002 ASCL, 28 submitted)
[ascl:1505.015] 2dfdr: Data reduction software
[ascl:1808.007] 2DSF: Vectorized Structure Function Algorithm
[ascl:1303.016] 2MASS Kit: 2MASS Catalog Server Kit
[submitted] 3D texturized model of MARS (MOLA) regions
[ascl:1804.018] 3DView: Space physics data visualizer
[ascl:1708.020] 4DAO: DAOSPEC interface
[ascl:1312.011] A_phot: Photon Asymmetry
[ascl:1110.009] AAOGlimpse: Three-dimensional Data Viewer
[ascl:1401.007] abundance: High Redshift Cluster Abundance
[submitted] Accretion Disk Radial Structure Models
A collection of radial structure models of various accretion disk solutions. Each model implements a common interface that gives the radial dependence of selected geometrical, physical and thermodynamic quantities of the accretion flow.
[ascl:1302.003] ACS: ALMA Common Software
[ascl:1502.004] ADAM: All-Data Asteroid Modeling
[ascl:1305.004] AdaptaHOP: Subclump finder
[ascl:1609.024] AdaptiveBin: Adaptive Binning
[ascl:1203.001] AE: ACIS Extract
[ascl:1812.004] aesop: ARC Echelle Spectroscopic Observation Pipeline
[ascl:1102.009] AHF: Amiga's Halo Finder
Cosmological simulations are the key tool for investigating the different processes involved in the formation of the universe from small initial density perturbations to galaxies and clusters of galaxies observed today. The identification and analysis of bound objects, halos, is one of the most important steps in drawing useful physical information from simulations. In the advent of larger and larger simulations, a reliable and parallel halo finder, able to cope with the ever-increasing data files, is a must. In this work we present the freely available MPI parallel halo finder AHF. We provide a description of the algorithm and the strategy followed to handle large simulation data. We also describe the parameters a user may choose in order to influence the process of halo finding, as well as pointing out which parameters are crucial to ensure untainted results from the parallel approach. Furthermore, we demonstrate the ability of AHF to scale to high-resolution simulations.
[ascl:1310.003] AIDA: Adaptive Image Deconvolution Algorithm
[ascl:9911.003] AIPS: Astronomical Image Processing System
[ascl:1609.012] AIPY: Astronomical Interferometry in PYthon
AIPY collects together tools for radio astronomical interferometry. In addition to pure-python phasing, calibration, imaging, and deconvolution code, this package includes interfaces to MIRIAD (ascl:1106.007) and HEALPix (ascl:1107.018), and math/fitting routines from SciPy.
[ascl:1107.006] AIRES: AIRshower Extended Simulations
[ascl:1310.004] AIRY: Astronomical Image Restoration in interferometrY
[ascl:1402.005] Aladin Lite: Lightweight sky atlas for browsers
Aladin Lite is a lightweight version of the Aladin tool, running in the browser and geared towards simple visualization of a sky region. It allows visualization of image surveys (JPEG multi-resolution HEALPix all-sky surveys) and permits superimposing tabular (VOTable) and footprint (STC-S) data. Aladin Lite is powered by HTML5 canvas technology, is easily embeddable on any web page, and can also be controlled through a JavaScript API.
[ascl:1112.019] Aladin: Interactive Sky Atlas
[ascl:1708.008] ALCHEMIC: Advanced time-dependent chemical kinetics
ALCHEMIC solves chemical kinetics problems, including gas-grain interactions, surface reactions, deuterium fractionization, and transport phenomena and can model the time-dependent chemical evolution of molecular clouds, hot cores, corinos, and protoplanetary disks.
[ascl:1512.005] ALFA: Automated Line Fitting Algorithm
ALFA fits emission line spectra of arbitrary wavelength coverage and resolution, fully automatically. It uses a catalog of lines which may be present to construct synthetic spectra, the parameters of which are then optimized by means of a genetic algorithm. Uncertainties are estimated using the noise structure of the residuals. An emission line spectrum containing several hundred lines can be fitted in a few seconds using a single processor of a typical contemporary desktop or laptop PC. Data cubes in FITS format can be analysed using multiple processors, and an analysis of tens of thousands of deep spectra obtained with instruments such as MUSE will take a few hours.
[ascl:1804.021] allantools: Allan deviation calculation
[ascl:1106.001] AlterBBN: A program for calculating the BBN abundances of the elements in alternative cosmologies
AlterBBN evaluates the abundances of the elements generated by Big-Bang nucleosynthesis (BBN). This program computes the abundances of the elements in the standard model of cosmology and allows the user to alter the assumptions of the cosmological model to study their consequences on the abundances of the elements. In particular the baryon-to-photon ratio and the effective number of neutrinos, as well as the expansion rate and the entropy content of the Universe during BBN can be modified in AlterBBN. Such features allow the user to test the cosmological models by confronting them to BBN constraints.
[ascl:1503.006] AMADA: Analysis of Multidimensional Astronomical DAtasets
[ascl:1010.003] AMBER: Data Reduction Software
[ascl:1404.007] AMBIG: Automated Ambiguity-Resolution Code
[ascl:1007.006] AMIGA: Adaptive Mesh Investigations of Galaxy Assembly
AMIGA is a publicly available adaptive mesh refinement code for (dissipationless) cosmological simulations. It combines an N-body code with an Eulerian grid-based solver for the full set of magnetohydrodynamics (MHD) equations in order to conduct simulations of dark matter, baryons and magnetic fields in a self-consistent way in a fully cosmological setting. Our numerical scheme includes effective methods to ensure proper capturing of shocks and highly supersonic flows and a divergence-free magnetic field. The high accuracy of the code is demonstrated by a number of numerical tests.
[ascl:1502.017] AMIsurvey: Calibration and imaging pipeline for radio data
AMIsurvey is a fully automated calibration and imaging pipeline for data from the AMI-LA radio observatory; it has two key dependencies. The first is drive-ami, included in this entry. Drive-ami is a Python interface to the specialized AMI-REDUCE calibration pipeline, which applies path delay corrections, automatic flags for interference, pointing errors, shadowing and hardware faults, applies phase and amplitude calibrations, Fourier transforms the data into the frequency domain, and writes out the resulting data in uvFITS format. The second is chimenea, which implements an automated imaging algorithm to convert the calibrated uvFITS into science-ready image maps. AMIsurvey links the calibration and imaging stages implemented within these packages together, configures the chimenea algorithm with parameters appropriate to data from AMI-LA, and provides a command-line interface.
[ascl:1107.007] AMUSE: Astrophysical Multipurpose Software Environment
AMUSE is an open source software framework for large-scale simulations in astrophysics, in which existing codes for gravitational dynamics, stellar evolution, hydrodynamics and radiative transport can be easily coupled and placed in the appropriate observational context.
[ascl:1708.028] ANA: Astrophysical Neutrino Anisotropy
[ascl:1402.019] ANAigm: Analytic model for attenuation by the intergalactic medium
ANAigm offers an updated version of the Madau model for the attenuation by intergalactic neutral hydrogen of the radiation from distant objects. This new model is written in Fortran90 and, at some redshifts, predicts attenuation magnitudes through the usual broad-band filters that differ by more than 0.5–1 mag from the original Madau model.
[ascl:1807.012] AngPow: Fast computation of accurate tomographic power spectra
AngPow computes the auto (z1 = z2) and cross (z1 ≠ z2) angular power spectra between redshift bins (i.e. Cℓ(z1,z2)). The developed algorithm is based on developments on the Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. AngPow is flexible and can handle any user-defined power spectra, transfer functions, bias functions, and redshift selection windows. The code is fast enough to be embedded inside programs exploring large cosmological parameter spaces through the Cℓ(z1,z2) comparison with data.
[ascl:9909.002] ANGSIZ: A general and practical method for calculating cosmological distances
The calculation of distances is of fundamental importance in extragalactic astronomy and cosmology. However, no practical implementation for the general case has previously been available. We derive a second-order differential equation for the angular size distance valid not only in all homogeneous Friedmann-Lemaitre cosmological models, parametrised by $\lambda_0$ and $\Omega_0$, but also in inhomogeneous 'on-average' Friedmann-Lemaitre models, where the inhomogeneity is given by the (in the general case redshift-dependent) parameter $\eta$. Since most other distances can be obtained trivially from the angular size distance, and since the differential equation can be efficiently solved numerically, this offers for the first time a practical method for calculating distances in a large class of cosmological models. We also briefly discuss our numerical implementation, which is publicly available.
[ascl:1411.019] Anmap: Image and data analysis
Anmap analyses and processes images and spectral data. Originally written for use in radio astronomy, much of its functionality is applicable to other disciplines; additional algorithms and analysis procedures allow direct use in, for example, NMR imaging and spectroscopy. Anmap emphasizes the analysis of data to extract quantitative results for comparison with theoretical models and/or other experimental data. To achieve this, Anmap provides a wide range of tools for analysis, fitting and modelling (including standard image and data processing algorithms). It also provides a powerful environment for users to develop their own analysis/processing tools either by combining existing algorithms and facilities with the very powerful command (scripting) language or by writing new routines in FORTRAN that integrate seamlessly with the rest of Anmap.
[ascl:1802.008] AntiparticleDM: Discriminating between Majorana and Dirac Dark Matter
AntiparticleDM calculates the prospects of future direct detection experiments to discriminate between Majorana and Dirac Dark Matter (i.e., to determine whether Dark Matter is its own antiparticle). Direct detection event rates and mock data generation are dealt with by a variation of the WIMpy code.
[ascl:1010.017] AOFlagger: RFI Software
The radio frequency interference code AOFlagger automatically flags data and can be used to analyze the data in a measurement. The purpose of flagging is to mark samples that are affected by interfering sources such as radio stations, airplanes, electrical fences or other transmitting interferers.
[ascl:1208.017] APLpy: Astronomical Plotting Library in Python
[ascl:1308.005] APPSPACK: Asynchronous Parallel Pattern Search
APPSPACK is serial or parallel, derivative-free optimization software for solving nonlinear unconstrained, bound-constrained, and linearly-constrained optimization problems, with possibly noisy and expensive objective functions.
[ascl:1408.021] APS: Active Parameter Searching
[ascl:1208.003] APT: Aperture Photometry Tool
[ascl:1007.005] Arcetri Spectral Code for Thin Plasmas
[ascl:1107.011] ARCHANGEL: Galaxy Photometry System
[ascl:1205.009] ARES: Automatic Routine for line Equivalent widths in stellar Spectra
ARES was developed for the measurement of the equivalent widths of absorption lines in stellar spectra; it can also be used to determine fundamental spectroscopic stellar parameters. The code reads a 1D FITS spectrum and fits the requested lines in order to calculate the equivalent width. The code is written in C++ based on the standard method of determining EWs. It automates the manual procedure that one normally carries out when using interactive routines such as the splot routine implemented in IRAF.
[ascl:1505.005] ARoME: Analytical Rossiter-McLaughlin Effects
[ascl:1311.010] ARPACK: Solving large scale eigenvalue problems
ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w <- Av requires order n rather than the usual order n² floating point operations. This software is based upon an algorithmic variant of the Arnoldi process called the Implicitly Restarted Arnoldi Method (IRAM). When the matrix A is symmetric it reduces to a variant of the Lanczos process called the Implicitly Restarted Lanczos Method (IRLM). These variants may be viewed as a synthesis of the Arnoldi/Lanczos process with the Implicitly Shifted QR technique that is suitable for large scale problems. For many standard problems, a matrix factorization is not required; only the action of the matrix on a vector is needed. ARPACK is capable of solving large scale symmetric, nonsymmetric, and generalized eigenproblems from significant application areas.
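(Not part of the ASCL entry, but perhaps useful: SciPy's sparse eigensolvers wrap ARPACK, so a minimal sketch of an IRLM shift-invert call for a symmetric matrix looks something like this:)

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh   # symmetric driver; wraps ARPACK's IRLM

n = 10000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))   # sparse 1D Laplacian

# Six eigenvalues nearest zero via shift-invert mode.
vals, vecs = eigsh(A, k=6, sigma=0.0, which='LM')
print(vals)   # should approach 2 - 2*cos(k*pi/(n+1)) for k = 1..6
```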
[ascl:1802.004] ARTIP: Automated Radio Telescope Image Processing Pipeline
The Automated Radio Telescope Image Processing Pipeline (ARTIP) automates the entire process of flagging, calibrating, and imaging for radio-interferometric data. ARTIP starts with raw data, i.e. a measurement set and goes through multiple stages, such as flux calibration, bandpass calibration, phase calibration, and imaging to generate continuum and spectral line images. Each stage can also be run independently. The pipeline provides continuous feedback to the user through various messages, charts and logs. It is written using standard python libraries and the CASA package. The pipeline can deal with datasets with multiple spectral windows and also multiple target sources which may have arbitrary combinations of flux/bandpass/phase calibrators.
[ascl:1204.016] ASCfit: Automatic Stellar Coordinate Fitting Package
A modular software package for automatically fitting astrometric world coordinates (WCS) onto raw optical or infrared FITS images. Image stars are identified with stars in a reference catalog (USNO-A2 or 2MASS), and coordinates derived as a simple linear transformation from (X,Y) pixels to (RA,DEC) to the accuracy level of the reference catalog used. The package works with both optical and infrared images, at sidereal and non-sidereal tracking rates.
[ascl:1804.001] ASERA: A Spectrum Eye Recognition Assistant
[ascl:1603.009] Asfgrid: Asteroseismic parameters for a star
asfgrid computes asteroseismic parameters for a star with given stellar parameters and vice versa. Written in Python, it determines delta_nu, nu_max or masses via interpolation over a grid.
[ascl:1609.020] Askaryan Module: Askaryan electric fields predictor
The Askaryan Module is a C++ class that predicts the electric fields that Askaryan-based detectors detect; it is computationally efficient and accurate, performing fully analytic calculations requiring no a priori MC analysis to compute the entire field, for any frequencies, times, or viewing angles chosen by the user.
[ascl:1807.030] ASP: Ames Stereo Pipeline
ASP (Ames Stereo Pipeline) provides fully automated geodesy and stereogrammetry tools for processing stereo imagery captured from satellites (around Earth and other planets), robotic rovers, aerial cameras, and historical imagery, with and without accurate camera pose information. It produces cartographic products, including digital elevation models (DEMs), ortho-projected imagery, 3D models, and bundle-adjusted networks of cameras. ASP's data products are suitable for science analysis, mission planning, and public outreach.
[ascl:1112.017] ASpec: Astronomical Spectrum Analysis Package
ASpec is a spectrum and line analysis package developed at STScI. ASpec is designed as an add-on package for IRAF and incorporates a variety of analysis techniques for astronomical spectra. ASpec operates on spectra from a wide variety of ground-based and space-based instruments and allows simultaneous handling of spectra from different wavelength regimes. The package accommodates non-linear dispersion relations and provides a variety of functions, individually or in combination, with which to fit spectral features and the continuum. It also permits the masking of known bad data. ASpec provides a powerful, intuitive graphical user interface implemented using the IRAF Object Manager and customized to handle: data input/output (I/O); on-line help; selection of relevant features for analysis; plotting and graphical interaction; and data base management.
[ascl:1209.015] Aspects: Probabilistic/positional association of catalogs of sources
Given two catalogs K and K' of n and n' astrophysical sources, respectively, Aspects (Association positionnelle/probabiliste de catalogues de sources) computes, for any objects Mi ∈ K and M'j ∈ K', the probability that M'j is a counterpart of Mi, i.e. that they are the same source. To determine this probability of association, the code takes into account the coordinates and the positional uncertainties of all the objects. Aspects also computes the probability P(Ai,0 | C ∩ C') that Mi has no counterpart.
Aspects is written in Fortran 95; the required Fortran 90 Numerical Recipes routines used in version 1.0 have been replaced with free equivalents in version 2.0.
[ascl:1510.006] ASPIC: STARLINK image processing package
[ascl:1903.011] AsPy: Aspherical fluctuations on the spherical collapse background
AsPy computes the determinants of aspherical fluctuations on the spherical collapse background. Written in Python, this procedure includes analytic factorization and cancellation of the so-called `IR-divergences'—spurious enhanced contributions that appear in the dipole sector and are associated with large bulk flows.
[ascl:1404.016] AST: World Coordinate Systems in Astronomy
[ascl:1505.002] ASteCA: Automated Stellar Cluster Analysis
ASteCA (Automated Stellar Cluster Analysis), written in Python, fully automates standard tests applied on star clusters in order to determine their characteristics, including center, radius, and stars' membership probabilities. It also determines associated intrinsic/extrinsic parameters, including metallicity, age, reddening, distance, total mass, and binarity fraction, among others.
[ascl:1403.023] ASTERIX: X-ray Data Processing System
ASTERIX is a general purpose X-ray data reduction package optimized for ROSAT data reduction. ASTERIX uses the Starlink software environment (ascl:1110.012).
[ascl:1607.016] astLib: Tools for research astronomers
[ascl:1907.032] Astro-SCRAPPY: Speedy Cosmic Ray Annihilation Package in Python
Astro-SCRAPPY detects cosmic rays in images (numpy arrays), based on Pieter van Dokkum's L.A.Cosmic algorithm and originally adapted from code written by Malte Tewes. This implementation is optimized for speed, resulting in slight differences from the original code, such as automatic recognition of saturated stars (rather than treating such stars as large cosmic rays) and use of a separable median filter instead of the true median filter. Astro-SCRAPPY is an AstroPy (ascl:1304.002) affiliated package.
[ascl:1705.016] astroABC: Approximate Bayesian Computation Sequential Monte Carlo sampler
astroABC is a Python implementation of an Approximate Bayesian Computation Sequential Monte Carlo (ABC SMC) sampler for parameter estimation. astroABC allows for massive parallelization using MPI, a framework that handles spawning of processes across multiple nodes. It has the ability to create MPI groups with different communicators, one for the sampler and several others for the forward model simulation, which speeds up sampling time considerably. For smaller jobs the Python multiprocessing option is also available.
[ascl:1906.001] Astroalign: Asterism-matching alignment of astronomical images
Astroalign tries to register (align) two stellar astronomical images, especially when there is no WCS information available. It does so by finding similar 3-point asterisms (triangles) in both images and deducing the affine transformation between them. Generic registration routines try to match feature points, using corner detection routines to make the point correspondence. These generally fail for stellar astronomical images since stars have very little stable structure so are, in general, indistinguishable from each other. Asterism matching is more robust and closer to the human way of matching stellar images. Astroalign can match images of very different field of view, point-spread function, seeing and atmospheric conditions. It may require special care or may not work on images of extended objects with few point-like sources or in crowded fields.
[ascl:1311.003] AstroAsciiData: ASCII table Python module
ASCII tables continue to be one of the most popular and widely used data exchange formats in astronomy. AstroAsciiData, written in Python, imports all reasonably well-formed ASCII tables. It retains formatting of data values, allows column-first access, supports SExtractor style headings, performs column sorting, and exports data to other formats, including FITS, Numpy/Numarray, and LaTeX table format. It also offers interchangeable comment character, column delimiter and null value.
[ascl:1104.002] AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics
AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinement (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries.
AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation of state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell.
[ascl:1512.007] AstroBlend: Visualization package for use with Blender
AstroBlend is a visualization package for use in the three dimensional animation and modeling software, Blender. It reads data in via a text file or can use pre-fab isosurface files stored as OBJ or Wavefront files. AstroBlend supports a variety of codes such as FLASH (ascl:1010.082), Enzo (ascl:1010.072), and Athena (ascl:1010.014), and combines artistic 3D models with computational astrophysics datasets to create models and animations.
[ascl:1507.010] Astrochem: Abundances of chemical species in the interstellar medium
Astrochem computes the abundances of chemical species in the interstellar medium as a function of time. It studies the chemistry in a variety of astronomical objects, including diffuse clouds, dense clouds, photodissociation regions, prestellar cores, protostars, and protostellar disks. Astrochem reads a network of chemical reactions from a text file, builds up a system of kinetic rate equations, and solves it using a state-of-the-art stiff ordinary differential equation (ODE) solver. The Jacobian matrix of the system is computed implicitly, so the resolution of the system is extremely fast: large networks containing several thousands of reactions are usually solved in a few seconds. A variety of gas phase processes are considered, as well as simple gas-grain interactions, such as freeze-out and desorption via several mechanisms (thermal desorption, cosmic-ray desorption and photo-desorption). The computed abundances are written in an HDF5 file, and can be plotted in different ways with the tools provided with Astrochem. Chemical reactions and their rates are written in a format which is meant to be easy to read and to edit. A tool to convert the chemical networks from the OSU and KIDA databases into this format is also provided. Astrochem is written in C, and its source code is distributed under the terms of the GNU General Public License (GPL).
[ascl:1804.004] AstroCV: Astronomy computer vision library
AstroCV processes and analyzes big astronomical datasets, and is intended to provide a community repository of high performance Python and C++ algorithms used for image processing and computer vision. The library offers methods for object recognition, segmentation and classification, with emphasis in the automatic detection and classification of galaxies.
[ascl:1907.016] astrodendro: Astronomical data dendrogram creator
Astrodendro, written in Python, creates dendrograms for exploring and displaying hierarchical structures in observed or simulated astronomical data. It handles noisy data by allowing specification of the minimum height of a structure and the minimum number of pixels needed for an independent structure. Astrodendro allows interactive viewing of computed dendrograms and can also produce publication-quality plots with the non-interactive plotting interface.
[ascl:1010.013] AstroGK: Astrophysical Gyrokinetics Code
[ascl:1309.001] AstroImageJ: ImageJ for Astronomy
AstroImageJ is generic ImageJ (ascl:1206.013) with customizations to the base code and a packaged set of astronomy specific plugins. It reads and writes FITS images with standard headers, displays astronomical coordinates for images with WCS, supports photometry for developing color-magnitude data, offers flat field, scaled dark, and non-linearity processing, and includes tools for precision photometry that can be used during real-time data acquisition.
[ascl:1502.022] AstroLines: Astrophysical line list generator in the H-band
AstroLines adjusts spectral line parameters (gf and damping constant) starting from an initial line list. Written in IDL and tailored to the APO Galactic Evolution Experiment (APOGEE), it runs a slightly modified version of MOOG (ascl:1202.009) to compare synthetic spectra with FTS spectra of the Sun and Arcturus.
[ascl:1406.008] ASTROM: Basic astrometry program
[ascl:1203.012] Astrometrica: Astrometric data reduction of CCD images
Astrometrica is an interactive software tool for scientific grade astrometric data reduction of CCD images. The current version of the software is for the Windows 32bit operating system family. Astrometrica reads FITS (8, 16 and 32 bit integer files) and SBIG image files. The size of the images is limited only by available memory. It also offers automatic image calibration (Dark Frame and Flat Field correction), automatic reference star identification, automatic moving object detection and identification, and access to new-generation star catalogs (PPMXL, UCAC 3 and CMC-14), in addition to online help and other features. Astrometrica is shareware, available for use for a limited period of time (100 days) for free; special arrangements can be made for educational projects.
[ascl:1407.018] AstroML: Machine learning and data mining in astronomy
Written in Python, AstroML is a library of statistical and machine learning routines for analyzing astronomical data in python, loaders for several open astronomical datasets, and a large suite of examples of analyzing and visualizing astronomical datasets. An optional companion library, astroML_addons, is available; it requires a C compiler and contains faster and more efficient implementations of certain algorithms in compiled code.
[ascl:1802.009] astroplan: Observation planning package for astronomers
[ascl:1402.003] astroplotlib: Astronomical library of plots
[ascl:1805.024] ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline
[ascl:1304.002] Astropy: Community Python library for astronomy
[ascl:1207.007] Astropysics: Astrophysics utilities for python
[ascl:1407.007] ASTRORAY: General relativistic polarized radiative transfer code
ASTRORAY employs a method of ray tracing and performs polarized radiative transfer of (cyclo-)synchrotron radiation. The radiative transfer is conducted in curved space-time near rotating black holes described by the Kerr-Schild metric. Three-dimensional general relativistic magnetohydrodynamic (3D GRMHD) simulations, in particular performed with variations of the HARM code, serve as an input to ASTRORAY. The code has been applied to reproduce the sub-mm synchrotron bump in the spectrum of Sgr A*, and to test the detectability of quasi-periodic oscillations in its light curve. ASTRORAY can be readily applied to model radio/sub-mm polarized spectra of jets and cores of other low-luminosity active galactic nuclei. For example, ASTRORAY is uniquely suitable to self-consistently model Faraday rotation measure and circular polarization fraction in jets.
[ascl:1010.023] AstroSim: Collaborative Visualization of an Astrophysics Simulation in Second Life
AstroSim is a Second Life based prototype application for synchronous collaborative visualization targeted at astronomers.
[ascl:1507.019] AstroStat: Statistical analysis tool
AstroStat performs statistical analysis on data and is compatible with Virtual Observatory (VO) standards. It accepts data in a variety of formats and performs various statistical tests using a menu driven interface. Analyses, performed in R, include exploratory tests, visualizations, distribution fitting, correlation and causation, hypothesis testing, multivariate analysis and clustering. AstroStat is available in two versions with an identical interface and features: as a web service that can be run using any standard browser and as an offline application.
[ascl:1608.005] AstroVis: Visualizing astronomical data cubes
[ascl:1406.001] ASURV: Astronomical SURVival Statistics
ASURV (Astronomical SURVival Statistics) provides astronomy survival analysis for right- and left-censored data including the maximum-likelihood Kaplan-Meier estimator and several univariate two-sample tests, bivariate correlation measures, and linear regressions. ASURV is written in FORTRAN 77, and is stand-alone and does not call any specialized libraries.
[ascl:1402.026] athena: Tree code for second-order correlation functions
athena is a 2d-tree code that estimates second-order correlation functions from input galaxy catalogues. These include shear-shear correlations (cosmic shear), position-shear (galaxy-galaxy lensing) and position-position (spatial angular correlation). Written in C, it includes a power-spectrum estimator implemented in Python; this script also calculates the aperture-mass dispersion. A test data set is available.
[ascl:1505.006] Athena3D: Flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics
Written in FORTRAN, Athena3D, based on Athena (ascl:1010.014), is an implementation of a flux-conservative Godunov-type algorithm for compressible magnetohydrodynamics. Features of the Athena3D code include compressible hydrodynamics and ideal MHD in one, two or three spatial dimensions in Cartesian coordinates; adiabatic and isothermal equations of state; 1st, 2nd or 3rd order reconstruction using the characteristic variables; and numerical fluxes computed using the Roe scheme. In addition, it offers the ability to add source terms to the equations and is parallelized based on MPI.
[ascl:1110.015] atlant: Advanced Three Level Approximation for Numerical Treatment of Cosmological Recombination
atlant is a public numerical code for fast calculations of the cosmological recombination of primordial hydrogen-helium plasma. This code is based on the three-level approximation (TLA) model of recombination and allows some "fine" physical effects of cosmological recombination to be taken into account simultaneously with the use of fudge factors.
[ascl:1303.024] ATLAS12: Opacity sampling model atmosphere program
ATLAS12 is an opacity sampling model atmosphere program to allow computation of models with individual abundances using line data. ATLAS12 is able to compute the same models as ATLAS9 which uses pretabulated opacities, plus models with arbitrary abundances. ATLAS12 sampled fluxes are quite accurate for predicting the total flux except in the intermediate or narrow bandpass intervals because the sample size is too small.
[ascl:1607.003] Atlas2bgeneral: Two-body resonance calculator
[ascl:1607.004] Atlas3bgeneral: Three-body resonance calculator
[ascl:1710.017] ATLAS9: Model atmosphere program with opacity distribution functions
ATLAS9 computes model atmospheres using a fixed set of pretabulated opacities, allowing one to work on huge numbers of stars and interpolate in large grids of models to determine parameters quickly. The code works with two different sets of opacity distribution functions (ODFs), one with “big” wavelength intervals covering the whole spectrum and the other with 1221 “little” wavelength intervals covering the whole spectrum. The ODFs use a 12-step representation; the radiation field is computed starting with the highest step and working down. If a lower step does not matter because the line opacity is small relative to the continuum at all depths, all the lower steps are lumped together and not computed to save time.
[ascl:1703.013] Atmospheric Athena: 3D Atmospheric escape model with ionizing radiative transfer
Atmospheric Athena simulates hydrodynamic escape from close-in giant planets in 3D. It uses the Athena hydrodynamics code (ascl:1010.014) with a new ionizing radiative transfer implementation to self-consistently model photoionization driven winds from the planet. The code is fully compatible with static mesh refinement and MPI parallelization and can handle arbitrary planet potentials and stellar initial conditions.
[ascl:1708.001] ATOOLS: A command line interface to the AST library
The ATOOLS package of applications provides an interface to the AST library (ascl:1404.016), allowing quick experiments to be performed from the shell. It manipulates descriptions of coordinate frames and mappings in the form of AST objects and performs other functions, with each application within the package corresponding closely to one of the functions in the AST library.
[ascl:1405.009] ATV: Image display tool
[ascl:1406.004] Autoastrom: Autoastrometry for Mosaics
[ascl:1904.007] AutoBayes: Automatic design of customized analysis algorithms and programs
AutoBayes automatically generates customized algorithms from compact, declarative specifications in the data analysis domain, taking a statistical model as input and creating documented and optimized C/C++ code. The synthesis process uses Bayesian networks to enable problem decompositions and guide the algorithm derivation. Program schemas encapsulate advanced algorithms and data structures, and a symbolic-algebraic system finds closed-form solutions for problems and emerging subproblems. AutoBayes has been used to analyze planetary nebulae images taken by the Hubble Space Telescope, and can be applied to other scientific data analysis tasks.
[ascl:1109.016] aXe: Spectral Extraction and Visualization Software
aXe is a spectroscopic data extraction software package that was designed to handle large format spectroscopic slitless images such as those from the Wide Field Camera 3 (WFC3) and the Advanced Camera for Surveys (ACS) on HST. aXe is a PyRAF/IRAF package that consists of several tasks and is distributed as part of the Space Telescope Data Analysis System (STSDAS). The various aXe tasks perform specific parts of the extraction and calibration process and are successively used to produce extracted spectra.
[ascl:1605.004] BACCHUS: Brussels Automatic Code for Characterizing High accUracy Spectra
BACCHUS (Brussels Automatic Code for Characterizing High accUracy Spectra) derives stellar parameters (Teff, log g, metallicity, microturbulence velocity and rotational velocity), equivalent widths, and abundances. The code includes on the fly spectrum synthesis, local continuum normalization, estimation of local S/N, automatic line masking, four methods for abundance determinations, and a flagging system aiding line selection. BACCHUS relies on the grid of MARCS model atmospheres, Masseron's model atmosphere thermodynamic structure interpolator, and the radiative transfer code Turbospectrum (ascl:1205.004).
[ascl:1708.010] BAGEMASS: Bayesian age and mass estimates for transiting planet host stars
BAGEMASS calculates the posterior probability distribution for the mass and age of a star from its observed mean density and other observable quantities using a grid of stellar models that densely samples the relevant parameter space. It is written in Fortran and requires FITSIO (ascl:1010.001).
[ascl:1312.008] BAMBI: Blind Accelerated Multimodal Bayesian Inference
BAMBI (Blind Accelerated Multimodal Bayesian Inference) is a Bayesian inference engine that combines the benefits of SkyNet (ascl:1312.007) with MultiNest (ascl:1109.006). It operates by simultaneously performing Bayesian inference using MultiNest and learning the likelihood function using SkyNet. Once SkyNet has learned the likelihood to sufficient accuracy, inference finishes almost instantaneously.
[ascl:1408.020] bamr: Bayesian analysis of mass and radius observations
bamr is an MPI implementation of a Bayesian analysis of neutron star mass and radius data that determines the mass versus radius curve and the equation of state of dense matter. Written in C++, bamr provides some EOS models. This code requires O2scl (ascl:1408.019) be installed before compilation.
[ascl:1905.014] Bandmerge: Merge data from different wavebands
Bandmerge takes in ASCII tables of positions and fluxes of detected astronomical sources in 2-7 different wavebands, and writes out a single table of the merged data. The tool was designed to work with source lists generated by the Spitzer Science Center's MOPEX software, although it can be "fooled" into running on other data as well.
[ascl:1801.001] BANYAN_Sigma: Bayesian classifier for members of young stellar associations
BANYAN_Sigma calculates the membership probability that a given astrophysical object belongs to one of the currently known 27 young associations within 150 pc of the Sun, using Bayesian inference. This tool uses the sky position and proper motion measurements of an object, with optional radial velocity (RV) and distance (D) measurements, to derive a Bayesian membership probability. By default, the priors are adjusted such that a probability threshold of 90% will recover 50%, 68%, 82% or 90% of true association members depending on what observables are input (only sky position and proper motion, with RV, with D, with both RV and D, respectively). The algorithm is implemented in a Python package, in IDL, and is also implemented as an interactive web page.
[ascl:1402.025] BAOlab: Baryon Acoustic Oscillations software
Using the 2-point correlation function, BAOlab aids the study of Baryon Acoustic Oscillations (BAO). The code generates a model-dependent covariance matrix which can change the results both for BAO detection and for parameter constraints.
[ascl:1403.013] BAOlab: Image processing program
BAOlab is an image processing package written in C that should run on nearly any UNIX system with just the standard C libraries. It reads and writes images in standard FITS format; 16- and 32-bit integer as well as 32-bit floating-point formats are supported. Multi-extension FITS files are currently not supported. Among its tools are ishape for size measurements of compact sources, mksynth for generating synthetic images consisting of a background signal including Poisson noise and a number of pointlike sources, imconvol for convolving two images (a “source” and a “kernel”) with each other using fast fourier transforms (FFTs) and storing the output as a new image, and kfit2d for fitting a two-dimensional King model to an image.
[ascl:1807.018] BARYCORR: Python interface for barycentric RV correction
BARYCORR is a Python interface for ZBARYCORR (ascl:1807.017); it requires the measured redshift and returns the corrected barycentric velocity and time correction.
[ascl:1808.001] Barycorrpy: Barycentric velocity calculation and leap second management
barycorrpy (BCPy) is a Python implementation of Wright and Eastman's 2014 code (ascl:1807.017) that calculates precise barycentric corrections well below the 1 cm/s level. This level of precision is required in the search for 1 Earth mass planets in the Habitable Zones of Sun-like stars by the Radial Velocity (RV) method, where the maximum semi-amplitude is about 9 cm/s. BCPy was developed for the pipeline for the next generation Doppler Spectrometers - Habitable-zone Planet Finder (HPF) and NEID. An automated leap second management routine improves upon the one available in Astropy. It checks for and downloads a new leap second file before converting from the UT time scale to TDB. The code also includes a converter for JDUTC to BJDTDB.
[ascl:1601.017] BASCS: Bayesian Separation of Close Sources
[ascl:1208.010] BASE: Bayesian Astrometric and Spectroscopic Exoplanet Detection and Characterization Tool
BASE is a novel program for the combined or separate Bayesian analysis of astrometric and radial-velocity measurements of potential exoplanet hosts and binary stars. The tool fulfills two major tasks of exoplanet science, namely the detection of exoplanets and the characterization of their orbits. BASE was developed to provide the possibility of an integrated Bayesian analysis of stellar astrometric and Doppler-spectroscopic measurements with respect to their binary or planetary companions’ signals, correctly treating the astrometric measurement uncertainties and allowing the user to explore the whole parameter space without the need for informative prior constraints. The tool automatically diagnoses convergence of its Markov chain Monte Carlo (MCMC) sampler to the posterior and regularly outputs status information. For orbit characterization, BASE delivers important results such as the probability densities and correlations of model parameters and derived quantities. BASE is a highly configurable command-line tool developed in Fortran 2008 and compiled with GFortran. Options can be used to control the program’s behaviour and supply information such as the stellar mass or prior information. Any option can be supplied in a configuration file and/or on the command line.
[ascl:1308.006] BASIN: Beowulf Analysis Symbolic INterface
BASIN (Beowulf Analysis Symbolic INterface) is a flexible, integrated suite of tools for multiuser parallel data analysis and visualization that allows researchers to harness the power of Beowulf PC clusters and multi-processor machines without necessarily being experts in parallel programming. It also includes general tools for data distribution and parallel operations on distributed data for developing libraries for specific tasks.
[ascl:1505.027] BAYES-X: Bayesian inference tool for the analysis of X-ray observations of galaxy clusters
The great majority of X-ray measurements of cluster masses in the literature assume parametrized functional forms for the radial distribution of two independent cluster thermodynamic properties, such as electron density and temperature, to model the X-ray surface brightness. These radial profiles (e.g. β-model) have an amplitude normalization parameter and two or more shape parameters. BAYES-X uses a cluster model to parametrize the radial X-ray surface brightness profile and explore the constraints on both model parameters and physical parameters. Bayes-X is programmed in Fortran and uses MultiNest (ascl:1109.006) as the Bayesian inference engine.
[ascl:1407.015] BayesFlare: Bayesian method for detecting stellar flares
BayesFlare identifies flaring events in light curves released by the Kepler mission; it identifies even weak events by making use of the flare signal shape. The package contains functions to perform Bayesian hypothesis testing comparing the probability of light curves containing flares to that of them containing noise (or non-flare-like) artifacts. BayesFlare includes functions in its amplitude-marginalizer suite to account for underlying sinusoidal variations in light curve data; it includes such variations in the signal model, and then analytically marginalizes over them.
[ascl:1209.001] Bayesian Blocks: Detecting and characterizing local variability in time series
Bayesian Blocks is a time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes within bursts, and generally characterizing intensity variations. The input is raw time series data, in almost any form. Three data modes are elaborated: (1) time-tagged events, (2) binned counts, and (3) measurements at arbitrary times with normal errors. The output is the most probable segmentation of the observation interval into sub-intervals during which the signal is perceptibly constant, i.e. has no statistically significant variations. The idea is not that the source is deemed to actually have this discontinuous, piecewise constant form, rather that such an approximate and generic model is often useful. Treatment of data gaps, variable exposure, extension to piecewise linear and piecewise exponential representations, multi-variate time series data, analysis of variance, data on the circle, other data modes, and dispersed data are included.
This implementation is exact and replaces the greedy, approximate, and outdated algorithm implemented in BLOCK.
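(As an aside, not part of the original entry: an implementation of Scargle's Bayesian Blocks algorithm also ships with Astropy (ascl:1304.002) as astropy.stats.bayesian_blocks; a minimal sketch for the time-tagged-event mode:)

```python
import numpy as np
from astropy.stats import bayesian_blocks

# Fake time-tagged events: a constant background plus a burst at t = 4..5.
rng = np.random.default_rng(0)
t = np.sort(np.concatenate([rng.uniform(0, 10, 50),    # background
                            rng.uniform(4, 5, 200)]))  # burst

# Most probable segmentation into intervals of constant rate.
edges = bayesian_blocks(t, fitness='events', p0=0.01)
print(edges)   # change points should bracket the burst near t = 4 and 5
```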
[ascl:1711.004] BayesVP: Full Bayesian Voigt profile fitting
BayesVP offers a Bayesian approach for modeling Voigt profiles in absorption spectroscopy. The code fits the absorption line profiles within specified wavelength ranges and generates posterior distributions for the column density, Doppler parameter, and redshifts of the corresponding absorbers. The code uses publicly available efficient parallel sampling packages to sample posterior and thus can be run on parallel platforms. BayesVP supports simultaneous fitting for multiple absorption components in high-dimensional parameter space. The package includes additional utilities such as explicit specification of priors of model parameters, continuum model, Bayesian model comparison criteria, and posterior sampling convergence check.
[ascl:1907.011] beamconv: Cosmic microwave background detector data simulator
beamconv simulates the scanning of the CMB sky while incorporating realistic beams and scan strategies. It uses (spin-)spherical harmonic representations of the (polarized) beam response and sky to generate simulated CMB detector signal timelines. Beams can be arbitrarily shaped. Pointing timelines can be read in or calculated on the fly; optionally, the results can be binned on the sphere.
[ascl:1905.006] beamModelTester: Model evaluation for fixed antenna phased array radio telescopes
beamModelTester enables evaluation of models of the variation in sensitivity and apparent polarization of fixed antenna phased array radio telescopes. The sensitivity of such instruments varies with respect to the orientation of the source to the antenna, resulting in variation in sensitivity over altitude and azimuth that is not consistent with respect to frequency due to other geometric effects. In addition, the different relative orientation of orthogonal pairs of linear antennae produces a difference in sensitivity between the antennae, leading to an artificial apparent polarization. Comparing the model with observations made using the given telescope makes it possible to evaluate the model's performance; the results of this evaluation can provide a figure of merit for the model and guide improvements to it. This system also enables plotting of results from a single station observation on a variety of parameters.
[ascl:1104.013] BEARCLAW: Boundary Embedded Adaptive Refinement Conservation LAW package
The BEARCLAW package is a multidimensional, Eulerian AMR-capable computational code written in Fortran to solve hyperbolic systems for astrophysical applications. It is part of AstroBEAR, a hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications which allows simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates.
[ascl:1306.006] BEHR: Bayesian Estimation of Hardness Ratios
BEHR is a standalone command-line C program designed to quickly estimate the hardness ratios and their uncertainties for astrophysical sources. It is especially useful in the Poisson regime of low counts, and computes the proper uncertainty regardless of whether the source is detected in both passbands or not.
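For intuition about why a Bayesian treatment helps at low counts, here is a minimal sketch (not BEHR itself, which also handles background counts and effective-area corrections): with a Jeffreys prior, the Poisson rate in each band has a Gamma(counts + 1/2) posterior, and the hardness ratio distribution follows by sampling. The resulting posterior remains well defined even when one band has zero detected counts:

    import numpy as np

    def hardness_ratio_samples(soft, hard, n=100000, seed=0):
        # Posterior draws of the fractional hardness (H - S)/(H + S) for
        # Poisson counts; background and exposure terms are omitted here.
        rng = np.random.default_rng(seed)
        lam_s = rng.gamma(soft + 0.5, 1.0, n)
        lam_h = rng.gamma(hard + 0.5, 1.0, n)
        return (lam_h - lam_s) / (lam_h + lam_s)

    hr = hardness_ratio_samples(soft=3, hard=0)   # valid at zero counts
    print(np.median(hr), np.percentile(hr, [16, 84]))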
[submitted] BELLAMY: A cross-matching package for the cynical astronomer
BELLAMY is a cross-matching algorithm designed primarily for radio images that aims to match all sources in the supplied target catalogue to sources in a reference catalogue by calculating the probability of a match. BELLAMY utilises not only the position of a source on the sky, but also the flux data, to calculate this probability, determining the most probable match in the reference catalogue to the target source. Additionally, BELLAMY attempts to undo any spatial distortion that may be affecting the target catalogue by creating a model of the offsets of matched sources, which is then applied to unmatched sources. This combines to produce an iterative cross-matching algorithm that provides the user with a clear measure of how confident they should be in the results of a cross-match.
[ascl:1306.013] Bessel: Fast Bessel Function Jn(z) Routine for Large n,z
Bessel, written in the C programming language, uses an accurate scheme for evaluating Bessel functions of high order. It has been extensively tested against a number of other routines, demonstrating its accuracy and efficiency.
[ascl:1402.015] BF_dist: Busy Function fitting
The "busy function" accurately describes the characteristic double-horn HI profile of many galaxies. Implemented in a C/C++ library and Python module called BF_dist, it is a continuous, differentiable function that consists of only two basic functions: the error function, erf(x), and a polynomial, |x|^n, of degree n >= 2. BF_dist offers great flexibility in fitting a wide range of HI profiles, from the Gaussian profiles of dwarf galaxies to the broad, asymmetric double-horn profiles of spiral galaxies, and can be used to parametrize observed HI spectra of galaxies accurately and efficiently and to construct spectral templates for simulations and matched-filtering algorithms.
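A direct transcription of that functional form, the product of two error functions (the steep profile edges) and a central polynomial trough, might look like the following; the parameter names follow the published busy-function definition, while the default values are purely illustrative:

    import numpy as np
    from scipy.special import erf

    def busy_function(x, a=1.0, b1=0.5, b2=0.5, c=1e-4,
                      w=150.0, xe=0.0, xp=0.0, n=2):
        # Two erf edges of half-width w centred at xe, times a polynomial
        # trough |x - xp|^n; defaults here are illustrative, not BF_dist's.
        return (a / 4.0) * (erf(b1 * (w + x - xe)) + 1.0) \
                         * (erf(b2 * (w - x + xe)) + 1.0) \
                         * (c * np.abs(x - xp) ** n + 1.0)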
[ascl:1504.020] BGLS: A Bayesian formalism for the generalised Lomb-Scargle periodogram
BGLS calculates the Bayesian Generalized Lomb-Scargle periodogram. It takes as input arrays with a time series, a dataset and errors on those data, and returns arrays with sampled periods and the periodogram values at those periods.
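As a simplified stand-in for the formalism (the actual BGLS marginalizes the linear amplitudes analytically and returns properly normalized Bayesian probabilities), the sketch below fits a weighted sinusoid-plus-offset at each trial period and converts the chi-squared into a relative probability:

    import numpy as np

    def sinusoid_periodogram(t, y, dy, periods):
        # Weighted least-squares fit of offset + sinusoid per trial period;
        # exp(-chi2/2), normalized to the peak, serves as a relative
        # probability. A simplified illustration, not the BGLS algorithm.
        t = np.asarray(t, float)
        y = np.asarray(y, float)
        w = 1.0 / np.asarray(dy) ** 2
        sw = np.sqrt(w)
        logp = []
        for P in periods:
            X = np.column_stack([np.ones_like(t),
                                 np.cos(2 * np.pi * t / P),
                                 np.sin(2 * np.pi * t / P)])
            beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
            logp.append(-0.5 * np.sum(w * (y - X @ beta) ** 2))
        logp = np.array(logp)
        return np.exp(logp - logp.max())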
[ascl:1806.002] BHDD: Primordial black hole binaries code
[ascl:1206.005] bhint: High-precision integrator for stellar systems
bhint is a post-Newtonian, high-precision integrator for stellar systems surrounding a super-massive black hole. The algorithm makes use of the fact that the Keplerian orbits in such a potential can be calculated directly and are only weakly perturbed. For a given average number of steps per orbit, bhint is almost a factor of 100 more accurate than the standard Hermite method.
[ascl:1802.013] BHMcalc: Binary Habitability Mechanism Calculator
BHMcalc provides renditions of the instantaneous circumbinary habitable zone (CHZ) and also calculates BHM properties of the system, including those related to the rotational evolution of the stellar components and the combined XUV and SW fluxes as measured at different distances from the binary. Moreover, it provides numerical results that can be further manipulated and used to calculate other properties.
[ascl:9910.006] BHSKY: Visual distortions near a black hole
BHSKY (copyright 1999 by Robert J. Nemiroff) computes the visual distortion effects visible to an observer traveling around and descending near a non-rotating black hole. The codes are general relativistically accurate and incorporate concepts such as large-angle deflections, image magnifications, multiple imaging, blue-shifting, and the location of the photon sphere. Once star.dat is edited to define the position and orientation of the observer relative to the black hole, bhsky_table should be run to create a table of photon deflection angles. Next bhsky_image reads this table and recomputes the perceived positions of stars in star.num, the Yale Bright Star Catalog. Lastly, bhsky_camera plots these results. The code currently tracks only the two brightest images of each star, and hence becomes noticeably incomplete within 1.1 times the Schwarzschild radius.
[ascl:1501.009] BIANCHI: Bianchi VIIh Simulations
BIANCHI provides functionality to support the simulation of Bianchi Type VIIh induced temperature fluctuations in CMB maps of a universe with shear and rotation. The implementation is based on the solutions to the Bianchi models derived by Barrow et al. (1985), which do not incorporate any dark energy component. Functionality is provided to compute the induced fluctuations on the sphere directly in either real or harmonic space.
[ascl:1312.004] BIE: Bayesian Inference Engine
The Bayesian Inference Engine (BIE) is an object-oriented library of tools written in C++ designed explicitly to enable Bayesian update and model comparison for astronomical problems. To facilitate "what if" exploration, BIE provides a command line interface (written with Bison and Flex) to run input scripts. The output of the code is a simulation of the Bayesian posterior distribution, from which summary statistics (e.g. moments) and confidence intervals can be determined. All of these quantities are fundamentally integrals, and the Markov Chain approach produces variates $\theta$ distributed according to $P(\theta|D)$, so moments are trivially obtained by summing over the ensemble of variates.
[ascl:1711.021] Bifrost: Stream processing framework for high-throughput applications
Bifrost is a stream processing framework that eases the development of high-throughput processing CPU/GPU pipelines. It is designed for digital signal processing (DSP) applications within radio astronomy. Bifrost uses a flexible ring buffer implementation that allows different signal processing blocks to be connected to form a pipeline. Each block may be assigned to a CPU core, and the ring buffers are used to transport data to and from blocks. Processing blocks may be run on either the CPU or GPU, and the ring buffer will take care of memory copies between the CPU and GPU spaces.
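The block-and-ring-buffer architecture can be illustrated with a skeletal two-stage pipeline: blocks connected by a bounded buffer, each running on its own thread. Real Bifrost rings add GPU memory spaces, zero-copy views, and framing; every name and size below is an illustrative assumption:

    import threading
    import queue
    import numpy as np

    ring = queue.Queue(maxsize=8)                 # bounded buffer between blocks

    def producer(n_frames=32):
        for _ in range(n_frames):
            ring.put(np.random.randn(1024))       # e.g. digitized samples
        ring.put(None)                            # end-of-stream sentinel

    def consumer():
        while (frame := ring.get()) is not None:
            _ = np.fft.rfft(frame)                # a stand-in DSP operation

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start()
    t2.start()
    t1.join()
    t2.join()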
[ascl:1208.007] Big MACS: Accurate photometric calibration
[ascl:1901.011] Bilby: Bayesian inference library
[ascl:1710.008] Binary: Accretion disk evolution
Binary computes the evolution of an accretion disc interacting with a binary system. It has been developed and used to study the coupled evolution of supermassive BH binaries and gaseous accretion discs.
[ascl:1312.012] BINGO: BI-spectra and Non-Gaussianity Operator
The BI-spectra and Non-Gaussianity Operator (BINGO) code, written in Fortran, computes the scalar bi-spectrum and the non-Gaussianity parameter fNL in single field inflationary models involving the canonical scalar field. BINGO can calculate all the different contributions to the bi-spectrum and the parameter fNL for an arbitrary triangular configuration of the wavevectors.
[ascl:1011.008] Binsim: Visualising Interacting Binaries in 3D
[ascl:1208.002] BINSYN: Simulating Spectra and Light Curves of Binary Systems with or without Accretion Disks
The BINSYN program suite is a collection of programs for analysis of binary star systems with or without an optically thick accretion disk. BINSYN produces synthetic spectra of individual binary star components plus a synthetic spectrum of the system. If the system includes an accretion disk, BINSYN also produces a separate synthetic spectrum of the disk face and rim. A system routine convolves the synthetic spectra with filter profiles of several photometric standards to produce absolute synthetic photometry output. The package generates synthetic light curves and determines an optimized solution for system parameters.
[ascl:1512.008] Bisous model: Detecting filamentary pattern in point processes
The Bisous model is a marked point process that models multi-dimensional patterns. The Bisous filament finder works directly with galaxy distribution data and the model intrinsically takes into account the connectivity of the filamentary network. The Bisous model generates the visit map (the probability to find a filament at a given point) together with the filament orientation field; these two fields are used to extract filament spines from the data.
[ascl:1411.027] BKGE: Fermi-LAT Background Estimator
The Fermi-LAT Background Estimator (BKGE) is a publicly available open-source tool that can estimate the expected background of the Fermi-LAT for any observational configuration and duration. It produces results in the form of text files, ROOT files, gtlike source-model files (for LAT maximum likelihood analyses), and PHA I/II FITS files (for RMFit/XSpec spectral fitting analyses). Its core is written in C++ and its user interface in Python.
[ascl:1906.002] Blimpy: Breakthrough Listen I/O Methods for Python
Blimpy (Breakthrough Listen I/O Methods for Python) provides utilities for viewing and interacting with the data formats used within the Breakthrough Listen program, including Sigproc filterbank (.fil) and HDF5 (.h5) files that contain dynamic spectra (aka 'waterfalls'), and guppi raw (.raw) files that contain voltage-level data. Blimpy can also extract, calibrate, and visualize data; a suite of command-line utilities is also available.
[ascl:1208.009] BLOBCAT: Software to Catalog Blobs
BLOBCAT is source extraction software that utilizes the flood fill algorithm to detect and catalog blobs, or islands of pixels representing sources, in 2D astronomical images. The software is designed to process radio-wavelength images of both Stokes I intensity and linear polarization, the latter formed through the quadrature sum of Stokes Q and U intensities or as a by-product of rotation measure synthesis. BLOBCAT corrects for two systematic biases to enable the flood fill algorithm to accurately measure flux densities for Gaussian sources. BLOBCAT exhibits accurate measurement performance in total intensity and, in particular, linear polarization, and is particularly suited to the analysis of large survey data.
[ascl:9909.005] BLOCK: A Bayesian block method to analyze structure in photon counting data
Bayesian Blocks is a time-domain algorithm for detecting localized structures (bursts), revealing pulse shapes, and generally characterizing intensity variations. The input is raw counting data, in any of three forms: time-tagged photon events, binned counts, or time-to-spill data. The output is the most probable segmentation of the observation into time intervals during which the photon arrival rate is perceptibly constant, i.e. has no statistically significant variations. The idea is not that the source is deemed to have this discontinuous, piecewise constant form, rather that such an approximate and generic model is often useful. The analysis is based on Bayesian statistics.
This code is obsolete and yields approximate results; see Bayesian Blocks instead for an algorithm guaranteeing exact global optimization.
[ascl:1607.008] BLS: Box-fitting Least Squares
[ascl:1709.009] bmcmc: MCMC package for Bayesian data analysis
bmcmc is a general purpose Markov Chain Monte Carlo package for Bayesian data analysis. It uses an adaptive scheme for automatic tuning of proposal distributions. It can also handle Bayesian hierarchical models by making use of the Metropolis-Within-Gibbs scheme.
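A generic Metropolis-within-Gibbs loop with batch-adaptive step sizes, sketched below, illustrates the sampling scheme named above; it is not bmcmc's interface, and a rigorous sampler would let the adaptation diminish over time:

    import numpy as np

    def mwg_sample(logpost, x0, nsteps=20000, target=0.44, seed=1):
        # Metropolis-within-Gibbs: update one parameter at a time, adapting
        # each step size every 50 iterations towards a target acceptance rate.
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        step = np.ones_like(x)
        acc = np.zeros_like(x)
        chain = np.empty((nsteps, x.size))
        lp = logpost(x)
        for i in range(nsteps):
            for j in range(x.size):
                prop = x.copy()
                prop[j] += step[j] * rng.normal()
                lp_prop = logpost(prop)
                if np.log(rng.uniform()) < lp_prop - lp:
                    x, lp = prop, lp_prop
                    acc[j] += 1
            if (i + 1) % 50 == 0:                 # batch adaptation
                step *= np.exp(np.where(acc / 50 > target, 0.1, -0.1))
                acc[:] = 0.0
            chain[i] = x
        return chain

    # e.g. a correlated 2D Gaussian target:
    chain = mwg_sample(lambda p: -0.5 * (p[0]**2 + (p[1] - p[0])**2), [0.0, 0.0])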
[ascl:1210.030] BOOTTRAN: Error Bars for Keplerian Orbital Parameters
BOOTTRAN calculates error bars for Keplerian orbital parameters for both single- and multiple-planet systems. It takes the best-fit parameters and radial velocity data (BJD, velocity, errors) and calculates the error bars from the sampling distribution estimated via bootstrapping. It is recommended to be used together with the RVLIN package, which finds best-fit Keplerian orbital parameters. Both RVLIN and BOOTTRAN are compatible with multiple-telescope data. BOOTTRAN also calculates the transit time and secondary eclipse time and their associated error bars. The algorithm is described in the appendix of the associated article.
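The bootstrap step can be illustrated generically: resample the data triples with replacement, refit, and take the scatter of the refitted parameters as the error bar. The sketch below uses a linear trend as a stand-in for the Keplerian model; the toy data and all names are illustrative assumptions, not BOOTTRAN's machinery:

    import numpy as np

    def bootstrap_errors(t, rv, err, fit, n_boot=1000, seed=2):
        # `fit` is any user-supplied function returning a parameter vector;
        # the error bar is the standard deviation over bootstrap refits.
        rng = np.random.default_rng(seed)
        params = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(t), len(t))
            params.append(fit(t[idx], rv[idx], err[idx]))
        return np.std(params, axis=0)

    # toy data with a linear trend standing in for the orbital model:
    t = np.linspace(0.0, 100.0, 40)
    rv = 0.3 * t + np.random.default_rng(3).normal(0.0, 1.0, t.size)
    err = np.ones_like(t)
    sigma_params = bootstrap_errors(
        t, rv, err, lambda t_, v_, e_: np.polyfit(t_, v_, 1, w=1.0 / e_))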
[ascl:1108.019] BOREAS: Mass Loss Rate of a Cool, Late-type Star
The basic mechanisms responsible for producing winds from cool, late-type stars are still largely unknown. We take inspiration from recent progress in understanding solar wind acceleration to develop a physically motivated model of the time-steady mass loss rates of cool main-sequence stars and evolved giants. This model follows the energy flux of magnetohydrodynamic turbulence from a subsurface convection zone to its eventual dissipation and escape through open magnetic flux tubes. We show how Alfven waves and turbulence can produce winds in either a hot corona or a cool extended chromosphere, and we specify the conditions that determine whether or not coronal heating occurs. These models do not utilize arbitrary normalization factors, but instead predict the mass loss rate directly from a star's fundamental properties. We take account of stellar magnetic activity by extending standard age-activity-rotation indicators to include the evolution of the filling factor of strong photospheric magnetic fields. We compare the predicted mass loss rates with observed values for 47 stars and find significantly better agreement than was obtained from the popular scaling laws of Reimers, Schroeder, and Cuntz. The algorithm used to compute cool-star mass loss rates is provided as a self-contained and efficient IDL computer code. We anticipate that the results from this kind of model can be incorporated straightforwardly into stellar evolution calculations and population synthesis techniques.
[ascl:1108.011] BPZ: Bayesian Photometric Redshift Code
Photometric redshift estimation is becoming an increasingly important technique, although the currently existing methods present several shortcomings which hinder their application. Most of those drawbacks are efficiently eliminated when Bayesian probability is consistently applied to this problem. The use of prior probabilities and Bayesian marginalization allows the inclusion of valuable information, e.g. the redshift distributions or the galaxy type mix, which is often ignored by other methods. In those cases when the a priori information is insufficient, it is shown how to `calibrate' the prior distributions, using even the data under consideration. There is an excellent agreement between the 108 HDF spectroscopic redshifts and the predictions of the method, with a rms error Delta z/(1+z_spec) = 0.08 up to z<6 and no systematic biases nor outliers. The results obtained are more reliable than those of standard techniques even when the latter include near-IR colors. The Bayesian formalism developed here can be generalized to deal with a wide range of problems which make use of photometric redshifts, e.g. the estimation of individual galaxy characteristics as the metallicity, dust content, etc., or the study of galaxy evolution and the cosmological parameters from large multicolor surveys. Finally, using Bayesian probability it is possible to develop an integrated statistical method for cluster mass reconstruction which simultaneously considers the information provided by gravitational lensing and photometric redshifts.
[ascl:1806.025] BRATS: Broadband Radio Astronomy ToolS
BRATS (Broadband Radio Astronomy ToolS) provides tools for the spectral analysis of broad-bandwidth radio data and legacy support for narrowband telescopes. It can fit models of spectral ageing on small spatial scales, offers automatic selection of regions based on user parameters (e.g. signal to noise), and automatic determination of the best-fitting injection index. It includes statistical testing, including Chi-squared, error maps, confidence levels and binning of model fits, and can map spectral index as a function of position. It also provides the ability to reconstruct sources at any frequency for a given model and parameter set, subtract any two FITS images and output residual maps, easily combine and scale FITS images in the image plane, and resize radio maps.
[ascl:1412.005] BRUCE/KYLIE: Pulsating star spectra synthesizer
BRUCE and KYLIE, written in Fortran 77, synthesize the spectra of pulsating stars. BRUCE constructs a point-sampled model for the surface of a rotating, gravity-darkened star, and then subjects this model to perturbations arising from one or more non-radial pulsation modes. Departures from adiabaticity can be taken into account, as can the Coriolis force through adoption of the so-called traditional approximation. BRUCE writes out a time-sequence of perturbed surface models. This sequence is read in by KYLIE, which synthesizes disk-integrated spectra for the models by co-adding the specific intensity emanating from each visible point toward the observer. The specific intensity is calculated by interpolation in a large temperature-gravity-wavelength-angle grid of pre-calculated intensity spectra.
[ascl:1407.016] Brut: Automatic bubble classifier
Brut, written in Python, identifies bubbles in infrared images of the Galactic midplane; it uses a database of known bubbles from the Milky Way Project and Spitzer images to build an automatic bubble classifier. The classifier is based on the Random Forest algorithm, and uses the WiseRF implementation of this algorithm.
[ascl:1903.004] brutifus: Python module to post-process datacubes from integral field spectrographs
brutifus aids in post-processing datacubes from integral field spectrographs. The set of Python routines in the package handle generic tasks, such as the registration of a datacube WCS solution with the Gaia catalogue, the correction of Galactic reddening, or the subtraction of the nebular/stellar continuum on a spaxel-per-spaxel basis, with as little user interaction as possible. brutifus is modular, in that the order in which the post-processing routines are run is entirely customizable.
[ascl:1303.014] BSE: Binary Star Evolution
BSE is a rapid binary star evolution code. It can model circularization of eccentric orbits and synchronization of stellar rotation with the orbital motion owing to tidal interaction in detail. Angular momentum loss mechanisms, such as gravitational radiation and magnetic braking, are also modelled. Wind accretion, where the secondary may accrete some of the material lost from the primary in a wind, is allowed with the necessary adjustments made to the orbital parameters in the event of any mass variations. Mass transfer occurs if either star fills its Roche lobe and may proceed on a nuclear, thermal or dynamical time-scale. In the latter regime, the radius of the primary increases in response to mass-loss at a faster rate than the Roche-lobe of the star. Prescriptions to determine the type and rate of mass transfer, the response of the secondary to accretion and the outcome of any merger events are in place in BSE.
[ascl:9904.001] BSGMODEL: The Bahcall-Soneira Galaxy Model
BSGMODEL is used to construct the disk and spheroid components of the Galaxy from which the distribution of visible stars and mass in the Galaxy is calculated. The computer files accessible here are available for export use. The modifications are described in comment lines in the software. The Galaxy model software has been installed and used by different people for a large variety of purposes (see, e.g., the review "Star Counts and Galactic Structure", Ann. Rev. Astron. Ap. 24, 577, 1986).
[ascl:1204.003] BUDDA: BUlge/Disk Decomposition Analysis
Budda is a Fortran code developed to perform a detailed structural analysis on galaxy images. It is simple to use and gives reliable estimates of the galaxy structural parameters, which can be used, for instance, in Fundamental Plane studies. Moreover, it has a powerful ability to reveal hidden sub-structures, like inner disks, secondary bars and nuclear rings.
[ascl:1610.010] BurnMan: Lower mantle mineral physics toolkit
BurnMan determines seismic velocities for the lower mantle. Written in Python, BurnMan calculates the isotropic thermoelastic moduli by solving the equations of state for a mixture of minerals defined by the user. The user may select from an included list of minerals applicable to the lower mantle or define their own. BurnMan provides choices in methodology, both for the EoS and for the multiphase averaging scheme, and the results can be visually or quantitatively compared to observed seismic models.
[ascl:1806.026] BWED: Brane-world extra dimensions
Braneworld-extra-dimensions places constraints on the size of the AdS5 radius of curvature within the Randall-Sundrum brane-world model in light of the near-simultaneous detection of the gravitational wave event GW170817 and its optical counterpart, the short γ-ray burst event GRB170817A. The code requires a (supplied) patch to the Montepython cosmological MCMC sampler (ascl:1805.027) to sample the posterior distribution of the 4-dimensional parameter space in VBV17 and obtain constraints on the parameters.
[ascl:1610.011] BXA: Bayesian X-ray Analysis
[ascl:1211.005] C-m Emu: Concentration-mass relation emulator
The concentration-mass relation for dark matter-dominated halos is one of the essential results expected from a theory of structure formation. C-m Emu is a simple numerical code that emulates the c-M relation as a function of cosmological parameters for wCDM models; it generates the best-fit power-law model for each redshift separately and then interpolates between redshifts. This produces a more accurate answer at each redshift at the minimal cost of running a fast code for every c-M prediction instead of using one fitting formula. The emulator is constructed from 37 individual models, with three nested N-body gravity-only simulations carried out for each model. The mass range covered by the emulator is 2 x 10^{12} M_sun < M < 10^{15} M_sun with a corresponding redshift range of z = 0-1. Over this range of mass and redshift, as well as the variation of cosmological parameters studied, the mean halo concentration varies from c ~ 2 to c ~ 8. The distribution of the concentration at fixed mass is Gaussian with a standard deviation of one-third of the mean value, almost independent of cosmology, mass, and redshift over the ranges probed by the simulations.
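The per-redshift power-law fit plus interpolation strategy can be sketched as follows; the pivot mass, grids, and function names are illustrative assumptions, not the emulator's calibrated coefficients:

    import numpy as np

    def fit_cM_powerlaw(M, c, M_pivot=1e13):
        # Best-fit power law c = c0 * (M / M_pivot)**alpha in log-log space.
        alpha, log_c0 = np.polyfit(np.log(M / M_pivot), np.log(c), 1)
        return np.exp(log_c0), alpha

    def c_of_M_z(M, z, z_grid, fits, M_pivot=1e13):
        # Evaluate each redshift's fitted power law at mass M, then
        # interpolate linearly between the redshift grid points.
        c_grid = np.array([c0 * (M / M_pivot) ** a for c0, a in fits])
        return np.interp(z, z_grid, c_grid)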
[ascl:1610.006] C3: Command-line Catalogue Crossmatch for modern astronomical surveys
The Command-line Catalogue Cross-matching (C3) software efficiently performs the positional cross-match between massive catalogues from modern astronomical surveys, whose sizes have rapidly increased in the current data-driven science era. Based on a multi-core parallel processing paradigm, it is executed as a stand-alone command-line process or integrated within any generic data reduction/analysis pipeline. C3 provides its users with flexibility in portability, parameter configuration, catalogue formats, angular resolution, region shapes, coordinate units and cross-matching types.
[ascl:1102.013] Cactus: HPC infrastructure and programming tools
Cactus provides computational scientists and engineers with a collaborative, modular and portable programming environment for parallel high performance computing. Cactus can make use of many other technologies for HPC, such as Samrai, HDF5, PETSc and PAPI, and several application domains such as numerical relativity, computational fluid dynamics and quantum gravity are developing open community toolkits for Cactus.
[ascl:1303.017] CADRE: CArma Data REduction pipeline
CADRE, the Combined Array for Millimeter-wave Astronomy (CARMA) data reduction pipeline, gives investigators a first look at a fully reduced set of their data. It runs automatically on all data produced by the telescope as they arrive in the data archive. The pipeline is written in python and uses python wrappers for MIRIAD subroutines for direct access to the data. It applies passband, gain and flux calibration to the data sets and produces a set of continuum and spectral line maps in both MIRIAD and FITS format.
[ascl:1807.015] CAESAR: Compact And Extended Source Automated Recognition
CAESAR extracts and parameterizes both compact and extended sources from astronomical radio interferometric maps. The processing pipeline is a series of stages that can run on multiple cores and processors. After local background and rms map computation, compact sources are extracted with flood-fill and blob finder algorithms, processed (selection + deblending), and fitted using a 2D gaussian mixture model. Extended source search is based on a pre-filtering stage, allowing image denoising, compact source removal and enhancement of diffuse emission, followed by a final segmentation. Different algorithms are available for image filtering and segmentation. The outputs delivered to the user include source fitted and shape parameters, regions and contours. Written in C++, CAESAR is designed to handle the large-scale surveys planned with the Square Kilometer Array (SKA) and its precursors.
[ascl:1505.001] CALCEPH: Planetary ephemeris files access code
CALCEPH accesses binary planetary ephemeris files, including INPOPxx, JPL DExxx, and SPICE ephemeris files. It provides a C Application Programming Interface (API) and, optionally, a Fortran 77 or 2003 interface to be called by the application. Two groups of functions provide access to the ephemeris files: single-file access functions, provided to ease the transition from the JPL functions such as PLEPH to this library, and functions that handle many ephemeris files at the same time. Although computers differ in endianness (the order in which integers are stored as bytes in memory), CALCEPH handles binary ephemeris files of any endianness by automatically swapping bytes when it performs read operations on the ephemeris file.
[ascl:1210.010] CALCLENS: Curved-sky grAvitational Lensing for Cosmological Light conE simulatioNS
CALCLENS, written in C and employing widely available software libraries, efficiently computes weak gravitational lensing shear signals from large N-body light cone simulations over a curved sky. The algorithm properly accounts for the sky curvature and boundary conditions, is able to produce redshift-dependent shear signals including corrections to the Born approximation by using multiple-plane ray tracing, and properly computes the lensed images of source galaxies in the light cone. The key feature of this algorithm is a new, computationally efficient Poisson solver for the sphere that combines spherical harmonic transform and multigrid methods. As a result, large areas of sky (~10,000 square degrees) can be ray traced efficiently at high-resolution using only a few hundred cores on widely available machines. Coupled with realistic galaxy populations placed in large N-body light cone simulations, CALCLENS is ideally suited for the construction of synthetic weak lensing shear catalogs to be used to test for systematic effects in data analysis procedures for upcoming large-area sky surveys.
[ascl:1105.013] CAMB Sources: Number Counts, Lensing & Dark-age 21cm Power Spectra
We relate the observable number of sources per solid angle and redshift to the underlying proper source density and velocity, background evolution and line-of-sight potentials. We give an exact result in the case of linearized perturbations assuming general relativity. This consistently includes contributions of the source density perturbations and redshift distortions, magnification, radial displacement, and various additional linear terms that are small on sub-horizon scales. In addition we calculate the effect on observed luminosities, and hence the result for sources observed as a function of flux, including magnification bias and radial-displacement effects. We give the corresponding linear result for a magnitude-limited survey at low redshift, and discuss the angular power spectrum of the total count distribution. We also calculate the cross-correlation with the CMB polarization and temperature including Doppler source terms, magnification, redshift distortions and other velocity effects for the sources, and discuss why the contribution of redshift distortions is generally small. Finally we relate the result for source number counts to that for the brightness of line radiation, for example 21-cm radiation, from the sources.
[ascl:1102.026] CAMB: Code for Anisotropies in the Microwave Background
We present a fully covariant and gauge-invariant calculation of the evolution of anisotropies in the cosmic microwave background (CMB) radiation. We use the physically appealing covariant approach to cosmological perturbations, which ensures that all variables are gauge-invariant and have a clear physical interpretation. We derive the complete set of frame-independent, linearised equations describing the (Boltzmann) evolution of anisotropy and inhomogeneity in an almost Friedmann-Robertson-Walker (FRW) cold dark matter (CDM) universe. These equations include the contributions of scalar, vector and tensor modes in a unified manner. Frame-independent equations for scalar and tensor perturbations, which are valid for any value of the background curvature, are obtained straightforwardly from the complete set of equations. We discuss the scalar equations in detail, including the integral solution and relation with the line of sight approach, analytic solutions in the early radiation dominated era, and the numerical solution in the standard CDM model. Our results confirm those obtained by other groups, who have worked carefully with non-covariant methods in specific gauges, but are derived here in a completely transparent fashion.
[ascl:1801.007] cambmag: Magnetic Fields in CAMB
cambmag is a modification to CAMB (ascl:1102.026) that calculates the compensated magnetic mode in the scalar, vector and tensor case. Previously CAMB included code only for the vectors. It also corrects for tight-coupling issues and adds in the ability to include massive neutrinos when calculating vector modes.
[ascl:1502.015] Camelus: Counts of Amplified Mass Elevations from Lensing with Ultrafast Simulations
Camelus provides a prediction on weak lensing peak counts from input cosmological parameters. Written in C, it samples halos from a mass function and assigns a profile, carries out ray-tracing simulations, and then counts peaks from ray-tracing maps. The creation of the ray-tracing simulations requires less computing time than N-body runs and the results are in good agreement with full N-body simulations.
[ascl:1106.017] CAOS: Code for Adaptive Optics Systems
[ascl:1404.011] CAP_LOESS_1D & CAP_LOESS_2D: Recover mean trends from noisy data
CAP_LOESS_1D and CAP_LOESS_2D provide improved implementations of the one-dimensional (Cleveland 1979) and two-dimensional (Cleveland & Devlin 1988) Locally Weighted Regression (LOESS) methods to recover the mean trends of the population from noisy data in one or two dimensions. They include a robust approach to deal with outliers (bad data). The software is available in both IDL and Python versions.
[ascl:1505.003] caret: Classification and Regression Training
caret (Classification And REgression Training) provides functions for training and plotting classification and regression models. It contains tools for data splitting, pre-processing, feature selection, model tuning using resampling, and variable importance estimation, as well as other functionality.
[ascl:1404.009] carma_pack: MCMC sampler for Bayesian inference
carma_pack is an MCMC sampler for performing Bayesian inference on continuous time autoregressive moving average models. These models may be used to model time series with irregular sampling. The MCMC sampler utilizes an adaptive Metropolis algorithm combined with parallel tempering.
[ascl:1611.016] Carpet: Adaptive Mesh Refinement for the Cactus Framework
Carpet is an adaptive mesh refinement and multi-patch driver for the Cactus Framework (ascl:1102.013). Cactus is a software framework for solving time-dependent partial differential equations on block-structured grids, and Carpet acts as driver layer providing adaptive mesh refinement, multi-patch capability, as well as parallelization and efficient I/O.
[ascl:1107.013] CASA: Common Astronomy Software Applications
CASA, the Common Astronomy Software Applications package, is being developed with the primary goal of supporting the data post-processing needs of the next generation of radio astronomical telescopes such as ALMA and EVLA. The package can process both interferometric and single dish data. The CASA infrastructure consists of a set of C++ tools bundled together under an iPython interface as a set of data reduction tasks. This structure provides flexibility to process the data via task interface or as a python script. In addition to the data reduction tasks, many post-processing tools are available for even more flexibility and special purpose reduction needs.
[ascl:1905.023] CASI-2D: Convolutional Approach to Shell Identification - 2D
CASI-2D (Convolutional Approach to Shell Identification) identifies stellar feedback signatures using data from magneto-hydrodynamic simulations of turbulent molecular clouds with embedded stellar sources and deep learning techniques. Specifically, a deep neural network is applied to dense regression and segmentation on simulated density and synthetic 12 CO observations to identify shells, sometimes referred to as "bubbles," and other structures of interest in molecular cloud data.
[ascl:1402.013] CASSIS: Interactive spectrum analysis
CASSIS (Centre d'Analyse Scientifique de Spectres Infrarouges et Submillimetriques), written in Java, is suited for broad-band spectral surveys to speed up the scientific analysis of high spectral resolution observations. It uses a local spectroscopic database made of the two molecular spectroscopic databases JPL and CDMS, as well as the atomic spectroscopic database NIST. Its tools include a LTE model and the RADEX model connected to the LAMDA molecular collisional database. CASSIS can build a line list fitting the various transitions of a given species and directly produce rotational diagrams from these lists. CASSIS is fully integrated into HIPE, the Herschel Interactive Processing Environment, as a plug-in.
[ascl:1804.013] CAT-PUMA: CME Arrival Time Prediction Using Machine learning Algorithms
CAT-PUMA (CME Arrival Time Prediction Using Machine learning Algorithms) quickly and accurately predicts the arrival time of Coronal Mass Ejections (CMEs). The software was trained via detailed analysis of CME features and solar wind parameters using 182 previously observed geo-effective partial-/full-halo CMEs and uses algorithms of the Support Vector Machine (SVM) to make its predictions, which can be made within minutes of providing the necessary input parameters of a CME.
[ascl:1206.008] Catena: Ensemble of stars orbit integration
Catena integrates the orbits of an ensemble of stars using the chain-regularization method (Mikkola & Aarseth) with an embedded Runge-Kutta integration method of 9(8)th order (Prince & Dormand).
[ascl:1810.013] catsHTM: Catalog cross-matching tool
[ascl:1904.012] CausticFrog: 1D Lagrangian Simulation Package
CausticFrog models the reaction of a system of orbiting particles to instantaneous mass loss. It applies to any spherically symmetric potential, and follows the radial evolution of shells of mass. CausticFrog tracks the inner and outer edge of each shell, whose radius evolves as a test particle. The amount of mass in each shell is fixed but multiple shells can overlap leading to higher densities.
[ascl:1403.021] CCDPACK: CCD Data Reduction Package
[ascl:1510.007] ccdproc: CCD data reduction software
[ascl:1901.003] CCL: Core Cosmology Library
[ascl:1208.006] ccogs: Cosmological Calculations on the GPU
[ascl:1904.006] CDAWeb: Coordinated Data Analysis Web
[ascl:1308.015] Ceph_code: Cepheid light-curves fitting
[ascl:1901.001] cFE: Core Flight Executive
[ascl:1010.001] CFITSIO: A FITS File Subroutine Library
[ascl:1904.003] CGS: Collisionless Galactic Simulator
[ascl:1411.024] CGS3DR: UKIRT CGS3 data reduction software
[ascl:1703.015] Charm: Cosmic history agnostic reconstruction method
[ascl:1412.002] Cheetah: Starspot modeling code
[ascl:1311.006] CIAO: Chandra Interactive Analysis of Observations
[ascl:1803.002] CIFOG: Cosmological Ionization Fields frOm Galaxies
[ascl:1111.004] CIGALE: Code Investigating GALaxy Emission
[ascl:1708.002] CINE: Comet INfrared Excitation
[ascl:1202.001] CISM_DX: Visualization and analysis tool
[ascl:1106.020] CLASS: Cosmic Linear Anisotropy Solving System
[ascl:1407.010] CLE: Coronal line synthesis
[ascl:1107.014] Clumpfind: Determining Structure in Molecular Clouds
[ascl:1106.018] CMB B-modes from Faraday Rotation
[ascl:1106.023] CMBACT: CMB from ACTive sources
[ascl:9909.004] CMBFAST: A microwave anisotropy code
[ascl:1907.022] CMDPT: Color Magnitude Diagrams Plot Tool
[ascl:1109.020] CMFGEN: Probing the Universe through Spectroscopy
[ascl:1101.005] CMHOG: Code for Ideal Compressible Hydrodynamics
[ascl:1505.010] COBS: COnstrained B-Splines
[ascl:1406.017] COCO: Conversion of Celestial Coordinates
[ascl:1602.021] COLAcode: COmoving Lagrangian Acceleration code
[ascl:1802.014] collapse: Spherical-collapse model code
[ascl:1606.007] COMB: Compact embedded object simulations
[ascl:1708.024] ComEst: Completeness Estimator
[ascl:1404.008] Comet: Multifunction VOEvent broker
[ascl:1403.015] computePk: Power spectrum computation
[ascl:9905.001] CONSKY: A Sky CCD Integration Simulation
[ascl:1609.023] contbin: Contour binning and accumulative smoothing
[ascl:1304.022] Copter: Cosmological perturbation theory
[ascl:1211.004] CORRFIT: Cross-Correlation Routines
[ascl:1202.006] CORSIKA: An Air Shower Simulation Program
[ascl:1010.040] Cosmic String Simulations
[ascl:1304.006] CosmicEmuLog: Cosmological Power Spectra Emulator
CosmicEmuLog is a simple Python emulator for cosmological power spectra. In addition to the power spectrum of the conventional overdensity field, it emulates the power spectra of the log-density as well as the Gaussianized density. It models fluctuations in the power spectrum at each k as a linear combination of contributions from fluctuations in each cosmological parameter. The data it uses for emulation consist of ASCII files of the mean power spectrum, together with derivatives of the power spectrum with respect to the five cosmological parameters in the space spanned by the Coyote Universe suite. This data can also be used for Fisher matrix analysis. At present, CosmicEmuLog is restricted to redshift 0.
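The linear-combination emulation step amounts to a first-order expansion around the fiducial cosmology; a minimal sketch (array names and shapes are assumptions) is:

    import numpy as np

    def emulate_power(theta, theta_fid, P_mean, dP_dtheta):
        # First-order emulation: P(k; theta) ~ P_mean(k) +
        # sum_i dP/dtheta_i * (theta_i - theta_fid_i).
        # P_mean has shape (nk,); dP_dtheta has shape (nparams, nk).
        return P_mean + (np.asarray(theta) - np.asarray(theta_fid)) @ dP_dtheta

The same expansion applies unchanged to the log-density and Gaussianized-density spectra, simply substituting the corresponding mean and derivative arrays.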
[ascl:1601.008] CosmicPy: Interactive cosmology computations
[ascl:9910.004] COSMICS: Cosmological initial conditions and microwave anisotropy codes
COSMICS is a package of Fortran programs useful for computing transfer functions and microwave background anisotropy for cosmological models, and for generating gaussian random initial conditions for nonlinear structure formation simulations of such models. Four programs are provided: linger_con and linger_syn integrate the linearized equations of general relativity, matter, and radiation in conformal Newtonian and synchronous gauge, respectively; deltat integrates the photon transfer functions computed by the linger codes to produce photon anisotropy power spectra; and grafic tabulates normalized matter power spectra and produces constrained or unconstrained samples of the matter density field.
[ascl:1505.013] cosmoabc: Likelihood-free inference for cosmology
Approximate Bayesian Computation (ABC) enables parameter inference for complex physical systems in cases where the true likelihood function is unknown, unavailable, or computationally too expensive. It relies on the forward simulation of mock data and comparison between observed and synthetic catalogs. cosmoabc is a Python Approximate Bayesian Computation (ABC) sampler featuring a Population Monte Carlo variation of the original ABC algorithm, which uses an adaptive importance sampling scheme. The code can be coupled to an external simulator to allow incorporation of arbitrary distance and prior functions. When coupled with the numcosmo library, it has been used to estimate posterior probability distributions over cosmological parameters based on measurements of galaxy clusters number counts without computing the likelihood function.
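The likelihood-free idea is easiest to see in plain ABC rejection form, sketched below; cosmoabc's PMC variant instead shrinks the distance threshold adaptively over importance-sampling iterations, so this illustrates the principle rather than the package's algorithm:

    import numpy as np

    def abc_rejection(observed, simulate, distance, prior_draw, eps,
                      n_keep=200, seed=4):
        # Draw a parameter from the prior, forward-simulate a mock catalog,
        # and keep the draw when the mock lies within eps of the data.
        rng = np.random.default_rng(seed)
        kept = []
        while len(kept) < n_keep:
            theta = prior_draw(rng)
            if distance(simulate(theta, rng), observed) < eps:
                kept.append(theta)
        return np.array(kept)

    # toy example: infer a Poisson mean from 200 observed counts
    obs = np.random.default_rng(5).poisson(4.2, 200)
    post = abc_rejection(
        observed=obs,
        simulate=lambda th, rng: rng.poisson(th, obs.size),
        distance=lambda a, b: abs(a.mean() - b.mean()),
        prior_draw=lambda rng: rng.uniform(0.0, 10.0),
        eps=0.1)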
[ascl:1511.019] CosmoBolognaLib: Open source C++ libraries for cosmological calculations
CosmoBolognaLib contains numerical libraries for cosmological calculations; written in C++, it is intended to define a common numerical environment for cosmological investigations of the large-scale structure of the Universe. The software aids in handling real and simulated astronomical catalogs by measuring one-point, two-point and three-point statistics in configuration space and performing cosmological analyses. These open source libraries can be included in either C++ or Python codes.
[ascl:1303.003] CosmoHammer: Cosmological parameter estimation with the MCMC Hammer
CosmoHammer is a Python framework for the estimation of cosmological parameters. The software embeds the Python package emcee by Foreman-Mackey et al. (2012) and gives the user the possibility to plug in modules for the computation of any desired likelihood. The major goal of the software is to reduce the complexity when one wants to extend or replace the existing computation by modules which fit the user's needs as well as to provide the possibility to easily use large scale computing environments. CosmoHammer can efficiently distribute the MCMC sampling over thousands of cores on modern cloud computing infrastructure.
[ascl:1110.024] CosmoMC SNLS: CosmoMC Plug-in to Analyze SNLS3 SN Data
This module is a plug-in for CosmoMC and requires that software. Though programmed to analyze SNLS3 SN data, it can also be used for other SN data provided the inputs are put in the right form. In fact, this is probably a good idea, since the default treatment that comes with CosmoMC is flawed. Note that this requires fitting two additional SN nuisance parameters (alpha and beta), but this is significantly faster than attempting to marginalize over them internally.
[ascl:1106.025] CosmoMC: Cosmological MonteCarlo
[ascl:1110.019] CosmoNest: Cosmological Nested Sampling
CosmoNest is an algorithm for cosmological model selection. Given a model, defined by a set of parameters to be varied and their prior ranges, and data, the algorithm computes the evidence (the marginalized likelihood of the model in light of the data). The Bayes factor, which is proportional to the relative evidence of two models, can then be used for model comparison, i.e. to decide whether a model is an adequate description of data, or whether the data require a more complex model.
For convenience, CosmoNest, programmed in Fortran, is presented here as an optional add-on to CosmoMC, which is widely used by the cosmological community to perform parameter fitting within a model using a Markov-Chain Monte-Carlo (MCMC) engine. For this reason it can be run very easily by anyone who is able to compile and run CosmoMC. CosmoNest implements a different sampling strategy, geared for computing the evidence very accurately and efficiently. It also provides posteriors for parameter fitting as a by-product.
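The evidence bookkeeping at the heart of nested sampling can be sketched in a few lines; real samplers replace the naive constrained rejection draw below with ellipsoidal or MCMC proposals, so this illustrates only the principle:

    import numpy as np

    def nested_evidence(loglike, ndim, nlive=400, niter=2000, seed=6):
        # Minimal nested sampling over a unit-hypercube prior: repeatedly
        # discard the worst live point, credit it the shrinking prior-volume
        # shell, and replace it with a fresh prior draw at higher likelihood.
        rng = np.random.default_rng(seed)
        live = rng.uniform(size=(nlive, ndim))
        logL = np.array([loglike(p) for p in live])
        logZ, logX = -np.inf, 0.0
        for i in range(niter):
            worst = np.argmin(logL)
            logX_new = -(i + 1) / nlive          # expected log-volume remaining
            logw = np.log(np.exp(logX) - np.exp(logX_new)) + logL[worst]
            logZ = np.logaddexp(logZ, logw)
            logX = logX_new
            while True:                          # naive constrained prior draw
                p = rng.uniform(size=ndim)
                if loglike(p) > logL[worst]:
                    break
            live[worst], logL[worst] = p, loglike(p)
        m = logL.max()                           # remaining live points' share
        logZ = np.logaddexp(logZ, logX + m + np.log(np.mean(np.exp(logL - m))))
        return logZ

    # toy check: a tight 2D Gaussian has evidence ~ 2*pi*sigma^2 (log ~ -4.15)
    print(nested_evidence(lambda p: -0.5 * np.sum(((p - 0.5) / 0.05) ** 2), 2))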
[ascl:1408.018] CosmoPhotoz: Photometric redshift estimation using generalized linear models
CosmoPhotoz determines photometric redshifts from galaxies utilizing their magnitudes. The method uses generalized linear models which reproduce the physical aspects of the output distribution. The code can adopt gamma or inverse Gaussian families, either from a frequentist or a Bayesian perspective. A set of publicly available libraries and a web application are available. This software allows users to apply a set of GLMs to their own photometric catalogs and generates publication-quality plots with no involvement from the user. The code additionally includes a Shiny application providing a simple user interface.
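A toy version of such a gamma-family GLM fit, using the general-purpose statsmodels package rather than CosmoPhotoz itself (the synthetic data, log link, and all settings are illustrative assumptions):

    import numpy as np
    import statsmodels.api as sm

    # Fit a gamma-family GLM of redshift on magnitudes, then predict.
    rng = np.random.default_rng(7)
    mags = rng.normal(22.0, 1.0, size=(500, 3))        # fake magnitudes
    z = rng.gamma(2.0, 0.15, size=500)                 # fake redshifts (z > 0)
    glm = sm.GLM(z, sm.add_constant(mags),
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    z_new = glm.predict(sm.add_constant(rng.normal(22.0, 1.0, size=(10, 3))))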
[ascl:1212.006] CosmoPMC: Cosmology sampling with Population Monte Carlo
CosmoPMC is a Monte-Carlo sampling method to explore the likelihood of various cosmological probes. The sampling engine is implemented with the package pmclib. The method, Population Monte Carlo (PMC), is a novel technique to sample from the posterior: an adaptive importance sampling scheme that iteratively improves the proposal to approximate the posterior. This code has been introduced, tested and applied to various cosmology data sets.
[ascl:1304.017] CosmoRec: Cosmological Recombination code
CosmoRec solves the recombination problem including recombinations to highly excited states, corrections to the 2s-1s two-photon channel, HI Lyn-feedback, n>2 two-photon profile corrections, and n≥2 Raman-processes. The code can solve the radiative transfer equation of the Lyman-series photon field to obtain the required modifications to the rate equations of the resolved levels, and handles electron scattering, the effect of HeI intercombination transitions, and absorption of helium photons by hydrogen. It also allows accounting for dark matter annihilation and optionally includes detailed helium radiative transfer effects.
[ascl:1705.001] COSMOS: Carnegie Observatories System for MultiObject Spectroscopy
COSMOS (Carnegie Observatories System for MultiObject Spectroscopy) reduces multislit spectra obtained with the IMACS and LDSS3 spectrographs on the Magellan Telescopes. It can be used for the quick-look analysis of data at the telescope as well as for pipeline reduction of large data sets. COSMOS is based on a precise optical model of the spectrographs, which allows (after alignment and calibration) an accurate prediction of the location of spectral features. This eliminates the line search procedure which is fundamental to many spectral reduction programs, and allows a robust data pipeline to be run in an almost fully automatic mode, allowing large amounts of data to be reduced with minimal intervention.
[ascl:1409.012] CosmoSIS: Cosmological parameter estimation
[ascl:1701.004] CosmoSlik: Cosmology sampler of likelihoods
CosmoSlik quickly puts together, runs, and analyzes an MCMC chain for analysis of cosmological data. It is highly modular and comes with plugins for CAMB (ascl:1102.026), CLASS (ascl:1106.020), the Planck likelihood, the South Pole Telescope likelihood, other cosmological likelihoods, emcee (ascl:1303.002), and more. It offers ease-of-use, flexibility, and modularity.
[ascl:1311.009] CosmoTherm: Thermalization code
CosmoTherm allows precise computation of CMB spectral distortions caused by energy release in the early Universe. Different energy-release scenarios (e.g., decaying or annihilating particles) are implemented using the Green's function of the cosmological thermalization problem, allowing fast computation of the distortion signal. The full thermalization problem can be solved on a case-by-case basis for a wide range of energy-release scenarios using the full PDE solver of CosmoTherm. A simple Monte-Carlo toolkit is included for parameter estimation and forecasts using the Green's function method.
[ascl:1504.010] CosmoTransitions: Cosmological Phase Transitions
CosmoTransitions analyzes early-Universe finite-temperature phase transitions with multiple scalar fields. The code enables analysis of the phase structure of an input theory, determines the amount of supercooling at each phase transition, and finds the bubble-wall profiles of the nucleated bubbles that drive the transitions.
[ascl:1307.010] cosmoxi2d: Two-point galaxy correlation function calculation
Cosmoxi2d is written in C and computes the theoretical two-point galaxy correlation function as a function of cosmological and galaxy nuisance parameters. It numerically evaluates the model described in detail in Reid and White 2011 (arxiv:1105.4165) and Reid et al. 2012 (arxiv:1203.6641) for the multipole moments (up to ell = 4) of the observed redshift-space correlation function of biased tracers as a function of cosmological parameters (through an input linear matter power spectrum, growth rate f, and Alcock-Paczynski geometric factors alphaperp and alphapar) as well as nuisance parameters describing the tracers (bias and small-scale additive velocity dispersion, isotropicdisp1d).
This model works best for highly biased tracers where the 2nd order bias term is small. On scales larger than 100 Mpc, the code relies on 2nd order Lagrangian Perturbation theory as detailed in Matsubara 2008 (PRD 78, 083519), and uses the analytic version of Reid and White 2011 on smaller scales.
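The multipole moments referred to above follow the standard Legendre projection $\xi_\ell(s) = \frac{2\ell+1}{2}\int_{-1}^{1} \xi(s,\mu)\,L_\ell(\mu)\,d\mu$; a direct numerical transcription of this definition (not cosmoxi2d's internal integrator) is:

    import numpy as np
    from numpy.polynomial.legendre import Legendre

    def correlation_multipoles(xi_s_mu, mu, ells=(0, 2, 4)):
        # Legendre multipoles of xi(s, mu) via a simple trapezoid rule;
        # xi_s_mu has shape (ns, nmu), mu is the grid on [-1, 1].
        out = {}
        for ell in ells:
            L = Legendre.basis(ell)(mu)
            out[ell] = (2 * ell + 1) / 2.0 * np.trapz(xi_s_mu * L, mu, axis=1)
        return out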
[ascl:1512.013] CounterPoint: Zeeman-split absorption lines
CounterPoint works in concert with MoogStokes (ascl:1308.018). It applies the Zeeman effect to the atomic lines in the region of study, splitting them into the correct number of Zeeman components and adjusting their relative intensities according to the predictions of Quantum Mechanics, and finally creates a Moog-readable line list for use with MoogStokes. CounterPoint has the ability to use VALD and HITRAN line databases for both atomic and molecular lines.
[ascl:1904.028] covdisc: Disconnected covariance of 2-point functions in large-scale structure of the Universe
covdisc computes the disconnected part of the covariance matrix of 2-point functions in large-scale structure studies, accounting for the survey window effect. The method works for both the power spectrum and the correlation function, and applies to the covariances for various probes, including the multipoles and the wedges of 3D clustering, the angular and the projected statistics of clustering and lensing, as well as their cross-covariances.
[ascl:1808.003] CPF: Corral Pipeline Framework
[ascl:1402.010] CPL: Common Pipeline Library
The Common Pipeline Library (CPL) is a set of ISO-C libraries that provide a comprehensive, efficient and robust software toolkit to create automated astronomical data reduction pipelines. Though initially developed as a standardized way to build VLT instrument pipelines, the CPL may be more generally applied to any similar application. The code also provides a variety of general purpose image- and signal-processing functions, making it an excellent framework for the creation of more generic data handling packages. The CPL handles low-level data types (images, tables, matrices, strings, property lists, etc.) and medium-level data access methods (a simple data abstraction layer for FITS files). It also provides table organization and manipulation, keyword/value handling and management, and support for dynamic loading of recipe modules using programs such as EsoRex (ascl:1504.003).
[ascl:1101.008] CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics
We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) solve the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).
[ascl:1111.002] CRBLASTER: A Parallel-Processing Computational Framework for Embarrassingly-Parallel Image-Analysis Algorithms
The development of parallel-processing image-analysis codes is generally a challenging task that requires complicated choreography of interprocessor communications. If, however, the image-analysis algorithm is embarrassingly parallel, then the development of a parallel-processing implementation of that algorithm can be a much easier task to accomplish because, by definition, there is little need for communication between the compute processes. I describe the design, implementation, and performance of a parallel-processing image-analysis application, called CRBLASTER, which does cosmic-ray rejection of CCD (charge-coupled device) images using the embarrassingly-parallel L.A.COSMIC algorithm. CRBLASTER is written in C using the high-performance computing industry standard Message Passing Interface (MPI) library. The code has been designed to be used by research scientists who are familiar with C as a parallel-processing computational framework that enables the easy development of parallel-processing image-analysis programs based on embarrassingly-parallel algorithms. The CRBLASTER source code is freely available at the official application website at the National Optical Astronomy Observatory. Removing cosmic rays from a single 800x800 pixel Hubble Space Telescope WFPC2 image takes 44 seconds with the IRAF script running on a single core of an Apple Mac Pro computer with two 2.8-GHz quad-core Intel Xeon processors. CRBLASTER is 7.4 times faster processing the same image on a single core on the same machine. Processing the same image with CRBLASTER simultaneously on all 8 cores of the same machine takes 0.875 seconds -- which is a speedup factor of 50.3 times faster than the IRAF script. A detailed analysis is presented of the performance of CRBLASTER using between 1 and 57 processors on a low-power Tilera 700-MHz 64-core TILE64 processor.
[ascl:1308.009] CReSyPS: Stellar population synthesis code
CReSyPS (Code Rennais de Synthèse de Populations Stellaires) is a stellar population synthesis code that determines the amount of core overshooting for Magellanic Cloud main-sequence stars.
[ascl:1612.009] CRETE: Comet RadiativE Transfer and Excitation
CRETE (Comet RadiativE Transfer and Excitation) is a one-dimensional water excitation and radiation transfer code for sub-millimeter wavelengths based on the RATRAN code (ascl:0008.002). The code considers rotational transitions of water molecules given a Haser spherically symmetric distribution for the cometary coma and produces FITS image cubes that can be analyzed with tools like MIRIAD (ascl:1106.007). In addition to collisional processes to excite water molecules, the effect of infrared radiation from the Sun is approximated by effective pumping rates for the rotational levels in the ground vibrational state.
[ascl:1708.003] CRISPRED: CRISP imaging spectropolarimeter data reduction pipeline
CRISPRED reduces data from the CRISP imaging spectropolarimeter at the Swedish 1 m Solar Telescope (SST). It performs fitting routines, corrects optical aberrations from atmospheric turbulence as well as from the optics, and compensates for inter-camera misalignments, field-dependent and time-varying instrumental polarization, and spatial variation in the detector gain and in the zero level offset (bias). It has an object-oriented IDL structure with computationally demanding routines performed in C subprograms called as dynamically loadable modules (DLMs).
[ascl:1110.020] CROSS_CMBFAST: ISW-correlation Code
[ascl:1412.013] CRPropa: Numerical tool for the propagation of UHE cosmic rays, gamma-rays and neutrinos
CRPropa computes the observable properties of UHECRs and their secondaries in a variety of models for the sources and propagation of these particles. CRPropa takes into account interactions and deflections of primary UHECRs as well as propagation of secondary electromagnetic cascades and neutrinos. CRPropa makes use of the public code SOPHIA (ascl:1412.014), and the TinyXML, CFITSIO (ascl:1010.001), and CLHEP libraries. A major advantage of CRPropa is its modularity, which allows users to implement their own modules adapted to specific UHECR propagation models.
[ascl:1308.011] CRUSH: Comprehensive Reduction Utility for SHARC-2 (and more...)
CRUSH is an astronomical data reduction/imaging tool for certain imaging cameras, especially at the millimeter, sub-millimeter, and far-infrared wavelengths. It supports the SHARC-2, LABOCA, SABOCA, ASZCA, p-ArTeMiS, PolKa, GISMO, MAKO and SCUBA-2 instruments. The code is written entirely in Java, allowing it to run on virtually any platform. It is normally run from the command-line with several arguments.
[ascl:0104.002] CSENV: A code for the chemistry of CircumStellar ENVelopes
CSENV is a code that computes the chemical abundances for a desired set of species as a function of radius in a stationary, non-clumpy, CircumStellar ENVelope. The chemical species can be atoms, molecules, ions, radicals, molecular ions, and/or their specific quantum states. Collisional ionization or excitation can be incorporated through the proper chemical channels. The chemical species interact with one another and are subject to photo-processes (dissociation of molecules, radicals, and molecular ions as well as ionization of all species). Cosmic ray ionization can be included. Chemical reaction rates are specified with possible activation temperatures and additional power-law dependences. Photo-absorption cross-sections vs. wavelength, with appropriate thresholds, can be specified for each species, while for H2+ a photo-absorption cross-section is provided as a function of wavelength and temperature. The photons originate from both the star and the external interstellar medium. The chemical species are shielded from the photons by circumstellar dust, by other species, and by themselves (self-shielding). Shielding of continuum-absorbing species by these species (self and mutual shielding), line-absorbing species, and dust varies with radial optical depth. The envelope is spherical by default, but can be made bipolar with an opening solid-angle that varies with radius. In the non-spherical case, no provision is made for photons penetrating the envelope from the sides. The envelope is subject to a radial outflow (or wind), constant velocity by default, but the wind velocity can be made to vary with radius. The temperature of the envelope is specified (and thus not computed self-consistently).
[ascl:1307.015] CTI Correction Code
Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective on Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in Java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to the pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
[ascl:1601.005] ctools: Cherenkov Telescope Science Analysis Software
[ascl:1608.008] Cuba: Multidimensional numerical integration library
[ascl:1609.010] CuBANz: Photometric redshift estimator
CuBANz is a photometric redshift estimator code for high redshift galaxies that uses a back propagation neural network along with clustering of the training set, making it very efficient. The training set is divided into several self-learning clusters with galaxies having similar photometric properties and spectroscopic redshifts within a given span. The clustering algorithm uses the color information (e.g., u-g, g-r) rather than the apparent magnitudes at the various photometric bands, as the photometric redshift is more sensitive to the flux differences between bands than to their actual values. The clustering method enables accurate determination of the redshifts. CuBANz considers uncertainty in the photometric measurements as well as uncertainty in the neural network training. The code is written in C.
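CuBANz itself is written in C with its own self-learning clusters; purely as a hedged sketch of the cluster-then-regress idea, one could emulate it with scikit-learn (all parameter values below are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# toy stand-ins: (N, 5) u,g,r,i,z magnitudes and spectroscopic redshifts
mags = np.random.rand(1000, 5)
z_spec = np.random.rand(1000)
colors = mags[:, :-1] - mags[:, 1:]   # u-g, g-r, r-i, i-z (colors, not raw magnitudes)

km = KMeans(n_clusters=10, n_init=10).fit(colors)
nets = {}
for k in range(10):                   # one back-propagation network per cluster
    sel = km.labels_ == k
    nets[k] = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000).fit(
        colors[sel], z_spec[sel])

# prediction: route each galaxy to the network of its nearest cluster
test = colors[:5]
z_phot = np.array([nets[l].predict(c[None, :])[0]
                   for l, c in zip(km.predict(test), test)])
```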
[ascl:1512.010] CubeIndexer: Indexer for regions of interest in data cubes
CubeIndexer indexes regions of interest (ROIs) in data cubes, reducing the necessary storage space. The software can process data cubes containing megabytes of data in fractions of a second without human supervision, thus allowing it to be incorporated into a production line for displaying objects in a virtual observatory. The software forms part of the Chilean Virtual Observatory (ChiVO) and provides the capability of content-based searches on data cubes to the astronomical community.
[ascl:1208.018] CUBEP3M: High performance P3M N-body code
CUBEP3M is a high performance cosmological N-body code which has many utilities and extensions, including a runtime halo finder, a non-Gaussian initial conditions generator, tuneable accuracy, and a system of unique particle identification. CUBEP3M is fast, has a memory footprint up to three times lower than other widely used N-body codes, and has been run on up to 20,000 cores, achieving close to ideal weak scaling even at this problem size. It is well suited to, and has already been used for, a broad range of science applications that require either large samples of non-linear realizations or very large dark matter N-body simulations, including cosmological reionization, baryonic acoustic oscillations, weak lensing, and non-Gaussian statistics.
[ascl:1111.007] CUBISM: CUbe Builder for IRS Spectra Maps
CUBISM, written in IDL, constructs spectral cubes, maps, and arbitrary aperture 1D spectral extractions from sets of mapping mode spectra taken with Spitzer's IRS spectrograph. CUBISM is optimized for non-sparse maps of extended objects, e.g. the nearby galaxy sample of SINGS, but can be used with data from any spectral mapping AOR (primarily validated for maps which are designed as suggested by the mapping HOWTO).
[ascl:1109.013] CULSP: Fast Calculation of the Lomb-Scargle Periodogram Using Graphics Processing Units
I introduce a new code for fast calculation of the Lomb-Scargle periodogram that leverages the computing power of graphics processing units (GPUs). After establishing a background to the newly emergent field of GPU computing, I discuss the code design and narrate key parts of its source. Benchmarking calculations indicate no significant differences in accuracy compared to an equivalent CPU-based code. However, the differences in performance are pronounced: running on a low-end GPU, the code can match 8 CPU cores, and on a high-end GPU it is faster by a factor approaching thirty. Applications of the code include analysis of long photometric time series obtained by ongoing satellite missions and upcoming ground-based monitoring facilities, and Monte-Carlo simulation of periodogram statistical properties.
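For reference, the quantity being computed is the standard Lomb-Scargle periodogram; a minimal (unnormalized) NumPy version is sketched below. A GPU code such as CULSP evaluates the same per-frequency sums, typically one frequency per thread; this sketch is not CULSP's source.

```python
import numpy as np

def lomb_scargle(t, x, freqs):
    """Textbook Lomb-Scargle periodogram of unevenly sampled data."""
    x = x - x.mean()
    P = np.empty_like(freqs)
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # the phase offset tau makes the sine and cosine terms orthogonal
        tau = np.arctan2(np.sum(np.sin(2*w*t)), np.sum(np.cos(2*w*t))) / (2*w)
        c, s = np.cos(w*(t - tau)), np.sin(w*(t - tau))
        P[i] = 0.5 * ((x @ c)**2 / (c @ c) + (x @ s)**2 / (s @ s))
    return P
```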
[ascl:1311.007] CUPID: Clump Identification and Analysis Package
[ascl:1311.008] CUPID: Customizable User Pipeline for IRS Data
Written in C, the Customizable User Pipeline for IRS Data (CUPID) allows users to run the Spitzer IRS Pipelines to re-create Basic Calibrated Data and extract calibrated spectra from the archived raw files. CUPID provides full access to all the parameters of the BCD, COADD, BKSUB, BKSUBX, and COADDX pipelines, as well as the opportunity for users to provide their own calibration files (e.g., flats or darks). CUPID is available for Mac, Linux, and Solaris operating systems.
[ascl:1405.015] CURSA: Catalog and Table Manipulation Applications
The CURSA package manipulates astronomical catalogs and similar tabular datasets. It provides facilities for browsing or examining catalogs; selecting subsets from a catalog; sorting and copying catalogs; pairing two catalogs; converting catalog coordinates between some celestial coordinate systems; and plotting finding charts and photometric calibration. It can also extract subsets from a catalog in a format suitable for plotting using other Starlink packages such as PONGO. CURSA can access catalogs held in the popular FITS table format, the Tab-Separated Table (TST) format or the Small Text List (STL) format. Catalogs in the STL and TST formats are simple ASCII text files. CURSA also includes some facilities for accessing remote on-line catalogs via the Internet. It is part of the Starlink software collection (ascl:1110.012).
[ascl:1505.016] CUTE: Correlation Utilities and Two-point Estimation
CUTE (Correlation Utilities and Two-point Estimation) extracts any two-point statistic from enormous datasets with hundreds of millions of objects, such as large galaxy surveys. The computational time grows with the square of the number of objects to be correlated; technology provides multiple means to massively parallelize this problem, and CUTE is specifically designed for this kind of calculation. Two implementations are provided: one for execution on shared-memory machines using OpenMP and one that runs on graphical processing units (GPUs) using CUDA.
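The O(N^2) kernel at the heart of any such code is a pair-count histogram; a deliberately naive Python sketch (not CUTE's OpenMP/CUDA source) of what gets parallelized:

```python
import numpy as np

def pair_counts(pos, edges):
    """Naive O(N^2) pair counting: histogram of all pairwise separations.
    pos: (N, 3) Cartesian positions; edges: separation bin edges."""
    counts = np.zeros(len(edges) - 1)
    for i in range(len(pos)):
        d = np.linalg.norm(pos[i+1:] - pos[i], axis=1)  # distances to later points only
        counts += np.histogram(d, bins=edges)[0]
    return counts

# a two-point correlation estimate then combines data-data and random-random
# counts, e.g. the natural estimator xi = DD/RR - 1
```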
[ascl:1708.018] CUTEX: CUrvature Thresholding EXtractor
CuTEx analyzes images in the infrared bands and extracts sources from complex backgrounds, particularly star-forming regions that offer the challenges of crowding, a highly spatially variable background, and sources with non-PSF profiles such as protostars in their accreting phase. The code is composed of two main algorithms, the first for source detection and the second for flux extraction. The code was originally written in IDL and has been exported to the license-free GDL language. CuTEx can also be used in other bands or for scientific cases different from its native one.
This software is also available as an on-line tool from the Multi-Mission Interactive Archive web pages dedicated to the Herschel Observatory.
[ascl:1606.003] Cygrid: Cython-powered convolution-based gridding module for Python
The Python module Cygrid grids (resamples) data onto any collection of spherical target coordinates, although its typical application involves FITS maps or data cubes. The module supports the FITS world coordinate system (WCS) standard; its underlying algorithm is based on the convolution of the original samples with a 2D Gaussian kernel. A lookup table scheme allows parallelization of the code and is combined with the HEALPix tessellation of the sphere for fast neighbor searches. Cygrid's runtime scales between O(n) and O(n log n), with n being the number of input samples.
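The essence of convolution-based gridding fits in a few lines; the brute-force sketch below visits every target pixel per sample, which is exactly the cost Cygrid's HEALPix-based neighbor lookup avoids (an illustration, not Cygrid's API):

```python
import numpy as np

def grid_samples(x, y, values, grid_x, grid_y, sigma):
    """Spread each sample onto the grid with a 2D Gaussian kernel,
    then divide out the accumulated weights."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    data = np.zeros(gx.shape)
    weights = np.zeros(gx.shape)
    for xi, yi, vi in zip(x, y, values):
        w = np.exp(-((gx - xi)**2 + (gy - yi)**2) / (2.0 * sigma**2))
        data += w * vi
        weights += w
    return data / np.where(weights > 0, weights, 1.0)
```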
[ascl:1504.018] D3PO: Denoising, Deconvolving, and Decomposing Photon Observations
D3PO (Denoising, Deconvolving, and Decomposing Photon Observations) addresses the inference problem of denoising, deconvolving, and decomposing photon observations. Its primary goal is the simultaneous but individual reconstruction of the diffuse and point-like photon flux given a single photon count image, where the fluxes are superimposed. A hierarchical Bayesian parameter model is used to discriminate between morphologically different signal components, yielding a diffuse and a point-like signal estimate for the photon flux components.
[ascl:1612.007] dacapo_calibration: Photometric calibration code
dacapo_calibration implements the DaCapo algorithm used in the Planck/LFI 2015 data release for photometric calibration. The code takes as input a set of TODs and calibrates them using the CMB dipole signal. DaCapo is a variant of the well-known family of destriping algorithms for map-making.
[ascl:1804.005] DaCHS: Data Center Helper Suite
[ascl:1507.015] DALI: Derivative Approximation for LIkelihoods
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
[ascl:1803.001] DaMaSCUS-CRUST: Dark Matter Simulation Code for Underground Scatterings - Crust Edition
DaMaSCUS-CRUST determines the critical cross-section for strongly interacting DM for various direct detection experiments systematically and precisely using Monte Carlo simulations of DM trajectories inside the Earth's crust, atmosphere, or any kind of shielding. Above a critical dark matter-nucleus scattering cross section, any terrestrial direct detection experiment loses sensitivity to dark matter, since the Earth's crust, atmosphere, and potential shielding layers start to block off the dark matter particles. This critical cross section is commonly determined by describing the average energy loss of the dark matter particles analytically. However, this treatment overestimates the stopping power of the Earth's crust; the obtained bounds should therefore be considered conservative. DaMaSCUS-CRUST is a modified version of DaMaSCUS (ascl:1706.003) that accounts for shielding effects and returns a precise exclusion band.
[ascl:1706.003] DaMaSCUS: Dark Matter Simulation Code for Underground Scatterings
DaMaSCUS calculates the density and velocity distribution of dark matter (DM) at any detector of given depth and latitude to provide dark matter particle trajectories inside the Earth. Provided a strong enough DM-matter interaction, the particles scatter on terrestrial atoms and get decelerated and deflected. The resulting local modifications of the DM velocity distribution and number density can have important consequences for direct detection experiments, especially for light DM, and lead to signatures such as diurnal modulations depending on the experiment's location on Earth. The code involves both the Monte Carlo simulation of particle trajectories and generation of data as well as the data analysis consisting of non-parametric density estimation of the local velocity distribution functions and computation of direct detection event rates.
[ascl:1412.004] DAMIT: Database of Asteroid Models from Inversion Techniques
DAMIT (Database of Asteroid Models from Inversion Techniques) is a database of three-dimensional models of asteroids computed using inversion techniques; it provides access to reliable and up-to-date physical models of asteroids, i.e., their shapes, rotation periods, and spin axis directions. Models from DAMIT can be used for further detailed studies of individual objects as well as for statistical studies of the whole set. The source codes for lightcurve inversion routines together with brief manuals, sample lightcurves, and the code for the direct problem are available for download.
[ascl:1807.023] DAMOCLES: Monte Carlo line radiative transfer code
The Monte Carlo code DAMOCLES models the effects of dust, composed of any combination of species and grain size distributions, on optical and NIR emission lines emitted from the expanding ejecta of a late-time (> 1 yr) supernova. The emissivity and dust distributions follow smooth radial power-law distributions; any arbitrary distribution can be specified by providing the appropriate grid. DAMOCLES treats a variety of clumping structures as specified by a clumped dust mass fraction, volume filling factor, clump size and clump power-law distribution, and the emissivity distribution may also initially be clumped. The code has a large number of variable parameters ranging from 5 dimensions in the simplest models to > 20 in the most complex cases.
[ascl:1709.005] DanIDL: IDL solutions for science and astronomy
DanIDL provides IDL functions and routines for many standard astronomy needs, such as searching for matching points between two coordinate lists of two-dimensional points where each list corresponds to a different coordinate space, estimating the full-width half-maximum (FWHM) and ellipticity of the PSF of an image, calculating pixel variances for a set of calibrated image data, and fitting a 3-parameter plane model to image data. The library also supplies astrometry, general image processing, and general scientific applications.
[ascl:1104.011] DAOPHOT: Crowded-field Stellar Photometry Package
The DAOPHOT program exploits the capability of photometrically linear image detectors to perform stellar photometry in crowded fields. Raw CCD images are prepared prior to analysis, and following the obtaining of an initial star list with the FIND program, synthetic aperture photometry is performed on the detected objects with the PHOT routine. A local sky brightness and a magnitude are computed for each star in each of the specified stellar apertures, and for crowded fields, the empirical point-spread function must then be obtained for each data frame. The GROUP routine divides the star list for a given frame into optimum subgroups, and then the NSTAR routine is used to obtain photometry for all the stars in the frame by means of least-squares profile fits.
[ascl:1706.004] Dark Sage: Semi-analytic model of galaxy evolution
DARK SAGE is a semi-analytic model of galaxy formation that focuses on detailing the structure and evolution of galaxies' discs. The code-base, written in C, is an extension of SAGE (ascl:1601.006) and maintains the modularity of SAGE. DARK SAGE runs on any N-body simulation with trees organized in a supported format and containing a minimum set of basic halo properties.
[ascl:1110.002] DarkSUSY: Supersymmetric Dark Matter Calculations
[ascl:1402.027] Darth Fader: Galaxy catalog cleaning method for redshift estimation
Darth Fader is a wavelet-based method for extracting spectral features from very noisy spectra. Spectra for which a reliable redshift cannot be measured are identified and removed from the input data set automatically, resulting in a clean catalogue that gives an extremely low rate of catastrophic failures even when the spectra have a very low S/N. This technique may offer a significant boost in the number of faint galaxies with accurately determined redshifts.
[ascl:1405.011] DATACUBE: A datacube manipulation package
DATACUBE is a command-line package for manipulating and visualizing data cubes. It was designed for integral field spectroscopy but has been extended to be a generic data cube tool, used in particular for sub-millimeter data cubes from the James Clerk Maxwell Telescope. It is part of the Starlink software collection (ascl:1110.012).
[ascl:1709.006] DCMDN: Deep Convolutional Mixture Density Network
Deep Convolutional Mixture Density Network (DCMDN) estimates probabilistic photometric redshifts directly from multi-band imaging data by combining a version of a deep convolutional network with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) are applied as performance criteria. DCMDN is able to predict redshift PDFs independently of the type of source, e.g. galaxies, quasars or stars, and renders pre-classification of objects and feature extraction unnecessary; the method is extremely general and allows the solving of any kind of probabilistic regression problem based on imaging data, such as estimating metallicity or star formation rate in galaxies.
[ascl:1207.006] dcr: Cosmic Ray Removal
[ascl:1212.012] ddisk: Debris disk time-evolution
ddisk is an IDL script that calculates the time-evolution of a circumstellar debris disk. It calculates dust abundances over time for a debris disk produced by a planetesimal disk that is grinding away due to collisional erosion.
[ascl:1810.020] DDS: Debris Disk Radiative Transfer Simulator
[ascl:0008.001] DDSCAT: The discrete dipole approximation for scattering and absorption of light by irregular particles
DDSCAT is a freely available software package which applies the "discrete dipole approximation" (DDA) to calculate scattering and absorption of electromagnetic waves by targets with arbitrary geometries and complex refractive index. The DDA approximates the target by an array of polarizable points. DDSCAT.5a requires that these polarizable points be located on a cubic lattice. DDSCAT allows accurate calculations of electromagnetic scattering from targets with "size parameters" 2 pi a/lambda < 15 provided the refractive index m is not large compared to unity (|m-1| < 1). The DDSCAT package is written in Fortran and is highly portable. The program supports calculations for a variety of target geometries (e.g., ellipsoids, regular tetrahedra, rectangular solids, finite cylinders, hexagonal prisms, etc.). Target materials may be both inhomogeneous and anisotropic. It is straightforward for the user to import arbitrary target geometries into the code, and relatively straightforward to add new target generation capability to the package. DDSCAT automatically calculates total cross sections for absorption and scattering and selected elements of the Mueller scattering intensity matrix for specified orientation of the target relative to the incident wave, and for specified scattering directions. This User Guide explains how to use DDSCAT to carry out EM scattering calculations. CPU and memory requirements are described.
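The validity criteria quoted above are easy to check for a given target; a small worked example (the values are arbitrary):

```python
import numpy as np

# DDA validity: size parameter x = 2*pi*a/lambda < 15 and |m - 1| < 1
a_eff = 0.5                 # effective radius [micron]
wavelength = 0.55           # [micron]
m = 1.33 + 0.01j            # complex refractive index
x = 2 * np.pi * a_eff / wavelength
print(f"x = {x:.2f}; DDA applicable: {x < 15 and abs(m - 1) < 1}")
```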
[ascl:1510.004] DEBiL: Detached Eclipsing Binary Light curve fitter
DEBiL rapidly fits a large number of light curves to a simple model. It is the central component of a pipeline for systematically identifying and analyzing eclipsing binaries within a large dataset of light curves; the results of DEBiL can be used to flag light curves of interest for follow-up analysis.
[ascl:1501.005] DECA: Decomposition of images of galaxies
DECA performs photometric analysis of images of disk and elliptical galaxies having a regular structure. It is written in Python and combines the capabilities of several widely used packages for astronomical data processing such as IRAF, SExtractor, and the GALFIT code to perform two-dimensional decomposition of galaxy images into several photometric components (bulge+disk). DECA can be applied to large samples of galaxies with different orientations with respect to the line of sight (including edge-on galaxies) and requires minimum human intervention.
[ascl:1801.006] DecouplingModes: Passive modes amplitudes
DecouplingModes calculates the amplitude of the passive modes, which requires solving the Einstein equations on superhorizon scales sourced by the anisotropic stress from the magnetic fields (prior to neutrino decoupling), and the magnetic and neutrino stress (after decoupling). The code is available as a Mathematica notebook.
[ascl:1603.015] Dedalus: Flexible framework for spectrally solving differential equations
Dedalus solves differential equations using spectral methods. It implements flexible algorithms to solve initial-value, boundary-value, and eigenvalue problems with broad ranges of custom equations and spectral domains. Its primary features include symbolic equation entry, multidimensional parallelization, implicit-explicit timestepping, and flexible analysis with HDF5. The code is written primarily in Python and features an easy-to-use interface. The numerical algorithm produces highly sparse systems for many equations which are efficiently solved using compiled libraries and MPI.
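As a reminder of why spectral methods pay off, the sketch below solves a periodic Poisson problem u'' = f by diagonalizing the derivative in Fourier space; this is generic NumPy, not Dedalus's actual symbolic interface:

```python
import numpy as np

# solve u'' = f on a periodic domain by inverting -k^2 in Fourier space
N, L = 64, 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
f = np.sin(3 * x)                        # right-hand side
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)
fk = np.fft.fft(f)
uk = np.zeros_like(fk)
uk[1:] = -fk[1:] / k[1:]**2              # k = 0 mode fixed by the gauge choice
u = np.fft.ifft(uk).real
assert np.allclose(u, -np.sin(3 * x) / 9)  # spectral accuracy for smooth f
```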
[ascl:1405.004] Defringeflat: Fringe pattern removal
[ascl:1011.012] DEFROST: A New Code for Simulating Preheating after Inflation
At the end of inflation, dynamical instability can rapidly deposit the energy of the homogeneous cold inflaton into excitations of other fields. This process, known as preheating, is rather violent, inhomogeneous and non-linear, and has to be studied numerically. This paper presents a new code for simulating scalar field dynamics in an expanding universe written for that purpose. Compared to available alternatives, it significantly improves both the speed and the accuracy of calculations, and is fully instrumented for 3D visualization. We reproduce previously published results on preheating in simple chaotic inflation models, and further investigate non-linear dynamics of the inflaton decay. Surprisingly, we find that the fields do not want to thermalize quite the way one would think. Instead of directly reaching equilibrium, the evolution appears to be stuck in a rather simple but quite inhomogeneous state. In particular, the one-point distribution function of total energy density appears to be universal among various two-field preheating models, and is exceedingly well described by a lognormal distribution. It is tempting to attribute this state to scalar field turbulence.
[ascl:1602.012] DELightcurveSimulation: Light curve simulation code
[ascl:1904.009] deproject: Deprojection of two-dimensional annular X-ray spectra
Deproject extends Sherpa (ascl:1107.005) to facilitate deprojection of two-dimensional annular X-ray spectra to recover the three-dimensional source properties. For typical thermal models, this includes the radial temperature and density profiles. This basic method is used for X-ray cluster analysis and is the basis for the XSPEC (ascl:9910.005) model projct. The deproject module is written in Python and is straightforward to use and understand. The basic physical assumption of deproject is that the extended source emissivity is constant and optically thin within spherical shells whose radii correspond to the annuli used to extract the spectra. Given this assumption, one constructs a model for each annular spectrum that is a linear volume-weighted combination of shell models.
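The "linear volume-weighted combination" has a closed-form geometric kernel: the volume of each spherical shell seen through each cylindrical annulus. A sketch of that standard onion-peeling geometry (our own illustration, not code from deproject):

```python
import numpy as np

def shell_in_annulus_volume(r_in, r_out, R_in, R_out):
    """Volume of the spherical shell [r_in, r_out] projected into the
    annulus [R_in, R_out]; these volumes form the mixing matrix that
    maps shell models to annular spectra."""
    def v(r, R):
        # volume of a sphere of radius r inside a coaxial cylinder of radius R
        h2 = np.maximum(r**2 - R**2, 0.0)
        return 4.0 * np.pi / 3.0 * (r**3 - h2**1.5)
    return v(r_out, R_out) - v(r_out, R_in) - v(r_in, R_out) + v(r_in, R_in)
```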
[ascl:1804.011] DESCQA: Synthetic Sky Catalog Validation Framework
[ascl:1304.007] DESPOTIC: Derive the Energetics and SPectra of Optically Thick Interstellar Clouds
DESPOTIC (Derive the Energetics and SPectra of Optically Thick Interstellar Clouds), written in Python, represents optically thick interstellar clouds using a one-zone model and calculates line luminosities, line cooling rates, and in restricted cases line profiles using an escape probability formalism. DESPOTIC calculates clouds' equilibrium gas and dust temperatures and their time-dependent thermal evolution. The code allows rapid and interactive calculation of clouds' characteristic temperatures, identification of their dominant heating and cooling mechanisms, and prediction of their observable spectra across a wide range of interstellar environments.
[ascl:1907.008] Dewarp: Distortion removal and on-sky orientation solution for LBTI detectors
Dewarp constructs pipelines to remove distortion from a detector and find the orientation with true North. It was originally written for the LBTI LMIRcam detector, but is generalizable to any project with reference sources and/or an astrometric field paired with a machine-readable file of astrometric target locations.
[ascl:1402.022] DexM: Semi-numerical simulations for very large scales
DexM (Deus ex Machina) efficiently generates density, halo, and ionization fields on very large scales and with a large dynamic range through seminumeric simulation. These properties are essential for reionization studies, especially those involving rare, massive QSOs, since one must be able to statistically capture the ionization field. DexM can also generate ionization fields directly from the evolved density field to account for the ionizing contribution of small halos. Semi-numerical simulations use more approximate physics than numerical simulations, but independently generate 3D cosmological realizations. DexM is portable and fast, and allows for explorations of wide swaths of astrophysical parameter space and an unprecedented dynamic range.
[ascl:1112.015] Dexter: Data Extractor for scanned graphs
[ascl:1904.017] dfitspy: A dfits/fitsort implementation in Python
dfitspy searches and displays metadata contained in FITS files. Written in Python, it displays the results of a metadata search and is able to grep certain values of keywords inside large samples of files in the terminal. dfitspy can be used directly with the command line interface and can also be imported as a python module into other python code or the python interpreter.
[ascl:1805.002] dftools: Distribution function fitting
[ascl:1410.001] DIAMONDS: high-DImensional And multi-MOdal NesteD Sampling
DIAMONDS (high-DImensional And multi-MOdal NesteD Sampling) provides Bayesian parameter estimation and model comparison by means of the nested sampling Monte Carlo (NSMC) algorithm, an efficient and powerful method very suitable for high-dimensional and multi-modal problems; it can be used for any application involving Bayesian parameter estimation and/or model selection in general. Developed in C++11, DIAMONDS is structured in classes for flexibility and configurability. Any new model, likelihood and prior PDFs can be defined and implemented upon a basic template.
[ascl:1607.002] DICE: Disk Initial Conditions Environment
[ascl:1801.010] DICE/ColDICE: 6D collisionless phase space hydrodynamics using a Lagrangian tessellation
DICE is a C++ template library designed to solve collisionless fluid dynamics in 6D phase space using massively parallel supercomputers via a hybrid OpenMP/MPI parallelization. ColDICE, based on DICE, implements a cosmological and physical Vlasov-Poisson solver for cold systems such as cold dark matter (CDM) dynamics.
[ascl:1704.013] Difference-smoothing: Measuring time delay from light curves
The Difference-smoothing MATLAB code measures the time delay from the light curves of images of a gravitationally lensed quasar. It uses a smoothing timescale free parameter, generates more realistic synthetic light curves to estimate the time delay uncertainty, and uses a χ² plot to assess the reliability of a time delay measurement as well as to identify instances of catastrophic failure of the time delay estimator. A systematic bias in the measurement of time delays for some light curves can be eliminated by applying a correction to each measured time delay.
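Schematically, the method scans trial delays, shifts one light curve, and asks how much structure survives in the smoothed difference. A heavily simplified Python sketch (the real MATLAB code also fits a flux ratio and calibrates errors on synthetic curves):

```python
import numpy as np

def delay_residual(tA, fA, tB, fB, delay, smooth=50.0):
    """Residual for one trial delay: shift curve B, difference against A,
    smooth the difference with a Gaussian kernel, and measure what remains."""
    fB_shifted = np.interp(tA, tB + delay, fB)
    diff = fA - fB_shifted
    w = np.exp(-0.5 * ((tA[:, None] - tA[None, :]) / smooth)**2)
    smoothed = (w @ diff) / w.sum(axis=1)
    return np.sum((diff - smoothed)**2)

# grid search over trial delays:
# delays = np.linspace(-100.0, 100.0, 201)
# best = delays[np.argmin([delay_residual(tA, fA, tB, fB, d) for d in delays])]
```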
[ascl:1512.012] DiffuseModel: Modeling the diffuse ultraviolet background
DiffuseModel calculates the scattered radiation from dust scattering in the Milky Way based on stars from the Hipparcos catalog. It uses a Monte Carlo method to implement multiple scattering and assumes a user-supplied grid for the dust distribution. The output is a FITS file with the diffuse light over the Galaxy. It is intended for use in the UV (900 - 3000 A) but may be modified for use at other wavelengths and in other galaxies.
[ascl:1304.008] Diffusion.f: Diffusion of elements in stars
Diffusion.f is an exportable subroutine to calculate the diffusion of elements in stars. The routine solves the Burgers equations exactly and can include any number of elements as variables. The code has been used successfully by a number of different groups; applications include diffusion in the Sun and diffusion in globular cluster stars. There are many other possible applications to main sequence and to evolved stars. The associated README file explains how to use the subroutine.
[ascl:1103.001] Difmap: Synthesis Imaging of Visibility Data
Difmap is a program developed for synthesis imaging of visibility data from interferometer arrays of radio telescopes world-wide. Its prime advantages over traditional packages are its emphasis on interactive processing, speed, and the use of Difference mapping techniques.
[ascl:1904.023] digest2: NEO binary classifier
digest2 classifies Near-Earth Object (NEO) candidates by providing a score, D2, that represents a pseudo-probability that a tracklet belongs to a given solar system orbit type. The code accurately and precisely distinguishes NEOs from non-NEOs, thus helping to identify those to be prioritized for follow-up observation. This fast, short-arc orbit classifier for small solar system bodies is built upon the Pangloss code developed by Robert McNaught and further developed by Carl Hergenrother and Tim Spahr, and upon Robert Jedicke's 223.f code.
[ascl:1010.031] DimReduce: Nonlinear Dimensionality Reduction of Very Large Datasets with Locally Linear Embedding (LLE) and its Variants
DimReduce is a C++ package for performing nonlinear dimensionality reduction of very large datasets with Locally Linear Embedding (LLE) and its variants. DimReduce is built for speed, using the optimized linear algebra packages BLAS, LAPACK, and ARPACK. Because of the need to store very large matrices (1000 by 10000 for our SDSS LLE work), DimReduce is designed to use binary FITS files as inputs and outputs, which makes using the code somewhat more cumbersome. For smaller-scale LLE, where speed of computation is not as much of an issue, the Modular Data Processing toolkit may be a better choice. It is a Python toolkit with some LLE functionality contributed by VanderPlas.
This code has been rewritten and included in scikit-learn and an improved version is included in
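For the scikit-learn version mentioned above, a minimal usage example (the parameter values are illustrative):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

X = np.random.rand(500, 10)   # stand-in for high-dimensional spectra/photometry
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="standard")
X_2d = lle.fit_transform(X)   # (500, 2) nonlinear embedding
```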
[ascl:1405.016] DIPSO: Spectrum analysis code
[ascl:1806.015] DirectDM-mma: Dark matter direct detection
The Mathematica code DirectDM takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Python implementation of DirectDM is also available (ascl:1806.016).
[ascl:1806.016] DirectDM-py: Dark matter direct detection
DirectDM, written in Python, takes the Wilson coefficients of relativistic operators that couple DM to the SM quarks, leptons, and gauge bosons and matches them onto a non-relativistic Galilean invariant EFT in order to calculate the direct detection scattering rates. A Mathematica implementation of DirectDM is also available (ascl:1806.015).
[ascl:1102.021] DIRT: Dust InfraRed Toolbox
DIRT is a Java applet for modelling astrophysical processes in circumstellar dust shells around young and evolved stars. With DIRT, you can select and display over 500,000 pre-run model spectral energy distributions (SEDs), find the best-fit model to your data set, and account for beam size in model fitting. DIRT also allows you to manipulate data and models with an interactive viewer, display gas and dust density and temperature profiles, and display model intensity profiles at various wavelengths.
[ascl:1209.011] DiskFit: Modeling Asymmetries in Disk Galaxies
[ascl:1603.011] DiskJockey: Protoplanetary disk modeling for dynamical mass derivation
DiskJockey derives dynamical masses for T Tauri stars using the Keplerian motion of their circumstellar disks, applied to radio interferometric data from the Atacama Large Millimeter Array (ALMA) and the Submillimeter Array (SMA). The package relies on RADMC-3D (ascl:1202.015) to perform the radiative transfer of the disk model. DiskJockey is designed to work in a parallel environment where the calculations for each frequency channel can be distributed to independent processors. Due to the computationally expensive nature of the radiative synthesis, fitting sizable datasets (e.g., SMA and ALMA) will require a substantial number of CPU cores to explore a posterior distribution in a reasonable timeframe.
[ascl:1108.015] DISKSTRUCT: A Simple 1+1-D Disk Structure Code
DISKSTRUCT is a simple 1+1-D code for modeling protoplanetary disks. It is not based on multidimensional radiative transfer! Instead, a flaring-angle recipe is used to compute the irradiation of the disk, while the disk vertical structure at each cylindrical radius is computed in a 1-D fashion; the models computed with this code are therefore approximate. Moreover, this model cannot deal with the dust inner rim.
In spite of these simplifications and drawbacks, the code can still be very useful for disk studies, for the following reasons:
• It allows the disk structure to be studied in a 1-D vertical fashion (one radial cylinder at a time). For understanding the structure of disks, and also for using it as a basis of other models, this can be a great advantage.
• For very optically thick disks this code is likely to be much faster than the RADMC full disk model.
• Viscous internal heating of the disk is implemented and converges quickly, whereas the RADMC code still has difficulty dealing with high optical depth combined with viscously generated internal heat.
[ascl:1708.006] DISORT: DIScrete Ordinate Radiative Transfer
[ascl:1302.015] DisPerSE: Discrete Persistent Structures Extractor
DisPerSE is open source software for the identification of persistent topological features such as peaks, voids, walls, and in particular filamentary structures within noisy sampled distributions in 2D and 3D. Using DisPerSE, structure identification can be achieved through the computation of the discrete Morse-Smale complex. The software can deal directly with noisy datasets via the concept of persistence (a measure of the robustness of topological features). Although developed for the study of the properties of filamentary structures in the cosmic web of the galaxy distribution over large scales in the Universe, the present version is quite versatile and should be useful for any application where a robust structure identification is required, such as segmentation or studying the topology of sampled functions (for example, computing persistent Betti numbers). Currently, it can work indifferently on many kinds of cell complexes (such as structured and unstructured grids, 2D manifolds embedded within a 3D space, discrete point samples using Delaunay tessellations, and HEALPix tessellations of the sphere). The only constraint is that the distribution must be defined over a manifold, possibly with boundaries.
[submitted] DM phase: A novel algorithm for correcting dispersion of radio signals
Radio waves propagating in space are subject to frequency-dependent delay due to interactions with cold free electrons, which gives coherent radio emissions a unique structure known as dispersion. The study of impulsive radio signals from astronomical sources, such as those emitted by pulsars and fast radio bursts (FRBs), requires proper corrections for this effect. Moreover, the ionized medium itself can be characterized by sensitive measurements of this dispersion.
Signal dispersion is proportional to the integrated column density of free electrons along the line of sight, a quantity known as the dispersion measure (DM), and inversely proportional to the square of the observing frequency. Traditional methods search for the best DM value of a source by maximizing the signal-to-noise ratio (S/N) of the detected signal. While sensitive and efficient algorithms have been designed for this purpose, they are affected by two limitations. Firstly, they implicitly assume broadband emission across the entire observing frequency bandwidth. While this is normally true for pulsars, some FRBs have been observed to have complex spectra for which these algorithms return incorrect DM values. Secondly, these traditional algorithms are highly sensitive to large-amplitude events such as large noise spikes and radio interference. In order to overcome these limitations, we developed a new algorithm that maximizes the coherent power of the signal instead of its intensity. Since the structure of the signal is coherent at different frequencies, this method is relatively insensitive to complex spectro-temporal shapes of the pulses. In addition, this method is more robust to noise and interference because these normally have incoherent structures and the amplitude information in each frequency channel is discarded.
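Concretely, the delay obeys Δt = k_DM · DM / f², with k_DM ≈ 4.1488 × 10³ s MHz² pc⁻¹ cm³; a worked example of the dispersive sweep across an L-band observation:

```python
import numpy as np

K_DM = 4.1488e3  # s MHz^2 pc^-1 cm^3, the standard dispersion constant

def dispersion_delay(dm, f_mhz):
    """Delay (s) relative to infinite frequency for dispersion measure dm."""
    return K_DM * dm / f_mhz**2

dm = 56.7                               # pc cm^-3 (illustrative value)
freqs = np.linspace(1200.0, 1600.0, 5)  # MHz
sweep = dispersion_delay(dm, freqs) - dispersion_delay(dm, freqs[-1])
print(sweep)  # extra delay of each channel relative to the top of the band
```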
[ascl:1705.002] DMATIS: Dark Matter ATtenuation Importance Sampling
DMATIS (Dark Matter ATtenuation Importance Sampling) calculates the trajectories of DM particles that propagate in the Earth's crust and the lead shield to reach the DAMIC detector using an importance sampling Monte-Carlo simulation. A detailed Monte-Carlo simulation avoids the deficiencies of the SGED/KS method that uses a mean energy loss description to calculate the lower bound on the DM-proton cross section. The code implementing the importance sampling technique makes the brute-force Monte-Carlo simulation of moderately strongly interacting DM with nucleons computationally feasible. DMATIS is written in Python 3 and Mathematica.
[ascl:1506.002] dmdd: Dark matter direct detection
The dmdd package enables simple simulation and Bayesian posterior analysis of recoil-event data from dark-matter direct-detection experiments under a wide variety of scattering theories. It enables calculation of the nuclear-recoil rates for a wide range of non-relativistic and relativistic scattering operators, including non-standard momentum-, velocity-, and spin-dependent rates. It also accounts for the correct nuclear response functions for each scattering operator and takes into account the natural abundances of isotopes for a variety of experimental target elements.
[ascl:1010.029] DNEST: Diffusive Nested Sampling
[ascl:1604.007] DNest3: Diffusive Nested Sampling
DNest3 is a C++ implementation of Diffusive Nested Sampling (ascl:1010.029), a Markov Chain Monte Carlo (MCMC) algorithm for Bayesian Inference and Statistical Mechanics. Relative to older DNest versions, DNest3 has improved performance (in terms of the sampling overhead, likelihood evaluations still dominate in general) and is cleaner code: implementing new models should be easier than it was before. In addition, DNest3 is multi-threaded, so one can run multiple MCMC walkers at the same time, and the results will be combined together.
[ascl:1608.013] DOLPHOT: Stellar photometry
[ascl:1709.004] DOOp: DAOSPEC Output Optimizer pipeline
The DAOSPEC Output Optimizer pipeline (DOOp) runs efficient and convenient equivalent width measurements in batches of hundreds of spectra. It uses a series of BASH scripts to work as a wrapper for the FORTRAN code DAOSPEC (ascl:1011.002) and uses IRAF (ascl:9911.002) to automatically fix some of the parameters that are usually set by hand when using DAOSPEC. This allows batch-processing of quantities of spectra that would be impossible to deal with by hand. DOOp was originally built for the large quantity of UVES and GIRAFFE spectra produced by the Gaia-ESO Survey, but just like DAOSPEC, it can be used on any high resolution and high signal-to-noise ratio spectrum binned on a linear wavelength scale.
[ascl:1206.011] Double Eclipsing Binary Fitting
[ascl:1504.012] DPI: Symplectic mapping for binary star systems for the Mercury software package
DPI is a FORTRAN77 library that supplies the symplectic mapping method for binary star systems for the Mercury N-Body software package (ascl:1201.008). The binary symplectic mapping is implemented as a hybrid symplectic method that allows close encounters and collisions between massive bodies and is therefore suitable for planetary accretion simulations.
[ascl:1804.003] DPPP: Default Pre-Processing Pipeline
DPPP (Default Pre-Processing Pipeline, also referred to as NDPPP) reads and writes radio-interferometric data in the form of Measurement Sets, mainly those that are created by the LOFAR telescope. It goes through visibilities in time order and contains standard operations like averaging, phase-shifting and flagging bad stations. Between the steps in a pipeline, the data is not written to disk, making this tool suitable for operations where I/O dominates. More advanced procedures such as gain calibration are also included. Other computing steps can be provided by loading a shared library; currently supported external steps are the AOFlagger (ascl:1010.017) and a bridge that enables loading python steps.
[ascl:1303.025] DPUSER: Interactive language for image analysis
DPUSER is an interactive language capable of handling numbers (both real and complex), strings, and matrices. Its main aim is to do astronomical image analysis, for which it provides a comprehensive set of functions, but it can also be used for many other applications.
[ascl:1712.005] draco: Analysis and simulation of drift scan radio data
draco analyzes transit radio data with the m-mode formalism. It is telescope agnostic, and is used as part of the analysis and simulation pipeline for the CHIME (Canadian Hydrogen Intensity Mapping Experiment) telescope. It can simulate time stream data from maps of the sky (using the m-mode formalism) and add gain fluctuations and correctly correlated instrumental noise (i.e. Wishart distributed). Further, it can perform various cuts on the data and make maps of the sky from data using the m-mode formalism.
[ascl:1512.009] DRACULA: Dimensionality Reduction And Clustering for Unsupervised Learning in Astronomy
DRACULA classifies objects using dimensionality reduction and clustering. The code has an easy interface and can be applied to separate several types of objects. It is based on tools developed in scikit-learn; some usage also requires the H2O package.
[ascl:1106.011] DRAGON: Galactic Cosmic Ray Diffusion Code
DRAGON adopts a second-order Crank-Nicolson scheme with operator splitting and time overrelaxation to solve the diffusion equation. This provides a fast solution that is accurate enough for the average user. Occasionally, users may want very accurate solutions to their problem. To enable this feature, users may get close to the accurate solution by using the fast method, and then switch to a more accurate solution scheme featuring the Alternating-Direction-Implicit (ADI) Crank-Nicolson scheme.
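The Crank-Nicolson building block itself is compact; a one-dimensional sketch with fixed boundaries is given below (DRAGON's multi-dimensional, operator-split solver is far more involved):

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(u, D, dx, dt):
    """One Crank-Nicolson step for du/dt = D d2u/dx2 with u = 0 at both ends:
    (I - a/2 L) u_new = (I + a/2 L) u_old, second order in time and
    unconditionally stable."""
    n = len(u)
    a = D * dt / dx**2
    rhs = u.copy()                                    # explicit half step
    rhs[1:-1] += 0.5 * a * (u[2:] - 2 * u[1:-1] + u[:-2])
    ab = np.zeros((3, n))                             # implicit half step (tridiagonal)
    ab[0, 2:] = -0.5 * a                              # superdiagonal
    ab[1, :] = 1.0
    ab[1, 1:-1] += a                                  # interior diagonal: 1 + a
    ab[2, :-2] = -0.5 * a                             # subdiagonal
    return solve_banded((1, 1), ab, rhs)
```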
[ascl:1011.009] DRAGON: Monte Carlo Generator of Particle Production from a Fragmented Fireball in Ultrarelativistic Nuclear Collisions
A Monte Carlo generator of the final state of hadrons emitted from an ultrarelativistic nuclear collision is introduced. An important feature of the generator is a possible fragmentation of the fireball and emission of the hadrons from fragments. Phase space distribution of the fragments is based on the blast wave model extended to azimuthally non-symmetric fireballs. Parameters of the model can be tuned, and this makes it possible to generate final states from various kinds of fireballs. An optional output in the OSCAR1999A format allows for a comprehensive analysis of phase-space distributions and/or use as an input for an afterburner. DRAGON's purpose is to produce artificial data sets which resemble those coming from real nuclear collisions, provided fragmentation occurs at hadronisation and hadrons are emitted from fragments without any further scattering. Its name, DRAGON, stands for DRoplet and hAdron GeneratOr for Nuclear collisions. In a way, the model is similar to THERMINATOR, with the crucial difference that emission from fragments is included.
[ascl:1811.002] DRAGONS: Gemini Observatory data reduction platform
[ascl:1507.012] DRAMA: Instrumentation software environment
DRAMA is a fast, distributed environment for writing instrumentation control systems. It allows low level instrumentation software to be controlled from user interfaces running on UNIX, MS Windows or VMS machines in a consistent manner. Such instrumentation tasks can run either on these machines or on real time systems such as VxWorks. DRAMA uses techniques developed by the AAO while using the Starlink-ADAM environment, but is optimized for the requirements of instrumentation control, portability, embedded systems and speed. A special program is provided which allows seamless communication between ADAM and DRAMA tasks.
[ascl:1504.006] drive-casa: Python interface for CASA scripting
drive-casa provides a Python interface for scripting of CASA (ascl:1107.013) subroutines from a separate Python process, allowing for utilization alongside other Python packages which may not easily be installed into the CASA environment. This is particularly useful for embedding use of CASA subroutines within a larger pipeline. drive-casa runs plain-text casapy scripts directly; alternatively, the package includes a set of convenience routines which try to adhere to a consistent style and make it easy to chain together successive CASA reduction commands to generate a command-script programmatically.
[ascl:1212.011] DrizzlePac: HST image software
DrizzlePac allows users to easily and accurately align and combine HST images taken at multiple epochs, and even with different instruments. It is a suite of supporting tasks for AstroDrizzle which includes:
• astrodrizzle to align and combine images
• tweakreg and tweakback for aligning images in different visits
• pixtopix transforms an X,Y pixel position to its pixel position after distortion corrections
• skytopix transforms sky coordinates to X,Y pixel positions. A reverse transformation can be done using the task pixtosky.
[ascl:1610.003] DSDEPROJ: Direct Spectral Deprojection
Deprojection of X-ray data by methods such as PROJCT, which are model dependent, can produce large and unphysical oscillating temperature profiles. Direct Spectral Deprojection (DSDEPROJ) solves some of the issues inherent to model-dependent deprojection routines. DSDEPROJ is a model-independent approach, assuming only spherical symmetry, which subtracts projected spectra from each successive annulus to produce a set of deprojected spectra.
[ascl:1010.006] DSPSR: Digital Signal Processing Software for Pulsar Astronomy
DSPSR, written primarily in C++, is an open-source, object-oriented, digital signal processing software library and application suite for use in radio pulsar astronomy. The library implements an extensive range of modular algorithms for use in coherent dedispersion, filterbank formation, pulse folding, and other tasks. The software is installed and compiled using the standard GNU configure and make system, and is able to read astronomical data in 18 different file formats, including FITS, S2, CPSR, CPSR2, PuMa, PuMa2, WAPP, ASP, and Mark5.
[ascl:1501.004] dst: Polarimeter data destriper
Dst is a fully parallel Python destriping code for polarimeter data; destriping is a well-established technique for removing low-frequency correlated noise from Cosmic Microwave Background (CMB) survey data. The software destripes correctly formatted HDF5 datasets and outputs hitmaps, binned maps, destriped maps and baseline arrays.
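In outline, a destriper models the time-ordered data as sky signal plus one offset per baseline chunk and solves for both. A crude iterative sketch (the full generalized-least-squares solution in dst is more sophisticated):

```python
import numpy as np

def destripe(tod, pixels, npix, baseline_len, n_iter=20):
    """tod: time-ordered data (length divisible by baseline_len);
    pixels: sky pixel index hit by each sample. Alternates between
    binning a map and re-fitting one offset per baseline chunk."""
    base_idx = np.arange(len(tod)) // baseline_len
    offsets = np.zeros(len(tod) // baseline_len)
    for _ in range(n_iter):
        cleaned = tod - offsets[base_idx]
        hits = np.maximum(np.bincount(pixels, minlength=npix), 1)
        m = np.bincount(pixels, weights=cleaned, minlength=npix) / hits
        resid = tod - m[pixels]
        offsets = np.bincount(base_idx, weights=resid) / baseline_len
        offsets -= offsets.mean()   # fix the global offset degeneracy
    return m, offsets
```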
[ascl:1505.034] dStar: Neutron star thermal evolution code
[ascl:1201.011] Duchamp: A 3D source finder for spectral-line data
Duchamp is software designed to find and describe sources in 3-dimensional, spectral-line data cubes. Duchamp has been developed with HI (neutral hydrogen) observations in mind, but is widely applicable to many types of astronomical images. It features efficient source detection and handling methods, noise suppression via smoothing or multi-resolution wavelet reconstruction, and a range of graphical and text-based outputs to allow the user to understand the detections.
[ascl:1605.014] DUO: Spectra of diatomic molecules
Duo computes rotational, rovibrational and rovibronic spectra of diatomic molecules. The software, written in Fortran 2003, solves the Schrödinger equation for the motion of the nuclei for the simple case of uncoupled, isolated electronic states and also for the general case of an arbitrary number and type of couplings between electronic states. Possible couplings include spin-orbit, angular momenta, spin-rotational and spin-spin. Introducing the relevant couplings using so-called Born-Oppenheimer breakdown curves can correct non-adiabatic effects.
[ascl:1503.005] dust: Dust scattering and extinction in the X-ray
Written in Python, dust calculates X-ray dust scattering and extinction in the intergalactic and local interstellar media.
[ascl:1307.001] DustEM: Dust extinction and emission modelling
DustEM computes the extinction and the emission of interstellar dust grains heated by photons. It is written in Fortran 95 and is jointly developed by IAS and CESR. The dust emission is calculated in the optically thin limit (no radiative transfer) and the default spectral range is 40 to 10^8 nm. The code is designed so dust properties can easily be changed and mixed and to allow for the inclusion of new grain physics.
[ascl:9911.001] DUSTY: Radiation transport in a dusty environment
DUSTY solves the problem of radiation transport in a dusty environment. The code can handle both spherical and planar geometries. The user specifies the properties of the radiation source and dusty region, and the code calculates the dust temperature distribution and the radiation field in it. The solution method is based on a self-consistent equation for the radiative energy density, including dust scattering, absorption and emission, and does not introduce any approximations. The solution is exact to within the specified numerical accuracy. DUSTY has built in optical properties for the most common types of astronomical dust and comes with a library for many other grains. It supports various analytical forms for the density distribution, and can perform a full dynamical calculation for radiatively driven winds around AGB stars. The spectral energy distribution of the source can be specified analytically as either Planckian or broken power-law. In addition, arbitrary dust optical properties, density distributions and external radiation can be entered in user supplied files. Furthermore, the wavelength grid can be modified to accommodate spectral features. A single DUSTY run can process an unlimited number of models, with each input set producing a run of optical depths, as specified. The user controls the detail level of the output, which can include both spectral and imaging properties as well as other quantities of interest.
[ascl:1602.004] DUSTYWAVE: Linear waves in gas and dust
Written in Fortran, DUSTYWAVE computes the exact solution for linear waves in a two-fluid mixture of gas and dust. The solutions are general with respect to both the dust-to-gas ratio and the amplitude of the drag coefficient.
[ascl:1809.013] dynesty: Dynamic Nested Sampling package
[ascl:1902.010] dyPolyChord: Super fast dynamic nested sampling with PolyChord
dyPolyChord implements dynamic nested sampling using the efficient PolyChord (ascl:1502.011) sampler to provide state-of-the-art nested sampling performance. Any likelihoods and priors which work with PolyChord can be used (Python, C++ or Fortran), and the output files produced are in the PolyChord format.
[ascl:1407.017] e-MERLIN data reduction pipeline
Written in Python and utilizing ParselTongue (ascl:1208.020) to interface with AIPS (ascl:9911.003), the e-MERLIN data reduction pipeline processes, calibrates and images data from the UK's radio interferometric array (Multi-Element Remote-Linked Interferometer Network). Driven by a plain text input file, the pipeline is modular and can be run in stages. The software includes options to load raw data, average in time and/or frequency, flag known sources of interference, flag more comprehensively with SERPent (ascl:1312.001), carry out some or all of the calibration procedures (including self-calibration), and image in either normal or wide-field mode. It also optionally produces a number of useful diagnostic plots at various stages so data quality can be assessed.
[ascl:1106.004] E3D: The Euro3D Visualization Tool
E3D is a package of tools for the analysis and visualization of integral field spectroscopy (IFS) data. It is capable of reading, writing, and visualizing reduced data from 3D spectrographs of any kind.
[ascl:1805.004] EARL: Exoplanet Analytic Reflected Lightcurves package
[ascl:1611.012] EarthShadow: Calculator for dark matter particle velocity distribution after Earth-scattering
EarthShadow calculates the impact of Earth-scattering on the distribution of Dark Matter (DM) particles. The code calculates the speed and velocity distributions of DM at various positions on the Earth and also helps with the calculation of the average scattering probabilities. Tabulated data for DM-nuclear scattering cross sections and various numerical results, plots and animations are also included in the code package.
[ascl:1612.010] Earthshine simulator: Idealized images of the Moon
Terrestrial albedo can be determined from observations of the relative intensity of earthshine. Images of the Moon at different lunar phases can be analyzed to derive the semi-hemispheric mean albedo of the Earth, and an important tool for doing this is simulation of the Moon's appearance at any given time. This software produces idealized images of the Moon for arbitrary times. It takes into account the libration of the Moon, the distances between the Sun, Moon and Earth, and the relevant geometry. The images of the Moon are produced as FITS files. User input includes the Julian Day of the simulation. Defaults for image size and field of view are set to produce approximately 1x1 degree images with the Moon in the middle, as seen from an observatory on Earth, currently set to Mauna Loa.
[ascl:1011.013] EasyLTB: Code for Testing LTB Models against Cosmology
The possibility that we live in a special place in the universe, close to the centre of a large void, seems an appealing alternative to the prevailing interpretation of the acceleration of the universe in terms of a LCDM model with a dominant dark energy component. In this paper we confront the asymptotically flat Lemaitre-Tolman-Bondi (LTB) models with a series of observations, from Type Ia Supernovae to Cosmic Microwave Background and Baryon Acoustic Oscillations data. We propose two concrete LTB models describing a local void in which the only arbitrary functions are the radial dependence of the matter density Omega_M and the Hubble expansion rate H. We find that all observations can be accommodated within 1 sigma, for our models with 4 or 5 independent parameters. The best fit models have a chi^2 very close to that of the LCDM model. We perform a simple Bayesian analysis and show that one cannot exclude the hypothesis that we live within a large local void of an otherwise Einstein-de Sitter model.
[ascl:1203.007] EBTEL: Enthalpy-Based Thermal Evolution of Loops
Observational and theoretical evidence suggests that coronal heating is impulsive and occurs on very small cross-field spatial scales. A single coronal loop could contain a hundred or more individual strands that are heated quasi-independently by nanoflares. It is therefore an enormous undertaking to model an entire active region or the global corona. Three-dimensional MHD codes have inadequate spatial resolution, and 1D hydro codes are too slow to simulate the many thousands of elemental strands that must be treated in a reasonable representation. Fortunately, thermal conduction and flows tend to smooth out plasma gradients along the magnetic field, so "0D models" are an acceptable alternative. We have developed a highly efficient model called Enthalpy-Based Thermal Evolution of Loops (EBTEL) that accurately describes the evolution of the average temperature, pressure, and density along a coronal strand. It improves significantly upon earlier models of this type in accuracy, flexibility, and capability. It treats both slowly varying and highly impulsive coronal heating; it provides the differential emission measure distribution, DEM(T), at the transition region footpoints; and there are options for heat flux saturation and nonthermal electron beam heating. EBTEL gives excellent agreement with far more sophisticated 1D hydro simulations despite using four orders of magnitude less computing time. It promises to be a powerful new tool for solar and stellar studies.
[ascl:1411.017] ECCSAMPLES: Bayesian Priors for Orbital Eccentricity
ECCSAMPLES solves the inverse cumulative distribution function (CDF) of a Beta distribution, allowing one to draw samples from the relevant eccentricity priors directly via inverse transform sampling. ECCSAMPLES actually provides joint samples for both the eccentricity and the argument of periastron, since for transiting systems they display non-zero covariance.
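As a rough illustration of the underlying technique (inverse transform sampling, not ECCSAMPLES' own Fortran implementation, and ignoring the eccentricity-periastron covariance that the code handles), one might write:

    import numpy as np
    from scipy.stats import beta

    rng = np.random.default_rng(42)
    a, b = 0.9, 3.0                     # placeholder Beta shape parameters
    u = rng.random(10_000)              # uniform deviates on [0, 1)
    ecc = beta.ppf(u, a, b)             # invert the CDF: Beta-distributed e
    omega = rng.uniform(0.0, 2.0 * np.pi, ecc.size)  # naive independent draw
    print(ecc.mean(), ecc.max())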
[ascl:1810.006] Echelle++: Generic spectrum simulator
[ascl:1405.018] ECHOMOP: Echelle data reduction package
[ascl:1112.001] Eclipse: ESO C Library for an Image Processing Software Environment
Written in ANSI C, eclipse is a library offering numerous services related to astronomical image processing: FITS data access, various image and cube loading methods, binary image handling and filtering (including convolution and morphological filters), 2-D cross-correlation, connected components, cube and image arithmetic, dead pixel detection and correction, object detection, data extraction, flat-fielding with robust fit, image generation, statistics, photometry, image-space resampling, image combination, and cube stacking. It also contains support for mathematical tools like random number generation, FFT, curve fitting, matrices, fast median computation, and point-pattern matching. The main feature of this library is its ability to handle large amounts of input data (up to 2GB in the current version) regardless of the amount of memory and swap available on the local machine. Another feature is the very high speed allowed by optimized C, making it an ideal base tool for programming efficient number-crunching applications, e.g., on parallel (Beowulf) systems.
[ascl:1901.010] eddy: Extracting Disk DYnamics
[ascl:1512.003] EDRS: Electronography Data Reduction System
[ascl:1512.004] EDRSX: Extensions to the EDRS package
[ascl:1804.008] EGG: Empirical Galaxy Generator
[ascl:1904.004] ehtim: Imaging, analysis, and simulation software for radio interferometry
ehtim (eht-imaging) simulates and manipulates VLBI data and produces images with regularized maximum likelihood methods. The package contains several primary classes for loading, simulating, and manipulating VLBI data. The main classes are the Image, Array, Obsdata, Imager, and Caltable classes, which provide tools for loading images and data, producing simulated data from realistic u-v tracks, calibrating, inspecting, and plotting data, and producing images from data sets in various polarizations using various data terms and regularizers.
[ascl:1904.013] EightBitTransit: Calculate light curves from pixel grids
EightBitTransit calculates the light curve of any pixelated image transiting a star and inverts a light curve to recover the "shadow image" that produced it.
[ascl:1102.014] Einstein Toolkit for Relativistic Astrophysics
[ascl:1904.022] eleanor: Extracted and systematics-corrected light curves for TESS-observed stars
eleanor extracts target pixel files from TESS Full Frame Images and produces systematics-corrected light curves for any star observed by the TESS mission. eleanor takes a TIC ID, a Gaia source ID, or (RA, Dec) coordinates of a star observed by TESS and returns, as a single object, a light curve and accompanying target pixel data. The process can be customized, allowing, for example, examination of intermediate data products and changing the aperture used for light curve extraction. eleanor also offers tools that make it easier to work with stars observed in multiple TESS sectors.
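A typical invocation, paraphrased from the project's documentation (the TIC number below is a placeholder target), looks roughly like this:

    import eleanor

    # Locate a TESS target by TIC ID in a given sector (placeholder TIC).
    star = eleanor.Source(tic=38846515, sector=1)

    # Extract target pixel data and build systematics-corrected light curves.
    data = eleanor.TargetData(star, height=15, width=15, bkg_size=31,
                              do_pca=True)

    # Keep only good-quality cadences.
    q = data.quality == 0
    time, flux = data.time[q], data.corr_flux[q]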
[ascl:1603.016] ellc: Light curve model for eclipsing binary stars and transiting exoplanets
ellc analyzes the light curves of detached eclipsing binary stars and transiting exoplanet systems. The model represents stars as triaxial ellipsoids, and the apparent flux from the binary is calculated using Gauss-Legendre integration over the ellipses that are the projection of these ellipsoids on the sky. The code can also calculate the flux-weighted radial velocity of the stars during an eclipse (Rossiter-McLaughlin effect). ellc can model a wide range of eclipsing binary stars and extrasolar planetary systems, and can enable the use of modern Monte Carlo methods for data analysis and model testing.
[ascl:1106.024] ELMAG: Simulation of Electromagnetic Cascades
A Monte Carlo program for the simulation of electromagnetic cascades initiated by high-energy photons and electrons interacting with extragalactic background light (EBL) is presented. Pair production and inverse Compton scattering on EBL photons as well as synchrotron losses and deflections of the charged component in extragalactic magnetic fields (EGMF) are included in the simulation. Weighted sampling of the cascade development is applied to reduce the number of secondary particles and to speed up computations. As final result, the simulation procedure provides the energy, the observation angle, and the time delay of secondary cascade particles at the present epoch. Possible applications are the study of TeV blazars and the influence of the EGMF on their spectra or the calculation of the contribution from ultrahigh energy cosmic rays or dark matter to the diffuse extragalactic gamma-ray background. As an illustration, we present results for deflections and time-delays relevant for the derivation of limits on the EGMF.
[ascl:1203.006] EMACSS: Evolve Me A Cluster of StarS
The star cluster evolution code Evolve Me A Cluster of StarS (EMACSS) is a simple yet physically motivated computational model that describes the evolution of some fundamental properties of star clusters in static tidal fields. The prescription is based upon the flow of energy within the cluster, which is a constant fraction of the total energy per half-mass relaxation time. According to Henon's predictions, this flow is independent of the precise mechanisms for energy production within the core, and therefore does not require a complete description of the many-body interactions therein. Dynamical theory and analytic descriptions of escape mechanisms are used to construct a series of coupled differential equations expressing the time evolution of cluster mass and radius for a cluster of equal-mass stars. These equations are numerically solved using a fourth-order Runge-Kutta integration kernel; the results were benchmarked against a database of direct N-body simulations. EMACSS is publicly available and reproduces the N-body results to within ~10 per cent accuracy for the entire post-collapse evolution of star clusters.
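The numerical core described above is a standard fourth-order Runge-Kutta step for a small system of coupled ODEs. A generic sketch follows; the right-hand side is a schematic stand-in, not the EMACSS energy-flow equations:

    import numpy as np

    def rk4_step(f, t, y, h):
        """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
        k1 = f(t, y)
        k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(t + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(t + h, y + h * k3)
        return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

    # Schematic stand-in for coupled cluster mass/radius evolution.
    def rhs(t, y):
        m, r = y
        trh = r**1.5 / np.sqrt(m)     # schematic relaxation-time scaling
        return np.array([-0.01 * m / trh, 0.1 * r / trh])

    y, t, h = np.array([1.0, 1.0]), 0.0, 0.01
    for _ in range(1000):
        y = rk4_step(rhs, t, y, h)
        t += h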
[ascl:1303.002] emcee: The MCMC Hammer
[ascl:1201.004] emGain: Determination of EM gain of CCD
The determination of the EM gain of the CCD is best done by fitting the histogram of many low-light frames. Typically, the dark+CIC noise of a 30 ms frame is itself a sufficient amount of signal to determine the EM gain accurately with about 200 512x512 frames. The IDL code emGain takes as input a cube of frames and fits the histogram of all the pixels with the EM stage output probability function. The function returns the EM gain of the frames as well as the read-out noise and the mean signal level of the frames.
[ascl:1708.027] empiriciSN: Supernova parameter generator
[ascl:1010.018] Emu CMB: Power spectrum emulator
Emu CMB is a fast emulator of the CMB temperature power spectrum based on CAMB (Jan 2010 version). Emu CMB is based on a "space-filling" Orthogonal Array Latin Hypercube design in a de-correlated parameter space obtained by using a fiducial WMAP5 CMB Fisher matrix as a rotation matrix. This design strategy allows for accurate interpolation with small numbers of simulation design points. The emulator presented here is calibrated with 100 CAMB runs that are interpolated over the design space using a global quadratic polynomial fit.
[ascl:1109.012] EnBiD: Fast Multi-dimensional Density Estimation
We present a method to numerically estimate the density of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each of the nodes contains only one particle. The volume of such a leaf node provides an estimate of the local density and its shape provides an estimate of the variance. We implement an entropy-based node splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities; it can be used as a general-purpose data-mining tool, and due to its speed and accuracy it is ideally suited for the analysis of large multidimensional data sets.
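A stripped-down illustration of the leaf-volume density estimate (using cyclic median-position splits rather than EnBiD's entropy-based splitting criterion) could look like:

    import numpy as np

    def bsp_density(points, idx=None, lo=None, hi=None, out=None, depth=0):
        """Density estimate: 1/volume of the leaf cell holding each particle."""
        n, d = points.shape
        if idx is None:
            idx = np.arange(n)
            lo, hi = points.min(0), points.max(0)
            out = np.empty(n)
        if len(idx) == 1:
            out[idx[0]] = 1.0 / np.prod(hi - lo)
            return out
        axis = depth % d              # cycle axes (EnBiD chooses by entropy)
        order = idx[np.argsort(points[idx, axis])]
        half = len(idx) // 2
        left, right = order[:half], order[half:]
        split = points[order[half], axis]
        hi_l, lo_r = hi.copy(), lo.copy()
        hi_l[axis] = split
        lo_r[axis] = split
        bsp_density(points, left, lo, hi_l, out, depth + 1)
        bsp_density(points, right, lo_r, hi, out, depth + 1)
        return out

    pts = np.random.default_rng(1).normal(size=(4096, 3))
    rho = bsp_density(pts)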
[ascl:1501.008] Enrico: Python package to simplify Fermi-LAT analysis
Enrico analyzes Fermi data. It produces spectra (model fit and flux points), maps and lightcurves for a target by editing a config file and running a python script which executes the Fermi science tool chain.
[ascl:1010.072] Enzo: AMR Cosmology Application
[ascl:1511.021] EPIC: E-field Parallel Imaging Correlator
E-field Parallel Imaging Correlator (EPIC), a highly parallelized object-oriented Python package, implements the Modular Optimal Frequency Fourier (MOFF) imaging technique. It also includes visibility-based imaging using the software holography technique and a simulator for generating electric fields from a sky model. EPIC can accept dual-polarization inputs and produce images of all four instrumental cross-polarizations.
[ascl:1302.005] EPICS: Experimental Physics and Industrial Control System
EPICS is a set of software tools and applications developed collaboratively and used to create distributed soft real-time control systems for scientific instruments such as particle accelerators and telescopes. Such distributed control systems typically comprise tens or even hundreds of computers, networked together to allow communication between them and to provide control and feedback of the various parts of the device from a central control room, or even remotely over the internet. EPICS uses Client/Server and Publish/Subscribe techniques to communicate between the various computers. A Channel Access Gateway allows engineers and physicists elsewhere in the building to examine the current state of the IOCs, but prevents them from making unauthorized adjustments to the running system. In many cases the engineers can make a secure internet connection from home to diagnose and fix faults without having to travel to the site.
EPICS is used by many facilities worldwide, including the Advanced Photon Source at Argonne National Laboratory, Fermilab, Keck Observatory, Laboratori Nazionali di Legnaro, Brazilian Synchrotron Light Source, Los Alamos National Laboratory, Australian Synchrotron, and Stanford Linear Accelerator Center.
[ascl:1204.017] epsnoise: Pixel noise in ellipticity and shear measurements
epsnoise simulates pixel noise in weak-lensing ellipticity and shear measurements. This open-source Python code can efficiently create an intrinsic ellipticity distribution, shear it, and add noise, thereby mimicking a "perfect" measurement that is not affected by shape-measurement biases. For theoretical studies, we provide the Marsaglia distribution, which describes the ratio of normal variables in the general case of non-zero mean and correlation. We also added a convenience method that evaluates the Marsaglia distribution for the ratio of moments of a Gaussian-shaped brightness distribution, which gives a very good approximation of the measured ellipticity distribution also for galaxies with different radial profiles. We provide four shear estimators, two based on the ε ellipticity measure, two on χ. While three of them are essentially plain averages, we introduce a new estimator which requires a functional minimization.
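The shearing step follows the standard transformation of the complex ellipticity ε under a reduced shear g; a minimal sketch (with a placeholder intrinsic dispersion, not epsnoise's own API) is:

    import numpy as np

    rng = np.random.default_rng(7)
    # Intrinsic complex ellipticities with Gaussian components (placeholder
    # dispersion), truncated to |eps| < 1.
    eps = rng.normal(0, 0.2, 100_000) + 1j * rng.normal(0, 0.2, 100_000)
    eps = eps[np.abs(eps) < 1]

    g = 0.05 + 0.0j                                   # reduced shear to apply
    sheared = (eps + g) / (1.0 + np.conj(g) * eps)    # standard eps transform

    # For the eps-measure, the mean sheared ellipticity is an unbiased
    # shear estimator, so this should recover ~0.05.
    print(sheared.mean())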
[ascl:1802.016] eqpair: Electron energy distribution calculator
[ascl:1504.003] EsoRex: ESO Recipe Execution Tool
[ascl:1405.017] ESP: Extended Surface Photometry
[ascl:1305.001] ESTER: Evolution STEllaire en Rotation
[ascl:1311.012] ETC: Exposure Time Calculator
[ascl:1204.011] EXCOP: EXtraction of COsmological Parameters
The EXtraction of COsmological Parameters software (EXCOP) is a set of C and IDL programs, together with a very large database of cosmological models generated by CMBFAST, that computes likelihood functions for cosmological parameters given some CMB data. This is the software and database used in the Stompor et al. (2001) analysis of a high-resolution MAXIMA-1 CMB anisotropy map.
[ascl:1803.014] ExoCross: Spectra from molecular line lists
[ascl:1812.007] ExoGAN: Exoplanets Generative Adversarial Network
[submitted] ExoPlanet
[ascl:1407.008] Exopop: Exoplanet population inference
[ascl:1501.012] Exorings: Exoring modelling software
[ascl:1703.008] exorings: Exoring Transit Properties
[ascl:1708.023] ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
[ascl:1803.011] ExtLaw_H18: Extinction law code
[ascl:1010.061] EyE: Enhance Your Extraction
[ascl:1407.019] EZ_Ages: Stellar population age calculator
[ascl:1210.004] EZ: A Tool For Automatic Redshift Measurement
EZ (Easy-Z) estimates redshifts for extragalactic objects. It compares the observed spectrum with a set of user-given spectral templates to find the best value for the redshift. To accomplish this task, it uses a highly configurable set of algorithms. EZ is easily extendible with new algorithms. It is implemented as a set of C programs and a number of Python classes. It can be used as a standalone program, or the Python classes can be directly imported by other applications.
[ascl:1208.021] EzGal: A Flexible Interface for Stellar Population Synthesis Models
EzGal is a flexible Python program which generates observable parameters (magnitudes, colors, and mass-to-light ratios) for arbitrary input stellar population synthesis (SPS) models; it enables simple, direct comparison of different model sets so that the uncertainty introduced by choice of model set can be quantified. EzGal is also capable of generating composite stellar population models (CSPs) for arbitrary input star-formation histories and reddening laws, and can be used to interpolate between metallicities for a given model set.
[ascl:1705.006] f3: Full Frame Fotometry for Kepler Full Frame Images
Light curves from the Kepler telescope rely on "postage stamp" cutouts of a few pixels near each of 200,000 target stars. These light curves are optimized for the detection of short-term signals like planet transits but induce systematics that overwhelm long-term variations in stellar flux. Longer-term effects can be recovered through analysis of the Full Frame Images, a set of calibration data obtained monthly during the Kepler mission. The Python package f3 analyzes the Full Frame Images to infer long-term astrophysical variations in the brightness of Kepler targets, such as magnetic activity or sunspots on slowly rotating stars.
[ascl:1802.001] FAC: Flexible Atomic Code
FAC calculates various atomic radiative and collisional processes, including radiative transition rates, collisional excitation and ionization by electron impact, energy levels, photoionization, and autoionization, and their inverse processes radiative recombination and dielectronic capture. The package also includes a collisional radiative model to construct synthetic spectra for plasmas under different physical conditions.
[ascl:1509.004] FalconIC: Initial conditions generator for cosmological N-body simulations in Newtonian, Relativistic and Modified theories
FalconIC generates discrete particle positions, velocities, masses and pressures based on linear Boltzmann solutions that are computed by libraries such as CLASS and CAMB. FalconIC generates these initial conditions for any species included in the selection, including Baryons, Cold Dark Matter and Dark Energy fluids. Any species can be set in Eulerian (on a fixed grid) or Lagrangian (particle motion) representation, depending on the gauge and reality chosen. That is, for relativistic initial conditions in the synchronous comoving gauge, Dark Matter can only be described in an Eulerian representation. For all other choices (Relativistic in Longitudinal gauge, Newtonian with relativistic expansion rates, Newtonian without any notion of radiation), all species can be treated in all representations. The code also computes spectra. FalconIC is useful for comparative studies on initial conditions.
[ascl:1402.016] FAMA: Fast Automatic MOOG Analysis
FAMA (Fast Automatic MOOG Analysis), written in Perl, computes the atmospheric parameters and abundances of a large number of stars using measurements of equivalent widths (EWs) automatically and independently of any subjective approach. Based on the widely-used MOOG code, it simultaneously searches for three equilibria: excitation equilibrium, ionization balance, and the relationship between log n(Fe I) and the reduced EWs. FAMA also evaluates the statistical errors on individual element abundances and errors due to the uncertainties in the stellar parameters. Convergence criteria are not fixed "a priori" but instead are based on the quality of the spectra.
[ascl:1102.017] FARGO: Fast Advection in Rotating Gaseous Objects
FARGO is an efficient and simple modification of the standard transport algorithm used in explicit Eulerian fixed polar grid codes, aimed at getting rid of the average azimuthal velocity when applying the Courant condition. This results in a much larger timestep than the usual procedure, and it is particularly well-suited to the description of a Keplerian disk, where one is traditionally limited by the very demanding Courant condition on the fast orbital motion at the inner boundary. In this modified algorithm, the timestep is limited by the perturbed velocity and by the shear arising from the differential rotation. The speed-up resulting from the use of the FARGO algorithm is problem dependent. In the example presented in the code paper, which shows the evolution of a Jupiter-sized protoplanet embedded in a minimum mass protoplanetary nebula, the FARGO algorithm is about an order of magnitude faster than a traditional transport scheme, with a much smaller numerical diffusivity.
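The essence of the algorithm can be sketched in one dimension: split the azimuthal transport into an exact integer-cell shift at the mean angular speed (free of any Courant restriction) plus a residual advection of the small leftover velocity. A schematic version, using first-order upwind for the residual rather than FARGO's actual transport stencil:

    import numpy as np

    def fargo_shift_step(q, v_phi, dphi, dt):
        """Advect q around a periodic ring, removing the mean speed first."""
        v_mean = v_phi.mean()
        shift = int(np.round(v_mean * dt / dphi))  # integer-cell bulk rotation
        q = np.roll(q, shift)                      # exact and always stable
        v_res = v_phi - shift * dphi / dt          # leftover (perturbed) speed
        # Residual advection, first-order upwind; only v_res enters the
        # Courant condition, which allows a much larger dt.
        flux = np.where(v_res > 0, q * v_res, np.roll(q, -1) * v_res)
        return q - dt / dphi * (flux - np.roll(flux, 1))

    ncell = 256
    phi = np.linspace(0.0, 2 * np.pi, ncell, endpoint=False)
    q = 1.0 + 0.1 * np.cos(phi)             # toy density ring
    v = 10.0 + 0.05 * np.sin(phi)           # fast rotation + small perturbation
    q = fargo_shift_step(q, v, 2 * np.pi / ncell, dt=0.01)

With these numbers the raw Courant number is about 4 (unstable for a plain upwind step), while the residual speed after the shift gives a Courant number below 0.1.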
[ascl:1509.006] FARGO3D: Hydrodynamics/magnetohydrodynamics code
[submitted] Fast Template Periodogram
The Fast Template Periodogram extends the Generalised Lomb-Scargle periodogram (Zechmeister and Kurster 2009) to arbitrary (periodic) signal shapes. A template is first approximated by a truncated Fourier series of length H. The Non-equispaced Fast Fourier Transform (NFFT) is used to efficiently compute frequency-dependent sums. Template fitting can now be done in O(N log N) time, improving on existing algorithms by an order of magnitude for even small datasets. The FTP can be used in conjunction with gradient descent to accelerate a non-linear model fit, or be used in place of the multi-harmonic periodogram for non-sinusoidal signals with a priori known shapes.
[ascl:1603.006] FAST-PT: Convolution integrals in cosmological perturbation theory calculator
FAST-PT calculates 1-loop corrections to the matter power spectrum in cosmology. The code utilizes Fourier methods combined with analytic expressions to reduce the computation time to scale as N log N, where N is the number of grid points in the input linear power spectrum. FAST-PT is extremely fast, enabling mode-coupling integral computations fast enough to embed in Monte Carlo Markov Chain parameter estimation.
[ascl:1010.037] FastChi: A Fast Chi-squared Technique For Period Search of Irregularly Sampled Data
The Fast Chi-Squared Algorithm is a fast, powerful technique for detecting periodicity. It was developed for analyzing variable stars, but is applicable to many other applications where Fast Fourier Transforms (FFTs) or other periodograms (such as Lomb-Scargle) are currently used. The Fast Chi-squared technique takes a data set (e.g., the brightness of a star measured at many different times during a series of observations) and finds the periodic function that has the best frequency and shape (to an arbitrary number of harmonics) to fit the data. Among its advantages are:
• Statistical efficiency: all of the data are used, weighted by their individual error bars, giving a result with a significance calibrated in well-understood Chi-squared statistics.
• Sensitivity to harmonic content: many conventional techniques look only at the significance (or the amplitude) of the fundamental sinusoid and discard the power of the higher harmonics.
• Insensitivity to the sample timing: you won't find a period of 24 hours just because you take your observations at night. You do not need to window your data.
• The frequency search is gridded more tightly than the traditional "integer number of cycles over the span of observations", eliminating power loss from peaks that fall between the grid points.
• Computational speed: the complexity of the algorithm is O(N log N), where N is the number of frequencies searched, due to its use of the FFT (a brute-force version of the underlying harmonic fit is sketched below).
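For concreteness, here is a direct least-squares version of the harmonic fit: for each trial frequency it fits a truncated Fourier series with weights 1/σ² and records the chi-squared improvement over a constant model. The real algorithm evaluates the required sums via the FFT instead of this per-frequency solve.

    import numpy as np

    def chi2_reduction(t, y, err, freqs, nharm=3):
        """Delta chi^2 of a weighted harmonic fit vs. a constant model."""
        w = 1.0 / err**2
        ybar = np.sum(w * y) / np.sum(w)
        chi2_const = np.sum(w * (y - ybar) ** 2)
        sw = np.sqrt(w)
        dchi2 = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            cols = [np.ones_like(t)]
            for h in range(1, nharm + 1):
                cols += [np.cos(2 * np.pi * h * f * t),
                         np.sin(2 * np.pi * h * f * t)]
            A = np.column_stack(cols) * sw[:, None]   # weighted design matrix
            coef, *_ = np.linalg.lstsq(A, y * sw, rcond=None)
            resid = y * sw - A @ coef
            dchi2[i] = chi2_const - np.sum(resid**2)
        return dchi2

    # Example: recover the frequency of a sawtooth signal from noisy data.
    rng = np.random.default_rng(5)
    t = np.sort(rng.uniform(0, 30, 400))
    y = ((0.7 * t) % 1.0) + rng.normal(0, 0.1, t.size)
    freqs = np.linspace(0.05, 2.0, 2000)
    best = freqs[np.argmax(chi2_reduction(t, y, 0.1 * np.ones_like(t), freqs))]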
[ascl:9910.003] FASTELL: Fast calculation of a family of elliptical mass gravitational lens models
Because of their simplicity, axisymmetric mass distributions are often used to model gravitational lenses. Since galaxies are usually observed to have elliptical light distributions, mass distributions with elliptical density contours offer more general and realistic lens models. They are difficult to use, however, since previous studies have shown that the deflection angle (and magnification) in this case can only be obtained by rather expensive numerical integrations. We present a family of lens models for which the deflection can be calculated to high relative accuracy (10^-5) with a greatly reduced numerical effort, for small and large ellipticity alike. This makes it easier to use these distributions for modeling individual lenses as well as for applications requiring larger computing times, such as statistical lensing studies. FASTELL is a code to calculate quickly and accurately the lensing deflection and magnification matrix for the softened power-law elliptical mass distribution (SPEMD) lens galaxy model. The SPEMD consists of a softened power-law radial distribution with elliptical isodensity contours.
[ascl:1010.041] FASTLens (FAst STatistics for weak Lensing): Fast Method for Weak Lensing Statistics and Map Making
The analysis of weak lensing data requires accounting for missing data such as the masking out of bright stars. To date, the majority of lensing analyses use two-point statistics of the cosmic shear field. These can either be studied directly using the two-point correlation function, or in Fourier space, using the power spectrum. The two-point correlation function is unbiased by missing data, but its direct calculation will soon become a burden with the exponential growth of astronomical data sets. The power spectrum is fast to estimate, but a mask correction must be estimated. Other statistics can be used, but these are strongly sensitive to missing data. The solution proposed by FASTLens is to properly fill in the gaps with only N log N operations, leading to a complete weak lensing mass map from which one can compute straightforwardly and with very good accuracy any kind of statistic, such as the power spectrum or bispectrum.
[ascl:1302.008] FASTPHOT: A simple and quick IDL PSF-fitting routine
PSF fitting photometry allows a simultaneous fit of a PSF profile to the sources. Many routines use PSF fitting photometry, including IRAF/allstar, StarFinder, and Convphot. These routines are in general complex to use and slow. FASTPHOT is optimized for prior extraction (where the positions of the sources are known) and is very fast and simple.
[ascl:1905.010] FastPM: Scaling N-body Particle Mesh solver
FastPM solves the gravitational Poisson equation with a boosted particle mesh. Arbitrary time steps can be used. The code is intended to study the formation of large scale structure and supports plain PM and Comoving-Lagrangian (COLA) solvers. A broadband correction enforces the linear theory model growth factor at large scale. FastPM scales extremely well to a hundred thousand MPI ranks, which is possible through the use of the PFFT Fourier Transform library. The size of the mesh in FastPM can vary with time, allowing one to use a coarse force mesh at high redshift with increased temporal resolution for accurate large scale modes. The code supports a variety of Green's function and differentiation kernels, though for most practical simulations the choice of kernels does not make a difference. A parameter file interpreter is provided to validate and execute the configuration files without running the simulation, allowing creative usages of the configuration files.
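The particle-mesh core can be illustrated in one dimension: deposit particles on a grid, solve the Poisson equation in Fourier space, and kick-drift the particles. This is a bare-bones sketch without FastPM's broadband correction, kernel choices, or cosmological time stepping:

    import numpy as np

    def pm_step(x, v, ng=64, box=1.0, dt=0.01, g4pi=1.0):
        """One kick-drift step of a 1-D periodic particle-mesh solver."""
        cell = box / ng
        # Nearest-grid-point deposit (FastPM uses higher-order schemes).
        idx = (x / cell).astype(int) % ng
        rho = np.bincount(idx, minlength=ng) / cell
        delta = rho - rho.mean()
        # Solve d^2(phi)/dx^2 = g4pi * delta in Fourier space.
        k = 2 * np.pi * np.fft.fftfreq(ng, d=cell)
        phi_k = np.fft.fft(delta) * g4pi
        phi_k = np.where(k != 0, -phi_k / np.where(k != 0, k**2, 1.0), 0.0)
        # Acceleration is -d(phi)/dx, differentiated spectrally.
        acc = np.real(np.fft.ifft(-1j * k * phi_k))
        v = v + dt * acc[idx]                 # kick
        x = (x + dt * v) % box                # drift
        return x, v

    rng = np.random.default_rng(3)
    x, v = rng.random(10_000), np.zeros(10_000)
    for _ in range(100):
        x, v = pm_step(x, v)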
[ascl:1507.011] FAT: Fully Automated TiRiFiC
[ascl:1711.017] FATS: Feature Analysis for Time Series
[ascl:1712.011] FBEYE: Analyzing Kepler light curves and validating flares
FBEYE, the "Flares By-Eye" detection suite, is written in IDL and analyzes Kepler light curves and validates flares. It works on any 3-column light curve that contains time, flux, and error. The success of flare identification is highly dependent on the smoothing routine, which may not be suitable for all sources.
[ascl:1505.014] FCLC: Featureless Classification of Light Curves
FCLC (Featureless Classification of Light Curves) software describes the static behavior of a light curve in a probabilistic way. Individual data points are converted to densities, and consequently probability densities are compared instead of features. This gives rise to an independent classification which can corroborate the usefulness of the selected features.
[ascl:1806.027] fcmaker: Creating ESO-compliant finding charts for Observing Blocks on p2
fcmaker creates astronomical finding charts for Observing Blocks (OBs) on the p2 web server from the European Southern Observatory (ESO). It automates the creation of ESO-compliant finding charts for Service Mode and/or Visitor Mode OBs at the Very Large Telescope (VLT). The design of the fcmaker finding charts, based on an intimate knowledge of VLT observing procedures, is fine-tuned to best support night time operations. As an automated tool, fcmaker also allows observers to independently check visually, for the first time, the observing sequence coded inside an OB. This includes, for example, the signs of telescope and position angle offsets.
[ascl:1705.012] fd3: Spectral disentangling of double-lined spectroscopic binary stars
The spectral disentangling technique can be applied on a time series of observed spectra of a spectroscopic double-lined binary star (SB2) to determine the parameters of orbit and reconstruct the spectra of component stars, without the use of template spectra. fd3 disentangles the spectra of SB2 stars, capable also of resolving the possible third companion. It performs the separation of spectra in the Fourier space which is faster, but in several respects less versatile than the wavelength-space separation. (Wavelength-space separation is implemented in the twin code CRES.) fd3 is written in C and is designed as a command-line utility for a Unix-like operating system. fd3 is a new version of FDBinary (ascl:1705.011), which is now deprecated.
[ascl:1705.011] FDBinary: A tool for spectral disentangling of double-lined spectroscopic binary stars
FDBinary disentangles spectra of SB2 stars. The spectral disentangling technique can be applied on a time series of observed spectra of an SB2 to determine the parameters of orbit and reconstruct the spectra of component stars, without the use of template spectra. The code is written in C and is designed as a command-line utility for a Unix-like operating system. FDBinary uses the Fourier-space approach in separation of composite spectra. This code has been replaced with the newer fd3 (ascl:1705.012).
[ascl:1604.011] FDPS: Framework for Developing Particle Simulators
[ascl:1806.001] feets: feATURE eXTRACTOR FOR tIME sERIES
[ascl:1905.011] Fermitools: Fermi Science Tools
Fermi Science Tools is a suite of tools for the analysis of both the Large-Area Telescope (LAT) and the Gamma-ray Burst Monitor (GBM) data, including point source analysis for generating maps, spectra, and light curves, pulsar timing analysis, and source identification.
[ascl:1208.011] Fewbody: Numerical toolkit for simulating small-N gravitational dynamics
Fewbody is a numerical toolkit for simulating small-N gravitational dynamics. It is a general N-body dynamics code, although it was written for the purpose of performing scattering experiments, and therefore has several features that make it well-suited for this purpose. Fewbody uses the 8th-order Runge-Kutta Prince-Dormand integration method with 9th-order error estimate and adaptive timestep to advance the N-body system forward in time. It integrates the usual formulation of the N-body equations in configuration space, but allows for the option of global pairwise Kustaanheimo-Stiefel (K-S) regularization (Heggie 1974; Mikkola 1985). The code uses a binary tree algorithm to classify the N-body system into a set of independently bound hierarchies, and performs collisions between stars in the “sticky star” approximation. Fewbody contains a collection of command line utilities that can be used to perform individual scattering and N-body interactions, but is more generally a library of functions that can be used from within other codes.
[ascl:1512.017] FFTLog: Fast Fourier or Hankel transform
FFTLog is a set of Fortran subroutines that compute the fast Fourier or Hankel (= Fourier-Bessel) transform of a periodic sequence of logarithmically spaced points. FFTLog can be regarded as a natural analogue to the standard Fast Fourier Transform (FFT), in the sense that, just as the normal FFT gives the exact (to machine precision) Fourier transform of a linearly spaced periodic sequence, so also FFTLog gives the exact Fourier or Hankel transform, of arbitrary order m, of a logarithmically spaced periodic sequence.
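The key observation can be demonstrated directly: on logarithmic grids r_n = r_0 e^(n Δ) and k_m = k_0 e^(m Δ) with a shared spacing Δ, the product k_m r_n depends on m and n only through m + n, so the Hankel-transform sum is a discrete correlation, which is what lets FFTLog evaluate it with FFTs. A brute-force O(N²) Python sketch showing this structure (the actual FFTLog algorithm instead evaluates the equivalent convolution kernel analytically):

    import numpy as np
    from scipy.special import jv

    # Log-spaced r and k grids sharing one spacing dlnr.
    N, dlnr = 256, 0.05
    r = 1e-3 * np.exp(dlnr * np.arange(N))
    k = 1e-3 * np.exp(dlnr * np.arange(N))
    f = np.exp(-0.5 * r**2)            # example input function
    mu = 0.0                           # Bessel-function order

    # F(k_m) = sum_n f(r_n) J_mu(k_m r_n) r_n^2 dlnr.  Since k_m r_n
    # depends on m and n only through m + n, this double sum is a
    # discrete correlation, which FFTLog evaluates with FFTs.
    F = np.array([np.sum(f * jv(mu, km * r) * r**2 * dlnr) for km in k])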
[ascl:1603.014] fibmeasure: Python/Cython module to find the center of back-illuminated optical fibers in metrology images
fibmeasure finds the precise locations of the centers of back-illuminated optical fibers in images. It was developed for astronomical fiber positioning feedback via machine vision cameras and is optimized for high-magnification images where fibers appear as resolvable circles. It was originally written during the design of the WEAVE pick-and-place fiber positioner for the William Herschel Telescope.
[ascl:1307.004] FieldInf: Field Inflation exact integration routines
FieldInf is a collection of fast modern Fortran routines for computing exactly the background evolution and primordial power spectra of any single field inflationary models. It implements reheating without any assumptions through the "reheating parameter" R allowing robust inflationary parameter estimations and inference on the reheating energy scale. The underlying perturbation code actually deals with N fields minimally-coupled and/or non-minimally coupled to gravity and works for flat FLRW only.
[ascl:1203.013] Figaro: Data Reduction Software
Figaro is a data reduction system that originated at Caltech and whose development continued at the Anglo-Australian Observatory. Although it is intended to be able to deal with any sort of data, almost all its applications to date are geared towards processing optical and infrared data. Figaro uses hierarchical data structures to provide flexibility in its data file formats. Figaro was originally written to run under DEC's VMS operating system, but is now available both for VMS and for various flavours of UNIX.
[ascl:1608.009] FilFinder: Filamentary structure in molecular clouds
[ascl:1602.007] FilTER: Filament Trait-Evaluated Reconstruction
FilTER (Filament Trait-Evaluated Reconstruction) post-processes output from DisPerSE (ascl:1302.015) to produce a set of filaments that are well-defined and have measured properties (e.g., width), then cuts the profiles, fits and assesses them to reconstruct new filaments according to defined criteria.
[ascl:1808.006] Fips: An OpenGL based FITS viewer
[ascl:1202.014] FISA: Fast Integrated Spectra Analyzer
[ascl:1010.070] Fisher Matrix Manipulation and Confidence Contour Plotting
This code allows you to combine constraints from multiple experiments (e.g., weak lensing + supernovae) and add priors (e.g., a flat universe) simply and easily, then calculate parameter uncertainties and plot confidence ellipses. Fisher matrix expectations for several experiments are included, as calculated by the author (time delays) and by the Dark Energy Task Force (WL/SN/BAO/CL/CMB); you can also provide your own.
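The underlying operations are simple: Fisher matrices from independent experiments add, a Gaussian prior adds to the relevant diagonal entry, and the confidence ellipse for a parameter pair follows from the inverse. A minimal sketch with made-up matrices (not this package's code or data):

    import numpy as np

    # Made-up 2x2 Fisher matrices for two experiments (parameters Om, w).
    F_wl = np.array([[4000.0, 1500.0], [1500.0, 900.0]])
    F_sn = np.array([[2500.0, -800.0], [-800.0, 1200.0]])

    F = F_wl + F_sn                    # independent constraints add
    F[0, 0] += 1.0 / 0.01**2           # Gaussian prior sigma(Om) = 0.01

    cov = np.linalg.inv(F)             # parameter covariance
    sigma = np.sqrt(np.diag(cov))      # marginalized 1-sigma errors

    # 68.3% two-parameter confidence ellipse (Delta chi^2 = 2.30):
    vals, vecs = np.linalg.eigh(cov)
    semi_axes = np.sqrt(2.30 * vals)
    major = vecs[:, np.argmax(vals)]
    angle = np.degrees(np.arctan2(major[1], major[0]))
    print(sigma, semi_axes, angle)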
[ascl:1201.007] Fisher4Cast: Fisher Matrix Toolbox
The Fisher4Cast suite, which requires MATLAB, provides a standard, tested tool set for general Fisher Information matrix prediction and forecasting for use in both research and education. The toolbox design is robust and modular, allowing for easy additions and adaptation while keeping the user interface intuitive and easy to use. Fisher4Cast is completely general, but the default is coded for cosmology. It provides parameter error forecasts for cosmological surveys providing distance, Hubble expansion and growth measurements in a general, curved FLRW background.
[ascl:1609.005] FISHPACK90: Efficient FORTRAN Subprograms for the Solution of Separable Elliptic Partial Differential Equations
FISHPACK90 is a modernization of the original FISHPACK (ascl:1609.004), employing Fortran90 to slightly simplify and standardize the interface to some of the routines. This collection of Fortran programs and subroutines solves second- and fourth-order finite difference approximations to separable elliptic Partial Differential Equations (PDEs). These include Helmholtz equations in cartesian, polar, cylindrical, and spherical coordinates, as well as more general separable elliptic equations. The solvers use the cyclic reduction algorithm. When the problem is singular, a least-squares solution is computed. Singularities induced by the coordinate system are handled, including at the origin r=0 in cylindrical coordinates, and at the poles in spherical coordinates. Test programs are provided for the 19 solvers. Each serves two purposes: as a template to guide you in writing your own codes utilizing the FISHPACK90 solvers, and as a demonstration on your computer that you can correctly produce FISHPACK90 executables.
[ascl:1601.016] Fit Kinematic PA: Fit the global kinematic position-angle of galaxies
Fit Kinematic PA measures the global kinematic position-angle (PA) from integral field observations of the stellar or gas kinematics of a galaxy; the code is available in IDL and Python.
[ascl:1609.015] FIT3D: Fitting optical spectra
[ascl:1206.002] FITS Liberator: Image processing software
[ascl:1505.029] fits2hdf: FITS to HDFITS conversion
fits2hdf ports FITS files to Hierarchical Data Format (HDF5) files in the HDFITS format. HDFITS allows faster reading of data, higher compression ratios, and higher throughput. HDFITS formatted data can be presented transparently as an in-memory FITS equivalent by changing the import lines in Python-based FITS utilities. fits2hdf includes a utility to port MeasurementSets (MS) to HDF5 files.
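The conversion concept (shown here with astropy and h5py as a generic illustration, not fits2hdf's own API or the exact HDFITS layout) amounts to copying each HDU's data array and header cards into an HDF5 dataset and its attributes:

    from astropy.io import fits
    import h5py

    # Copy each HDU's data array and header cards into HDF5 (illustrative
    # only; the real HDFITS layout is more carefully specified).  File
    # names are placeholders.
    with fits.open("example.fits") as hdul, \
         h5py.File("example.h5", "w") as h5:
        for i, hdu in enumerate(hdul):
            if hdu.data is None:
                node = h5.create_group(f"HDU{i}")
            else:
                node = h5.create_dataset(f"HDU{i}", data=hdu.data,
                                         compression="gzip")
            for card in hdu.header.cards:
                if card.keyword not in ("", "COMMENT", "HISTORY"):
                    node.attrs[card.keyword] = card.value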
[ascl:1710.018] FITSFH: Star Formation Histories
[ascl:1111.014] FITSH: Software Package for Image Processing
FITSH provides a standalone environment for analysis of data acquired by imaging astronomical detectors. The package provides utilities both for the full pipeline of subsequent related data processing steps (including image calibration, astrometry, source identification, photometry, differential analysis, low-level arithmetic operations, multiple image combinations, spatial transformations and interpolations, etc.) and for aiding the interpretation of the (mainly photometric and/or astrometric) results. The package also features a consistent implementation of photometry based on image subtraction, point spread function fitting and aperture photometry and provides easy-to-use interfaces for comparisons and for picking the most suitable method for a particular problem. The utilities in the package are built on top of the commonly used UNIX/POSIX shells (hence the name of the package), therefore both frequently used and well-documented tools for such environments can be exploited and managing massive amounts of data is rather convenient.
[ascl:1107.003] FITSManager: Management of Personal Astronomical Data
[ascl:1905.012] Fitsverify: FITS file format-verification tool
Fitsverify rigorously checks whether a FITS (Flexible Image Transport System) data file conforms to the requirements defined in Version 3.0 of the FITS Standard document; it is a standalone version of the ftverify and fverify tasks that are distributed as part of the ftools (ascl:9912.002) software package. The source code must be compiled and linked with the CFITSIO (ascl:1010.001) library. An interactive web version is also available that can verify the format of any FITS data file on a local computer or on the Web.
[ascl:1612.006] flexCE: Flexible one-zone chemical evolution code
flexCE (flexible Chemical Evolution) computes the evolution of a one-zone chemical evolution model with inflow and outflow in which gas is instantaneously and completely mixed. It can be used to demonstrate the sensitivity of chemical evolution models to parameter variations, show the effect of CCSN yields on chemical evolution models, and reproduce the two-dimensional distribution in the [O/Fe]-[Fe/H] plane by mixing models with a range of inflow and outflow histories. It can also post-process cosmological simulations to predict element distributions.
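The one-zone bookkeeping can be sketched as a simple forward-Euler update of gas mass and metal mass under inflow, star formation, recycling, and outflow. All rates and yields below are placeholders, and flexCE itself tracks many individual elements with full yield tables:

    # Schematic one-zone chemical evolution model (illustrative only).
    dt, nstep = 1e7, 1300                  # 10 Myr steps, ~13 Gyr total
    mgas, mz = 1e10, 0.0                   # gas mass and metal mass (Msun)
    inflow, eta, yield_z, r_ret = 1.0, 2.5, 0.02, 0.4

    for _ in range(nstep):
        z = mz / mgas                      # instantaneous gas metallicity
        sfr = 1e-9 * mgas                  # linear star-formation law stand-in
        dmgas = inflow - (1 - r_ret) * sfr - eta * sfr
        dmz = yield_z * sfr - (1 - r_ret) * z * sfr - eta * z * sfr
        mgas += dmgas * dt
        mz += dmz * dt

    print("final gas-phase Z:", mz / mgas)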
[ascl:1107.004] Flexible DM-NRG
This code combines the spectral sum-conserving methods of Weichselbaum and von Delft and of Peters, Pruschke and Anders (both relying upon the complete basis set construction of Anders and Schiller) with the use of non-Abelian symmetries in a flexible manner: essentially any non-Abelian symmetry can be taught to the code, and any number of such symmetries can be used throughout the computation, for any density of states, to compute the real and imaginary parts of the correlation function of any local operator, or any thermodynamical expectation value. The code works both at zero and finite temperatures.
[ascl:1205.006] Flexion: IDL code for calculating gravitational flexion
Gravitational flexion is a technique for measuring 2nd order gravitational lensing signals in background galaxies and radio lobes. Unlike shear, flexion directly probes variations of the potential field. Moreover, the information contained in flexion is orthogonal to what is found in the shear. Thus, we get the information "for free."
[ascl:1411.016] Flicker: Mean stellar densities from flicker
Flicker calculates the mean stellar density of a star by inputting the flicker observed in a photometric time series. Written in Fortran90, its output may be used as an informative prior on stellar density when fitting transit light curves.
[ascl:1210.007] FLUKA: Fully integrated particle physics Monte Carlo simulation package
FLUKA (FLUktuierende KAskade) is a general-purpose tool for calculations of particle transport and interactions with matter. FLUKA can simulate with high accuracy the interaction and propagation in matter of about 60 different particles, including photons and electrons from 1 keV to thousands of TeV, neutrinos, muons of any energy, hadrons of energies up to 20 TeV (up to 10 PeV by linking FLUKA with the DPMJET code) and all the corresponding antiparticles, neutrons down to thermal energies and heavy ions. The program, written in Fortran, can also transport polarised photons (e.g., synchrotron radiation) and optical photons. Time evolution and tracking of emitted radiation from unstable residual nuclei can be performed online.
[ascl:1105.008] Flux Tube Model
This Fortran code computes magnetohydrostatic flux tubes and sheets according to the method of Steiner, Pneuman, & Stenflo (1986) A&A 170, 126-137. The code has many parameters contained in one input file that are easily modified. Extensive documentation is provided in README files.
[ascl:1712.010] Flux Tube: Solar model
Flux Tube is a nonlinear, two-dimensional, numerical simulation of magneto-acoustic wave propagation in the photosphere and chromosphere of small-scale flux tubes with internal structure. Waves with realistic periods of three to five minutes are studied, after horizontal and vertical oscillatory perturbations are applied to the equilibrium model. Spurious reflections of shock waves from the upper boundary are minimized by a special boundary condition.
[ascl:1701.007] Forecaster: Mass and radii of planets predictor
Forecaster predicts the mass (or radius) from the radius (or mass) for objects covering nine orders-of-magnitude in mass. It is an unbiased forecasting model built upon a probabilistic mass-radius relation conditioned on a sample of 316 well-constrained objects. It accounts for observational errors, hyper-parameter uncertainties and the intrinsic dispersions observed in the calibration sample.
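Schematically, drawing from a probabilistic power-law mass-radius relation looks like the following; the slope, normalization, and scatter below are placeholders, not Forecaster's fitted hyper-parameters, and the real model is a broken power law with hyper-parameter uncertainties:

    import numpy as np

    rng = np.random.default_rng(11)

    # Placeholder power law R = c * M^s (Earth units) with intrinsic scatter.
    c, s, scatter_dex = 1.0, 0.28, 0.04

    def forecast_radius(m_mean, m_std, nsamp=10_000):
        """Propagate a mass measurement through the probabilistic relation."""
        m = rng.normal(m_mean, m_std, nsamp)
        m = m[m > 0]
        logr = np.log10(c) + s * np.log10(m) \
            + rng.normal(0, scatter_dex, m.size)
        return 10 ** logr

    r = forecast_radius(2.0, 0.3)
    print(np.percentile(r, [16, 50, 84]))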
[ascl:1904.011] FortesFit: Flexible spectral energy distribution modelling with a Bayesian backbone
FortesFit efficiently explores and discriminates between various spectral energy distributions (SED) models of astronomical sources. The Python package adds Bayesian inference to a framework that is designed for the easy incorporation and relative assessment of SED models, various fitting engines, and a powerful treatment of priors, especially those that may arise from non-traditional wave-bands such as the X-ray or radio emission, or from spectroscopic measurements. It has been designed with particular emphasis for its scalability to large datasets and surveys.
[ascl:1405.007] FORWARD: Forward modeling of coronal observables
FORWARD forward models various coronal observables and can access and compare existing data. Given a coronal model, it can produce many different synthetic observables (including Stokes polarimetry), as well as plots of model plasma properties (density, magnetic field, etc.). It uses the CHIANTI database (ascl:9911.004) and CLE polarimetry synthesis code, works with numerical model datacubes, interfaces with the PFSS module of SolarSoft (ascl:1208.013), includes several analytic models, and connects to the Virtual Solar Observatory for downloading data in a format directly comparable to model predictions.
[ascl:1204.004] Fosite: 2D advection problem solver
Fosite implements a method for the solution of hyperbolic conservation laws in curvilinear orthogonal coordinates. It is written in Fortran 90/95 integrating object-oriented (OO) design patterns, incorporating the flexibility of OO-programming into Fortran 90/95 while preserving the efficiency of the numerical computation. Although mainly intended for CFD simulations, Fosite's modular design allows its application to other advection problems as well. Unlike other two-dimensional implementations of finite volume methods, it accounts for local conservation of specific angular momentum. This feature turns the program into a perfect tool for astrophysical simulations where angular momentum transport is crucial. Angular momentum transport is not only implemented for standard coordinate systems with rotational symmetry (i.e. cylindrical, spherical) but also for a general set of orthogonal coordinate systems allowing the use of exotic curvilinear meshes (e.g. oblate-spheroidal). As in the case of the advection problem, this part of the software is also kept modular, therefore new geometries may be incorporated into the framework in a straightforward manner.
[ascl:1610.012] Fourierdimredn: Fourier dimensionality reduction model for interferometric imaging
Fourierdimredn (Fourier dimensionality reduction) implements Fourier-based dimensionality reduction of interferometric data. Written in Matlab, it derives the theoretically optimal dimensionality reduction operator from a singular value decomposition perspective of the measurement operator. Fourierdimredn ensures a fast implementation of the full measurement operator and also preserves the i.i.d. Gaussian properties of the original measurement noise.
[ascl:1806.030] foxi: Forecast Observations and their eXpected Information
Using information theory and Bayesian inference, the foxi Python package computes a suite of expected utilities given futuristic observations in a flexible and user-friendly way. foxi requires a set of n-dimensional prior samples for each model and one set of n-dimensional samples from the current data, and can calculate the expected ln-Bayes factor between models, the decisiveness between models and its maximum-likelihood averaged equivalent, the decisivity, and the expected Kullback-Leibler divergence (i.e., the expected information gain of the futuristic dataset). The package offers flexible inputs and is designed for all-in-one script calculation or an initial cluster run followed by local machine post-processing, which should make large jobs quite manageable, subject to resources. It also includes features such as LaTeX tables and plot-making for post-data analysis visuals and convenience of presentation.
[ascl:1010.002] fpack: FITS Image Compression Program
fpack is a utility program for optimally compressing images in FITS format; the compressed image remains a valid FITS file, and the FITS header keywords remain uncompressed for fast access.
[ascl:1906.003] FREDDA: A fast, real-time engine for de-dispersing amplitudes
FREDDA detects Fast Radio Bursts (FRBs) in power data. It is optimized for use at ASKAP, namely GHz frequencies with 10s of beams, 100s of channels and millisecond integration times. The code is written in CUDA for NVIDIA Graphics Processing Units.
[ascl:1610.014] Freddi: Fast Rise Exponential Decay accretion Disk model Implementation
Freddi (Fast Rise Exponential Decay: accretion Disk model Implementation) solves 1-D evolution equations of the Shakura-Sunyaev accretion disk. It simulates fast rise exponential decay (FRED) light curves of low mass X-ray binaries (LMXBs). The basic equation of the viscous evolution relates the surface density and viscous stresses and is of diffusion type; evolution of the accretion rate can be found on solving the equation. The distribution of viscous stresses defines the emission from the source. The standard model for the accretion disk is implied; the inner boundary of the disk is at the ISCO or can be explicitly set. The boundary conditions in the disk are the zero stress at the inner boundary and the zero accretion rate at the outer boundary. The conditions are suitable during the outbursts in X-ray binary transients with black holes. In a binary system, the accretion disk is radially confined. In Freddi, the outer radius of the disk can be set explicitly or calculated as the position of the tidal truncation radius.
[ascl:1211.002] FreeEOS: Equation of State for stellar interiors calculations
FreeEOS is a Fortran library for rapidly calculating the equation of state using an efficient free-energy minimization technique that is suitable for physical conditions in stellar interiors. Converged FreeEOS solutions can be reliably determined for the first time for physical conditions occurring in stellar models with masses between 0.1 M_sun and the hydrogen-burning limit near 0.07 M_sun and hot brown-dwarf models just below that limit. However, an initial survey of results for those conditions showed EOS discontinuities (plasma phase transitions) and other problems which will need to be addressed in future work by adjusting the interaction radii characterizing the pressure ionization used for the FreeEOS calculations.
[ascl:1406.006] FROG: Time-series analysis
FROG performs time series analysis and display. It provides a simple user interface for astronomers wanting to do time-domain astrophysics but still offers the powerful features found in packages such as PERIOD (ascl:1406.005). FROG includes a number of tools for manipulation of time series. Among other things, the user can combine individual time series, detrend series (multiple methods) and perform basic arithmetic functions. The data can also be exported directly into the TOPCAT (ascl:1101.010) application for further manipulation if needed.
[ascl:1506.006] fsclean: Faraday Synthesis CLEAN imager
Fsclean produces 3D Faraday spectra using the Faraday synthesis method, transforming directly from multi-frequency visibility data to the Faraday depth-sky plane space. Deconvolution is accomplished using the CLEAN algorithm, and the package includes Clark and Högbom style CLEAN algorithms. Fsclean reads in MeasurementSet visibility data and produces HDF5 formatted images; it handles images and data of arbitrary size, using scratch HDF5 files as buffers for data that is not being immediately processed, and is limited only by available disk space.
[ascl:1710.012] FSFE: Fake Spectra Flux Extractor
The fake spectra flux extractor generates simulated quasar absorption spectra from a particle- or adaptive-mesh-based hydrodynamic simulation. It is implemented as a Python module. It can produce both hydrogen and metal line spectra if the simulation includes metals. A Cloudy table for metal ionization fractions is included. Unlike earlier spectral generation codes, it produces absorption from each particle close to the sight-line individually, rather than first producing an average density in each spectral pixel, thus substantially preserving more of the small-scale velocity structure of the gas. The code supports both Gadget (ascl:0003.001) and AREPO.
[ascl:1010.043] FSPS: Flexible Stellar Population Synthesis
FSPS is a flexible SPS package that allows the user to compute simple stellar populations (SSPs) for a range of IMFs and metallicities, and for a variety of assumptions regarding the morphology of the horizontal branch, the blue straggler population, the post-AGB phase, and the location in the HR diagram of the TP-AGB phase. From these SSPs the user may then generate composite stellar populations (CSPs) for a variety of star formation histories (SFHs) and dust attenuation prescriptions. Outputs include the "observed" spectra and magnitudes of the SSPs and CSPs at arbitrary redshift. In addition to these Fortran routines, several IDL routines are provided that allow easy manipulation of the output. FSPS was designed with the intention that the user would make full use of the provided Fortran routines. However, the full FSPS package is quite large, and requires some time for the user to become familiar with all of the options and syntax. Some users may only need SSPs for a range of metallicities and IMFs; for such users, standard SSP sets for several IMFs, evolutionary tracks, and spectral libraries are also provided.
[ascl:1711.003] FTbg: Background removal using Fourier Transform
FTbg performs Fourier transforms on FITS images and separates low- and high-spatial frequency components by a user-specified cut. Both components are then inverse Fourier transformed back to image domain. FTbg can remove large-scale background/foreground emission in many astrophysical applications. FTbg has been designed to identify and remove Galactic background emission in Herschel/Hi-GAL continuum images, but it is applicable to any other (e.g., Planck) images when background/foreground emission is a concern.
[ascl:9912.002] FTOOLS: A general package of software to manipulate FITS files
FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Research Archive Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs which perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a standalone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.
[ascl:1112.002] Funtools: FITS Users Need Tools
Funtools is a "minimal buy-in" FITS library and utility package developed at the the High Energy Astrophysics Division of SAO. The Funtools library provides simplified access to a wide array of file types: standard astronomical FITS images and binary tables, raw arrays and binary event lists, and even tables of ASCII column data. A sophisticated region filtering library (compatible with ds9) filters images and tables using boolean operations between geometric shapes, support world coordinates, etc. Funtools also supports advanced capabilities such as optimized data searching using index files.
Because Funtools consists of a library and a set of user programs, it is most appropriately built from source. Funtools has been ported to Solaris, Linux, LinuxPPC, SGI, Alpha OSF1, Mac OSX (darwin) and Windows 98/NT/2000/XP. Once the source code tar file is retrieved, Funtools can be built and installed easily using standard commands.
[ascl:1205.005] Fv: Interactive FITS file editor
Fv is an easy-to-use graphical program for viewing and editing any FITS format image or table. The Fv software is small, completely self-contained and runs on Windows PCs, most Unix platforms and Mac OS X. Fv also provides a portal into the Hera data analysis service from the HEASARC.
[ascl:1010.015] Fyris Alpha: Computational Fluid Dynamics Code
Fyris Alpha is a high-resolution, shock-capturing, multi-phase, upwind Godunov-method hydrodynamics code that includes a variable equation of state and optional microphysics such as cooling, gravity and multiple tracer variables. The code has been designed and developed for use primarily in astrophysical applications, such as galactic and interstellar bubbles, hypersonic shocks, and a range of jet phenomena. Fyris Alpha boasts both higher performance and more detailed microphysics than its predecessors, with the aim of producing output that is closer to the observational domain, such as emission line fluxes, and eventually, detailed spectral synthesis. Fyris Alpha is approximately 75,000 lines of C code; it encapsulates the split-sweep semi-Lagrangian remap PPM method used by ppmlr (in turn developed from VH1, Blondin et al. 1998) but with an improved Riemann solver, derived from the exact solver of Gottlieb and Groth (1988), which is significantly faster than previous solvers. It has a number of optimisations that have improved the speed so that the additional calculations needed for multi-phase simulations become practical.
[ascl:1801.011] GABE: Grid And Bubble Evolver
[ascl:0003.001] GADGET-2: A Code for Cosmological Simulations of Structure Formation
The cosmological simulation code GADGET-2, a new massively parallel TreeSPH code, is capable of following a collisionless fluid with the N-body method, and an ideal gas by means of smoothed particle hydrodynamics (SPH). The implementation of SPH manifestly conserves energy and entropy in regions free of dissipation, while allowing for fully adaptive smoothing lengths. Gravitational forces are computed with a hierarchical multipole expansion, which can optionally be applied in the form of a TreePM algorithm, where only short-range forces are computed with the 'tree' method while long-range forces are determined with Fourier techniques. Time integration is based on a quasi-symplectic scheme where long-range and short-range forces can be integrated with different timesteps. Individual and adaptive short-range timesteps may also be employed. The domain decomposition used in the parallelisation algorithm is based on a space-filling curve, resulting in high flexibility and tree force errors that do not depend on the way the domains are cut. The code is efficient in terms of memory consumption and required communication bandwidth. It has been used to compute the first cosmological N-body simulation with more than 10^10 dark matter particles, reaching a homogeneous spatial dynamic range of 10^5 per dimension in a 3D box. It has also been used to carry out very large cosmological SPH simulations that account for radiative cooling and star formation, reaching total particle numbers of more than 250 million. GADGET-2 is publicly released to the research community.
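The TreePM split described above has a compact mathematical core: in Fourier space the potential is divided into a long-range piece, attenuated by a factor exp(-k^2 r_s^2) and solved on a mesh, and a short-range remainder handled by the tree. A generic numpy sketch of the long-range mesh solve (with G = 1; an illustration of the technique, not GADGET-2 code):

    import numpy as np

    def longrange_potential(density, boxsize, r_s):
        # Solve nabla^2 phi = 4*pi*rho (G = 1) on a periodic mesh,
        # keeping only the long-range part of the Green's function.
        n = density.shape[0]
        k1d = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        k2 = kx**2 + ky**2 + kz**2
        rho_k = np.fft.fftn(density)
        phi_k = np.zeros_like(rho_k)
        mask = k2 > 0
        phi_k[mask] = -4 * np.pi * rho_k[mask] / k2[mask] * np.exp(-k2[mask] * r_s**2)
        return np.fft.ifftn(phi_k).real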
[ascl:1108.005] Gaepsi: Gadget Visualization Toolkit
Gaepsi is a Python extension for visualizing cosmology simulations produced by Gadget. Visualization is the most important facet of Gaepsi, but it also allows data analysis on GADGET simulations with its growing number of physics-related subroutines and constants. Unlike mesh-based schemes, SPH simulations are not directly rasterizable: a splatting process is required to produce raster images from the simulations. Gaepsi produces images of 2-dimensional line-of-sight projections of the simulation. Scalar fields and vector fields are both supported.
Besides the traditional way of slicing a simulation, Gaepsi also has built-in support for the 'survey-like' domain transformation proposed by Carlson & White; an improved implementation is used in Gaepsi. Gaepsi both implements an interactive shell for plotting and exposes its API for batch processing. When compiled with OpenMP, Gaepsi automatically takes advantage of multi-core computers. In interactive mode, Gaepsi is capable of producing images of size up to 32000 x 32000 pixels. The user can zoom, pan and rotate the field with simple commands, and the interactive mode takes full advantage of matplotlib's rich annotating, labeling and image composition facilities. There are also built-in commands to add objects that are commonly used in cosmology simulations to the figures.
[ascl:1707.006] Gala: Galactic astronomy and gravitational dynamics
Gala is a Python package (and Astropy affiliated package) for Galactic astronomy and gravitational dynamics. The bulk of the package centers around implementations of gravitational potentials, numerical integration, nonlinear dynamics, and astronomical velocity transformations (i.e., proper motions). Gala uses the Astropy units and coordinates subpackages extensively to provide a clean, pythonic interface to these features but does the heavy lifting in C and Cython for speed.
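A short usage sketch based on Gala's documented interface, integrating an orbit in the built-in Milky Way potential model:

    import astropy.units as u
    import gala.potential as gp
    import gala.dynamics as gd

    pot = gp.MilkyWayPotential()
    w0 = gd.PhaseSpacePosition(pos=[8.0, 0.0, 0.0] * u.kpc,
                               vel=[0.0, 220.0, 0.0] * u.km / u.s)
    orbit = gp.Hamiltonian(pot).integrate_orbit(w0, dt=1.0 * u.Myr, n_steps=2000)
    print(orbit.pos)  # positions carry astropy units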
[ascl:1109.011] GalactICS: Galaxy Model Building Package
GalactICS generates N-body realizations of axisymmetric galaxy models consisting of disk, bulge and halo. Some of the code is in Fortran 77, using lines longer than 72 characters in some cases; the -e flag in the makefile allows for this with a Solaris f77 compiler. Other programs are written in C. Again, the linking between these routines works on Solaris systems, but may need to be adjusted for other architectures. We have found that linking using f77 instead of ld will often automatically load the appropriate libraries.
The graphics output by some of the programs (dbh, plotforce, diskdf, plothalo) uses the PGPLOT library. Alternatively, remove all calls to routines with names starting with "PG", as well as the -lpgplot flag in the Makefile, and the programs should still run fine.
[ascl:1108.004] Galacticus: A Semi-Analytic Model of Galaxy Formation
Galacticus is designed to solve the physics involved in the formation of galaxies within the current standard cosmological framework. It is of a type of model known as “semi-analytic”, in which the numerous complex, non-linear physical processes involved are treated using a combination of analytic approximations and empirical calibrations from more detailed numerical solutions. Models of this type aim to begin with the initial state of the Universe (specified shortly after the Big Bang) and apply physical principles to determine the properties of galaxies in the Universe at later times, including the present day. Typical properties computed include the mass of stars and gas in each galaxy, broad structural properties (e.g., radii, rotation speeds, geometrical shape, etc.), dark matter and black hole contents, and observable quantities such as luminosities, chemical composition, etc.
[ascl:1303.018] Galactus: Modeling and fitting of galaxies from neutral hydrogen (HI) cubes
Galactus, written in Python, is an astronomical software tool for the modeling and fitting of galaxies from neutral hydrogen (HI) cubes. Galactus uses a uniform medium to generate a cube. The code can perform the full radiative transfer for the HI, and so can model self-absorption in the galaxy.
[ascl:1408.011] GALAPAGOS-C: Galaxy Analysis over Large Areas
GALAPAGOS-C is a C implementation of the IDL code GALAPAGOS (ascl:1203.002). It processes a complete set of survey images through automation of source detection via SExtractor (ascl:1010.064), postage stamp cutting, object mask preparation, sky background estimation and complex two-dimensional light profile Sérsic modelling via GALFIT (ascl:1104.010). GALAPAGOS-C uses MPI-parallelization, thus allowing quick processing of large data sets. The code can fit multiple Sérsic profiles to each galaxy, each representing distinct galaxy components (e.g. bulge, disc, bar), and optionally can fit asymmetric Fourier mode distortions.
[ascl:1203.002] GALAPAGOS: Galaxy Analysis over Large Areas: Parameter Assessment by GALFITting Objects from SExtractor
GALAPAGOS, Galaxy Analysis over Large Areas: Parameter Assessment by GALFITting Objects from SExtractor (ascl:1010.064), automates source detection, two-dimensional light-profile Sersic modelling and catalogue compilation in large survey applications. Based on a single setup, GALAPAGOS can process a complete set of survey images. It detects sources in the data, estimates a local sky background, cuts postage stamp images for all sources, prepares object masks, performs Sersic fitting including neighbours and compiles all objects in a final output catalogue. For the initial source detection GALAPAGOS applies SExtractor, while GALFIT (ascl:1104.010) is incorporated for modelling Sersic profiles. It measures the background sky involved in the Sersic fitting by means of a flux growth curve. GALAPAGOS determines postage stamp sizes based on SExtractor shape parameters. In order to obtain precise model parameters GALAPAGOS incorporates a complex sorting mechanism and makes use of multiplexing capabilities. It combines SExtractor and GALFIT data in a single output table. When incorporating information from overlapping tiles, GALAPAGOS automatically removes multiple entries from identical sources. GALAPAGOS is programmed in the Interactive Data Language, IDL. A C implementation of the software, GALAPAGOS-C (ascl:1408.011), is available.
[ascl:1710.022] galario: Gpu Accelerated Library for Analyzing Radio Interferometer Observations
The galario library exploits the computing power of modern graphic cards (GPUs) to accelerate the comparison of model predictions to radio interferometer observations. It speeds up the computation of the synthetic visibilities given a model image (or an axisymmetric brightness profile) and their comparison to the observations.
[ascl:1503.002] Galax2d: 2D isothermal Euler equations solver
Galax2d computes the 2D stationary solution of the isothermal Euler equations of gas dynamics in a rotating galaxy with a weak bar. The gravitational potential represents a weak bar and controls the flow. A damped Newton method solves the second-order upwind discretization of the equations for a steady-state solution, using a consistent linearization and a direct solver. The code can be applied as a tool for generating flow models if used on not too fine meshes, up to 256 by 256 cells for half a disk in polar coordinates.
[ascl:1104.005] GALAXEV: Evolutionary Stellar Population Synthesis Models
GALAXEV is a library of evolutionary stellar population synthesis models computed using the new isochrone synthesis code of Bruzual & Charlot (2003). This code allows one to compute the spectral evolution of stellar populations in wide ranges of ages and metallicities at a resolution of 3 Å across the whole wavelength range from 3200 Å to 9500 Å, and at lower resolution outside this range.
[ascl:1101.007] Galaxia: A Code to Generate a Synthetic Survey of the Milky Way
We present here a fast code for creating a synthetic survey of the Milky Way. Given one or more color-magnitude bounds, a survey size and geometry, the code returns a catalog of stars in accordance with a given model of the Milky Way. The model can be specified by a set of density distributions or as an N-body realization. We provide fast and efficient algorithms for sampling both types of models. As compared to earlier sampling schemes which generate stars at specified locations along a line of sight, our scheme can generate a continuous and smooth distribution of stars over any given volume. The code is quite general and flexible and can accept input in the form of a star formation rate, age-metallicity relation, age-velocity dispersion relation and analytic density distribution functions. Theoretical isochrones are then used to generate a catalog of stars and support is available for a wide range of photometric bands. As a concrete example we implement the Besancon Milky Way model for the disc. For the stellar halo we employ the simulated stellar halo N-body models of Bullock & Johnston (2005). In order to sample N-body models, we present a scheme that disperses the stars spawned by an N-body particle, in such a way that the phase space density of the spawned stars is consistent with that of the N-body particles. The code is ideally suited to generating synthetic data sets that mimic near future wide area surveys such as GAIA, LSST and HERMES. As an application we study the prospect of identifying structures in the stellar halo with a simulated GAIA survey.
[ascl:1904.002] GALAXY: N-body simulation software for isolated, collisionless stellar systems
GALAXY evolves (almost) isolated, collisionless stellar systems, both disk-like and ellipsoidal. In addition to the N-body code galaxy, which offers eleven different methods to compute the gravitational accelerations, the package also includes sophisticated set-up and analysis software. While not as versatile as tree codes, for certain restricted applications the particle-mesh methods in GALAXY are 50 to 200 times faster than a widely-used tree code. After reading in data providing the initial positions, velocities, and (optionally) masses of the particles, GALAXY computes the gravitational accelerations acting on each particle and integrates forward the velocities and positions of the particles for a short time step, repeating these two steps as desired. Intermediate results can be saved, as can the final state, from which the integration can be resumed. Particles can have individual masses and their motion can be integrated using a range of time steps for greater efficiency; message-passing-interface (MPI) calls are available to enable GALAXY's use on parallel machines with high efficiency.
[ascl:1312.010] GalaxyCount: Galaxy counts and variance calculator
GalaxyCount calculates the number and standard deviation of galaxies in a magnitude limited observation of a given area. The methods to calculate both the number and standard deviation may be selected from different options. Variances may be computed for circular, elliptical and rectangular window functions.
[ascl:1010.033] GALEV Evolutionary Synthesis Models
[ascl:1810.001] galfast: Milky Way mock catalog generator
[ascl:1104.010] GALFIT: Detailed Structural Decomposition of Galaxy Images
GALFIT is a two-dimensional (2-D) fitting algorithm designed to extract structural components from galaxy images, with emphasis on closely modeling light profiles of spatially well-resolved, nearby galaxies observed with the Hubble Space Telescope. The algorithm improves on previous techniques in two areas: 1.) by being able to simultaneously fit a galaxy with an arbitrary number of components, and 2.) with optimization in computation speed, suited for working on large galaxy images. 2-D models such as the "Nuker" law, the Sersic (de Vaucouleurs) profile, an exponential disk, and Gaussian or Moffat functions are used. The azimuthal shapes are generalized ellipses that can fit disky and boxy components. Many galaxies with complex isophotes, ellipticity changes, and position-angle twists can be modeled accurately in 2-D. When examined in detail, even simple-looking galaxies generally require at least three components to be modeled accurately rather than the one or two components more often employed. This is illustrated by way of seven case studies, which include regular and barred spiral galaxies, highly disky lenticular galaxies, and elliptical galaxies displaying various levels of complexities. A useful extension of this algorithm is to accurately extract nuclear point sources in galaxies.
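GALFIT itself is a stand-alone compiled program driven by input files, but the Sersic profile at the heart of such decompositions is easy to write down. A purely illustrative numpy sketch of the model being fit:

    import numpy as np

    def sersic(r, n, r_e, I_e):
        # I(r) = I_e * exp(-b_n * ((r/r_e)**(1/n) - 1)), with the common
        # approximation b_n ~ 2n - 1/3 (adequate for n >~ 0.5)
        b_n = 2.0 * n - 1.0 / 3.0
        return I_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

    r = np.linspace(0.1, 10.0, 100)               # radii, arbitrary units
    profile = sersic(r, n=4.0, r_e=3.0, I_e=1.0)  # de Vaucouleurs case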
[ascl:1510.005] GALFORM: Galactic modeling
[ascl:1408.008] GALIC: Galaxy initial conditions construction
GalIC (GALaxy Initial Conditions) is an implementation of an iterative method to construct steady state composite halo-disk-bulge galaxy models with prescribed density distribution and velocity anisotropy that can be used as initial conditions for N-body simulations. The code is parallelized for distributed memory based on MPI. While running, GalIC produces "snapshot files" that can be used as initial conditions files. GalIC supports the three file formats ('type1' format, the slightly improved 'type2' format, and an HDF5 format) of the GADGET (ascl:0003.001) code for its output snapshot files.
[ascl:1511.010] Galileon-Solver: N-body code
Galileon-Solver adds an extra force to PMCode (ascl:9909.001) using a modified Poisson equation to provide a non-linearly transformed density field, with the operations all performed in real space. The code's implicit spherical top-hat assumption only works over fairly long distance averaging scales, where the coarse-grained picture it relies on is a good approximation of reality; it uses discrete Fourier transforms and cyclic reduction in the usual way.
[ascl:1903.010] GalIMF: Galaxy-wide Initial Mass Function
GalIMF (Galaxy-wide Initial Mass Function) computes the galaxy-wide initial stellar mass function by integrating over a whole galaxy, parameterized by star formation rate and metallicity. The generated stellar mass distribution depends on the galaxy-wide star formation rate (SFR, which is related to the total mass of a galaxy) and the galaxy-wide metallicity. The code can generate a galaxy-wide IMF (IGIMF) and can also generate all the stellar masses within a galaxy with optimal sampling (OSGIMF). To compute the IGIMF or the OSGIMF, the GalIMF module contains all local IMF properties (e.g., the dependence of the stellar IMF on the metallicity and on the density of the star-cluster forming molecular cloud cores), and this software module can therefore also be used to obtain only the stellar IMF with various prescriptions, or to investigate other features of the stellar population, such as the most massive star that can form in a star cluster.
[ascl:1711.011] galkin: Milky Way rotation curve data handler
galkin is a compilation of kinematic measurements tracing the rotation curve of our Galaxy, together with a tool to treat the data. The compilation is optimized for Galactocentric radii between 3 and 20 kpc and includes the kinematics of gas, stars and masers in a total of 2780 measurements collected from almost four decades of literature. The user-friendly software provided selects, treats and retrieves the data of all source references considered. This tool is especially designed to facilitate the use of kinematic data in dynamical studies of the Milky Way with various applications ranging from dark matter constraints to tests of modified gravity.
[ascl:1903.005] Galmag: Computation of realistic galactic magnetic fields
Galmag computes galactic magnetic fields based on mean field dynamo theory. Written in Python, Galmag allows quick exploration of solutions to the mean field dynamo equation based on galaxy parameters specified by the user, such as the scale height profile and the galaxy rotation curves. The magnetic fields are solenoidal by construction and can be helical.
[ascl:1501.014] GalPaK 3D: Galaxy parameters and kinematics extraction from 3D data
GalPaK 3D extracts the intrinsic (i.e., deconvolved) galaxy parameters and kinematics from any 3-dimensional data. The algorithm uses a disk parametric model with 10 free parameters (which can also be fixed independently) and an MCMC approach with non-traditional sampling laws in order to efficiently probe the parameter space. More importantly, it uses the knowledge of the 3-dimensional spread-function to return the intrinsic galaxy properties and the intrinsic data-cube. The 3D spread-function class is flexible enough to handle any instrument.
GalPaK 3D can simultaneously constrain the kinematics and morphological parameters of (non-merging, i.e. regular) galaxies observed in non-optimal seeing conditions and can also be used on AO data or on high-quality, high-SNR data to look for non-axisymmetric structures in the residuals.
[ascl:1611.006] GalPot: Galaxy potential code
GalPot finds the gravitational potential associated with axisymmetric density profiles. The package includes code that performs transformations between commonly used coordinate systems for both positions and velocities (the class OmniCoords), and that integrates orbits in the potentials. GalPot is a stand-alone version of Walter Dehnen's GalaxyPotential C++ code taken from the falcON code in the NEMO Stellar Dynamics Toolbox (ascl:1010.051).
[ascl:1010.028] GALPROP: Code for Cosmic-ray Transport and Diffuse Emission Production
[ascl:1411.008] galpy: Galactic dynamics package
[ascl:1402.009] GalSim: Modular galaxy image simulation toolkit
GalSim is a fast, modular software package for simulation of astronomical images. Though its primary purpose is for tests of weak lensing analysis methods, it can be used for other purposes. GalSim allows galaxies and PSFs to be represented in a variety of ways, and can apply shear, magnification, dilation, or rotation to a galaxy profile, including lensing-based models from a power spectrum or NFW halo profile. It can write images in regular FITS files, FITS data cubes, or multi-extension FITS files. It can also compress the output files using various compressions including gzip, bzip2, and Rice. The user interface is in Python or via configuration scripts, and the computations are done in C++ for speed.
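A minimal sketch of the Python interface, drawing a sheared Sersic galaxy convolved with a Gaussian PSF:

    import galsim

    gal = galsim.Sersic(n=1.5, half_light_radius=1.0, flux=1.0e4)
    gal = gal.shear(g1=0.05, g2=0.0)    # apply a weak-lensing shear
    psf = galsim.Gaussian(fwhm=0.7)
    final = galsim.Convolve([gal, psf])
    image = final.drawImage(scale=0.2)  # pixel scale in arcsec
    image.write("galaxy.fits")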
[ascl:1711.007] galstep: Initial conditions for spiral galaxy simulations
galstep generates initial conditions for disk galaxy simulations with GADGET-2 (ascl:0003.001), RAMSES (ascl:1011.007) and GIZMO (ascl:1410.003), including a stellar disk, a gaseous disk, a dark matter halo and a stellar bulge. The first two components follow an exponential density profile, and the last two a Dehnen density profile with gamma=1 by default, corresponding to a Hernquist profile.
[ascl:1711.010] galstreams: Milky Way streams footprint library and toolkit
galstreams provides a compilation of spatial information for known stellar streams and overdensities in the Milky Way and includes Python tools for visualizing them. ASCII tables are also provided for quick viewing of the streams' footprints using TOPCAT (ascl:1101.010).
[ascl:1304.003] GALSVM: Automated Morphology Classification
GALSVM is IDL software for automated morphology classification. It was specially designed for high redshift data but can be used at low redshift as well. It analyzes morphologies of galaxies based on a particular family of learning machines called support vector machines. The method can be seen as a generalization of the classical CAS classification but with an unlimited number of dimensions and non-linear boundaries between decision regions. It is fully automated and consequently well adapted to large cosmological surveys.
[ascl:1612.017] GAMER: GPU-accelerated Adaptive MEsh Refinement code
GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose AMR + GPU framework for solving hydrodynamics with self-gravity, providing a variety of GPU-accelerated hydrodynamic and Poisson solvers. It also supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balancing. Although the code is designed for simulating galaxy formation, it can easily be modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.
[ascl:1105.011] Ganalyzer: A tool for automatic galaxy image analysis
Ganalyzer is a model-based tool that automatically analyzes and classifies galaxy images. Ganalyzer works by separating the galaxy pixels from the background pixels, finding the center and radius of the galaxy, generating the radial intensity plot, and then computing the slopes of the peaks detected in the radial intensity plot to measure the spirality of the galaxy and determine its morphological class. Unlike algorithms that are based on machine learning, Ganalyzer is based on measuring the spirality of the galaxy, a task that is difficult to perform manually, and in many cases can provide a more accurate analysis compared to manual observation. Ganalyzer is simple to use, and can be easily embedded into other image analysis applications. Another advantage is its speed, which allows it to analyze ~10,000,000 galaxy images in five days using a standard modern desktop computer. These capabilities can make Ganalyzer a useful tool in analyzing large datasets of galaxy images collected by autonomous sky surveys such as SDSS, LSST or DES.
[ascl:1708.012] GANDALF: Gas AND Absorption Line Fitting
[ascl:1602.015] GANDALF: Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids
GANDALF, a successor to SEREN (ascl:1102.010), is a hybrid self-gravitating fluid dynamics and collisional N-body code primarily designed for investigating star formation and planet formation problems. GANDALF uses various implementations of Smoothed Particle Hydrodynamics (SPH) to perform hydrodynamical simulations of gas clouds undergoing gravitational collapse to form new stars (or other objects), and can perform simulations of pure N-body dynamics using high accuracy N-body integrators, model the intermediate phase of cluster evolution, and provide visualizations via its python interface as well as interactive simulations. Although based on many of the SEREN routines, GANDALF has been largely re-written from scratch in C++ using more optimal algorithms and data structures.
[ascl:1303.027] GaPP: Gaussian Processes in Python
Gaussian process regression can reconstruct a function from a sample of data without assuming a parameterization of the function. The GaPP code can be used on any dataset to reconstruct a function. It handles individual error bars on the data and can be used to determine the derivatives of the reconstructed function. The data sample can consist of observations of the function and of its first derivative.
[ascl:1010.049] Gas-momentum-kinetic SZ cross-correlations
We present a new method for detecting the missing baryons by generating a template for the kinematic Sunyaev-Zel'dovich effect. The template is computed from the product of a reconstructed velocity field with a galaxy field. We provide maps of such templates constructed from SDSS Data Release 7 spectroscopic data (SDSS VAGC sample) alongside their expected two-point correlation functions with CMB temperature anisotropies. Code for generating the coefficients of the two-point correlation function is also released, giving users of the gas-momentum map a way to change parameters such as cosmological parameters, reionization history, ionization parameters, etc.
[ascl:1210.020] GASGANO: Data File Organizer
GASGANO is a GUI software tool for managing and viewing data files produced by the VLT Control System (VCS) and the Data Flow System (DFS). It is developed and maintained by ESO to help its user community manage and organize astronomical data observed and produced by all VLT-compliant telescopes in a systematic way. The software understands FITS, PAF, and ASCII files, and Reduction Blocks, and can group, sort, classify, filter, and search data in addition to allowing the user to browse, view, and manage them.
[ascl:1710.019] GASOLINE: Smoothed Particle Hydrodynamics (SPH) code
Gasoline solves the equations of gravity and hydrodynamics in astrophysical problems, including simulations of planets, stars, and galaxies. It uses an SPH method that features correct mixing behavior in multiphase fluids and minimal artificial viscosity. This method is identical to the SPH method used in the ChaNGa code (ascl:1105.005), allowing users to extend results to problems requiring >100,000 cores. Gasoline uses a fast, memory-efficient O(N log N) KD-Tree to solve Poisson's Equation for gravity and avoids artificial viscosity in non-shocking compressive flows.
[ascl:1610.007] gatspy: General tools for Astronomical Time Series in Python
Gatspy contains efficient, well-documented implementations of several common routines for astronomical time series analysis, including the Lomb-Scargle periodogram, the Supersmoother method, and others.
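A short usage sketch of the documented Lomb-Scargle interface:

    import numpy as np
    from gatspy.periodic import LombScargleFast

    rng = np.random.default_rng(0)
    t = 100 * rng.random(300)  # irregular sampling times
    y = np.sin(2 * np.pi * t / 0.63) + 0.1 * rng.standard_normal(300)

    model = LombScargleFast().fit(t, y, 0.1)  # 0.1 = measurement uncertainty
    periods, power = model.periodogram_auto()
    best_period = periods[np.argmax(power)]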
[ascl:1406.018] GAUSSCLUMPS: Gaussian-shaped clumping from a spectral map
GAUSSCLUMPS decomposes a spectral map into Gaussian-shaped clumps. The clump-finding algorithm decomposes a spectral data cube by iteratively removing 3-D Gaussians as representative clumps. GAUSSCLUMPS was originally a separate code distribution but is now a contributed package in GILDAS (ascl:1305.010). A reimplementation can also be found in CUPID (ascl:1311.007).
[ascl:1305.009] GaussFit: Solving least squares and robust estimation problems
GaussFit solves least squares and robust estimation problems; written originally for reduction of NASA Hubble Space Telescope data, it includes a complete programming language designed especially to formulate estimation problems, a built-in compiler and interpreter to support the programming language, and a built-in algebraic manipulator for calculating the required partial derivatives analytically. The code can handle nonlinear models, exact constraints, correlated observations, and models where the equations of condition contain more than one observed quantity. Written in C, GaussFit includes an experimental robust estimation capability so data sets contaminated by outliers can be handled simply and efficiently.
[ascl:1907.019] GaussPy: Python implementation of the Autonomous Gaussian Decomposition algorithm
GaussPy implements the Autonomous Gaussian Decomposition (AGD) algorithm, which uses computer vision and machine learning techniques to provide optimized initial guesses for the parameters of a multi-component Gaussian model automatically and efficiently. The speed and adaptability of AGD allow it to interpret large volumes of spectral data efficiently. Although it was initially designed for applications in radio astrophysics, AGD can be used to search for one-dimensional Gaussian (or any other single-peaked spectral profile)-shaped components in any data set. To determine how many Gaussian functions to include in a model and what their parameters are, AGD uses a technique called derivative spectroscopy. The derivatives of a spectrum can efficiently identify shapes within that spectrum corresponding to the underlying model, including gradients, curvature and edges.
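The derivative-spectroscopy idea can be illustrated generically: candidate Gaussian components sit where the smoothed second derivative of the spectrum has negative local minima. A numpy/scipy sketch of that step (not the GaussPy implementation):

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def candidate_centers(spectrum, smooth=3.0):
        # Smoothed second derivative via a Gaussian derivative filter
        d2 = gaussian_filter1d(spectrum, smooth, order=2)
        # Local minima of the second derivative where it is negative
        mask = (d2 < 0) & (d2 < np.roll(d2, 1)) & (d2 < np.roll(d2, -1))
        return np.flatnonzero(mask)  # indices of candidate line centers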
[ascl:1907.020] GaussPy+: Gaussian decomposition package for emission line spectra
GaussPy+ is a fully automated Gaussian decomposition package for emission line spectra. It is based on GaussPy (ascl:1907.019) and offers several improvements, including automating preparatory steps and providing an accurate noise estimation, improving the fitting routine, and providing a routine to refit spectra based on neighboring fit solutions. GaussPy+ handles complex emission and low to moderate signal-to-noise values.
[ascl:1710.014] GBART: Determination of the orbital elements of spectroscopic binaries
GBART is an improved version of the code for determining the orbital elements for spectroscopic binaries originally written by Bertiau & Grobben (1968).
[ascl:1303.019] GBTIDL: Reduction and Analysis of GBT Spectral Line Data
GBTIDL is an interactive package for reduction and analysis of spectral line data taken with the Robert C. Byrd Green Bank Telescope (GBT). The package, written entirely in IDL, consists of straightforward yet flexible calibration, averaging, and analysis procedures (the "GUIDE layer") modeled after the UniPOPS and CLASS data reduction philosophies, a customized plotter with many built-in visualization features, and Data I/O and toolbox functionality that can be used for more advanced tasks. GBTIDL makes use of data structures which can also be used to store intermediate results. The package consumes and produces data in GBT SDFITS format. GBTIDL can be run online and have access to the most recent data coming off the telescope, or can be run offline on preprocessed SDFITS files.
[ascl:1811.018] gdr2_completeness: GaiaDR2 data retrieval and manipulation
[ascl:1010.079] Geant4: A Simulation Toolkit for the Passage of Particles through Matter
[ascl:1007.003] GEMINI: A toolkit for analytical models of two-point correlations and inhomogeneous structure formation
Gemini is a toolkit for analytical models of two-point correlations and inhomogeneous structure formation. It extends standard Press-Schechter theory to inhomogeneous situations, allowing a realistic, analytical calculation of halo correlations and bias.
[ascl:1212.005] General complex polynomial root solver
This general complex polynomial root solver, implemented in Fortran and further optimized for binary microlenses, uses a new algorithm to solve polynomial equations and is 1.6-3 times faster than the ZROOTS subroutine that is commercially available from Numerical Recipes, depending on application. The largest improvement, when compared to naive solvers, comes from a fail-safe procedure that permits skipping the majority of the calculations in the great majority of cases, without risking catastrophic failure in the few cases where they are actually required.
[ascl:1812.014] GENGA: Gravitational ENcounters with Gpu Acceleration
[ascl:1706.006] GenPK: Power spectrum generator
GenPK generates the 3D matter power spectra for each particle species from a Gadget snapshot. Written in C++, it requires both FFTW3 and GadgetReader.
[ascl:1011.015] Geokerr: Computing Photon Orbits in a Kerr Spacetime
Relativistic radiative transfer problems require the calculation of photon trajectories in curved spacetime. Programmed in Fortran, Geokerr uses a novel technique for rapid and accurate calculation of null geodesics in the Kerr metric. The equations of motion from the Hamilton-Jacobi equation are reduced directly to Carlson's elliptic integrals, simplifying algebraic manipulations and allowing all coordinates to be computed semi-analytically for the first time.
[ascl:1511.015] George: Gaussian Process regression
George is a fast and flexible library, implemented in C++ with Python bindings, for Gaussian Process regression useful for accounting for correlated noise in astronomical datasets, including those for transiting exoplanet discovery and characterization and stellar population modeling.
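A short usage sketch of the documented interface, regressing noisy samples under a squared-exponential kernel:

    import numpy as np
    import george
    from george import kernels

    x = np.sort(10 * np.random.rand(50))
    yerr = 0.1 * np.ones_like(x)
    y = np.sin(x) + yerr * np.random.randn(50)

    gp = george.GP(1.0 * kernels.ExpSquaredKernel(1.0))
    gp.compute(x, yerr)  # factorize the covariance matrix
    x_pred = np.linspace(0, 10, 200)
    mu, var = gp.predict(y, x_pred, return_var=True)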
[ascl:1412.012] GeoTOA: Geocentric TOA tools
GeoTOA computes the pulse times of arrival (TOAs) at an observatory (or spacecraft) from unbinned Fermi LAT data. Written in Python, the software requires NumPy, matplotlib, SciPy, FSSC Science Tools, and Tempo2 (ascl:1210.015).
[ascl:1512.002] GetData: A filesystem-based, column-oriented database format for time-ordered binary data
The GetData Project is the reference implementation of the Dirfile Standards, a filesystem-based, column-oriented database format for time-ordered binary data. Dirfiles provide a fast, simple format for storing and reading data, suitable for both quicklook and analysis pipelines. GetData provides a C API and bindings exist for various other languages. GetData is distributed under the terms of the GNU Lesser General Public License.
[ascl:1705.007] getimages: Background derivation and image flattening method
getimages performs background derivation and image flattening for high-resolution images obtained with space observatories. It is based on median filtering with sliding windows corresponding to a range of spatial scales from the observational beam size up to a maximum structure width X. The latter is a single free parameter of getimages that can be evaluated manually from the observed image. The median filtering algorithm provides a background image for structures of all widths below X. The same median filtering procedure applied to an image of standard deviations derived from a background-subtracted image results in a flattening image. Finally, a flattened image is computed by dividing the background-subtracted by the flattening image. Standard deviations in the flattened image are now uniform outside sources and filaments. Detecting structures in such radically simplified images results in much cleaner extractions that are more complete and reliable. getimages also reduces various observational and map-making artifacts and equalizes noise levels between independent tiles of mosaicked images. The code (a Bash script) uses FORTRAN utilities from getsources (ascl:1507.014), which must be installed.
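The core procedure can be approximated with standard scipy filters; a rough sketch of the background and flattening steps (an illustration only, not the getimages code):

    import numpy as np
    from scipy.ndimage import median_filter, uniform_filter

    def flatten(image, window):
        background = median_filter(image, size=window)
        resid = image - background
        # Local standard deviation of the background-subtracted image
        var = uniform_filter(resid**2, window) - uniform_filter(resid, window)**2
        flattening = median_filter(np.sqrt(np.clip(var, 0, None)), size=window)
        return resid / np.maximum(flattening, 1e-12)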
[ascl:1507.014] getsources: Multi-scale, multi-wavelength source extraction
getsources is a powerful multi-scale, multi-wavelength source extraction algorithm. It analyzes fine spatial decompositions of original images across a wide range of scales and across all wavebands, cleans those single-scale images of noise and background, and constructs wavelength-independent single-scale detection images that preserve information in both spatial and wavelength dimensions. getsources offers several advantages over other existing methods of source extraction, including the filtering out of irrelevant spatial scales to improve detectability, especially in the crowded regions and for extended sources, the ability to combine data over all wavebands, and the full automation of the extraction process.
[ascl:1509.008] GFARGO: FARGO for GPU
GFARGO is a GPU version of FARGO (ascl:1102.017). It is written in C and CUDA C and runs only on NVIDIA graphics cards. Though it corresponds to the standard, isothermal version of FARGO, not all functionalities of the CPU version have been translated to CUDA. The code is available in single and double precision versions, the latter compatible with Fermi architectures. GFARGO can run on a graphics card connected to the display, allowing the user to see in real time how the fields evolve.
[ascl:1510.001] GGADT: Generalized Geometry Anomalous Diffraction Theory
[ascl:1112.008] GGobi: A data visualization system
[ascl:1107.002] GIBIS: Gaia Instrument and Basic Image Simulator
GIBIS is a pixel-level simulator of the Gaia mission. It is intended to simulate how the Gaia instruments will observe the sky, using realistic simulations of the astronomical sources and of the instrumental properties. It is a branch of the global Gaia Simulator under development within the Gaia DPAC CU2 Group (Data Simulations). Access is currently restricted to Gaia DPAC teams.
[ascl:1112.005] GIDGET: Gravitational Instability-Dominated Galaxy Evolution Tool
Observations of disk galaxies at z~2 have demonstrated that turbulence driven by gravitational instability can dominate the energetics of the disk. GIDGET is a 1D simulation code, which we have made publicly available, that economically evolves these galaxies from z~2 to z~0 on a single CPU in a matter of minutes, tracking column density, metallicity, and velocity dispersions of gaseous and multiple stellar components. We include an H2-regulated star formation law and the effects of stellar heating by transient spiral structure. We use this code to demonstrate a possible explanation for the existence of a thin and thick disk stellar population and the age-velocity dispersion correlation of stars in the solar neighborhood: the high velocity dispersion of gas in disks at z~2 decreases along with the cosmological accretion rate, while at lower redshift, the dynamically colder gas forms the low velocity dispersion stars of the thin disk.
[ascl:1305.010] GILDAS: Grenoble Image and Line Data Analysis Software
GILDAS is a collection of software oriented toward (sub-)millimeter radioastronomical applications (either single-dish or interferometer). It has been adopted as the IRAM standard data reduction package and is jointly maintained by IRAM & CNRS. GILDAS contains many facilities, most of which are oriented towards spectral line mapping and many kinds of 3-dimensional data. The code, written in Fortran-90 with a few parts in C/C++ (mainly keyboard interaction, plotting, widgets), is easily extensible.
[ascl:1004.001] GIM2D: Galaxy IMage 2D
GIM2D (Galaxy IMage 2D) is an IRAF/SPP package written to perform detailed bulge/disk decompositions of low signal-to-noise images of distant galaxies in a fully automated way. GIM2D takes an input image from HST or ground-based telescopes and outputs a galaxy-subtracted image as well as a catalog of structural parameters.
[ascl:1303.020] Ginga: Flexible FITS viewer
Ginga is a viewer for astronomical FITS (Flexible Image Transport System) data files; the viewer centers around a FITS display widget which supports zooming and panning, color and intensity mapping, a choice of several automatic cut level algorithms, and canvases for plotting scalable geometric forms. In addition to this widget, the FITS viewer provides a flexible plugin framework for extending the viewer with many different features. A fairly complete set of "standard" plugins is provided for expected features of a modern viewer: panning and zooming windows, star catalog access, cuts, star pick/FWHM, thumbnails, and others. This viewer was written by software engineers at Subaru Telescope, National Astronomical Observatory of Japan, and is in use at that facility.
[ascl:1109.018] GIPSY: Groningen Image Processing System
GIPSY is an acronym of Groningen Image Processing SYstem. It is a highly interactive software system for the reduction and display of astronomical data. It supports multi-tasking through a versatile user interface, and has an advanced data structure, a powerful scripting language, and good display facilities based on the X Window System.
GIPSY consists of a number of components which can be divided into a number of classes:
• The user interfaces. Currently two user interfaces are available; one for interactive work and one for batch processing.
• The data structure.
• The display utilities.
• The application programs. These are the majority of programs.
GIPSY was designed originally for the reduction of interferometric data from the Westerbork Synthesis Radio Telescope, but in its history of more than 20 years it has grown to a system capable of handling data from many different instruments (e.g. TAURUS, IRAS etc.).
[ascl:1907.025] GIST: Galaxy IFU Spectroscopy Tool
[ascl:1410.003] GIZMO: Multi-method magneto-hydrodynamics+gravity code
GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).
[submitted] GizmoAnalysis: read and analyze Gizmo simulations
GizmoAnalysis reads and analyzes N-body simulations run with the Gizmo code (ascl:1410.003). Written in Python 3, it was developed primarily to analyze FIRE simulations, though it is usable with any Gizmo snapshot files. It offers the following functionality: reads snapshot files and converts particle data to physical units; provides a flexible dictionary class to store particle data and compute derived quantities on the fly; plots images and properties of particles; generates region files for input to MUSIC (ascl:1311.011) to generate cosmological zoom-in initial conditions; and computes rates of supernovae and stellar winds, including their nucleosynthetic yields, as used in FIRE simulations. A Jupyter notebook tutorial is included.
[ascl:1805.025] GLACiAR: GaLAxy survey Completeness AlgoRithm
[ascl:1812.002] GLADIS: GLobal Accretion Disk Instability Simulation
[ascl:1010.012] glafic: Software Package for Analyzing Gravitational Lensing
glafic is a public software package for analyzing gravitational lensing. It offers many features including computations of various lens properties for many mass models, solving the lens equation using an adaptive grid algorithm, simulations of lensed extended images with PSF convolved, and efficient modeling of observed strong lens systems.
[ascl:1103.006] GLESP 2.0: Gauss-Legendre Sky Pixelization for CMB Analysis
GLESP is a pixelization scheme for cosmic microwave background (CMB) radiation maps. The scheme is based on the zeros of the Legendre polynomials (the Gauss-Legendre quadrature nodes) and allows one to create a strictly orthogonal expansion of the map.
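The ring positions of such a grid follow directly from Gauss-Legendre quadrature. A numpy illustration of the pixelization idea (not the GLESP code):

    import numpy as np

    n_rings = 64
    nodes, weights = np.polynomial.legendre.leggauss(n_rings)
    theta = np.arccos(nodes)  # ring colatitudes at the zeros of P_64
    # Quadrature over these rings integrates Legendre polynomials exactly
    # up to the corresponding band limit, which is what makes the map
    # expansion strictly orthogonal.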
[ascl:1802.010] Glimpse: Sparsity based weak lensing mass-mapping tool
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
[ascl:1110.008] Glnemo2: Interactive Visualization 3D Program
Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and Nokia QT 4.X API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density area, formation of structures such as spirals, bars, or peanuts). It allows for in/out zooms, rotations, changes of scale, translations, selection of different groups of particles and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphic user interface.
Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, realtime gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt 4 API.
[ascl:1807.019] GLS: Generalized Lomb-Scargle periodogram
The Lomb-Scargle periodogram is a common tool in the frequency analysis of unequally spaced data, equivalent to least-squares fitting of sine waves. GLS implements the generalization to a full sine wave fit, including an offset and weights (χ² fitting). Compared to the Lomb-Scargle periodogram, GLS is superior as it provides more accurate frequencies, is less susceptible to aliasing, and gives a much better determination of the spectral intensity.
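The generalization amounts to a weighted least-squares fit of y = a cos(ωt) + b sin(ωt) + c at each trial frequency, with the power defined by the χ² reduction relative to a constant model. A compact numpy sketch of the method (not the reference implementation):

    import numpy as np

    def gls_power(t, y, dy, freqs):
        w = 1.0 / dy**2
        chi2_0 = np.sum(w * (y - np.average(y, weights=w))**2)
        sw = np.sqrt(w)
        power = np.empty(len(freqs))
        for i, f in enumerate(freqs):
            A = np.column_stack([np.cos(2 * np.pi * f * t),
                                 np.sin(2 * np.pi * f * t),
                                 np.ones_like(t)])
            coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
            chi2 = np.sum(w * (y - A @ coef)**2)
            power[i] = 1.0 - chi2 / chi2_0  # fractional chi^2 reduction
        return power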
[ascl:1402.002] Glue: Linked data visualizations across multiple files
Glue, written in Python, links visualizations of scientific datasets across many files, allowing for interactive, linked statistical graphics of multiple files. It supports many file formats including common image formats (jpg, tiff, png), ASCII tables, astronomical image and table formats (FITS, VOT, IPAC), and HDF5. Custom data loaders can also be easily added. Glue is highly scriptable and extendable.
[ascl:1710.015] GMCALab: Generalized Morphological Component Analysis
GMCALab solves Blind Source Separation (BSS) problems from multichannel/multispectral/hyperspectral data. In essence, multichannel data provide different observations of the same physical phenomena (e.g. multiple wavelengths), which are modeled as a linear combination of unknown elementary components or sources. Written as a set of Matlab toolboxes, it provides a generic framework that can be extended to tackle different matrix factorization problems.
[ascl:1708.013] GMM: Gaussian Mixture Modeling
GMM (Gaussian Mixture Modeling) tests the existence of bimodality in globular cluster color distributions. GMM uses three indicators to distinguish unimodal and bimodal distributions: the kurtosis of the distribution, the separation of the peaks, and the probability of obtaining the same χ² from a unimodal distribution.
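The first two indicators are straightforward to compute; a sketch assuming scikit-learn for the two-component fit (an illustration of the statistics, not the GMM code itself):

    import numpy as np
    from scipy.stats import kurtosis
    from sklearn.mixture import GaussianMixture

    def bimodality_indicators(colors):
        x = np.asarray(colors).reshape(-1, 1)
        fit = GaussianMixture(n_components=2).fit(x)
        mu = fit.means_.ravel()
        sig = np.sqrt(fit.covariances_.ravel())
        # Peak separation in units of the mean component width;
        # negative kurtosis and large D favor a bimodal interpretation
        D = abs(mu[0] - mu[1]) / np.sqrt((sig[0]**2 + sig[1]**2) / 2)
        return kurtosis(colors), D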
[ascl:1801.009] Gnuastro: GNU Astronomy Utilities
Gnuastro (GNU Astronomy Utilities) manipulates and analyzes astronomical data. It is an official GNU package of a large collection of programs and C/C++ library functions. Command-line programs perform arithmetic operations on images, convert FITS images to common types like JPG or PDF, convolve an image with a given kernel or match kernels, perform cosmological calculations, crop parts of large images (possibly in multiple files), manipulate FITS extensions and keywords, and perform statistical operations. In addition, it contains programs to make catalogs from detection maps, add noise, make mock profiles with a variety of radial functions using Monte Carlo integration for their centers, match catalogs, and detect objects in an image, among many other operations. The command-line programs share the same basic command-line user interface for the comfort of both users and developers. Gnuastro is written to comply fully with the GNU coding standards and integrates well with all Unix-like operating systems. This enables astronomers to expect a fully familiar experience in the source code, building, installing and command-line user interaction that they have seen in all the other GNU software that they use. Gnuastro's extensive library is included for users who want to build their own unique programs.
[ascl:1210.003] GOSSIP: SED fitting code
GOSSIP fits the electro-magnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and optionally a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes, stellar mass, and their probability distribution functions.
[ascl:1210.001] GP2PCF: Brute-force computation of 2-point correlation functions
The two-point correlation function is a simple statistic that quantifies the clustering of a given distribution of objects. In studies of the large scale structure of the Universe, it is an important tool containing information about the matter clustering and the evolution of the Universe at different cosmological epochs. A classical application of this statistic is the galaxy-galaxy correlation function to find constraints on the parameter Omega_m or the location of the baryonic acoustic oscillation peak. This calculation, however, is very expensive in terms of computer power and Graphics Processing Units provide one solution for efficient analysis of the increasingly larger galaxy surveys that are currently taking place.
GP2PCF is a public code in CUDA for performing this computation; with a single GPU board it is possible to achieve 120-fold speedups with respect to a standard implementation in C running on a single CPU. With respect to other solutions such as k-d trees, the improvement is a factor of a few while retaining full precision. The speedup is comparable to running in parallel in a cluster of O(100) cores.
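The brute-force computation being accelerated is simple to state: histogram all pair separations in the data and in a random catalog, then form an estimator such as DD/RR - 1. A numpy sketch for small catalogs (not the CUDA code):

    import numpy as np

    def two_point(data, randoms, bins):
        def pair_counts(p):
            d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
            return np.histogram(d[np.triu_indices(len(p), k=1)], bins=bins)[0]
        dd = pair_counts(data).astype(float)
        rr = pair_counts(randoms).astype(float)
        n_d, n_r = len(data), len(randoms)
        norm = (n_r * (n_r - 1.0)) / (n_d * (n_d - 1.0))  # pair-count normalization
        return norm * dd / rr - 1.0  # natural estimator of xi(r)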
[ascl:1512.006] GPC: General Polygon Clipper library
The University of Manchester GPC library is a flexible and highly robust polygon set operations library for use with C, C#, Delphi, Java, Perl, Python, Haskell, Lua, VB.Net and other applications. It supports difference, intersection, exclusive-or and union clip operations, and polygons may be comprised of multiple disjoint contours. Contour vertices may be given in any order - clockwise or anticlockwise, and contours may be convex, concave or self-intersecting, and may be nested (i.e. polygons may have holes). Output may take the form of either polygon contours or tristrips, and hole and external contours are differentiated in the result. GPC is free for non-profit and educational use; a Commercial Use License is required for commercial use.
[ascl:1411.018] GPI Pipeline: Gemini Planet Imager Data Pipeline
The GPI data pipeline allows users to reduce and calibrate raw GPI data into spectral and polarimetric datacubes, and to apply various PSF subtraction methods to those data. Written in IDL and available in a compiled version, the software includes an integrated calibration database to manage reference files and an interactive data viewer customized for high contrast imaging that allows exploration and manipulation of data.
[ascl:1403.001] GPU-D: Generating cosmological microlensing magnification maps
GPU-D is a GPU-accelerated implementation of the inverse ray-shooting technique used to generate cosmological microlensing magnification maps. These maps approximate the source plane magnification patterns created by an ensemble of stellar-mass compact objects within a foreground macrolens galaxy. Unlike other implementations, GPU-D solves the gravitational lens equation without any approximation. Due to the high computational intensity and high degree of parallelization inherent in the algorithm, it is ideal for brute-force implementation on GPUs. GPU-D uses CUDA for GPU acceleration and requires NVIDIA devices to run.
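Inverse ray shooting in miniature: propagate a regular grid of rays through a field of point-mass lenses with the exact lens equation and histogram where they land on the source plane; the ray density per source-plane pixel traces the magnification. A small numpy sketch of the technique, in Einstein-radius units (not the CUDA implementation):

    import numpy as np

    def magnification_map(lens_pos, lens_mass, n_side=1000, extent=5.0, n_pix=200):
        g = np.linspace(-extent, extent, n_side)
        x = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)  # image plane
        y = x.copy()
        for pos, m in zip(lens_pos, lens_mass):
            d = x - pos
            y -= m * d / (np.sum(d**2, axis=1)[:, None] + 1e-12)  # deflection
        counts, _, _ = np.histogram2d(y[:, 0], y[:, 1], bins=n_pix,
                                      range=[[-2, 2], [-2, 2]])
        return counts  # proportional to the source-plane magnification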
[ascl:1906.014] GPUVMEM: Maximum Entropy Method (MEM) GPU algorithm for radio astronomical image synthesis
The maximum entropy method (MEM) is a well-known deconvolution technique in radio interferometry. This method solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user dependent; MEM, by contrast, is unsupervised, has a statistical basis, and offers better resolution and image quality under certain conditions. GPUVMEM is a high-performance GPU implementation of non-gridding MEM.
[ascl:1010.022] GR1D: Open-Source Code for Spherically-Symmetric Stellar Collapse to Neutron Stars and Black Holes
GR1D is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D.
[ascl:1010.080] GRACOS: Scalable and Load Balanced P3M Cosmological N-body Code
The GRACOS (GRAvitational COSmology) code, a parallel implementation of the particle-particle/particle-mesh (P3M) algorithm for distributed memory clusters, uses a hybrid method for both computation and domain decomposition. Long-range forces are computed using a Fourier transform gravity solver on a regular mesh; the mesh is distributed across parallel processes using a static one-dimensional slab domain decomposition. Short-range forces are computed by direct summation of close pairs; particles are distributed using a dynamic domain decomposition based on a space-filling Hilbert curve. A nearly-optimal method was devised to dynamically repartition the particle distribution so as to maintain load balance even for extremely inhomogeneous mass distributions. Tests using $800^3$ simulations on a 40-processor beowulf cluster showed good load balance and scalability up to 80 processes. There are limits on scalability imposed by communication and extreme clustering which may be removed by extending the algorithm to include adaptive mesh refinement.
[ascl:1106.008] GRAFIC-2: Multiscale Gaussian Random Fields for Cosmological Simulations
This paper describes the generation of initial conditions for numerical simulations in cosmology with multiple levels of resolution, or multiscale simulations. We present the theory of adaptive mesh refinement of Gaussian random fields followed by the implementation and testing of a computer code package performing this refinement called GRAFIC-2.
[ascl:1204.006] GRASIL: Spectral evolution of stellar systems with dust
GRASIL (which stands for GRAphite and SILicate) computes the spectral evolution of stellar systems taking into account the effects of dust, which absorbs and scatters optical and UV photons and emits in the IR-submm region. It may be used as well to do “standard” no-dust stellar spectral synthesis. The code is very well calibrated and applied to interpret galaxies at different redshifts. GRASIL can be downloaded or run online using the GALSYNTH WEB interface.
[ascl:1609.008] GRASP: General-purpose Relativistic Atomic Structure Package
GRASP (General-purpose Relativistic Atomic Structure Package) calculates atomic structure, including energy levels, radiative rates (A-values) and lifetimes; it is a fully relativistic code based on the jj coupling scheme. This code has been superseded by GRASP2K (ascl:1611.007).
[ascl:1611.007] GRASP2K: Relativistic Atomic Structure Package
GRASP2K is a revised and greatly expanded version of GRASP (ascl:1609.008) and is adapted for 64-bit computer architecture. It includes new angular libraries, can transform from jj- to LSJ-coupling, and coefficients of fractional parentage have been extended to j=9/2, making calculations feasible for the lanthanides and actinides. GRASP2K identifies each atomic state by the total energy and a label for the configuration state function with the largest expansion coefficient in LSJ intermediate coupling.
[ascl:1902.004] GraviDy: Gravitational Dynamics
GraviDy performs N-body 3D visualizations; it is a GPU, direct-summation N-body integrator based on the Hermite scheme and includes relativistic corrections for sources of gravitational radiation. The software is modular, allowing users to readily introduce new physics, and exploits available computational resources. The software can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version.
[ascl:1102.003] GRAVLENS: Computational Methods for Gravitational Lensing
Modern applications of strong gravitational lensing require the ability to use precise and varied observational data to constrain complex lens models. Two sets of computational methods for lensing calculations are discussed. The first is a new algorithm for solving the lens equation for general mass distributions. This algorithm makes it possible to apply arbitrarily complicated models to observed lenses. The second is an evaluation of techniques for using observational data including positions, fluxes, and time delays of point-like images, as well as maps of extended images, to constrain models of strong lenses. The techniques presented here are implemented in a flexible and user-friendly software package called gravlens, which is made available to the community.
[ascl:1403.005] GRay: Massive parallel ODE integrator
GRay is a massively parallel ordinary differential equation integrator that employs the "stream processing paradigm." It is designed to efficiently integrate billions of photons in curved spacetime according to Einstein's general theory of relativity. The code is implemented in CUDA C/C++.
[ascl:1302.007] GRID-core: Gravitational Potential Identification of Cores
GRID-core is a core-finding method using the contours of the local gravitational potential to identify core boundaries. Results from applying the GRID-core method to 2D surface density and to 3D volume density are in good agreement for bound cores. We have implemented a version of the GRID-core algorithm in IDL, suitable for core-finding in observed maps. The required input is a two-dimensional FITS file containing a map of the column density in a region of a cloud.
[ascl:1702.012] GRIM: General Relativistic Implicit Magnetohydrodynamics
[ascl:1905.001] Grizli: Grism redshift and line analysis software
Grizli produces quantitative and comprehensive modeling and fitting of slitless spectroscopic observations, which typically involve overlapping spectra of hundreds or thousands of objects in exposures taken with one or more separate grisms and at multiple dispersion position angles. This type of analysis provides complete and uniform characterization of the spectral properties (e.g., continuum shape, redshifts, line fluxes) of all objects in a given exposure taken in the slitless spectroscopic mode.
[ascl:1512.018] growl: Growth factor and growth rate of expanding universes
Growl calculates the linear growth factor D(a) and its logarithmic derivative dln D/dln a in expanding Friedmann-Robertson-Walker universes with arbitrary matter and vacuum densities. It permits rapid and stable numerical evaluation.
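For reference, the quantity growl evaluates can be sketched with the standard Heath (1977) integral solution for pressureless FRW models; this is a generic illustration of the computation, not growl's own implementation:

```python
# Linear growth factor D(a) and growth rate dlnD/dlna for a universe with
# matter, curvature and a cosmological constant (radiation neglected).
import numpy as np
from scipy.integrate import quad

om, ol = 0.3, 0.7
ok = 1.0 - om - ol                       # curvature from closure

def E(a):
    """Dimensionless Hubble rate H(a)/H0."""
    return np.sqrt(om / a**3 + ok / a**2 + ol)

def growth(a):
    """Unnormalized growing mode: D(a) ~ (5/2) Om E(a) * int da'/(a' E)^3."""
    integral, _ = quad(lambda x: 1.0 / (x * E(x))**3, 1e-8, a)
    return 2.5 * om * E(a) * integral

def dlnD_dlna(a, eps=1e-4):
    """Growth rate via a centered log-derivative."""
    return (np.log(growth(a * (1 + eps))) -
            np.log(growth(a * (1 - eps)))) / (2 * eps)

a = 0.5
print(growth(a) / growth(1.0))           # D(a), normalized so D(1) = 1
print(dlnD_dlna(a))                      # f = dln D/dln a
```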
[ascl:1605.013] grtrans: Polarized general relativistic radiative transfer via ray tracing
grtrans calculates ray tracing radiative transfer in the Kerr metric, including the full treatment of polarised radiative transfer and parallel transport along geodesics, for comparing theoretical models of black hole accretion flows and jets with observations. The code is written in Fortran 90 and parallelizes with OpenMP; the full code and several components have Python interfaces. grtrans includes Geokerr (ascl:1011.015) and requires cfitsio (ascl:1010.001) and pyfits (ascl:1207.009).
[ascl:1503.009] GSD: Global Section Datafile access library
The GSD library reads data written in the James Clerk Maxwell Telescope GSD format. This format uses the General Single-Dish Data model and was used at the JCMT until 2005. The library provides an API to open GSD files and read their contents. The content of the data files is self-describing and the library can return the type and name of any component. The library is used by SPECX (ascl:1310.008), JCMTDR (ascl:1406.019) and COADD (ascl:1411.020). The SMURF (ascl:1310.007) package can convert GSD heterodyne data files to ACSIS format using this library.
[ascl:1806.008] gsf: galactic structure finder
[ascl:1610.005] GSGS: In-Focus Phase Retrieval Using Non-Redundant Mask Data
GSGS does phase retrieval on images given an estimate of the pupil phase (from a non-redundant mask or other interferometric approach), the pupil geometry, and the in-focus image. The code uses a modified Gerchberg-Saxton algorithm that iterates between pupil plane and image plane to measure the pupil phase.
[ascl:1701.011] GWFrames: Manipulate gravitational waveforms
GWFrames eliminates all rotational behavior, thus simplifying the waveform as much as possible and allowing direct generalizations of methods for analyzing nonprecessing systems. In the process, the angular velocity of a waveform is introduced, which also has important uses, such as supplying a partial solution to an important inverse problem.
[ascl:1203.005] Gyoto: General relativitY Orbit Tracer of Observatoire de Paris
GYOTO, a general relativistic ray-tracing code, aims at computing images of astronomical bodies in the vicinity of compact objects, as well as trajectories of massive bodies in relativistic environments. This code is capable of integrating the null and timelike geodesic equations not only in the Kerr metric, but also in any metric computed numerically within the 3+1 formalism of general relativity. Simulated images and spectra have been computed for a variety of astronomical targets, such as a moving star or a toroidal accretion structure. The underlying code is open source and freely available. It is user-friendly, quick to learn, and very modular, so that extensions are easy to integrate. Custom analytical metrics and astronomical targets can be implemented in C++ plug-in extensions independent from the main code.
[ascl:1308.010] GYRE: Stellar oscillation code
GYRE is an oscillation code that solves the stellar pulsation equations (both adiabatic and non-adiabatic) using a novel Magnus Multiple Shooting numerical scheme devised to overcome certain weaknesses of the usual relaxation and shooting schemes. The code is accurate (up to 6th order in the number of grid points), robust, and makes efficient use of multiple processor cores and/or nodes.
[ascl:1402.031] gyrfalcON: N-body code
gyrfalcON (GalaxY simulatoR using falcON) is a full-fledged N-body code using Dehnen’s force algorithm of complexity O(N) (falcON); this algorithm is approximately 10 times faster than an optimally coded tree code. The code features individual adaptive time steps and individual (but fixed) softening lengths. gyrfalcON is included in and requires NEMO to run.
[submitted] HaloAnalysis
HaloAnalysis reads and analyzes halo/galaxy catalogs, generated from Rockstar (ascl:1210.008) or AHF (ascl:1102.009), and merger trees generated from Consistent Trees (ascl:1210.011). Written in Python 3, it offers the following functionality: reads halo/galaxy/tree catalogs from multiple file formats; assigns baryonic particles and properties to dark-matter halos; combines and re-generates halo/galaxy/tree files in hdf5 format; analyzes properties of halos/galaxies; selects halos to generate zoom-in initial conditions. Includes a Jupyter notebook tutorial.
[ascl:1402.032] HALOFIT: Nonlinear distribution of cosmological mass and galaxies
HALOFIT provides an explanatory framework for galaxy bias and clustering and has been incorporated into CMB packages such as CMBFAST (ascl:9909.004) and CAMB (ascl:1102.026). It attains a reasonable level of precision, though the halo model does not match N-body data perfectly. The code is written in Fortran 77. HALOFIT tends to underpredict the power on the smallest scales in standard LCDM universes (although HALOFIT was designed to work for a much wider range of power spectra); its accuracy can be improved by using a supplied correction.
[ascl:1010.053] Halofitting codes for DGP and Degravitation
We perform N-body simulations of theories with infinite-volume extra dimensions, such as the Dvali-Gabadadze-Porrati (DGP) model and its higher-dimensional generalizations, where 4D gravity is mediated by massive gravitons. The longitudinal mode of these gravitons mediates an extra scalar force, which we model as a density-dependent modification to the Poisson equation. This enhances gravitational clustering, particularly on scales that have undergone mild nonlinear processing. While the standard non-linear fitting algorithm of Smith et al. overestimates this power enhancement on non-linear scales, we present a modified fitting formula that offers a remarkably good fit to our power spectra. Due to the uncertainty in galaxy bias, our results are consistent with precision power spectrum determinations from galaxy redshift surveys, even for graviton Compton wavelengths as small as 300 Mpc. Our model is sufficiently general that we expect it to capture the phenomenology of a wide class of related higher-dimensional gravity scenarios.
[ascl:1505.017] HALOGEN: Approximate synthetic halo catalog generator
HALOGEN generates approximate synthetic halo catalogs. Written in C, it decomposes the problem of generating cosmological tracer distributions (e.g., halos) into four steps: generating an approximate density field, generating the required number of tracers from a CDF over mass, placing the tracers on field particles according to a bias scheme dependent on local density, and assigning velocities to the tracers based on velocities of local particles. It also implements a default set of four models for these steps. HALOGEN uses 2LPTic (ascl:1201.005) and CUTE (ascl:1505.016); the software is flexible and can be adapted to varying cosmologies and simulation specifications.
[ascl:1407.020] Halogen: Multimass spherical structure models for N-body simulations
Halogen, written in C, generates multimass spherically symmetric initial conditions for N-body simulations. A large family of radial density profiles is supported. The initial conditions are sampled from the full distribution function.
[ascl:1604.005] Halotools: Galaxy-Halo connection models
[ascl:1210.022] HAM2D: 2D Shearing Box Model
HAM solves non-relativistic hyperbolic partial differential equations in conservative form using high-resolution shock-capturing techniques. This version of HAM has been configured to solve the magnetohydrodynamic equations of motion in axisymmetry to evolve a shearing box model.
[ascl:1201.014] Hammurabi: Simulating polarized Galactic synchrotron emission
The Hammurabi code is a publicly available C++ code for generating mock polarized observations of Galactic synchrotron emission with telescopes such as LOFAR, SKA, Planck, and WMAP, based on model inputs for the Galactic magnetic field (GMF), the cosmic-ray density distribution, and the thermal electron density. The Hammurabi code allows one to perform simulations of several different data sets simultaneously, providing a more reliable constraint of the magnetized ISM.
[ascl:1905.009] HAOS-DIPER: HAO Spectral Diagnostic Package For Emitted Radiation
HAOS-DIPER works with and manipulates data for neutral atoms and atomic ions to understand radiation emitted by some space plasmas, notably the solar atmosphere and stellar atmospheres. HAOS-DIPER works with quantum numbers for atomic levels, enabling it to perform tasks otherwise difficult or very tedious, including a variety of data checks, calculations based upon the atomic numbers, and searching and manipulating data based upon these quantum numbers. HAOS-DIPER handles conditions from LTE to coronal-like conditions, in a manner controlled by one system variable !REGIME, and has some capability for estimating data for which no accurate parameters are available and for accounting for the effects of missing atomic levels.
[ascl:1209.005] HARM: A Numerical Scheme for General Relativistic Magnetohydrodynamics
HARM uses a conservative, shock-capturing scheme for evolving the equations of general relativistic magnetohydrodynamics. The fluxes are calculated using the Harten, Lax, & van Leer scheme. A variant of constrained transport, proposed earlier by Tóth, is used to maintain a divergence-free magnetic field. Only the covariant form of the metric in a coordinate basis is required to specify the geometry. On smooth flows HARM converges at second order.
[ascl:1306.003] Harmony: Synchrotron Emission Coefficients
Harmony is a general numerical scheme for evaluating MBS emission and absorption coefficients for both polarized and unpolarized light in a plasma with a general distribution function.
[ascl:1109.004] HAZEL: HAnle and ZEeman Light
A big challenge in solar and stellar physics in the coming years will be to decipher the magnetism of the solar outer atmosphere (chromosphere and corona) along with its dynamic coupling with the magnetic fields of the underlying photosphere. To this end, it is important to develop rigorous diagnostic tools for the physical interpretation of spectropolarimetric observations in suitably chosen spectral lines. HAZEL is a computer program for the synthesis and inversion of Stokes profiles caused by the joint action of atomic level polarization and the Hanle and Zeeman effects in some spectral lines of diagnostic interest, such as those of the He I 1083.0 nm and 587.6 nm (or D3) multiplets. It is based on the quantum theory of spectral line polarization, which takes into account in a rigorous way all the relevant physical mechanisms and ingredients (optical pumping, atomic level polarization, level crossings and repulsions, Zeeman, Paschen-Back and Hanle effects). The influence of radiative transfer on the emergent spectral line radiation is taken into account through a suitable slab model. The user can either calculate the emergent intensity and polarization for any given magnetic field vector or infer the dynamical and magnetic properties from the observed Stokes profiles via an efficient inversion algorithm based on global optimization methods.
[ascl:1711.022] HBT: Hierarchical Bound-Tracing
HBT is a Hierarchical Bound-Tracing subhalo finder and merger tree builder, for numerical simulations in cosmology. It tracks haloes from birth and continues to track them after mergers, finding self-bound structures as subhaloes and recording their merger histories as merger trees.
[ascl:1711.023] HBT+: Subhalo finder and merger tree builder
HBT+ is a hybrid subhalo finder and merger tree builder for cosmological simulations. It comes as an MPI edition that can be run on distributed clusters or shared memory machines and is MPI/OpenMP parallelized, and also as an OpenMP edition that can be run on shared memory machines and is only OpenMP parallelized. This version is more memory efficient than the MPI branch on shared memory machines, and is more suitable for analyzing zoomed-in simulations that are difficult to balance on distributed clusters. Both editions support hydro simulations with gas/stars.
[ascl:1502.009] HDS: Hierarchical Data System
[ascl:1107.018] HEALPix: Hierarchical Equal Area isoLatitude Pixelization of a sphere
HEALPix is an acronym for Hierarchical Equal Area isoLatitude Pixelization of a sphere. As suggested in the name, this pixelization produces a subdivision of a spherical surface in which each pixel covers the same surface area as every other pixel. Another property of the HEALPix grid is that the pixel centers occur on a discrete number of rings of constant latitude; the number of constant-latitude rings depends on the resolution of the HEALPix grid.
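Both properties are easy to verify numerically with healpy, the widely used Python binding to HEALPix (a separate package from the original C/Fortran distribution; the snippet below is a usage sketch):

```python
# HEALPix basics via healpy: pixel count, equal pixel areas, and
# angle-to-pixel lookup on the isolatitude grid.
import numpy as np
import healpy as hp

nside = 64                           # resolution parameter (a power of 2)
npix = hp.nside2npix(nside)          # total pixels = 12 * nside**2
print(npix == 12 * nside**2)         # True by construction

print(hp.nside2pixarea(nside))       # identical solid angle per pixel [sr]

theta, phi = np.radians(45.0), np.radians(30.0)   # colatitude, longitude
print(hp.ang2pix(nside, theta, phi)) # pixel index containing that direction
```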
[ascl:1907.002] healvis: Radio interferometric visibility simulator based on HEALpix maps
Healvis simulates radio interferometric visibilities from HEALPix shells. It generates flat-spectrum and GSM models and computes their visibilities, and it can simulate visibilities given an Observation Parameter YAML file. Healvis can perform partial frequency simulations in serial to minimize instantaneous memory loads.
[ascl:1408.004] HEAsoft: Unified Release of FTOOLS and XANADU
HEASOFT combines XANADU, high-level, multi-mission software for X-ray astronomical spectral, timing, and imaging data analysis tasks, and FTOOLS (ascl:9912.002), general and mission-specific software to manipulate FITS files, into one package. It also contains the NuSTAR subpackage of tasks, the NuSTAR Data Analysis Software (NuSTARDAS). The source code for the software can be downloaded; precompiled executables for the most widely used computer platforms are also available for download. As an additional service, HEAsoft tasks can be run directly from a web browser via WebHera.
[ascl:1506.009] HEATCVB: Coronal heating rate approximations
HEATCVB is a stand-alone Fortran 77 subroutine that estimates the local volumetric coronal heating rate with four required inputs: the radial distance r, the wind speed u, the mass density ρ, and the magnetic field strength |B_0|. The primary output is the heating rate Q_turb at the location defined by the input parameters. HEATCVB also computes the local turbulent dissipation rate of the waves, γ = Q_turb/(2U_A).
[ascl:1903.017] HelioPy: Heliospheric and planetary physics library
[ascl:1503.004] HELIOS-K: Opacity Calculator for Radiative Transfer
HELIOS-K is an opacity calculator for exoplanetary atmospheres. It takes a line list as an input and computes the line shapes of an arbitrary number of spectral lines (~millions to billions). HELIOS-K is capable of computing 100,000 spectral lines in 1 second; it is written in CUDA, is optimized for graphics processing units (GPUs), and can be used with the HELIOS radiative transfer code (ascl:1807.009).
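The per-line computation at the heart of such opacity calculators is typically a Voigt profile (a Gaussian Doppler core convolved with Lorentzian pressure broadening). Below is a standard CPU reference using the Faddeeva function; it sketches the mathematics rather than HELIOS-K's CUDA implementation, and the line parameters are illustrative:

```python
# Voigt line profile via the Faddeeva function w(z):
# V(x; sigma, gamma) = Re[w((x + i*gamma)/(sigma*sqrt(2)))]/(sigma*sqrt(2*pi))
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile centered at 0; Gaussian sigma, Lorentzian gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

nu = np.linspace(-5.0, 5.0, 1001)        # offset from line center
profile = voigt(nu, sigma=0.5, gamma=0.1)
print(profile.sum() * (nu[1] - nu[0]))   # area ~1: profile is normalized
```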
[ascl:1102.016] HERACLES: 3D Hydrodynamical Code to Simulate Astrophysical Fluid Flows
HERACLES is a 3D hydrodynamical code used to simulate astrophysical fluid flows. It uses a finite volume method on fixed grids to solve the equations of hydrodynamics, MHD, radiative transfer and gravity. This software is developed at the Service d'Astrophysique, CEA/Saclay as part of the COAST project and is registered under the CeCILL license. HERACLES simulates astrophysical fluid flows using a grid based Eulerian finite volume Godunov method. It is capable of simulating pure hydrodynamical flows, magneto-hydrodynamic flows, radiation hydrodynamic flows (using either flux limited diffusion or the M1 moment method), self-gravitating flows using a Poisson solver or all of the above. HERACLES uses cartesian, spherical and cylindrical grids.
[ascl:1808.005] hfof: Friends-of-Friends via spatial hashing
hfof is a 3-d friends-of-friends (FoF) cluster finder with Python bindings based on a fast spatial hashing algorithm that identifies connected sets of points where the point-wise connections are determined by a fixed spatial distance. This technique sorts particles into fine cells sufficiently compact to guarantee their cohabitants are linked, and uses locality-sensitive hashing to search for neighboring (blocks of) cells. Tests on N-body simulations of up to a billion particles exhibit speed increases of factors of up to 20x compared with FoF via trees, and the code consistently completes in less than the time of a k-d tree construction, giving it an intrinsic advantage over tree-based methods.
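The underlying idea can be sketched as follows; this is a simplified variant with the cell side set equal to the linking length and a plain dictionary as the spatial hash, whereas hfof itself uses finer cells with guaranteed-linked cohabitants and locality-sensitive hashing:

```python
# Friends-of-friends via spatial hashing: bin points into cells the size
# of the linking length, then only test pairs in the 27-cell neighborhood,
# merging links with union-find.
import numpy as np
from collections import defaultdict

def fof(pos, b):
    parent = np.arange(len(pos))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]      # path halving
            i = parent[i]
        return i
    def union(i, j):
        parent[find(i)] = find(j)

    cells = defaultdict(list)                  # hash: cell index -> point ids
    for i, key in enumerate(map(tuple, np.floor(pos / b).astype(int))):
        cells[key].append(i)

    for (cx, cy, cz), ids in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for dz in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy, cz + dz), ()):
                        for i in ids:
                            if i < j and ((pos[i] - pos[j]) ** 2).sum() < b * b:
                                union(i, j)
    return np.array([find(i) for i in range(len(pos))])   # group labels

pts = np.random.default_rng(2).random((500, 3))
labels = fof(pts, b=0.05)
```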
[ascl:1607.011] HfS: Hyperfine Structure fitting tool
[ascl:1801.004] hh0: Hierarchical Hubble Constant Inference
hh0 is a Bayesian hierarchical model (BHM) that describes the full distance ladder, from nearby geometric-distance anchors through Cepheids to SNe in the Hubble flow. It does not rely on any of the underlying distributions being Gaussian, allowing outliers to be modeled and obviating the need for any arbitrary data cuts.
[ascl:1606.004] HIBAYES: Global 21-cm Bayesian Monte-Carlo Model Fitting
HIBAYES implements fully-Bayesian extraction of the sky-averaged (global) 21-cm signal from the Cosmic Dawn and Epoch of Reionization in the presence of foreground emission. User-defined likelihood and prior functions are called by the sampler PyMultiNest (ascl:1606.005) in order to jointly explore the full (signal plus foreground) posterior probability distribution and evaluate the Bayesian evidence for a given model. Implemented models, for simulation and fitting, include gaussians (HI signal) and polynomials (foregrounds). Some simple plotting and analysis tools are supplied. The code can be extended to other models (physical or empirical), to incorporate data from other experiments, or to use alternative Monte-Carlo sampling engines as required.
[ascl:1607.019] HIDE: HI Data Emulator
[ascl:1010.065] Higher Post Newtonian Gravity Calculations
Motivated by experimental probes of general relativity, we adopt methods from perturbative (quantum) field theory to compute, up to certain integrals, the effective lagrangian for its n-body problem. Perturbation theory is performed about a background Minkowski spacetime to O[(v/c)^4] beyond Newtonian gravity, where v is the typical speed of these n particles in their center of energy frame. For the specific case of the 2 body problem, the major efforts underway to measure gravitational waves produced by in-spiraling compact astrophysical binaries require their gravitational interactions to be computed beyond the currently known O[(v/c)^7]. We argue that such higher order post-Newtonian calculations must be automated for these field theoretic methods to be applied successfully to achieve this goal. In view of this, we outline an algorithm that would in principle generate the relevant Feynman diagrams to an arbitrary order in v/c and take steps to develop the necessary software. The Feynman diagrams contributing to the n-body effective action at O[(v/c)^6] beyond Newton are derived.
HiGPUs is also available as part of the AMUSE project.
[ascl:1807.008] HII-CHI-mistry_UV: Oxygen abundance and ionization parameters for ultraviolet emission lines
HII-CHI-mistry_UV derives oxygen and carbon abundances using the ultraviolet (UV) lines emitted by the gas phase ionized by massive stars. The code first fixes C/O using ratios of appropriate emission lines and, in a second step, calculates O/H and the ionization parameter from carbon lines in the UV. An optical version of this Python code, HII-CHI-mistry (ascl:1807.007), is also available.
[ascl:1807.007] HII-CHI-mistry: Oxygen abundance and ionization parameters for optical emission lines
HII-CHI-mistry calculates the oxygen abundance for gaseous nebulae ionized by massive stars using optical collisionally excited emission lines. This code takes the extinction-corrected emission line fluxes and, based on a Χ2 minimization on a photoionization models grid, determines chemical-abundances (O/H, N/O) and ionization parameters. An ultraviolet version of this Python code, HII-CHI-mistry-UV (ascl:1807.008), is also available.
[ascl:1603.017] HIIexplorer: Detect and extract integrated spectra of HII regions
HIIexplorer detects and extracts the integrated spectra of HII regions from IFS datacubes. The procedure assumes that HII regions are peaky, isolated structures with strong ionized gas emission clearly above both the continuum and the average ionized gas emission across the galaxy, and that they have a typical physical size of a hundred or a few hundred parsecs, corresponding to a projected size of a few arcsec for galaxies at z~0.016. All input parameters can be derived from a visual inspection and/or a statistical analysis of the Hα emission line map. The algorithm produces a segmentation FITS file describing the pixels associated with each HII region.
[ascl:1405.005] HIIPHOT: Automated Photometry of H II Regions
HIIPHOT enables accurate photometric characterization of H II regions while permitting genuine adaptivity to irregular source morphology. It makes a first guess at the shapes of all sources through object recognition techniques; it then allows for departure from such idealized "seeds" through an iterative growing procedure and derives photometric corrections for spatially coincident diffuse emission from a low-order surface fit to the background after exclusion of all detected sources.
[ascl:1111.001] HIPE: Herschel Interactive Processing Environment
The Herschel Space Observatory is the fourth cornerstone mission in the ESA science programme and performs photometry and spectroscopy in the 55 - 672 micron range. The development of the Herschel Data Processing System started in 2002 to support the data analysis for Instrument Level Tests. The Herschel Data Processing System was used for the pre-flight characterisation of the instruments and during various ground segment test campaigns. Following the successful launch of Herschel on 14 May 2009, the Herschel Data Processing System demonstrated its maturity when the first PACS preview observation of M51 was processed within 30 minutes of reception of the first science data after launch. The first HIFI observations of DR21 were also successfully reduced to high-quality spectra, followed by SPIRE observations of M66 and M74. A fast turn-around cycle between data retrieval and the production of science-ready products was demonstrated during the Herschel Science Demonstration Phase Initial Results Workshop held 7 months after launch, clear proof that the system had reached a good level of maturity.
[ascl:1507.008] HLINOP: Hydrogen LINe OPacity in stellar atmospheres
HLINOP is a collection of codes for computing hydrogen line profiles and opacities in the conditions typical of stellar atmospheres. It includes HLINOP for approximate quick calculation of any line of neutral hydrogen (suitable for model atmosphere calculations), based on the Fortran code of Kurucz and Peterson found in ATLAS9. It also includes HLINPROF, for detailed, accurate calculation of lower Balmer line profiles (suitable for detailed analysis of Balmer lines) and HBOP, to implement the occupation probability formalism of Daeppen, Anderson and Mihalas (1987) and thus account for the merging of bound-bound and bound-free opacity (often used as a wrapper to HLINOP for model atmosphere calculations).
[ascl:1412.006] HMF: Halo Mass Function calculator
HMF calculates the Halo Mass Function (HMF) given any set of cosmological parameters and fitting function and serves as the backend for the web application HMFcalc. Written in Python, it allows for dynamic, accurate calculation of the transfer function with CAMB (ascl:1102.026) and efficient and self-consistent parameter updates. HMF offers exploration of the effects of cosmological parameters, redshift and fitting function on the predicted HMF.
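A hedged usage sketch of the hmf package follows; the argument and attribute names are as I recall them from the package documentation and should be verified against the installed version:

```python
# Hypothetical-but-typical hmf usage: build a mass function, read off
# dn/dm, then update parameters self-consistently.
from hmf import MassFunction

mf = MassFunction(z=0.0, Mmin=10.0, Mmax=15.0)  # mass limits in log10(Msun/h)
m, dndm = mf.m, mf.dndm                         # tabulated masses and dn/dm

mf.update(z=1.0)                                # recompute at a new redshift
dndm_z1 = mf.dndm
```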
[ascl:1201.010] HNBody: Hierarchical N-Body Symplectic Integration Package
HNBody is a new set of software utilities geared to the integration of hierarchical (nearly-Keplerian) N-body systems. Our focus is on symplectic methods, and we have included explicit support for three classes of particles (heavy, light, and massless), second and fourth order methods, post-Newtonian corrections, and the use of a symplectic corrector (among other things). For testing purposes, we also provide support for more general integration schemes (Bulirsch-Stoer & Runge-Kutta). Configuration files employing an intuitive syntax allow for easy problem setup, and many simple simulations can be done without the user compiling any code. Low-level interfaces are also available, enabling extensive customization.
[ascl:1711.013] HO-CHUNK: Radiation Transfer code
HO-CHUNK calculates radiative equilibrium temperature solution, thermal and PAH/vsg emission, scattering and polarization in protostellar geometries. It is useful for computing spectral energy distributions (SEDs), polarization spectra, and images.
[ascl:1102.019] HOP: A Group-finding Algorithm for N-body Simulations
We describe a new method (HOP) for identifying groups of particles in N-body simulations. Having assigned to every particle an estimate of its local density, we associate each particle with the densest of the Nh particles nearest to it. Repeating this process allows us to trace a path, within the particle set itself, from each particle in the direction of increasing density. The path ends when it reaches a particle that is its own densest neighbor; all particles reaching the same such particle are identified as a group. Combined with an adaptive smoothing kernel for finding the densities, this method is spatially adaptive, coordinate-free, and numerically straight-forward. One can proceed to process the output by truncating groups at a particular density contour and combining groups that share a (possibly different) density contour. While the resulting algorithm has several user-chosen parameters, we show that the results are insensitive to most of these, the exception being the outer density cutoff of the groups.
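The linking step described above reduces to a short pointer-chasing loop. Here is a toy Python version; it takes precomputed densities as input and omits HOP's adaptive kernel density estimation and its group-merging stage (ties in density, which could in principle cycle, are measure-zero for continuous density values):

```python
# Toy HOP linking: each particle hops to the densest of its Nh nearest
# neighbors; chains end at particles that are their own densest neighbor,
# and each such terminus labels one group.
import numpy as np
from scipy.spatial import cKDTree

def hop_groups(pos, density, nh=16):
    _, nbr = cKDTree(pos).query(pos, k=nh)     # neighbor lists (self included)
    hop = nbr[np.arange(len(pos)), np.argmax(density[nbr], axis=1)]
    roots = np.empty(len(pos), dtype=int)
    for i in range(len(pos)):                  # follow hops to a fixed point
        j = i
        while hop[j] != j:
            j = hop[j]
        roots[i] = j
    return roots                               # group label = density peak id

rng = np.random.default_rng(3)
pts = rng.random((1000, 3))
dens = np.exp(-((pts - 0.5) ** 2).sum(axis=1) * 20)   # stand-in density field
labels = hop_groups(pts, dens)
```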
[ascl:1411.005] HOPE: Just-in-time Python compiler for astrophysical computations
HOPE is a specialized Python just-in-time (JIT) compiler designed for numerical astrophysical applications. HOPE focuses on a subset of the language and is able to translate Python code into C++ while performing numerical optimization on mathematical expressions at runtime. To enable the JIT compilation, the user only needs to add a decorator to the function definition. By using HOPE, the user benefits from being able to write common numerical code in Python while getting the performance of compiled implementation.
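Per that description, usage amounts to decorating a numerical function; the sketch below is hedged in that the `hope.jit` decorator name follows the package's published description and should be checked against the docs:

```python
# Hedged HOPE usage sketch: the decorated function is translated to C++
# and compiled on its first call; later calls run the compiled version.
import numpy as np
import hope

@hope.jit
def poly(x):
    # simple elementwise arithmetic: the kind of code HOPE targets
    return 2.0 * x * x + 3.0 * x + 1.0

x = np.arange(100, dtype=np.float64)
y = poly(x)   # first call triggers compilation
```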
[ascl:1504.004] HOTPANTS: High Order Transform of PSF ANd Template Subtraction
HOTPANTS (High Order Transform of PSF ANd Template Subtraction) implements the Alard 1999 algorithm for image subtraction. It photometrically aligns one input image with another after they have been astrometrically aligned.
[ascl:1707.001] HRM: HII Region Models
HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.
[ascl:1412.008] Hrothgar: MCMC model fitting toolkit
Hrothgar is a parallel minimizer and Markov Chain Monte Carlo generator. It has been used to solve optimization problems in astrophysics (galaxy cluster mass profiles) as well as in experimental particle physics (hadronic tau decays).
[ascl:1511.014] HumVI: Human Viewable Image creation
HumVI creates a composite color image from sets of input FITS files, following the Lupton et al. (2004, ascl:1511.013) composition algorithm. Written in Python, it takes three FITS files as input and returns a color composite, color-saturated png image with an arcsinh stretch. HumVI reads the zero points out of the FITS headers and uses them to put all the images on the same flux scale; photometrically calibrated images produce the best results.
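The Lupton et al. (2004) composition can be sketched in a few lines of numpy. This is my paraphrase of the algorithm rather than HumVI's code (HumVI additionally handles FITS input, per-channel scalings, and header-based flux calibration), and the Q and alpha parameter names are illustrative:

```python
# Arcsinh color composition: stretch the mean intensity with arcsinh and
# apply the same per-pixel scale to all three channels, preserving color.
import numpy as np

def lupton_rgb(r, g, b, Q=1.0, alpha=0.1):
    i = (r + g + b) / 3.0 + 1e-12             # mean intensity per pixel
    stretch = np.arcsinh(alpha * Q * i) / (Q * i)
    rgb = np.stack([r * stretch, g * stretch, b * stretch], axis=-1)
    return np.clip(rgb, 0.0, 1.0)             # ready to write out as png
```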
[ascl:1103.010] Hydra: A Parallel Adaptive Grid Code
We describe the first parallel implementation of an adaptive particle-particle, particle-mesh code with smoothed particle hydrodynamics. Parallelisation of the serial code, "Hydra," is achieved by using CRAFT, a Cray proprietary language which allows rapid implementation of a serial code on a parallel machine by allowing global addressing of distributed memory.
The collisionless variant of the code has already completed several 16.8 million particle cosmological simulations on a 128 processor Cray T3D whilst the full hydrodynamic code has completed several 4.2 million particle combined gas and dark matter runs. The efficiency of the code now allows parameter-space explorations to be performed routinely using $64^3$ particles of each species. A complete run including gas cooling, from high redshift to the present epoch requires approximately 10 hours on 64 processors.
[ascl:1402.023] HydraLens: Gravitational lens model generator
HydraLens generates gravitational lens model files for Lenstool, PixeLens, glafic and Lensmodel and can also translate lens model files among these four lens model codes. Through a GUI, the user enters a new model by specifying the type of model and is then led through screens to collect the data. Written in MS Visual Basic, the code can also translate an existing model from any of the four supported codes to any of the other three.
[ascl:1108.010] Hyperz: Photometric Redshift Code
[ascl:1302.009] IAS Stacking Library in IDL
[ascl:1010.034] iCosmo: An Interactive Cosmology Package
[ascl:1903.007] ICSF: Intensity Conserving Spectral Fitting
[ascl:1507.020] IEHI: Ionization Equilibrium for Heavy Ions
[ascl:1304.019] IFrIT: Ionization FRont Interactive Tool
[ascl:1110.003] iGalFit: An Interactive Tool for GalFit
[ascl:1101.003] IGMtransfer: Intergalactic Radiative Transfer Code
[ascl:1504.015] IGMtransmission: Transmission curve computation
[ascl:1408.009] IIPImage: Large-image visualization
[ascl:1307.006] im2shape: Bayesian Galaxy Shape Estimation
[ascl:1803.007] IMAGINE: Interstellar MAGnetic field INference Engine
[ascl:1108.001] IMCAT: Image and Catalogue Manipulation Software
[ascl:1312.003] IMCOM: IMage COMbination
[ascl:1804.014] IMNN: Information Maximizing Neural Networks
[ascl:1806.005] Indri: Pulsar population synthesis toolset
[ascl:1210.023] inf_solv: Kerr inflow solver
[ascl:1711.002] inhomog: Biscale kinematical backreaction analytical evolution
[ascl:1907.027] intensitypower: Spectrum multipoles modeler
[ascl:1612.013] InversionKit: Linear inversions from frequency data
Posts Tagged ‘ontology’
The Elusive Object
29 March 2011
Behind the curtain
The Reformed Realist
Some of Bernard d’Espagnat’s best and dearest friends might be realists.
Chapter nine of his On Physics and Philosophy, entitled “Various Realist Attempts,” describes with a perceptible tinge of sorrow how the conventional realist’s goal seems doomed to failure.
If not certainly doomed, they are at least misguided, he feels, no matter how much he sympathizes with the impulse to believe in a knowable physical reality beyond the appearances.
These attempts have some difficult hurdles to jump. A successful theory should—
1. Make the same (or almost the same) predictions as conventional quantum mechanics
2. Respect the results of Aspect-type experiments and the Bell Theorem
3. Show that the interpretation is more than just a calculating convenience
4. Be more than just a reassuring linguistic reconfiguration, and
5. Keep its conceptual building blocks pretty faithful to its roots in realism.
The last criterion isn’t absolutely necessary, but if the only way a realist theory can work is by defining common terms (such as particles) in curiously non-realist ways then the project seems a bit dubious.
Add to that the requirement to respect the Bell Theorem and (more or less) match conventional quantum theory’s predictions, which mandate nonlocality if you want physical realism, and these efforts look increasingly futile.
In greater detail…
D’Espagnat’s Realism vs Near Realism
D’Espagnat says he very much sympathizes with realists, and says his own views don’t depart too radically from theirs. His disagreement, he says, developed not on a priori grounds but after he pondered the evidence of physics.
Proof vs Sentiment
Physical realism is an unprovable metaphysical stance, one among many. But “nobody” believes the moon disappears when we don’t look at it, says d’Espagnat. Commonsense arguments even convinced Einstein.
Giving Up Physical Realism vs Locality
John Bell (of Bell’s Theorem fame) continued to believe in a physical reality even after his theorem and experimental data shook the foundations of physical realism.
He could have given up the idea of a physical reality knowable in principle, but instead he chose to believe this reality is nonlocal.
Description vs Synthesis
D’Espagnat makes up “Jack,” a physicist who’s a hardline physical realist. Jack believes science has succeeded magnificently on so many levels. Theories aren’t just some synthesis of observations. They are more-or-less accurate descriptions of reality (as d’Espagnat calls it, “reality-per-se”).
Senses vs Reality
Philosophers like Hume would counter that our knowledge of reality depends on our senses, yet we have no guarantee our sensations correspond with reality. Jack might call this argument overly broad as it applies to any piece of knowledge, including our ordinary experiences that we could hardly doubt.
Words vs Reality
The sceptic might then say that the results of experiments are communicated by words, but how do we know these words correspond to the building blocks of reality? Again Jack points to everyday experience and the concepts we seem to know instinctively: objects, their positions, their motions, and so on.
The hardline realist says an experiment described using these simple concepts surely must say something true about physical reality.
Strong vs Weak Objectivity
Jack the hardline realist might then lament all those physicists who claim to be realists but use standard quantum mechanics. Don’t they realize this theory is only “weakly objective”? In other words, it describes observations but doesn’t claim to describe reality itself.
Standard vs Broglie-Bohm Interpretations
D’Espagnat says Jack would be further perplexed because the Broglie-Bohm interpretation offers predictions identical to the standard interpretation (in the non-relativistic domain) and claims to be an explanation. It doesn’t just predict observations.
It also may offer a (partial) way out of the “and-or” problem with mixed quantum states. We’d like to show why the pointer dial doesn’t indicate multiple values at the same time.
Standard vs Broglie-Bohm Predictions
D’Espagnat notes that Broglie-Bohm’s predictions match the standard interpretation’s. The good news is that Broglie-Bohm’s predictions aren’t wrong. The bad news is that the standard interpretation uses simpler mathematics and predicts so much more.
Superficial Realism vs Nonlocal Results
Though not a critical deficiency, it’s definitely odd that Broglie-Bohm starts off with concepts intuitively familiar to us such as corpuscles and trajectories but ends up predicting a nonlocal reality.
This doesn’t mean the theory is wrong, but it does mean the realist’s agenda is somewhat frustrated.
Real vs Abstract Particles
Broglie-Bohm replaces boson particles with abstract quantities (fields or their Fourier components). Photons are only “appearances,” somewhat undermining the realist model. The jury’s still out on how to deal with fermions.
Measured vs Secret Properties
Broglie-Bohm says momentum is really the product of mass and velocity even if quantum measurements show something else (see chapter seven). Also in this model detectors are sometimes “fooled,” acting as if a particle hit them even when it didn’t.
Finally, a “quantum potential,” which doesn’t vary by distance, means “free” particles don’t really travel in straight lines.
So some aspects of reality remain experimentally out of reach, yielding only illusions, an odd position for a realist model to take.
Realism vs Observer Choices
Consider two entangled particles, one going left and one going right. The Broglie-Bohm model says in some set-ups you’ll consistently get one result if you measure the left-moving particle first, and a different result if you measure the right-moving particle first. Since the particles are entangled, whichever particle you measure first fixes the result of the other.
The problem is that this doesn’t sound like it describes the world “as it really is” but rather just our observations. Our choices as observers seem to affect what’s “really” going on. This does not fit in very well with the realist agenda.
Relativity vs Observer Choices
It gets worse. Depending on who’s checking, the “time order” of these measurements may differ if they’re “spatially separated” (that’s when you’d have to travel faster than the speed of light to get from one measurement to the other). Since the instruments are showing the same result to any observer, are they simultaneously telling the truth and lying?
It appears you can choose a privileged space-time frame that somehow still matches the predictions of special relativity but is consistent with Broglie-Bohm too, but again we end up with all these illusory appearances and an explanation that can’t be verified (or at least distinguished from competing theories).
Bohm #1 vs Bohm #2
D’Espagnat (in a footnote) says difficulties with the Broglie-Bohm model led David Bohm to devise his “implicate order” theory, which does not rely on corpuscles. The problem is that the “implicate” order of what’s really happening is separated from the “explicate” order of appearances, and it’s hard to turn that distinction into an “ontologically interpretable” theory.
Standard vs Modal Interpretations
Borrowing modal logic’s use of intrinsic probabilities, Bas van Fraassen initiated a different approach to realist quantum mechanics that led to various related interpretations.
Wave Function vs Finer States
Standard quantum mechanics says the wave function is the best description of a quantum system. “Modal” interpretations say sometimes there are “finer” states governed by hidden variables (d’Espagnat prefers to call them “supplementary”).
Standard vs Intrinsic Probabilities
In “modal” interpretations the wave function describes the probability of various measurements but not necessarily what is “really” happening. The use of supplementary variables rescues these interpretations from the problem of proper mixtures and ensembles (see chapter eight). A system is in state A or state B even before a measurement, even if the quantum state is A + B.
Wave Function vs Value State
A system’s wave function describes observational probabilities. In a “modal” interpretation the system’s “value state” uses supplementary variables to describe what’s “really” happening.
Broglie-Bohm vs “Modal” Interpretations
“Modal” interpretations are indeterministic and Broglie-Bohm is deterministic, but they share the need for supplementary variables that are experimentally undetectable–and they produce predictions identical to the standard interpretation’s.
These realist approaches also seem to violate special relativity. Since their predictions are consistent with the standard interpretation’s they end up being nonlocal, which special relativity isn’t really equipped to handle.
Also, in some cases (say some authors) the “modal” interpretation implies the measurement dial will somehow show a value different from the predicted “observed” value. It’s as convoluted as the measurement issues in Broglie-Bohm (such as detectors’ getting false hits).
Unlike Broglie-Bohm the “modal” interpretations also get into difficulties about properties of a system and its subsystems. A subsystem can have a property even if the system itself doesn’t.
Language vs Ontology
D’Espagnat wonders if the “modal” interpretations are basically just offering a different language convention. The terms make it sound like something is “really” going on, but this alleged reality is inaccessible to observers, and “modal” interpretations make the same predictions as the standard interpretation of quantum mechanics.
Schrödinger vs Heisenberg Representations
Yet another approach makes use of the Heisenberg representation. Its equations are supposedly more realism-friendly than Schrödinger’s wave function.
Time-dependent vs Time-independent Equations
In both representations dynamical quantities (position and velocity, for instance) are represented by “self-adjoint operators.”
In the Schrödinger representation the operators are time independent; it is the wave function that evolves in time, until a measurement is made. The wave function thus does double duty, describing states and then knowledge.
The Heisenberg representation does things differently. Its self-adjoint operators are time dependent–so maybe they describe “real” states that are evolving through time.
Heisenberg Representation vs Contingent States
The problem is that the self-adjoint operators in the Heisenberg representation, though designating dynamical quantities, refer to all possible values of those quantities. You have to specify initial values if you want the measurement to be a “mental registration” rather than a “creation” of those values.
Just as bad, the best way to specify those initial conditions is by using the wave function.
Heisenberg vs Schrödinger Operators
D’Espagnat says that in the end the self-adjoint operator has too modest a scope in the Heisenberg representation. It does not label contingent states.
In the Schrödinger representation there’s the opposite problem. The self-adjoint operator’s role there is too ambitious. It labels the initial state as it “really” is, which leads to the problems of the measurement collapse.
Feynman’s Reformulation vs Physical Realism
D’Espagnat says high-energy physicists mostly see physical realism as self-evident. Richard Feynman’s “fabricated ontology” greatly eases their calculations, and apparently eases many philosophical doubts too.
Probabilities with Detectors vs without Detectors
In standard quantum mechanics the probability amplitude indicates how likely one would find a particle (for instance) at a particular spot if there were a detector there.
Feynman’s leap was to interpret it as how likely a particle would “arrive” at a certain point–whether or not there was a detector there.
Being vs Calculating
So is this “arrival” (which means that it “is,” however briefly, at that point) an ontological claim or is it just a calculating convenience? D’Espagnat says Feynman knew quite well the problems of interpreting quantum mechanics but was “absolutely reluctant” to talk about them.
Since fringes in a double-slit experiment show up, clearly this way of speaking is just for predictive purposes. If a particle “really arrived” at one slit or the other there’d be no fringes on the detector screen. In fact, the older quantum field theory and the Feynman diagram approaches “are quite strictly equivalent.”
This means they both support the nonlocality hypothesis.
Standard vs Non-Boolean Logic
Quantum mechanics’ formalism uses Hilbert space. This infinite-dimensional abstract space leads some to suggest a non-Boolean logic would rescue objectivist realism.
Formalism vs Experimental Facts
However, d’Espagnat says that this reformulation has no more ontological significance than Feynman’s approach. Nonseparability and nonlocality remain as issues since these are experimental facts not dependent on the formalism. Using a kind of quantum logic can’t on its own describe microsystems in realist terms.
Standard vs Partial Logics
Griffiths, Gell-Mann and Hartle, and Omnès have tried using “partial logics” and “decohering histories.” D’Espagnat says that this approach (like the non-Boolean approach) reformulates quantum mechanics but doesn’t change its predictions. The experimental facts remain a barrier to objectivist realism.
Macroscopic Reality vs Microscopic Unreality
Because of experimental results (such as Aspect’s combined with the Bell inequalities) it’s clear that the microscopic arena is not going to yield to some “strongly objective” form of realism. The challenge then becomes figuring out how “real” macroscopic entities could possibly be made up of “unreal” microscopic constituents.
Existence vs Meaning
One approach is to deflect the question. Decoherence describes a mechanism by which macroscopic objects have a certain (physical-looking) appearance—but not existence as such. Maybe we can create Dummett-like criteria (see chapter seven) for determining just the meaning (“signification”) of statements about macroreality (but not microreality).
Entities vs Observability
If you’re going to make meaningful statements about macroscopic reality then it would help if you could define macroscopic entities. This is surprisingly difficult. One attempt uses statistical mechanics’ concept of “irreversibility” because human observational skills are limited.
D’Espagnat says this approach doesn’t necessarily sit well with a realist. After all, the general goal of realist approaches is to describe reality (to some degree of accuracy) through our own observations.
Schrödinger’s Cat vs Laplace’s Demon
Decoherence theory says that our inability to make precise measurements of complex systems creates the illusion of macroscopic reality. So what do we do about this limitation? We could imagine some version of Laplace’s demon who’s able to make precise measurements of all physical quantities in the universe.
We could then try to determine whether he sees Schrödinger’s cat as simultaneously dead and alive—or just one or the other, as humans do because of their limited observational acuity. This would tell us what’s “really” going on.
But how powerful should this demon be? Let’s assume he can’t use an instrument made up of more atoms than the universe possesses. Some physicists then calculate that even Laplace’s demon couldn’t observe the complex quantum superpositions theoretically observable in macroscopic objects.
The “meaningful” conclusion is that these complex quantities are “nonexistent” and therefore the Schrödinger cat problem disappears.
Realism vs Human Decisions
But can a supposed reality depend on the capabilities of an observer (human or otherwise)? Even more fundamentally, mathematical representations of quantum ensembles (see chapter eight) are compatible with an infinite number of physical representations. Why is just one representation chosen?
In the end it seems this kind of realist argument ends up describing an empirical reality, not a meaningful approximation of an observer-independent reality.
Linear vs Nonlinear Terms
You can trace the “conceptual difficulties” of quantum mechanics back to the mathematical linearity of the formalism. Unsurprisingly, some realists might consider adding terms to make the mathematics nonlinear.
These new terms have almost no effect on observational predictions but allow a profound conceptual leap when it comes to macroscopic objects. Their centre-of-mass wave function will now collapse frequently and spontaneously, so there’s no more “measurement collapse.”
Relativity vs Nonlinear Realism
Nonlocality is still an issue, even though we’re talking about faster-than-light “influences” instead of signalling. The realist might retort that standard quantum mechanics runs into the same problem, but d’Espagnat says it’s the demand for realism that prevents relativity and quantum mechanics from being compatible.
Decoherence vs Nonlinear Realism
Decoherence theory and approaches based on nonlinear terms make essentially identical predictions. However, decoherence theory says macroscopic objects are just phenomena; we share this knowledge and call it “empirical reality.” Nonlinear realism holds that these objects are “real.”
D’Espagnat wonders why we even need nonlinear terms considering that according to conventional (that is, linear) quantum mechanics any macroscopic object with quantum features quickly goes through decoherence and ends up showing classical features.
Appearance vs Reality
So you don’t need nonlinear terms unless you want macroscopic objects not just to “appear” the way they do but also “really” to be like that.
Verbalism vs Reality
D’Espagnat is unimpressed by these ontological manoeuvres. He rhetorically asks if this is “some kind of a poor man’s metaphysics” amounting to little more than “pure verbalism.”
Open Realism vs Commonsense Realism
Yet D’Espagnat is not prepared to abandon realism altogether. He believes in a “veiled reality” that can be gently prodded through an approach he calls “open realism.”
But for realism to be consistent with the results of quantum experiments the reality that’s allowed is far different from the “commonsense” reality of the man in the street, or even that of many hard-nosed physicists.
Measuring the Decoherence
4 March 2011
Realistically Speaking
Chapter eight of Bernard d’Espagnat’s On Physics and Philosophy is entitled, “Measurement and Decoherence, Universality Revisited.”
In some ways it was a very dense and difficult chapter to read (and summarize). However, in the end the main points seemed reasonably clear:
1. Quantum universalism and our perceptions of macroscopic reality at first appear to clash
2. A macroscopic object easily shifts between numerous narrow energy bands under the slightest influence from its environment
3. Therefore it’s almost impossible to measure the exact quantum states of macroscopic objects
4. Our lack of knowledge about large-scale systems in “decoherent” states leads to the apparent stability of the macroscopic world
5. However, on the microscopic level a “realistic” interpretation of superpositions only works if a system includes unmeasurable components or we restrict what measurements we’ll make.
There’s a lot of material in this chapter so one could easily come up with some other highlights. In any event, here are my impressions of the chapter in greater detail…
Realist Statements vs Realist Philosophy
Instead of saying “I see a rock on the path” one could say “I know if I looked on the path to see if I would get the impression of seeing a rock there, I would actually get that impression.”
That would be cumbersome, so we use “realistic” statements even if we don’t believe in hard-line realism. If we switch back to the microscopic realm, realist-like statements might mislead.
Macroscopic Realism vs Quantum Universalism
If we assume quantum formalism is universal, then why don’t we see a rock in two places at the same time?
Macroscopic realism says macroscopic objects have mind-independent forms located in mind-independent places. So even before we look at it, a measuring device’s pointer will point to one and only one part of the dial.
A macroscopic state-vector therefore can’t be a quantum superposition A + B, and hence we can’t see a rock in two places at the same time.
Schrödinger Equation vs Macroscopic Realism
The problem is that the Schrödinger equation will often demand such a superposition. Realists respond by using something other than state-vectors to describe macroscopic objects.
D’Espagnat says that he showed (in 1976) that such attempts will fail, and a somewhat more general proof was found by Bassi and Ghirardi (in 2000).
Antirealism vs Macroscopic Realism
A different approach is to follow Plato and Kant. The senses are unreliable and deceive us. There’s no distinction between Locke’s reliable “primary” qualities and the less reliable “secondary” qualities.
The only things certain are the quantum rules that predict our observations. All else is uncertain.
Probability vs Determinism
However, we don’t experience the world as a sequence of probabilistic predictions. We picture objects with definite forms, and we can predict the behaviour of these objects using classical laws that are deterministic.
Textbook Realism vs Quantum Predictive Rules
Part of the problem is that textbooks talk about the mathematics (including symbols for wave forms) as if they represent physical states that “exist” whether or not we’re taking a measurement.
D’Espagnat notes the same old difficulties of realist interpretations will then reappear. He says symbols for the wave forms and other values should instead represent “epistemological realities.” They signify possible knowledge once the observer makes an observation.
In other words, the quantum rules predict observations, they don’t describe unobserved realities.
Absorbed vs Released Particles
In chapter four d’Espagnat assumed that a measured electron gets absorbed by the measuring instrument. In practice this rarely happens.
If the electron gets released, then the instrument and the electron form a “composite system.” Instrument and electron are “entangled” (in the quantum sense).
Composite States vs Measurements
If an electron is in a quantum superposition of two states, the instrument dial shows just one of those states (which you can confirm by using a second instrument to measure the first instrument).
If you test an “ensemble” of identical states all at once then some of your instruments will show one state while others will show the other state.
Note that the measurement points to the state of the electron after it’s measured, not before.
Measurements vs Quantum Collapse
Some physicists who won’t accept “weak objectivity” or mere “empirical reality” see the measurement process as “collapsing” a “real” wave function.
Quantum Collapse vs Quantum Universality
A quantum collapse is a “discontinuous” departure from the evolution prescribed by the (differential, hence continuous) Schrödinger equation.
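For reference, the continuous evolution in question is the standard Schrödinger equation (my gloss; d’Espagnat doesn’t display it at this point):

i\hbar \frac{\partial}{\partial t} \psi = \hat{H} \psi

A collapse replaces \psi by a single outcome state in one jump, something no solution of this continuous equation can do.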
If the quantum laws are universal, then what’s so special about a measuring instrument to produce this collapse?
Moveable Cuts vs Realism
Using the “von Neumann chain” idea, one can predict observations by placing a “cut” between observer and observed at various points. There’s nothing special about one particular instrument.
The cut may be placed between a measuring instrument and the particle, or between a second instrument (measuring the first instrument) and the first, or between a third instrument and the second, and so on.
Von Neumann showed that the results will be the same no matter where this cut is placed.
The problem is that the realist believes in a mind-independent reality, so presumably this cut should be in one and only one place. The collapse of a quantum system shouldn’t be at the whim of the observer (and his mind!).
Longing for Realism vs the Practice of Operationalism
D’Espagnat says a lot of physicists suffer from a kind of logical “shaky balance.” They want to believe in realism but in their working methods they use “operational” methods (which therefore don’t require a belief in realism).
Schrödinger’s Cat vs Quantum Superposition
Getting back to the composite system of instrument and electron, if the electron was prepared in a superposition of two states, then the composite system is represented by aA + bB. The small letters represent the “states” of the electron, and the big letters represent the states of the instrument.
But the measuring instruments will point to A or B on the dial, not both at the same time. Schrödinger imagined a cat that’s dead or alive depending on the results of the experiment.
We don’t see an instrument pointing to two parts of the dial simultaneously, nor can we imagine the cat is both dead and alive simultaneously.
Quantum Superposition vs Probabilities
The measuring instruments will show one result each time. Quantum rules predict the probability that a particular result will be seen, not that several results will be seen at the same time.
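In symbols (a standard textbook gloss, not the book’s own notation): for the composite state aA + bB the Born rule assigns the probabilities

P(A) = |a|^2, \quad P(B) = |b|^2, \quad |a|^2 + |b|^2 = 1,

so each run yields A or B, and only the long-run frequencies are predicted.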
Probabilities vs Ensembles
To test probabilities we can create a really large ensemble of identical conditions and see what results we get. Imagine we create a whole lot of composite systems with an entangled electron and measuring instrument.
On each of those instrument dials we’ll measure one result or another, not both, and not something in between.
Identical States vs a “Proper” Mixture
Staying with the electron that was prepared as a superposition of states, we calculate a percentage probability that we’ll measure that electron as “being” in one specific “state” and another probability it’ll “be” in another “state.”
What if instead of a large number of identical states and identical measuring instruments we prepare some electrons in one state and others in the other state? We’ll determine how many of each from the predictions for the superposed state.
If we then just measure, say, position, we’ll get (approximately) the same results as predicted for the superposition of states. But if we try measuring something other than position our results may violate these predictions.
So unless we ignore everything but position, measurements on our ensemble of electrons in superposed states will differ from our proper mixture of electrons in pure quantum states.
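To make that comparison concrete, here is a minimal numpy sketch under toy assumptions (a two-level stand-in for the electron; the amplitudes and observables are my own illustration, not d’Espagnat’s): a diagonal, position-like observable cannot distinguish the superposed ensemble from the proper mixture, but an observable sensitive to the cross terms can.

```python
import numpy as np

# Toy two-level electron: |0> and |1> stand in for the two prepared "states".
a, b = 1 / np.sqrt(2), 1 / np.sqrt(2)          # superposition amplitudes
psi = np.array([a, b])
rho_super = np.outer(psi, psi.conj())          # ensemble of identical superpositions
rho_mix = np.diag([abs(a)**2, abs(b)**2])      # "proper mixture" of pure states

Z = np.diag([1.0, -1.0])                       # diagonal ("position-like") observable
X = np.array([[0.0, 1.0], [1.0, 0.0]])         # observable sensing the cross terms

for name, obs in [("diagonal", Z), ("cross-term", X)]:
    print(name,
          np.trace(rho_super @ obs).real,      # superposed ensemble
          np.trace(rho_mix @ obs).real)        # proper mixture
# diagonal: 0.0 vs 0.0 (indistinguishable); cross-term: 1.0 vs 0.0 (they differ)
```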
Coherent vs Decoherent Measurements
Imagine we measure an entangled system of an electron (with states in superposition) and an atom. Then an ensemble of identical superposed states cannot be approximated by a “proper mixture” of separate pure states.
But if the atom and electron interact with a molecule that is too complex to measure, our measurements of the electron–atom system will be the same whether we measure an ensemble of identical states or a proper mixture.
The system has become “decoherent.”
Electron–Instrument vs Electron–Instrument–Environment Systems
It’s already hard enough to measure the “state” of an electron using an instrument. If we try to measure the “state” of the electron and the instrument in relation to the environment then we have a big problem.
Macroscopic vs Microscopic Energy Levels
A macroscopic object’s energy levels are very close to each other, so a very small disturbance from its environment (or its internal constituents) will shift its energy level.
Measurement Imprecision vs Quantum Precision
There is thus so much environmental influence on an instrument that we cannot measure the “state” of the instrument and electron as a system in the same way we were able to measure just the “state” of the electron.
That’s why we can’t perform an experiment similar to our earlier one that found differences between measurements on the ensemble of superposed states and the proper mixture of separate pure states.
Therefore an instrument pointer, which is a macroscopic object, will act like it’s in a single state, not a superposition.
Ensembles vs Double-slit Experiments
In the “Young slit experiment” we imagine a particle source, a barrier with two slits, and a detector screen (see chapter four). Normally the screen would show fringe-like patterns because of the quantum system’s wavelike nature.
However, if you add a dense gas to the area between the barrier and the detector screen then you’ll just see two “blobs,” thus showing no evidence of wave-like interference.
The molecules in front of the screen are analogous to the molecules that are near an electron–atom system. The molecules form part of a system but are not themselves measured. In both cases we lose the effects of superposition.
Independent vs Empirical Reality
Because the insertion of unmeasurable molecules prompts us to infer distinct beams with distinct states (corresponding to the upper or lower slit), this shows how decoherence creates the illusion of a macroscopic reality.
D’Espagnat acknowledges it’s a bit artificial to make this distinction since we know about the particle source. But it reminds us that decoherence is what provides the illusion of an independent reality, although it’s really just an “empirical” reality.
Entanglement vs Reduced States
If one system gets “entangled” with another (such as an electron with an atom) then each system loses its own distinct wave function. There’ll now be a wave function for the combined system.
But the quantum formalism allows some information about the original system to be recovered if we imagine a large ensemble of its replicas. The mathematics that represents this is called a “reduced state.”
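For readers who want the mechanics, here is a hedged numpy sketch of computing a reduced state (the partial trace is the standard operation; the two-level systems and amplitudes are my own toy choices):

```python
import numpy as np

# Entangled electron-atom pair: a|0>|0> + b|1>|1>, with |a|^2 + |b|^2 = 1.
a, b = 0.6, 0.8
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = a * np.kron(e0, e0) + b * np.kron(e1, e1)
rho = np.outer(psi, psi.conj())                # state of the combined system

# Reduced state of the electron: reshape to indices [s, e, s', e'] and
# trace out the atom (the partner we no longer track).
rho_electron = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_electron)   # diag(0.36, 0.64): a statistical summary of the electron
```

The cross terms vanish in the reduced state, which is why an ensemble described only by it behaves like a proper mixture.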
Quantum Prediction vs Decoherence
Imagine an ensemble of grains of sand or dust specks. They’re small but still macroscopic. The quantum formalism predicts these small objects would be enough to produce the macroscopic effects in the Young slit experiment.
And the quantum formalism also predicts that these objects will act macroscopically, supporting the role of decoherence in creating the illusion of a macroscopic reality.
Reduced State vs Localization
The matrix mathematics used to describe the reduced state suggests the reduced state can stand in for an infinite number of proper mixtures of pure quantum states, which threatens the idea of locality. Fortunately at least one of those proper mixtures is composed of quantum states that are localized.
Experimental Superposition vs Decoherence
In experiments by Brune et al. a “mesoscopic” object is put into a superposition of states. In the brief time before environmental interactions introduce decoherence, the object’s quantum properties can be observed.
The experiments therefore provide evidence both for decoherence and for the validity of quantum laws in objects larger than microscopic.
Quantum Universality vs Classical Laws
Brune’s experiments support quantum universality, but it would be good if we could also show how to derive the laws of classical physics from the rules of quantum prediction.
Classical Numbers vs Quantum Operators
In classical physics various properties of an object (such as a table’s length) are represented by numbers governed by classical mechanics. In quantum physics these properties are represented by (Heisenberg) operators and obey quantum equations.
Roland Omnès has proved that the observational predictions of both approaches coincide (in classical physics’ traditional domains).
Quantum Laws vs “Reifying by Thought”
Because classical physics and its predictive formulas are so reliable in the macroscopic realm we naturally infer that past objects and events have “caused” present ones, and present ones will “cause” future ones.
Counterfactuality vs Quantum Mechanics
Counterfactuality depends on locality, but Bell’s Theorem combined with the Aspect-type experiments shows that locality, and hence counterfactuality, is violated (relevant if we’re realists).
If we want to show classical and quantum predictions are the same in the macroscopic realm then we’re going to have to figure out how to “recover” the counterfactuality we imagine macroscopic reality possesses.
Is there action-at-a-distance with macroscopic darts? It turns out their orientation is a macroscopic variable that “washes away” microscopic variations.
In fact orientation is one of the “collective variables” that includes length, mass, and other classically measurable quantities. We’ve already noted that Omnès showed their values are consistent with quantum formalism.
Macroscopic Certainty vs Microscopic Uncertainty
Measuring a “complete set of compatible observables” will give you the state vector that “exists” after all the measurements were made, but that doesn’t help you figure out the state vector that “existed” before you made any measurements.
The idea of a measurement is usually that it measures something previously existing. By that standard you can’t figure out a state vector for sure no matter how many measurements you make.
By contrast, the mathematics behind a macroscopic ensemble’s “reduced state” will tell us which physical quantities may be measured without disturbing the system. We can therefore recover the “state” of a macroscopic member of that ensemble.
D’Espagnat says this ability helps shed light on our intuition that the properties of something must have been the same before we looked at it.
Realism vs Semirealism
D’Espagnat will discuss those who still cling to realism in the next chapter. However, he says there are “semirealist” approaches that manage to stay faithful to the quantum formalism.
A and B vs A or B
The “and–or problem” arises because when we measure a system of superposed states aA + bB we see it as either in state A or in state B, not in both states A and B at the same time. This shift from “and” to “or” is nowhere suggested in the equations. D’Espagnat suggests this is a conceptual not a mathematical issue.
One vs Many Realities
The mathematics of quantum formalism does not require there just be one and only one reality. Everett’s “relative state theory” interprets this formalism to suggest that the universe “branches off” when a superposed system is measured.
In a given branch only one of the superposed “states” is measured, but the overall multi-branch system is still represented by the same expression that combines superposition plus entanglement: aA + bB.
Common Sense vs Formalism
Some physicists are attracted to Everett’s branching universes because it agrees with the quantum formalism. They believe that following the formalism first rather than common sense could bring in a revolution similar to relativity’s own repudiation of common sense.
Zurek vs Reality
Zurek showed that the “reduced state” of a macroscopic ensemble is stable under certain measurements. He goes further and defines “reality” as whatever is out there that remains stable under such measurements.
Quantum Universality vs Classical Foundations
Decoherence theory tips the balance away from thinking classical physics is somehow more foundational than quantum physics. Decoherence theory shows how the rules of classical physics may be derived from quantum rules.
Physics vs Chemistry, Biology, and Other Disciplines
Decoherence theory doesn’t let us predict the structure of other disciplines, though. The quantum formalism has to be simplified “by hand.” Quantum theory is still universal, but our human choices, our human ways of conceiving things, will crucially guide our perceptions.
The Antirealist’s Reality
1 March 2011
Ultimate reality
The Invisible Hand
Chapter seven of Bernard d’Espagnat’s On Physics and Philosophy is a kind of grab bag, entitled: “Antirealism and Physics; the Einstein-Podolsky-Rosen Problem; Methodological Operationalism.”
D’Espagnat’s points in this chapter seem to boil down to this:
1. Physics (and science in general) is about predicting observations not describing some kind of reality
2. Operationalism (which concentrates on methodology) increases the reliability of science as it counters critics who complain scientific theories (which they say should describe and explain reality) keep changing, and
3. Although measurements (of “empirical” reality) depend on the observer, physical laws seem to be constrained in various ways (by the structure of an “ultimate” reality that’s scientifically indescribable).
This chapter feels a little scattered as d’Espagnat pre-emptively defends himself against a bevy of incoming realist missiles.
In the end, though, he’s an antirealist in terms of empirical reality, and a realist in his belief there’s an ultimate reality that’s (probably) beyond our direct knowledge but nonetheless influences the shape of our everyday reality.
Here’s some more detail…
Unconscious vs Conscious Antirealism
D’Espagnat says modern physicists (ever since Galileo) generally use an antirealist approach in their methods even if they don’t explicitly embrace antirealism as a philosophy.
Mind-independent Realism vs Pythagorean Ontology
Objectivist realism claims there’s a mind-independent reality whose contents resemble our observations.
A Pythagorean Ontology (capital “O”) claims there’s a mind-independent reality that is reachable through deeper mathematical truths.
Unlike either of these approaches, modern physics emphasizes instruments and measurements. It’s not very interested in saying what’s “really” out there in the “world,” whether physical or mathematical.
Meaningful Statements in Classical vs Quantum Physics
While done more intuitively in the past, physicists nowadays can more formally apply “meaningfulness conditions” to statements.
Also, quantum systems are so peculiar that certain distinctions need to be made. Antirealist statements have to be expressed and tested in special ways.
Facts vs Contingent Statements
D’Espagnat is concerned here not with general “factual” statements such as “Protons bear an electric charge” but rather with statements about physical quantities. A value is assigned to the speed of a particular object, for instance.
True/False Statements vs Meaningless Statements
Based on Dummett’s approach a statement about an object’s speed would be meaningful only if we can measure (at least in principle) that physical quantity at some specified time and place.
Necessary vs Sufficient Grounds for Meaningfulness
D’Espagnat says Dummett’s criterion is necessary, but that doesn’t mean it’s sufficient. Other conditions may need to be fulfilled.
Imagining vs Measuring a Quantity
It’s possible that we can conceive of a physical quantity that has no meaning. However, if we can measure it then that quantity will definitely have meaning.
Classical vs Quantum Measurements
In classical physics it’s intuitive to think a measurement reflects the “true” values of an object, but in quantum systems the measurement of a particle (depending on your model) either creates or changes the values that you’re trying to measure.
In quantum physics we’re not simply “registering” some pre-existing value when we take a measurement. So the “truth value” criteria will need to include more than just measurability.
Disturbing vs Non-disturbing Measurements
In the spirit of antirealism d’Espagnat introduces a test: for a statement to have a truth value “it should be possible” (at least in theory) to measure the required physical quantity without disturbing the system.
The Einstein–Podolsky–Rosen trio claimed in 1935 that in some cases there are indirect ways to make non-disturbing measurements, admittedly only on correlated systems.
Correlated Darts vs Photons
If you throw a pair of correlated darts (see chapter three) they start out with identical orientations. Measuring one dart’s value after they become separated will tell us the other dart’s value. As a bonus, the measurement won’t even change that other dart’s orientation.
If instead of darts you use correlated photons, and instead of measuring orientation you measure the polarization vector’s component at some angle, then you run into a problem.
Consistent vs Broken Correlations
If you measure one photon’s component at a certain angle then you can be sure if you measure the other photon’s component at the same angle you’ll get the same value (which will simply be “plus” or “minus”).
Because we are capable of making this measurement then by our meaningfulness test we can tell if a statement about those values is true or false.
But quantum formalism says the system of these two photons can have just one such value at a time. We can’t measure one photon at a particular angle, then measure the other photon at a different angle to obtain that second angle’s polarization component.
Multiple Values vs Bell’s Inequalities
At least we can’t then claim the second photon has simultaneous values at two different angles. The first measurement destroys the original correlation.
Because Bell’s inequalities have been experimentally violated, we know that these multiple values don’t exist simultaneously.
And because our original meaningfulness test implied such a simultaneity we know that test is flawed.
Actual vs Possible Measurements
If we instead require that measurements are available rather than merely could be available then we get a stricter test. By phrasing our requirements in the indicative not the conditional we end up with a sufficient condition, not just a necessary one.
Possible Measurements vs Observational Predictions
Dummett’s meaningfulness test is a very general antirealist approach. It doesn’t look at the factual data actually available in a microscopic situation. It just considers our ability to make measurements in principle.
D’Espagnat says the tighter requirements he’d impose take an approach even further along the antirealist path as they speak of observational predictions not measurements. This also takes us further down the path of instrumentalism.
Operationalism vs the Value of Science
D’Espagnat says if you understand operationalism properly then you’ll realize operationalism confirms the value of science and makes its statements more reliable.
Description vs Prediction
D’Espagnat says critics of science believe scientific knowledge is easily influenced by social and cultural factors, and that science frequently throws out old theories in favour of very different new ones.
Superficially this makes sense. Einstein’s curved space-time replaced Newton’s gravitational force. They’re radically different approaches.
But science isn’t trying to describe reality. It’s trying to make predictions about observations. Newton’s approach makes good predictions in its own domain, but in other domains Einstein’s predictions are the only ones that work out.
Sometimes the predictions and domains can be identical. Fresnel’s and Maxwell’s theories of light make the same predictions. D’Espagnat says the value of Fresnel’s theory was independent of whether the ether was really out there.
If you drop the naïve realism and its concern for description, then science as a method for synthesizing and predicting experience is not so inconsistent.
Now we can see steady progress as science gets better and better in its power of prediction.
Scientific Knowledge vs Practicality
D’Espagnat says science is mainly knowledge. Even if science is concerned with prediction and not description, don’t confuse science with the various practical uses it’s put to (such as technology).
Descriptive vs Instrumentalist Knowledge
Science brings together an account of human experience that can be communicated: “If we do this, then we observe that.” Just because it’s not trying to describe “reality” doesn’t mean it’s not imparting some kind of knowledge.
Instrumentalist vs Theoretical Knowledge
These methods of making observational predictions are at the core of science. Coming up with a theory to define certain terms and describe certain entities can be useful, but that’s something added onto this predictive foundation.
Operationalism vs Instrumentalism
D’Espagnat doesn’t try to distinguish the two terms. He says the most important aspect of any theory that conforms to this approach is that it’s an instrument of making observational predictions. He says mathematical physics is a prime example.
Open Realism vs Endless Possibilities
In chapter five d’Espagnat talked of his preferred approach of “open realism.” Certainly our view of “reality” (specifically its physical laws) depends on us, including our ability to make observations. But there seem to be “constraints” on what kinds of theories are valid.
Describing vs Acknowledging Constraints
This “something else” that lies beyond our observations but somehow constrains them may not be directly accessible by us, but D’Espagnat says our inability to describe the constraints does not mean they don’t exist.
Ultimate vs Empirical Reality
An elusive, indescribable “ultimate reality” may still shape the physical laws that we describe. In turn the laws we infer are shaped from our observations that contribute to our sense of “empirical reality.”
Explanations vs Theories
D’Espagnat quotes one critic of operationalism, Mario Bunge, who says that the main role of a theory is to provide an explanation. Therefore a theory must provide at least a “rough sketch” of reality as it is.
D’Espagnat replies that the explanation would actually lie in the ultimate reality that constrains our physical laws, but this ultimate reality is not scientifically describable. Therefore what Bunge desires is impossible.
Unless we grant that “miracles” happen all the time there appear to be constraints on our physical laws. But the ultimate reality producing these constraints can’t be scientifically described because of the problems with objectivist realism noted before.
Physics vs Physical Objects
D’Espagnat says that Bunge considers a value in physics meaningless if it’s attached to something that is not physical. If the value doesn’t refer to something “real” then it’s pointless.
D’Espagnat points out that many physical laws refer to values that are not attached to existing physical objects. Probability, for instance, is a concept that refers either to imaginary objects or to a thought that is not itself subject to physics.
Particles vs Waves
Also, wave functions are useful, in fact, essential for quantum physics. So are wave functions real? If so, then particles would have to be real too. If waves and particles exist simultaneously then we’d have to accept the Broglie–Bohm model with all its problems (see chapter nine).
Also, a ground-state electron in a hydrogen atom would seem to have zero momentum because it’s not changing state (quantum potential is balanced by Coulomb force). But the Compton effect shows momentum is non-zero. We have two different versions of momentum. If they were both “real” then we get into pointless difficulties, says d’Espagnat.
Other possibilities: waves change into particles (but the collapse of the wave function has lots of problems attached to it) or only waves exist (but then nonseparability and measurements cause problems).
So D’Espagnat says Bunge’s objections seem pretty “dogmatic.”
Circular vs Practical Definitions
Another objection notes (correctly, d’Espagnat acknowledges) that operationalists place a lot of emphasis on precise definitions, but Bunge says some concepts will remain undefined (just like a dictionary uses some undefined words to define other words).
D’Espagnat replies that operationalism is a methodology, not an “a priori” philosophical system. We want efficiency. Dictionaries are useful despite their undefined terms. Some concepts we just seem to naturally know (whether they’re born with us or not).
These undefined concepts (though neither certain nor absolute) let us operate a measuring instrument, for instance, which then lets us define other concepts.
Sometimes concepts considered “primary” in the past get defined explicitly, such as Einstein’s replacement of “absolute time” with a time that’s partly relative to the observer.
Measurement vs Change
The act of measurement seems to change the quantum system. If, as Bunge’s approach would suggest, this change is “real” then we’d have the difficult problem of explaining this change.
But the quantum approach is “weakly objective” so it refers only to measurement. In the end theoretical entities are useful for helping to make predictions in modern physics. Just don’t regard them as self-contained and “real.”
Einsteinian Hope vs Descriptive Failure
Einstein and those of a similar optimistic bent believed reality would be increasingly describable. This view does not seem consistent with the reality that the quantum framework paints.
Universal Appeal
23 February 2011
Vortex of a Vacuum
Confessions of an Open Realist
Like a slow-moving detective novel various suspects of an epistemological and ontological inclination have been eliminated chapter by chapter.
Bernard d’Espagnat, writing in chapter six of his On Physics and Philosophy, starts homing in on his favoured if still rather vague suspect, which he’s identified as “open realism” in previous chapters.
In the first chapter he defined the position as a “starting point” for further investigation. It was compatible with any approach save for “radical idealism.”
There is “something” out there that’s independent of the mind, he says, but whether that’s God, the Platonic Ideas, or something else, he’s not letting on.
So here’s a summary of chapter six, entitled “Universal Laws and the ‘Reality’ Question.”
Theoretical Frameworks vs Ordinary Theories
D’Espagnat believes pure physics has two kinds of theories: “theoretical frameworks” and “theories in the ordinary sense of the term.”
Newtonian Mechanics vs Law of Forces
Newtonian Mechanics was believed to have universal applicability. It could accommodate new forces such as electricity and magnetism. Hence it was a “universal theoretical framework.”
Newton’s theory of universal gravitation precisely specified various laws of forces. It concerned itself with the details of a specific domain, and hence is a “theory in the ordinary sense of the term.”
Complete vs Partial Universality
What about modern physics? D’Espagnat asks if there are genuine theoretical frameworks out there, a set of laws with complete universality.
Classical Physics vs Modern Physics
Classical physics looked like the foundation of all sciences. Unfortunately it made some wrong predictions.
Quantum mechanics yields correct predictions whenever it’s used, so it’s the only candidate for a universal theoretical framework. Its specific applications such as non-relativistic quantum physics and quantum electrodynamics are ordinary theories.
Hard Sciences vs Soft Sciences
D’Espagnat notes that some thinkers in the soft sciences rightly point out the horrors that result when universality is applied to political and social realms. The difficulty arises when philosophers extend that criticism to the hard sciences.
Evidence vs Convenience
In the soft sciences objections to universality come down to how useful and how convenient the concept of universality turns out to be. This isn’t a logical argument that can be applied to the hard sciences.
Karate Blows vs Disc Galaxies
However, Scientific American runs articles on karate blows and disc galaxies. Can science really be so universal that it can apply to such a diverse range of topics?
Extreme vs Moderate Universalism
The objection isn’t convincing. We can imagine the electric field of an atom guaranteeing the stability of atoms in muscles and in galaxies.
Strictly speaking, says D’Espagnat, we can’t even discount extreme universalism in which everything is predicted from various general laws.
A less ambitious version of universalism (such as Hans Primas’s) says one could choose which laws to use from a larger set depending on the problem at hand.
Naive Realists vs Universalists
Even “naive realists” don’t always accept universality. The “vitalists” felt special rules applied to living beings.
Realists about Theories vs Realists about Entities
Realists about theories generally support universalism, otherwise what would the theories apply to?
Realists about entities (when not realists about theories too) move away from universalism. Despite the evidence of modern physics they feel an individual object has properties possessing an “existential primacy.”
Movable vs Unmovable Real
Even if we can’t move a ghost, flying saucer, or quasar, we can move a rock or an electron beam. Aren’t they real? Well, quantum field theory says a particle is not a reality in itself.
Real Individuality vs Correct Predictions
If you assume an electron is really an individual entity then you’ll predict results different from modern physics. Remember that quantum predictions have never been contradicted by the evidence.
The Broken vs Unbroken Stick
On a macroscopic scale imagine a stick that’s partly immersed in water. It looks bent. We can move the stick up and down, and therefore move the “break.” That doesn’t make it real.
The Broken Stick vs The Atomic Microscope
Not only can we move a supposedly broken stick we can also move atoms with a tunnel-effect microscope, but that doesn’t prove the atoms exist as localized individual objects.
Objectivist Realism vs Logical Positivism
D’Espagnat tries to steer a course halfway between objectivist realism and logical positivism.
Existence vs Measurement Statements
Different ways of thinking produce different kinds of questions. “Is the stick broken or not?” is asking for a statement about what is “really” happening rather than the results of an observation.
Appearances vs Reality
Philosophers, especially the popularizers, like to point out physical appearances can be deceiving. That table is mostly empty space, for instance, not something classical physics would admit.
Facts in Old vs New Physics
Many thinkers stress the importance of facts. In “old-time physics” the microscopic level was real and precisely defined, serving as the foundation for the macroscopic. Many say that in modern physics there are no real facts as such.
No Boundary vs Fuzzy Boundary
In classical physics there’s no boundary between microscopic and macroscopic. In the new physics there’s a boundary that is rather fuzzy and depends on our observational abilities. The boundary is therefore “weakly objective.”
Classical Microcosm vs Broglie-Bohm
Classical physics saw the microcosm ontologically. A minority view in modern physics, the Broglie-Bohm interpretation of quantum mechanics attempts a microcosmic ontology but runs into difficulties.
Near Realism vs Collective Experience
Near realism (see chapter one) thinks we can ask questions about “reality-per-se.” It’s similar to “realism about entities” and doesn’t fit with the experimental data.
Another approach says our discursive knowledge springs from a “synthetic ordering” of our collective human experience. It’s a form of positivism.
Partial vs Total Positivism
But we don’t have to be total positivists. The Vienna Circle of early twentieth-century positivists confined scientific statements to observations. We’re not logically obligated to agree with this position.
Positivists vs Working Physicists
Most working physicists are intuitively realists. Unlike positivists, physicists changed their mind about total realism because of observational data.
Near Realism vs Objectivist Language
If we stop believing in realism about entities (hence reality-per-se) then we can use objectivist language and Carnap’s linguistic framework to ask questions about existence or attributes.
We answer those questions through empirical investigations. We check the stick in the water and we (usually) answer that it’s not broken.
Human Skill vs Robot Fingers
A robot with less skillful appendages might only be able to move the stick up and down rather than carefully checking it from top to bottom.
Carnap would say we’re right to say the stick isn’t broken, and the robot is right to say the stick is broken. Different abilities let us assert different things.
Realism about Entities vs Realism about Theories
If we discard realism about entities we can still embrace realism about theories, which is much more universal than realism about entities.
Pythagorism vs Einsteinism
“Pythagorism” reminds us of how much modern physics looks for symmetry and symmetry-breaking. D’Espagnat’s term “Einsteinism” is the variant of physicists who miss Cartesian mechanism and search for the “true” concepts supposedly contained in mathematics.
Einsteinism vs Positivism
Einsteinism doesn’t restrict itself to observations of pointers and gradated scales. It’s close to an Ontology (big “O”). Einstein later in life felt general relativity offered genuine descriptions of structures really out there.
Bundled Realism vs Individual Concepts
But Einstein also refused to point to this or that concept as indispensably real. One had to verify the whole array of concepts taken together: physical reality, the outside reality, and the real state of a system, for instance. The verification step would show which concepts were needed and which weren’t.
Pre-arranged Ontology vs Consistency Quest
This “Pythagorean” ontology isn’t set up in advance but results from a successful consistency quest. Einstein believed he’d largely completed that quest, so felt confident in his realist stance.
Kant vs Einsteinism
Einsteinism’s Ontology/ontology (see chapter five) relies on contemporary physics’ mathematical entities. Kant would have disliked the ones that don’t correspond to an “a priori mode of our sensibility.”
This approach gives Einsteinism an advantage over moderate or radical idealism (see chapter thirteen).
Physics vs Einsteinism
The big challenge to Einsteinism isn’t philosophical but rather the results of modern physics.
In chapter two we saw how the ontological pictures of Feynman formalism made up just a pseudo-ontology.
In chapter three we saw how instrumentalism reconciled relativity and faster-than-light influences (in a realist interpretation).
In chapters four and five we ran into difficulties trying to fit quantum mechanics’ mathematical symbols into an ontological framework.
Platonist Intuition vs Quantum Data
Some physicists still embrace the intuition of pure mathematical beings waiting to be discovered in a world more real than our own.
The intuition is shattered by the experimental data showing the quantum framework’s mathematical formalism can’t access a mind-independent reality.
All “theories in the usual sense” based on this framework, such as supersymmetry and superstring theories, will encounter the same problem.
Independent Reality vs Research Guide
D’Espagnat believes that Pythagorism’s search for symmetry is still the best approach for physicists to take in their research even though mathematical physics can’t truly describe an independent reality.
However, great mathematical laws may still reflect “something” of this reality.
Naive Realism vs Modern Macrorealism
Physicists know that most people’s spontaneous realism is unjustified. Even Broglie-Bohm theory adopts a different kind of realism. But some thinkers want to rescue realism at least in the macroscopic realm.
Realism of the entities and classical mechanics are both correct, they’ll say, on larger scales. On the smaller scale there are two approaches.
Microscopic Measurements vs Partial Logics
Some advocates of macrorealism will say quantum physics describes measurements of the microscopic world but doesn’t describe it “as it is.” Everything we know about the world comes from our senses, but these “empiricists” don’t question the intrinsic reality of this observed world.
Other advocates assume we know the world (more or less) as it really is, but introduce different kinds of logic. Omnès’s “partial logics” are even quantitative. All this is instructive but not a “realist” position strictly speaking.
Quantum Rules vs Universal Frameworks
When it comes to a universal theoretical framework the quantum framework is the only plausible candidate. It has great predictive powers, but is it universal?
Atomicity vs Nonlocality
The “atomicity argument” says quantum mechanics successfully describes particles and fields, atoms are composed of particles and fields, and everything else is composed of atoms. Therefore quantum mechanics must be universal.
But this argument depends on the Cartesian principle of divisibility by thought. It imagines a mind-independent external reality with interacting but distinct parts. Quantum nonlocality (more specifically, nonseparability) disproves this approach in principle.
Atomicity vs Quantum Predictions
We could try to fashion a compromise: pretend atomicity works, but when it doesn’t then use quantum mechanics for the rest of the predictions. This empirical argument doesn’t help show the quantum framework is universal.
Atomicity vs Born Rule
Another problem with the atomicity argument is that the Born rule says “orthodox” quantum mechanics makes predictions about observations. It doesn’t say whether an event takes place. That makes quantum mechanics “weakly objective.”
Atomicity vs Instrumentalism
Instrumentalism reconciles Aspect-like experiments and relativity theory (see chapter three), so again basic physics seems to be a source of mainly observational predictions.
Macroscopic vs Quantum Physics
The atomicity argument’s internal inconsistency suggests a different approach. With quantum mechanics’ predictive powers so impressive, can we derive macroscopic physics from the quantum framework?
In chapter eight we see recent evidence that it can. Universality of the quantum framework seems established.
Quantum Rules vs Quantum Theories
This quantum universality recalls Newtonian mechanics’ three great laws, which were considered to possess a universal scope. In our arguments we’re concentrating on fundamental questions about the quantum framework, which consist of the rules of quantum prediction.
We’re not really concerned about the specific ways this framework is applied to “theories in the usual sense” such as quantum field theory.
Dummetian Realists vs Antirealists
M. Dummett says realists and anti-realists differ in how they evaluate certain kinds of statements such as class L of general laws and class F of contingent facts (which is what most concerns d’Espagnat).
Knowledge-Independent vs Knowledge-Dependent Truths
Realists will believe a statement has an objective truth value whether or not we have a way to confirm it. Anti-realists believe a statement can be true only if it concerns something we could possibly know.
Imagine the late Mr. X. He led a sheltered life and never had occasion to show cowardice or courage. How do we react to the statement, “Mr. X was a brave man”? A “Dummettian realist” will say it’s a meaningful statement, while a “Dummettian antirealist” will say it’s not.
Obvious Statements vs Complicated Concepts
A problem with Dummett’s approach is that it assumes the parts of statements are obvious, so disputes concern whole statements. In modern physics the mathematical formalism and the observational data mean we have to define “realism” and “antirealism” more carefully.
Small vs Large Domains of Definition
“Operational definitions” are discussed in chapter seven, but briefly philosophers debate how far a word’s meaning can be extended. A “consequent antirealist” says a concept depends on the factual data it was designed to describe. D’Espagnat mostly agrees.
However, d’Espagnat notes there may be exceptions. The English empiricists said, “Nothing is in the mind that has not passed through the senses,” but we can’t prove this rule is universal.
Antirealism vs Necessary Ideas
D’Espagnat believes the notion of existence is a “necessary idea” despite the English empiricists (see chapter five). He says one can believe in a necessary idea and still be a kind of antirealist.
Antirealism vs Metaphysical Realism
Citing Lena Soler, he says an antirealist can accept or reject metaphysical realism as long as he doesn’t claim a correspondence between theory and referent.
Constraints vs Correspondences
An “extra-linguistic referent” may still constrain scientific theories, perhaps by indicating that some possibilities won’t work out, even though we can’t describe it directly.
Open Realism vs Metaphysical Realism
D’Espagnat supports “open realism,” which he says is very close to metaphysical realism in a broad sense.
Open Realism vs Soler’s Antirealism
He concludes by saying his views are also compatible with antirealism in the way Soler presents it.
Getting Real
28 June 2010
Chapter five of Bernard d’Espagnat’s On Physics and Philosophy is entitled “Quantum Physics and Realism.”
D’Espagnat attempts to demolish various arguments for conventional realism even as he pokes holes in anti-realist arguments, finally settling on a kind of unknowable realism — unknowable except indirectly through the patterns predicted by the laws of quantum physics.
The last few sections of the chapter feel somewhat disjointed as he discusses related issues but kind of runs out of steam.
Here in more detail are some of the dichotomies (and similarities) he raises.
Physical Realism: Instinct vs Argument
D’Espagnat says scientists and “laymen” generally support physical realism, not just because it’s “instinctive” but because of some explicit arguments.
Practical vs Counterfactual Definitions
Some philosophers consider what is real to be what we can act on. But we can’t act on stars. So other philosophers speak of what would be the results if one performed an action. This is a counterfactual.
Classical vs Modal Logic
When we bring in the conditional we leave classical logic and move into the realm of modal logic.
Actual vs Counterfactual Measurements
Quantum formalism says little about counterfactuality. We can anticipate what information we’d gain if we actually performed a measurement. But this expectation doesn’t guarantee anything about the system if we perform a different measurement instead.
Disturbing vs Not Disturbing the System
The uncertainty over a system’s state persists even if we perform measurements that couldn’t possibly “disturb” the system. If we perform one measurement we can’t say it has the state that some other measurement would reveal — unless we actually make that measurement.
Conceivable vs Actual Tests
A realist who uses so much counterfactuality faces a strict litmus test for any strongly objective statements: no consequence of such a statement can be false if a test is actually performed.
Intuitive vs Rigorous Realism
It might seem self-evident that at least on the macroscopic level realism works. However, these arguments end up failing.
Predictions vs Proof
The “no-miracle argument” (or “inference toward the best explanation”) says a theory that makes lots of successful predictions is likely to be correct.
Realist vs Quantum Predictions
The problem is that quantum theory makes macroscopic predictions that match those made by realism of the accidents, so there’s no proof that objects exist with attributes the way our common sense tells us they should.
Realism vs the No-Miracle Argument
If the “no-miracle argument” fails to prove something as “obvious” as the existence of objects, can it be rescued or does it fail entirely? D’Espagnat considers two counterarguments.
“Equivalent” vs “No Equivalent” Option
You can try to eliminate the quantum alternative by removing the “equivalent theory” option from the “no-miracle argument.” But some philosophers say the “no-miracle argument” still fails, because realism of the accidents doesn’t explain enough.
Minimal vs Generous Explanations
These philosophers say a scientific theory has to prove more than the problem at hand. Newton explained planetary orbits but in the process also explained gravitation, the Moon’s motion, and the return of Halley’s comet.
Realism of the accidents “explains” how we make predictions in our daily lives, but appears to offer no corroboration beyond this domain. D’Espagnat sympathizes with this argument but notes it doesn’t offer an alternative to the realist position.
Raw Observations vs Constructed Entities
A second anti-realist argument is that observations have to be interpreted: the sun doesn’t actually sink into the western sea. But scientific revolutions can replace some entities with totally different ones (Newton’s vs. Einstein’s theories of gravitation, for instance).
If a theory can junk old entities, or offer two equivalent but very different mathematical formalisms, how is a realist to know which interpretation to trust?
Again, d’Espagnat says this argument should be taken seriously, but doesn’t undermine quantum theory as a possible replacement for realism of the accidents.
Realism’s Flaws vs Disproving It
Acknowledging these flaws in the case for realism of the accidents, d’Espagnat says these problems don’t prove realism is entirely wrong.
Descriptions vs Open Realism
D’Espagnat advocates an “open realism.” He says the counterarguments attack realism’s “power to describe,” but it might still be “a miracle” if there didn’t exist a mind-independent reality beyond words.
Laws of Physics vs Whimsy
The “no-miracle argument” comes in handy when considering the laws of physics. We can’t just decide the electromagnetic field is a scalar. Something constrains our imagination as we discover such laws.
D’Espagnat says the no-miracle “postulate” can’t prove conventional realism, but it justifies a kind of “open realism” with its mind-independent reality.
No-Miracle vs Intersubjective Agreement
We’ve looked at the “no-miracle” argument based on successful predictions. Now we look at an argument based on agreement between observers.
Contingent vs Non-contingent Facts
The intersubjective argument looks at agreement between observers about “contingent” facts. These are statements about how things are in reality rather than as a logical necessity.
Reality vs Mental Organization
A contingent fact might be that there’s a teapot on the table. If two people agree that’s the case then the simplest explanation is there’s really a teapot there.
One could also argue that the concept of “teapot” just mentally organizes our sensations, but then (d’Espagnat says) it would be hard to see how two people could agree on what they’re seeing.
Phenomena vs Noumena
Some anti-realist philosophers object that the concept of causality applies to phenomena, not noumena (a Kantian distinction). They refuse to assume a relationship between a person’s mental images and the real world.
Phenomena vs Ad Hoc Objection
We’ve just seen the anti-realist objection about phenomena. There’s another objection that the realist argument based on intersubjectivity is too ad hoc.
Minimal vs Generous Explanations (Encore)
The realist’s explanation for the intersubjective agreement (so the objection goes) only explains the agreement, nothing more. The claim is that you should be able to apply a good theory to more than just the initial problem.
Noumena vs the Objectivist Realist
D’Espagnat says both the phenomena and ad hoc counterarguments rely on the concept of noumena (some reality not evident in the phenomena).
He adds that the objectivist realist would reject the idea of a noumenon.
Objections vs Alternatives
Also, neither counterargument offers a better explanation of intersubjective agreement.
Objections vs Disproof
So neither objection delivers a knockout punch against our intuition that objects exist because we mutually agree they exist.
Realist Expectations vs Verification
A more detailed scenario is this: Alice predicts that whenever she writes in her notebook that she sees a teapot, Bob will write a similar entry if he’s in the same room.
If Alice didn’t believe in objects’ existing independently then she’d be surprised to learn Bob agrees with her so consistently.
Conventional Realism vs Quantum Non-realism
But if Alice knew about quantum mechanics then she’d know you can believe in non-realism yet still make predictions that both she and Bob can agree on.
The quantum formalism predicts probabilities of observations that all observers will make.
But it doesn’t claim a pointer or teapot is “really” there, at least not before the measurement.
Disproving Realists’ Proofs vs Any Explanation
D’Espagnat says quantum mechanics shows philosophers’ objections to realists’ proofs are valid.
But quantum formalism provides an alternative “explanation” despite philosophers’ doubts that any explanation is possible.
Open Realism vs Radical Idealism
D’Espagnat reiterates that physical laws don’t exclusively depend on us, so radical idealism doesn’t work.
When you combine intersubjective agreement and quantum mechanics, he says, you end up with a reality beyond what the human mind creates, but this reality is also beyond description.
Classical vs Quantum Broglie-Bohm
D’Espagnat looks at a Broglie-Bohm model that is conceptually classical but makes quantum predictions.
The Broglie-Bohm model imagines “real” physical particles guided by a wave function, but this function ends up having to be non-local.
Classical vs Non-local Correlations
It turns out you can’t load up the two particles at the source with supplementary (commonly called “hidden”) variables to predict the correlations.
“Bell’s calculation” (named after John Bell) shows that Bob’s measurement of one particle depends on Alice’s earlier measurement of its twin.
Classical vs Non-local Correlations
So even when you assume classically physical particles, if you want to make predictions compatible with standard quantum theory then you need to accept non-locality.
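To make this concrete, here is a small numeric sketch (mine, not the book’s; it assumes the standard spin-singlet correlation E(x, y) = -cos(x - y)): the CHSH combination of four measurement settings reaches 2√2, above the bound of 2 that any local hidden-variable assignment must respect.

```python
import numpy as np

def E(x, y):
    # Singlet correlation for analyzer angles x and y (standard QM result).
    return -np.cos(x - y)

alice = (0.0, np.pi / 2)            # Alice's two analyzer settings
bob = (np.pi / 4, 3 * np.pi / 4)    # Bob's two analyzer settings

S = (E(alice[0], bob[0]) - E(alice[0], bob[1])
     + E(alice[1], bob[0]) + E(alice[1], bob[1]))
print(abs(S))   # 2*sqrt(2) ~ 2.83: above the local-hidden-variable limit of 2
```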
Fact vs Law
The correlation between the particle measurements depends on quantum law rather than any facts (such as the additional variables) that you add on.
Contingent Features vs Deep Structures
So the predictions depend not on “contingent” aspects of reality but rather its “deep structures.”
The deep structures of reality are mind-independent, and cannot be described except through the laws that predict our observations.
Experimental Data vs Contextuality
In order to match the experimental data, any theory you want to interpret ontologically will need to incorporate “contextuality.”
This just means that the measurement of one quantity depends on whether another quantity is simultaneously observed, and what that quantity is.
Contextuality vs Nonseparability
Besides contextuality, an ontologically interpretable theory must take into account the nonseparability of one part of a quantum system from any other part.
These two considerations derail objectivist realism’s attempts to interpret quantum phenomena.
Personal vs Impersonal Probabilities
The Born rule requires that anyone — and everyone — viewing a particular measurement will get the same “impression.”
D’Espagnat describes his own way of adding a “personal” rule.
Physical Observer vs State of Mind
The measurement is finally “registered” not by the observer as a physical system but by the observer’s state of mind.
His model, later revived by others, suggests Alice and Bob could measure entangled particles and end up with different mental states.
Measuring State of Mind vs Neurons
But quantum mechanics demands a strict correlation between these measurements.
D’Espagnat says that when Alice asks Bob for his measurement she is measuring Bob’s physical state of neurons, vocal mechanisms, etc., not his state of mind.
The quantum formalism will apply to this kind of physical measurement and therefore will guarantee a correlation, although d’Espagnat does ask if quantum physics could really be so peculiar.
Relativity of Knowledge in Theory vs Practice
Starting a long time ago various philosophers have acknowledged they might have to give up on objectivist (or “transcendental”) realism.
Science would be allowed to examine our experience rather than what is “really” out there. But later philosophers blurred the language and the “empirical” distinction got lost when they talked of “reality.”
Scientists in turn felt justified in taking the intersubjective agreement of shared observations as proof that what’s seen is really there.
Macroscopic Reality vs Quantum Superposition
But once you go beyond the macroscopic world to quantum states in superposition “reality-per-se” breaks down and no longer matches empirical reality.
Pure vs Quantum Philosophy
D’Espagnat then speaks of how in the twentieth century various philosophers developed theories without paying attention to quantum theory, yet their conclusions show some parallels to those of the more scientifically aware.
Wittgenstein vs Carnap
Wittgenstein spoke of the world as a set of facts, not things, but d’Espagnat finds Wittgenstein’s language ambiguous, with “fact” used either in a realist or mind-centered fashion.
D’Espagnat finds Carnap much clearer with the notion of “linguistic framework,” or Quine’s similar “relative ontology” (or just “ontology”).
World of Things vs Sense Data
Carnap said that in ordinary life we might use the “world of things” as our linguistic framework, but philosophers might use the framework of “sense data.”
Carnap vs Quine
Carnap said the question of whether an object or attribute exists is answerable only within the right linguistic framework.
Quine similarly spoke of “the ontology to which one’s use of a language commits him.”
Relative vs Classical Ontology
Carnap used “relative ontology” to describe a linguistic approach without meaning ontology’s classical meaning of “Reality as it really is.”
Big vs Small Range of Linguistic Frameworks
Carnap (and maybe other philosophers) could believe in a free choice of linguistic framework, but a quantum physicist has to take nonlocality into account.
In the basic version of de Broglie–Bohm a pilot wave depends on the coordinates of all particles in the Universe. This “thing” is nonseparable and therefore is nothing like our ordinary concept of a thing.
Knowledge Through vs Beyond Language
D’Espagnat says a philosopher may say the lack of any (strongly) objective knowledge about “reality-per-se” means the concept is meaningless.
Scientists generally believe there is some real “outside stuff” so they in turn think there’s more to the world than language.
Abstractions vs Ontic Systems
Despite the holistic nature of quantum systems we tend to look at just parts of them through “abstractions.” The partial systems are called “ontic” by physicist Hans Primas.
Intersubjective agreement exists because people who use the same abstractions come up with the same ontic approximations.
Exophysical Ontologizations vs Endophysics
When we get more ambitious than just simple statistical interpretations we develop versions of reality called “exophysical ontologizations” or “contextual ontologies.”
Primas also conjectures a reality-per-se he calls “endophysics,” but this cannot be described directly.
Reality vs Its Forms
D’Espagnat concludes the chapter by saying that “unquestionably” some reality exists on its own, but what form it takes depends a lot on ourselves and the abstractions we perform.
There is No Path
25 April 2010
Jigsaw Puzzle
Ploughing along through the quantum fields, I present my summary of some issues Bernard d’Espagnat raises in chapter four of his book On Physics and Philosophy (see publisher’s listing).
Holism vs Multitudinism
Previously D’Espagnat had been making the case that violation of Bell’s inequalities shows that localized particles cannot be making up the universe.
Double-slit Gas vs No Gas
Now he asks us to imagine a double-slit experiment where you can fill the room with gas and then measure the interference pattern.
Interference vs Non-interference
As a thought experiment, at least, one would stop seeing interference patterns when the space between light source and detector screen is filled with gas.
Full Particle vs Fraction of a Particle
In the original experiment with no gas the detector shows full particles, not fractions of particles. This suggests each particle went through one and only one of the two slits.
But then there’d be no reason for the interference pattern. D’Espagnat notes two solutions.
End Points vs In Between
The Broglie–Bohm model suggests a point-like particle is guided by a non-localized field or wave that interacts with both slits to indicate the interference pattern.
A different approach is to consider a concept’s “domain of validity”: it might be valid to think of a particle at the start and finish but not in between.
Just Waves vs Particles as Waves
The quantum wave function encodes information about the particle starting at the source. In between the source and the detector, are the particles these waves or wave functions?
D’Espagnat says we should avoid overobjectifying. Keep in mind the concept’s domain of validity.
Path vs No Path
Whatever the formalism says, most physicists somehow imagine there is a real particle travelling from start to finish. And indeed if you apply the gas then the particle seems to have a definite path between source and detector.
Classical Particles vs Classical Gas
But if the transmitted particles are now acting classically then so should the gas particles. However, related experiments show a kind of interference pattern called “phenomena B,” so the gas is not acting classically.
Probability of Measurement vs Being
The wave function gives the probability a particle will be observed at various spots on the detector.
If you talk of the likelihood of its “being” somewhere then the particle would have to be pointlike, and hence there’d be no interference patterns.
Strongly Objective vs Weakly Objective
D’Espagnat proposes calling direct statements about attributes, “strongly objective,” and procedural statements about what you’ll find, “weakly objective.”
The Born rule takes the quantum wave function and yields probabilities of observation. It doesn’t describe “position” in a strongly objective way.
A “weakly objective” statement implies that there’s an observer (so it sounds subjective), but that the observations will be the same for any observer whatsoever (doesn’t sound quite so subjective now).
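To make the distinction concrete, here is a minimal numerical sketch (my own illustration, not d’Espagnat’s) of a weakly objective statement: the Born rule turns a state vector into probabilities of what any observer will find, rather than into attributes the system possesses.

```python
# The Born rule as a "weakly objective" statement: from a state vector it
# yields only probabilities of observation, the same for every observer.
import numpy as np

psi = np.array([3.0, 4.0j]) / 5.0   # normalized state over two outcomes
probs = np.abs(psi) ** 2            # Born rule: p_i = |<i|psi>|^2
print(probs)                        # [0.36 0.64] -- what you'll find
print(probs.sum())                  # 1.0 -- predictions, not attributes
```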
Observations vs Descriptions
The traces of “microblobs” in a cloud chamber are easily interpreted as well-defined trajectories.
However, quantum theory relies on “weak objectivity” and assigns probabilities that a microblob will be observed at some position.
Instead of “trajectories” we have “traces”: alignments of bubbles or microblobs. A theory that predicts observations can therefore compete with a descriptive theory.
Wide vs Narrow Domain of Validity
Quantum mechanics textbooks rarely discuss “domain of validity,” leaving the reader with the impression that a measurement “reduces the wave” and turns it into a pointlike particle.
This causes “ontological incongruities.” Neither “Heisenberg representation” nor a modified logic could restore strong objectivity to “orthodox” quantum mechanics.
Bohr vs Strong Objectivity
Some supporters of strong objectivity thought Bohr’s “intersubjectivity” of quantum description would rescue them. (But see below…)
Bohr’s Intersubjectivity vs Sociologists’ Intersubjectivity
D’Espagnat notes that the “intersubjectivity” used by philosophers and sociologists are at a higher level than the “raw, unanalyzed” sensations of quantum observation.
Only the second can be universal—and it’s not even assuming ontological reality.
Copernicus’s “Big” Revolution vs Quantum Mechanics’ “Small”
Some people say Copernicus’s revolution was much greater than quantum mechanics. But using quantum physics to explain classical physics undermines the standpoint of a strong objectivist.
Innovative Probabilities vs Innovative Objectivity
D’Espagnat also claims that the quantum physics’ main innovation compared to classical is that its statements are weakly objective.
Less innovative is quantum physics’ use of intrinsic probabilities.
Weakly Objective Knowledge vs No-influence Signals
As previously noted, the “supplementary theorem” says faster-than-light influences between particles cannot convey matter, energy, or usable signals.
But how can you have influences without signals?
Since quantum mechanics is so successful, why not apply its weak objectivity to special relativity?
If you do so, then the emphasis on knowledge rather than realism means you don’t have to worry about signals.
No signals… no influences.
Philosophers vs Reality “As It Really Is”
Philosophers know there’s no proof that our representations of reality show “reality as it really is.” So they’re ahead of most physics textbooks.
Physics vs Realist Language
But philosophers still use a realist language “as if” we could describe reality-per-se. Physics tells us to give up this language, or at least let go of its universalism.
Ordinary vs Quantum Complementarity
In ordinary life two separate photographs of the same object provide more information than just one photograph.
Bohr’s complementarity principle says you can’t do that with quantum systems.
Human-independent vs Weak Objectivity
Bohr adds that experimental conditions are an “inherent element” of our descriptions.
The experimentalist chooses those conditions, so “physical reality” cannot be “human-independent” reality.
Therefore reality cannot be strongly objective. The hopes of the strong objectivists were dashed.
Praise vs Use
Bohr admitted his thoughts about micro-objects were weakly objective.
Physicists formally praised the principle, but rarely used it on the measurement problem or anything else.
D’Espagnat won’t be using the principle either.
Multiple States vs Single Pointer
Imagine a group of electrons that have three possible states: state a, state b, and state c.
Let state c be the sum of the other two states. Then, when measuring electrons in state c, shouldn’t the measuring instrument’s pointer be in state A and state B at the same time?
Theoretical vs Practical Measurements
Decoherence theory tells us that quantum objects easily interact with their immediate environment—often before they can interact with a measuring instrument.
An ensemble of electrons in states a and b produces such complex values that it’s almost impossible to measure them.
Therefore you don’t have to worry in practice about a fuzzy pointer, though there are other (rare) experiments that show quantum superposition among macro-objects.
Similarly it would be almost impossible to measure correlations between photons and gas particles in the double-slit experiment.
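A toy numerical sketch (my own, assuming the simplest exponential model of decoherence) shows why the pointer never looks fuzzy in practice: interaction with the environment wipes out the off-diagonal interference terms of the density matrix while leaving the populations intact.

```python
# Toy decoherence model: coherences decay as exp(-gamma*t), populations survive.
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # superposition of states a and b
rho = np.outer(psi, psi.conj())           # pure-state density matrix

def decohere(rho, gamma_t):
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma_t)         # off-diagonal "interference" terms
    out[1, 0] *= np.exp(-gamma_t)
    return out

print(decohere(rho, 0.0))    # full coherences: a genuine superposition
print(decohere(rho, 20.0))   # coherences ~ 0: effectively a classical mixture
```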
Simple Subjectivity vs Intersubjectivity
In the double-slit experiment without gas it’s often said you’ll get no interference fringes if you try to measure which slit the particle passes through.
That sounds like simple subjectivity. But it actually depends on the instrument you try to use. If the instrument is set up incorrectly, or if it has a fault, the fringes show up.
Since you’re depending on the instrument, you’re measuring a “public” property, hence it’s intersubjective, which is to say, weakly objective.
Strongly Objective vs Epistemological Truth
If you exclude practically impossible tests then you can make statements that are empirically true or false such as talking about macroscopic objects and classical fields.
They’re weakly objective statements that are “epistemologically” true or false.
Someone who doesn’t know quantum mechanics or doesn’t believe in its universality might think they’re strongly objective.
Observing vs Observed Instrument
Properties of a physical system are encoded into a “wave function” or “state vector.” The Schrödinger equation and Born rule can then calculate probabilities of certain observations.
The mathematical formalism says we can ignore details of the observer’s eye, optical neurons, and so on.
John von Neumann said various elements can be assigned to either the quantum side or the macrosystem side of a calculation.
Because of this “von Neumann chain” an instrument can either be an observer or the observed.
It’s usually easiest to put the instrument on the classical, perceiving subject side.
Born Rule vs Wave Collapse
It can be hard to distinguish between using the Born rule to come up with probabilities of observations on the one hand, and talk of a “reduction” or “collapse” of the wave function on the other.
Descriptive vs Predictive Method
Although useful in practice, this “reduction” is not required by quantum theory. For a simple photon pair the descriptive approach may be replaced by the predictive method.
Realism vs Non-realism
D’Espagnat says these concerns about the conceptual foundations of quantum mechanics make the issue of realism even more pressing.
Unreality at the Local
23 March 2010
Quantum entanglement
Correlating Reading and Understanding
Chapter three of On Physics and Philosophy was a pretty frustrating read.
D’Espagnat seems to be drawing several threads slowly towards a core conclusion, but it often feels like an extremely meandering trail.
Working from the top down, here are some of the points he seems to be making.
The Basic Theme
Bell’s theorem makes predictions about how observations of two particles relate to each other.
Bell’s inequalities predict certain correlations at a distance when you assume locality and a belief in free experimental choice.
The locality condition (more or less) states that influences can’t travel faster than the speed of light.
The belief in free experimental choice lets the experimenter decide what to test, otherwise it’s hard to do empirical science.
If your results don’t match Bell’s inequalities at least one assumption (locality or free choice) is wrong.
A pair of photons created by the same atom will be entangled. We can measure a photon and its distant partner.
Do that with enough photons and we get a pattern to compare with Bell’s inequalities.
Experiments by Aspect and others show that Bell’s inequalities are violated at the microscopic level.
Since we want to believe in experimenters’ free will, locality must be discarded.
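A small numerical sketch (mine, not from the book) makes the violation concrete: using the standard quantum prediction for singlet-state correlations, E(a, b) = -cos(a - b), the CHSH combination exceeds the bound of 2 that every local hidden-variable account must respect.

```python
# CHSH combination for singlet-state correlations at the standard angles.
import math

def E(a, b):
    return -math.cos(a - b)   # quantum prediction for analyzer angles a, b

a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2.828... = 2*sqrt(2) > 2: Bell's inequality is violated
```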
Supplementary Points
A “supplementary theorem” devised by four scientists states that if there are faster-than-light influences between the photons in the pair, these influences cannot convey matter, energy, or usable signal.
If one rejects objectivist realism then locality or nonlocality is a meaningless distinction. But nonseparability remains in play.
If you try to add hidden local variables to specify the particles’ polarizations in advance (right at the source) then you get the wrong predictions.
Even More Points to Ponder
The violation of Bell’s inequalities is often assumed to imply the falsity of all hidden variable theories.
In fact the experimental data only disprove the falsity of local hidden variable theories.
Non-local hidden variable theories such as de Broglie’s or Bohm’s match the predictions of standard quantum mechanics (at least non-relativistically). A pilot wave controlling localized particles must still travel through both slits of a double-slit experiment.
By extension, the disproving of locality shows the limitations of Descartes’ “divisibility by thought.”
You cannot spatially divide the photon pair’s wave function and still get the predictions of quantum mechanics that Bell’s theorem and the Aspect-type experiments say you should.
Last revised 18 June 2010
A Mysterious Trajectory
13 March 2010
Mysterious Trajectories
Construction vs Deconstruction
Continuing on with my binary (but gentle) deconstruction of Bernard d’Espagnat’s On Physics and Philosophy, we can now proceed to chapter two, entitled “Overstepping the Limits of the Framework of Familiar Concepts.”
Here are some points I’ve gleaned, which I’ve repackaged into the black-and-white dichotomies I love so much.
Aristotle vs Galileo (Reprise)
Although Galileo believed in a priori mathematical concepts, d’Espagnat says Galileo’s two main scientific contributions — inertia and the relativity of motion — were based on sensory data derived from inclined planes and moving ships.
Primary vs Secondary Qualities
Galileo believed he could make observations of “primary” qualities to find out what “really is.” He could then reject the supposedly illusory “secondary” qualities.
Galileo vs Descartes
Descartes was the other “founding father of modern science,” but more of a philosopher than Galileo. He justified his realism from “cogito ergo sum” (I think therefore I am), an ontological argument for an infinite Being that would not deceive us, self-evident truths, and finally fundamental notions of form, size, and motion, which are true because they’re clear.
He believed in “near realism”: nature can be described through basic notions of figure, size, and motion. Natural and manmade objects are similar but differ in size.
D’Espagnat says Galileo, Aristotle, and Descartes were all objectivist realists, even multitudinist ones. However, Descartes was more concerned with metaphysics and his system of thought could hardly have led to Galilean relativity, says d’Espagnat.
Petitot vs d’Espagnat
D’Espagnat rejects Jean Petitot’s view that Galilean relativity’s space and time are “desubjectized mental forms.” Galileo, responds d’Espagnat, kept the number of intrinsic properties to a few, but he still believed in their reality. Relative positions and motions are normally said to be real, and neither Galilean relativity nor Newtonian mechanics are inconsistent with absolute space.
Forces vs Fields
Galilean ontology led to the notion of forces, which could produce action at a distance. However, forces were still properties of objects. No object? No force. In the 19th century the notion of fields was introduced. For instance, an electromagnetic field can exist “in vacuum” devoid of electric “sources.”
Physics vs Realism of the Accidents
As physics joined with mathematics to overcome the old familiar concepts, the other sciences looked for simpler notions allowed through a belief in the realism of the accidents.
A quantum wave is a function of many variables: the number of particles times three (for each particle’s x,y,z coordinates). So the wave has no value at a particular three-dimensional point. Unlike a field one can compare to gelatin, the wave is hardly realism-friendly.
Relativity vs “Universal Thing-ism”
Although relativity replaced objects with events, it didn’t reject realism. Realism of the events is allowed. Naïve realism of “universal thing-ism” is not.
Objects are sequences of events but the events (and geometry of space and time) still supposedly exist, even if they’re perceived differently by different observers.
Useful Notions vs Actual Existence
Physicists explain complex features that are visible by simple invisible ideas that work well. The temptation is then to think these “clear, distinct ideas” (as Descartes might say) also “exist.”
The notion of “electrons” is great for explaining things, but when you start to think of an electron as localized, one per orbit, the picture gets misleading.
To say each electron exists simultaneously on all “allowed” orbits is much less misleading.
Trajectories vs Quantum Probabilities
Put a bubble chamber where it can capture cosmic rays. Soon tracks will appear that we will be tempted to call “trajectories.” We imagine a particle started out in space along some trajectory and reached the bubble chamber, leaving a trail.
Quantum mechanics says there are no such trajectories. The liquid in the bubble chamber reacts with radiation, and the quantum probability that two adjacent atoms will get excited is essentially zero unless they’re almost exactly aligned with the direction of the radiation.
Quantum Field Theory vs Dirac’s Virtual Sea
Quantum pioneer Paul Dirac correctly predicted that every time a fermion (such as an electron) is created so should its opposite partner, an anti-fermion (such as a positron).
He visualized a sea of invisible and nonlocalized fermions. A particle is created when it escapes this sea, leaving a “hole” in the sea with opposite properties. It wasn’t a very credible theory.
Quantum field theory says the existence of a particle is just a state. Existence is a property of “something” — but everyday physicists are reluctant to commit to whether this “something” actually exists.
Hidden Variables vs Quantum Completeness
Einstein and others disliked that quantum theory doesn’t describe the world as it “really” is. Instead of the quantum wave only predicting probabilities people like Louis de Broglie and David Bohm theorized about specific particles on specific paths moving in a way that produced the same predictions.
These specific paths are determined through invisible factors not included in quantum theory.
Such “hidden variable” theories run into problems when paired particles grow distant and one particle ends up experimentally detected. Assuming a specific trajectory doesn’t eliminate nonlocality, as Bell’s Theorem shows.
Ontology vs Pseudo-ontology
Physicists avoid espousing “near realism” or “realism of the accidents.” They try to remain “open” about the concepts they use. Instead they’ve developed a pseudo-ontology using diagrams.
Feynman Formalism vs “Something’s” State
Feynman diagrams help physicists navigate through complicated formulas. An “H” diagram will show one particle on the move, emitting a virtual particle that is absorbed by the second incident particle, after which both continue on their way.
In this system there is no state of “something” that’s changed. If pressed, a physicist may say the elements of such a diagram are just a “way of speaking.” But the danger is we think they’re actual names of actual things.
Description of Experience vs “Reality Out There”
While some physicists feel empirical evidence gives us only a description of experience, other physicists feel some reality is “obviously” out there. These two positions are distinguished by three factors.
Experiential vs Realist Objectivity
In chapter four d’Espagnat will explore objectivity. For now he says the realist has more stringent standards for objectivity than the experiential proponent.
General Laws vs Specific Entities
A realist looks for specific objects with specific properties, and likely relies on counterfactuality (inferring that unobserved objects still retain their properties).
An experientialist may believe in an explanation of the world compatible with the realist’s, but works from general laws to account for specific observations.
Quantum Completeness vs Incompleteness
A realist may look for hidden factors to preserve the belief that localized particles and trajectories “really” exist. Founders of quantum theory retorted that quantum theory is a complete description of reality. Additional factors just don’t exist.
Strong Completeness vs Weak Completeness
There is a slight problem with saying hidden variables “don’t exist”: it makes definite statements about nonexistence in a similar way to a realist’s pronouncements on existence. This early position might be called “strong completeness.”
“Weak completeness” just says that no competing theory can make predictions about atomic phenomena that aren’t also correctly predicted by quantum theory (as Henry Stapp puts it).
Compatible vs Incompatible with Quantum Theory
D’Espagnat says weak completeness is compatible with his approach in the book. If another theory imagines reality’s structure with additional parameters or variables — but has no “false consequences” — then he’ll say it’s “compatible with truth.”
He says his approach will be one of “enlightened agnosticism.”
Dividing up the Wholeness of the Text
11 March 2010
Quantum Ripples
The Reader vs The Read
As I make my way through Bernard d’Espagnat’s On Physics and Philosophy (Princeton University Press, 2006) I’m taking notes to keep track of his often dense arguments.
Although I don’t think “ontological” reality is necessarily binary, I do find for my own study purposes that extracting an A vs. B division can be helpful when making summaries.
So in that spirit, here are some of the dichotomies that d’Espagnat points to in the first thirty pages or so of his book, including the front material and first chapter.
These are just my impressions of some of the issues d’Espagnat raises, so check the original text for all the juicy stuff.
French vs English Edition
D’Espagnat writes that no significant material on the philosophical issues he raises had been published in between the French edition (2002) and the English one four years later.
He says that for the English edition he made some of his remarks stronger and clearer, and he also had a chance to respond to critics of his latest work.
Philosophy vs Science
D’Espagnat says that philosophers need to go beyond their own “cogitation” and examine evidence from other fields.
Epistemology vs Physics
D’Espagnat says that in theory epistemology bridges science and philosophy, but all too often epistemologists think having a broad idea of the results of 20th-century physics is enough. He says they should pay far closer attention to the details.
The Right Answer vs Not the Wrong Answer
D’Espagnat notes criticism that science in the past has been wrong, and will likely be wrong in the future. Science’s positions change over time.
While acknowledging that scientific theories can evolve, d’Espagnat says that science can at least indicate to philosophers which positions are no longer viable, for instance: “Being is not this.”
Ontological Reality vs Empirical Reality
What is “really” out there is ontological reality, and different philosophies take different positions on whether we can “access” that reality. In other words, they have different views on whether our perceptions and our immediate inferences from those perceptions (our empirical reality) can lead us to an understanding of (the ontological) reality behind the veil.
Nonseparability vs Separability
D’Espagnat believes that quantum experiments (empirical reality) indicate that nonseparability underlies any notion of a “mind-independent reality” (ontological reality).
Nonseparability comes up in the way that two particles originally paired up can travel a great distance but measurement of one particle’s state then limits the possible results when measuring the other particle’s state later.
The opposite position, separability, implies that objects separated by distance are indeed distinct entities that cannot directly influence each other.
Nonseparability vs Action at a Distance
Nonseparability refers to a quantum connection between what we might think are distinct entities some distance apart. In classical physics there’s also an influence but that would be through action at a distance, such as gravity, called a force or (later) field.
Empirical Limitations vs A Mirage
D’Espagnat does not believe empirical knowledge is a mirage, even if it can’t lead us to direct knowledge of the underlying “ground of things” (which he says cannot be described analytically).
He says that the symmetries and regularities of our empirical evidence will presumably correspond “to some form of the absolute” even if this correspondence is obscure.
Quantum Predictions vs Descriptions of Reality
D’Espagnat says that despite appearances quantum theory makes predictions (which have been well verified) instead of describing a “mind-independent reality.”
Aristotle vs Galileo
D’Espagnat says that Aristotle’s wide-ranging data are collected by the senses, data that Aristotle regards as basically lying on the same plane.
Galileo (and the more philosophical Descartes) uses a hierarchy of concepts, some considered basic, which are then used to explain other concepts. It’s a mechanistic world view in which some concepts can be built up from smaller ones.
However, both Aristotle and Galileo view their fundamental ideas as “common sense” and so neither doubts they are talking about some kind of (what we’d call) an “ontological reality.”
Objects vs Properties
Two particles collide. In some situations instead of just continuing on their way or destroying each other they’ll survive and create new particles.
How did they create these new particles? Well, the original particles have motion, and it’s this motion that creates the new particles.
But this raises an interesting issue: a property of the particles creates more particles, which are objects.
It’s as strange, says d’Espagnat, as if the height of the Eiffel Tower managed to create a second Eiffel Tower.
Quantum Approaches vs Basic Ideas
D’Espagnat notes that general quantum rules lead to at least three different theoretical approaches for making predictions — predictions that are basically the same.
He says that this undermines the idea that there are “basic notions” out there acting as the “real” foundation for other ideas.
Creation vs Change of State
Classical physics might see the creation or destruction of particles, but quantum physics sees these transitions as changes in various states of “Something.”
Building Blocks vs Wholeness
This “Something” suggests a wholeness of some sort instead of classical physics’ multitudinist world view founded on localized atoms or particles as basic building blocks.
Physicists’ Practical vs Theoretical Concerns
Although a physicist might acknowledge that particles and trajectories don’t “really” exist, they are useful concepts that have a pseudo-reality, especially in Feynman’s approach to quantum calculations.
Idealism vs Realism
Idealism says our only knowledge of the outside world comes from our (often mistaken) senses. Realism believes there’s a mind-independent reality we can either gain access to or say something about.
Counterfactuality vs Observation
If we leave an office with books on the shelves we normally assume that the books are still there even if we’re not observing them. This is called counterfactuality.
Stability vs Realism of the Accidents
Some observations are relatively stable, leading some people to see them as pointing to stable features of the world.
Other sense impressions change frequently. A belief we could term realism of the accidents entails that these quickly changing contingent “accidents” also point to something real.
Realism of the Accidents vs Realism of the Events
Galileo appears to have believed in the realism of the accidents. Space and time are real (despite Galilean relativity) but relative positions are “accidents” and hence “paradigmatically true.”
Einstein emphasized events but still believed that there were elements out there that physics could determine were true or not. Hence he believed in what could be called realism of the events.
UPDATE (15 April 2010)
Princeton University Press offers the first chapter as a sampler.
Entropy on Hold
18 February 2010
Philosophy and Physics
I’m going to put the entropy book on hold for a bit and get back to it later. The basic theme of Arieh Ben-Naim’s book seems quite blunt and compelling: that entropy can be understood only through atoms or molecules — allowing for the distinction between micro- and macrostates — and that the apparently inexorable increase in entropy is the result of some macrostates having lots of possible microstates. (The author introduces different terms: “dim” vs. “specific” events, respectively.)
Ben-Naim presents various mathematical models using simple rules and shows how the system’s “entropy” rises to a relatively likely macrostate (or close range of macrostates), especially in systems with a large number of identical components. These more likely macrostates “hide” information in the form of their many microstates, which although distinct on the microscopic level produce no observable change in some all-encompassing macrostate. In many ways the book has a simple and direct message, but his side pleas against other interpretations and definitions of entropy (such as increasing disorder instead of increasing amount of missing information) can make for a confusing read.
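In that spirit, here is a minimal sketch (my own illustration, in the style of the book’s simple coin-like models) of the mechanism: for N two-state components, the macrostates near the middle “hide” overwhelmingly many microstates, which is why the system drifts toward them and stays there.

```python
# Count microstates (specific events) per macrostate (dim event) for N coins.
from math import comb, log

N = 1000
W = [comb(N, k) for k in range(N + 1)]   # microstates in macrostate k
total = sum(W)                           # = 2**N

k_best = max(range(N + 1), key=lambda k: W[k])
print(k_best)                            # 500: the most likely macrostate
print(W[k_best] / total)                 # its probability (~0.025)
print(log(W[k_best]))                    # "missing information" ~ ln W
```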
In any event, more about that in the future when I have the energy to re-read Entropy Demystified. A few days ago I bought a book that’s even closer to my core interests. Bernard d’Espagnat, writing in his On Physics and Philosophy (Princeton University Press, 2006), states he will avoid discussing metaphysics in detail, but:
Still, in the present book the metaphysical domain has willy-nilly to be approached in one respect. For indeed one piece of information that contemporary physics clearly yields, as we shall see, is the absolute necessity of carefully distinguishing between two concepts of reality. One of them is ontological reality, that is, the notion referred to when “what exists independently of our existence” is thought of or alluded to. The other one, empirical reality, is the set of phenomena, that is, the totality of what human experience, seconded by science, yields access to. (p. 4)
D’Espagnat intends to describe the philosophical importance of quantum results on “nonseparability” with its nonlocality and holism, the universal applicability of quantum rules, and finally “quantum measurement theory and the immensely puzzling riddle of the nature of consciousness” (Schrödinger’s cat and all that). I’ll post comments as I make my way through the book. |
57c3bae590d436a6 | PSNC Institutional Repository
Search results
Publications that matched query:
[Abstract = "An implementation of numerical algebraic methods of solving a stationary one-dimensional Schrödinger equation (SODSE) is presented. In the framework of the proposed approach, SODSE is converted into an algebraic eigenvalue problem, which represents a discrete version of studied problem on an equally spaced grid. The AMSSE program written in Delphi calculates eigenvalues and corresponding eigenvectors by means of various methods and algorithms described here. It is an efficient and valuable computational environment, which can be used in science and nanotechnology. Arbitrary potentials can be introduced into AMSSE program in the form of analytic formulae or data tables, or with the mouse. The user-friendly graphical interface takes advantage of full capabilities of the Windows operating system. Main program features are described. Efficiency and accuracy of different numerical algorithms are comprehensively tested and compared. Factors influencing accuracy are discussed. Examples are widely presented. Matrix approach extension to the case of an effective-mass equation is mentioned."] |
35645ada56407a58 | Physics of Complex Systems
Rydberg molecules
Rydberg atoms are true giants in the atomic world, with sizes reaching 10 micrometers for n=300, and constitute mesoscopic entities. They allow one to probe the classical-quantum correspondence and the transition from the quantum to the classical world. They also provide a useful testing ground for concepts of non-linear dynamics since even moderate external (magnetic or electric) fields provide strong non-separable perturbations. Rydberg atoms have recently attracted attention also in the quest for building blocks of quantum information systems. Rydberg atoms have been proposed as a quantum phase register and as quantum gates employing the Rydberg dipole blockade. The latter makes use of the strong long-range dipole-dipole interactions which conditionally block excitations of atoms in the vicinity of a Rydberg atom already created. More generally, an ensemble of ultracold Rydberg atoms constitutes a strongly interacting many-body system far from the ground state.
Rydberg molecules are another interesting topic which can, for example, be used as a sensitive probe for the spatial distribution of ultracold gases. When a Rydberg atom is excited in a dense gas of atoms, one or more ground-state atom(s) can be found within the Rydberg electron orbit. For an attractive interaction between the quasi-free Rydberg electron and a ground-state atom an ultralong-range Rydberg molecule can be formed. Typically the interaction and the resulting binding energy are very small, so a low kinetic energy of the ground-state atom is required; hence the formation of Rydberg molecules is observed in an ultracold gas of atoms (of the order of 1 μK or less). Within the Born-Oppenheimer approximation, the molecular potential experienced by a ground state atom is approximately proportional to the squared wave function of the Rydberg electron. This is because the interaction is stronger where the probability density of the Rydberg electron is higher (Fig.1). The vibrational levels are formed by trapping a ground state atom within these potential wells.
Fig.1 : Molecular potential (black line) for a 5s38s 3S1 strontium atom together with the wavefunctions for the v = 0 (red solid line), v = 1 (green dashed line), and v = 2 (blue dot-dashed line) molecular vibrational states. The binding energy corresponding to each wave function is indicated by its axis.
For a Rydberg dimer the lowest-energy vibrational state has a wavefunction well localized in space around R_n = 2n^2 (a.u.) (Fig.1). Thus, the probability of creating the dimer molecule will depend on the likelihood of initially finding a pair of ground-state atoms with the appropriate internuclear separation, R_n. By varying the principal quantum number of the Rydberg atom the wavefunction can be localized at different positions, and the pair distribution at different distances can be probed. For example, the excitation of Rydberg dimers between n=30 and 50 probes pair correlations at separations between roughly R = 90 nm and 250 nm. This technique has been applied to an ultracold gas of bosons as well as fermions, and the exchange hole in the two-body correlation of a fermion gas has been observed.
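A quick consistency check (my own sketch, using the R_n = 2n^2 a.u. scaling quoted above) converts the dimer bond length to nanometers:

```python
# Rydberg-dimer bond length R_n = 2 n^2 (atomic units), converted to nm.
a0_nm = 0.0529177   # Bohr radius in nanometers

for n in (30, 50):
    R_nm = 2 * n**2 * a0_nm
    print(n, round(R_nm, 1))   # n=30 -> ~95 nm, n=50 -> ~265 nm
```

These values agree with the quoted pair-correlation range of roughly 90 to 250 nm.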
The Rydberg molecule can also be extended to involve more than one ground-state atom. An excitation of Rydberg trimer in which two ground-state atoms are bound may open a possibility to probe a three-body correlation of ultracold gases. By increasing the density of a gas or the principal quantum number n, the number of ground state atoms within a Rydberg molecule can be increased. Currently, a Rydberg molecule involving up to about 10000 atoms has been created.
Ultrashort pulses
Ultrashort pulses, as first characterized at TU Wien, are the ultimate tool for investigating the fundamentals of light-matter interactions and the meaning of time and time differences in quantum mechanics. We have studied, e.g., the full two-electron dynamics of helium atoms irradiated by strong and short laser pulses to calculate the apparent time delay of photoemission from different initial states of the parent atom (together with MPQ Garching, TU Munich). To this end, we have solved the two-electron Schrödinger equation for helium and have successfully simulated so-called attosecond streaking and RABBITT experiments, both of which measure the relative time delay of different electron groups reaching the detector. Our simulation results are used to determine time delays on an absolute scale and have allowed for a detailed look into photon-induced excitation processes in atoms. Our simulation results were also used to work out the temporal build-up of interference structures in photoemission (Fano resonances, MPI Heidelberg). These interferences are caused by two alternative pathways to the same final state, e.g., direct emission vs. delayed emission via a metastable excited state. Pump-probe experiments with variable delay between the two pulses interrupt the build-up of the resonance and allow for taking snapshots of the process, thereby showing the temporal evolution of the multipath interference.
Fig. 2: Temporal build-up of a two-path interference in the photoionization of helium. Results of our simulations are compared to experimental data. Asymptotically, the energy spectrum converges to the well-known Fano shape.
Another interesting application of attosecond science is the non-linear upconversion of photons of a strong incident laser pulse to high-order multiples of the original photon energy in a process called high-harmonics generation. Recently, upconversion factors of more than 1000 could be reached at TU Wien (Institute for Photonics) in this process. Our work helps to interpret the experimental results and to optimize this process aimed at generating XUV pulses of sub-as duration. Lately, attosecond experiments have also been performed on extended systems such as dielectrics, where the high target-atom density raises hopes to generate harmonics with larger intensity. Due to the inherent multi-particle nature of such systems, simplified models have been invoked that have given a first qualitative insight into the light-driven electronic processes in dielectrics. However, we could show that such simplifications often fail to reproduce even the qualitative behavior of realistic systems, let alone provide a quantitative prediction for any observable in experiment. We have taken the first steps toward a multi-scale description of laser-solid interactions combining the microscopic electronic motion described by the Schrödinger equation with the mesoscopic world of light propagation (Maxwell's equations).
Fig. 3: High harmonic spectrum induced in diamond by a linearly polarized few-cycle laser pulse as a function of the position along the propagation direction inside a 1 µm thick diamond crystal.
Working together with experimental groups in Munich, Zurich, and Graz we have elucidated the interplay of different processes involved in light-solid and light-liquid interactions and have helped interpret fundamental experiments in the field possibly paving the way to light-driven electronics on the PHz scale.
Dynamics of many-body systems
Quantum many-body systems are at the core of current interest in both experimental and theoretical physics, since a better understanding of these systems bears a large potential for technology and applications. Table-top sources of coherent X-ray light from strongly driven gases of atoms might become available in the near future. Highly accurate gravimeters are being constructed using the coherence of matter waves. Petahertz electronics is within reach, based on the driving of currents in dielectrics by femtosecond laser pulses. All these applications have in common that the underlying effects originate from or exploit non-equilibrium quantum matter. Many novel experimental tools are being developed that allow for unprecedented driving, for example by strong and ultrashort laser pulses, by bichromatic fields, or by structured light, and are being used to explore new effects and applications.
Fig.4: The two-particle reduced density matrix (left) and the cumulant measuring two-particle correlations (right) of the beryllium atom.
At the same time, theoretical tools to describe these systems are lagging behind those available for systems at rest. In our research, we address exactly this discrepancy. We use and develop novel theoretical approaches to describe quantum systems out of equilibrium. The starting point of our investigations into non-equilibrium quantum matter is the many-body time-dependent Schrödinger equation. The complexity of this equation increases exponentially with the number of particles in the system. It thus cannot be solved numerically exactly except for the simplest systems consisting of just a few particles. A simple estimate shows that storing the wavefunction of, e.g., the lithium atom with reasonable resolution would exceed the storage capacity of current supercomputers, not to mention performing calculations with it. One way around this is to avoid the wave function and use a reduced object such as the particle density. Time-dependent density functional theory, for example, follows these lines by using the particle density as the fundamental object. However, it suffers from unknown energy functionals because quantum correlations cannot be easily taken into account. We have developed a time-dependent quantum many-body approach that uses the two-particle reduced density matrix instead. In this way all two-particle correlations are incorporated. Since all fundamental interactions can be regarded as pairwise interactions, two-body correlations are the most important ones. With the new method, we were able to solve a wide variety of problems ranging from the multi-electron dynamics of atoms driven by strong and ultrashort pulses to quench dynamics of ultracold atoms in optical lattices.
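To put a number on that storage estimate (my own back-of-the-envelope figures, not the group’s): three electrons with three spatial coordinates each, on even a modest grid, already overwhelm any existing machine.

```python
# Storage for a lithium wavefunction on a grid of 100 points per coordinate.
points_per_coordinate = 100
coordinates = 3 * 3                                  # 3 electrons x (x, y, z)
amplitudes = points_per_coordinate ** coordinates    # 1e18 complex amplitudes

bytes_needed = amplitudes * 16                       # complex128 = 16 bytes each
print(bytes_needed / 1e18, "exabytes")               # ~16 EB, just to store it
```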
Quantum transport through nanostructures
Novel nanomaterials and their heterostructures play a central role in current computational material science research. Recent experimental progress has seen the appearance of several two-dimensional solids with a wide range of electronic properties, ranging from the famous semi-metal graphene to strongly insulating hexagonal boron nitride. Modern synthesis techniques for the creation and manipulation of stable layers of two-dimensional crystals have now become well developed. Likewise, noble metal nanoclusters feature high catalytic reactivity and sharp plasmonic resonances that can be tailored to specific energies. By deposition and subsequent transfer, individual layers may be combined like a sandwich, stacking them in a predefined sequence. The resulting van der Waals heterostructures exhibit interesting new effects that go beyond the physics accessible to a single layer alone. Nanostructure devices composed of such components promise a wide range of applications, from highly efficient catalysts or solar cells to ultra-low-power nanoelectronics.
Our research focuses on simulating new nanostructure materials composing realistic nanodevices, including defects and impurities. The term nanodevice in this context refers to the typical device size, up to a few micrometers and containing millions of atoms, but still below mesoscopic dimensions. Consequently, quantum effects play an important role. The theoretical description of these systems thus poses a challenging multi-scale problem, requiring active method development. We simulate transport through nanodevices built by our experimental collaborators (for example at RWTH Aachen or ETH Zürich) as well as the electronic structure and properties of low-dimensional materials and their heterostructures.
Fig. 5: (left panel) Graphene nano-constriction sandwich device fabricated by the Stampfer group at RWTH Aachen, superposed with the image of a scattering state calculated using our tight-binding approach. Scale bar is 500 nm. (Middle panel) Conductance measurements for two different cool-downs (green and black) and our theory (blue). (Right panel) Local density of states in a graphene nano-constriction for energies close (upper panel) and far away (lower panel) from the Fermi energy.
Fig. 6: (a) Hexagonal graphene flake on single hBN layer (carbon: blue, nitrogen: green, boron: red) and substrate. The lattice mismatch generates a moiré pattern with unit length depending on the angle of rotation between the two layers (yellow diamond). (b) Graphene flake in an effective embedding potential which replaces hBN and the substrate - the system size is strongly reduced. (c,d) Density of states of graphene on hexagonal boron nitride, as a function of magnetic field for (c) experiment [Yu 2014] and (d) our theoretical simulations. Notice the linear structures (dashed lines) above and below the Dirac point that emerge due to superlattice effects.
A small lattice mismatch between adjacent layers, for example for graphene and hexagonal boron nitride, gives rise to regular, periodic moiré patterns [see Fig. 6(a)]. Even in structures composed of layers of the same material, twisting the layers with respect to each other will induce moiré potentials, whose periodicity sensitively depends on the twist angle. The resulting heterostructures feature altered electronic properties such as unconventional superconductivity or Mott-insulating phases. From a broader perspective, twist angles allow for modifying the band structure of the heterostructure in surprising ways, promising a pathway towards engineering of desired material properties. Unfortunately, the large unit cells of moiré patterns, together with a substrate and adjacent functional layers, make a full ab-initio treatment challenging. In an ongoing collaboration with Allan MacDonald (Austin, TX, USA), we are developing moiré potentials for twisted trilayer graphene as well as transition metal dichalcogenides. Using an effective moiré potential [see Fig. 6(b)], we can reproduce the observed fine structures resulting from the moiré, see Fig. 6(c,d). Using a quantum dot induced by an STM tip, our experimental collaborators at RWTH Aachen were also able to directly probe moiré potentials, which compared well to our simulations.
Non-Hermitian physics and complex scattering
The propagation of waves is a topic spanning many spatial and temporal scales, from the fundamental quantum aspects of light-matter interaction to the scattering of radio waves in complex media like the earth’s atmosphere. On all these levels the input from theoretical physics is essential for understanding and controlling such wave phenomena. At the institute, considerable effort is being dedicated to two specific topics in the vast field of wave physics: (i) the non-Hermitian physics associated with waves in systems that are subject to both amplification (gain) and dissipation (loss) as well as (ii) the scattering of waves in disordered media. In both of these research areas, we collaborate closely with various experimental teams; to come as close as possible to the situation encountered in the laboratory, we employ numerical techniques and run simulations on the Vienna Scientific Cluster. In the field of non-Hermitian physics (i), we are especially interested in controlling the scattering properties of systems by tailoring the spatial distribution of gain and loss in them. Using such an approach, we found, e.g., that a highly disordered system can be made completely transparent and even invisible by adding a tailored gain/loss distribution to it [see Fig. 7(a) for the case of a Gaussian laser beam]. Special attention is also given to a particular non-Hermitian singularity, called an exceptional point, which leads to quite a number of fascinating phenomena that we could recently demonstrate in collaboration with different experimental groups in nano-photonics and in laser physics.
Fig. 7: (a) A Gaussian laser beam entering a disordered medium from the left gets scattered and builds up a highly complicated interference pattern (left panel). By adding a tailored distribution of gain and loss to this medium, the beam can propagate like in free space (right panel). (b) A specially designed laser beam that applies a well-defined torque onto the quadratic target in the middle, turning it in clockwise direction.
In the field of complex scattering (ii) we are pursuing the challenging goal of controlling how light propagates through disordered media. (Think here of the speckle patterns arising when directing a laser beam at a piece of paper.) How to deal with the complex interferences arising in this context is a challenging question arising in many fields of physics—from biomedical optics to observational astronomy. What comes to our advantage here is the fact that scattering is a deterministic process—at least for classical waves—such that the shape of an incident wave front determines how the wave will propagate through a medium. This insight forms the basis for a series of modern experiments that use spatial light modulators to characterize and to control light fields even in strongly disordered media. Our contributions to this newly emerging field of wave front shaping include, e.g., a concept to generate waves that follow a specific path across a disordered medium or that focus onto a designated point inside of it. Moreover, we also showed how to design waves in order to micro-manipulate a target embedded inside a disordered environment [see Fig. 7(b) for an example]. By tuning the incident wave front we could recently achieve the first realization of a random anti-laser, i.e. the time-reverse of a random laser in the sense that a random medium was shown to perfectly absorb a suitably engineered incoming wave front.
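As a minimal numerical sketch of wave-front shaping (my own illustration, assuming a random Gaussian transmission matrix as a stand-in for a real disordered medium): because scattering is deterministic, phase-conjugating one row of the transmission matrix focuses the output onto the corresponding speckle spot.

```python
# Focusing through disorder: phase-conjugate one row of the transmission matrix.
import numpy as np

rng = np.random.default_rng(0)
N = 64
t = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)

target = 10
inc = t[target].conj()              # conjugated wave front for the target mode
inc /= np.linalg.norm(inc)          # normalize the incident power

out = t @ inc
intensity = np.abs(out) ** 2
print(intensity[target] / intensity.mean())   # enhancement of order N/2 here
```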
|
8be149836d364291 | How Quantum Mechanics and General Relativity Can Be Brought Together
This paper describes an easy and instructive way in which quantum mechanics (QM) and general relativity (GR) can be brought together. The method consists of formulating Schrödinger’s equation for a free quantum wave of a massive particle in the curved space-time of GR using the Schwarzschild metric. The result is a Schrödinger equation for the particle which is automatically subjected to Newton’s gravitational potential.
Share and Cite:
Suda, M. (2016) How Quantum Mechanics and General Relativity Can Be Brought Together. Journal of Modern Physics, 7, 523-527. doi: 10.4236/jmp.2016.76054.
Received 25 February 2016; accepted 25 March 2016; published 28 March 2016
1. Introduction
The problem of the synthesis of QM and GR has been the subject of much discussion among physicists in recent years. In this short paper, we try to tackle this question by subjecting the Schrödinger equation of a free quantum wave to the non-Euclidean geometry of space-time developed in the formalism of general relativity.
The motivation to do this is justified by the effort to find an easy and pedagogical way of understanding how the most important physical theories developed in the 20th century, QM and GR, can be brought together in the limit of quantum particles that have extremely small masses compared to cosmological objects.
In doing so, we begin by writing down the well-known non-relativistic Schrödinger equation which describes a quantum particle of rest mass $m_0$ (e.g. a neutron) affected by a radially symmetric potential $V(r)$ [1]:

$$i\hbar\,\frac{\partial\Psi(\mathbf{r},t)}{\partial t} = \left(-\frac{\hbar^2}{2m_0}\,\Delta + V(r)\right)\Psi(\mathbf{r},t) \qquad (1)$$

$\Psi(\mathbf{r},t)$ is the wavefunction depending on position r and time t, $\Delta$ is the Laplace operator and $\hbar$ the reduced Planck constant. In the case of Newton's gravitation, $V(r)$ is written as

$$V(r) = -\frac{GMm_0}{r} \qquad (2)$$

G is the constant of gravitation and M the mass which causes gravitation (e.g. mass of Earth).
We investigate stationary solutions of Equation (1) by using the ansatz $\Psi(\mathbf{r},t) = \psi(\mathbf{r})\,e^{-i\omega t}$ and obtain

$$-\frac{\hbar^2}{2m_0}\,\Delta\psi + V(r)\,\psi = E_B\,\psi \qquad (3)$$

omitting reference to $\mathbf{r}$ for the function $\psi$. $E_B = \hbar\omega$ is the (negative) binding energy ($\omega$ is the frequency) and can be written as $E_B = -\hbar^2 k^2/(2m_0)$ for a particular momentum $\hbar k$ of a particle bound in the potential.
Equation (3) can be treated in complete analogy to the quantization of electron energies in a hydrogen atom, described in standard textbooks of quantum mechanics [1], to obtain energy states and wave functions of a massive particle bound in Newton's potential ([2], Section 3.4.3 therein).
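To see what that analogy gives, here is a purely formal illustration of my own (not from the paper): replace the Coulomb coupling $e^2/(4\pi\varepsilon_0)$ by $GMm_0$ in the Bohr formulas. The numbers below treat Earth as a point mass, so the resulting "orbit" is unphysically small and the values are formal only.

```python
# Hydrogen-like formulas with the Coulomb coupling replaced by G*M*m0.
hbar = 1.0546e-34   # J s
G = 6.674e-11       # m^3 kg^-1 s^-2
M = 5.972e24        # kg, Earth's mass, treated formally as a point source
m0 = 1.675e-27      # kg, neutron rest mass

coupling = G * M * m0                   # analogue of e^2/(4*pi*eps0)
a_grav = hbar**2 / (m0 * coupling)      # analogue of the Bohr radius
E1 = -m0 * coupling**2 / (2 * hbar**2)  # analogue of the ground-state energy
print(a_grav, "m")                      # ~1e-29 m, far inside the real Earth
print(E1, "J")                          # hence purely formal for this choice of M
```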
Going back to Equation (3), we initially consider a free quantum wave with $V(r) = 0$ and obtain the stationary Schrödinger equation

$$-\frac{\hbar^2}{2m_0}\,\Delta\psi = T\,\psi \qquad (4)$$

with plane-wave solution $\psi = e^{i\mathbf{K}\cdot\mathbf{r}}$ and $T = \hbar^2K^2/(2m_0)$. T is the kinetic energy, which is identical to the total energy in this special case where $V = 0$. Here, $\hbar K$ specifies the momentum of the free particle.
Because of the radially symmetric potential Equation (2), we switch to spherical coordinates, rewriting Equation (4) for s-waves as

$$-\frac{\hbar^2}{2m_0}\,\frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\,\frac{\partial\psi}{\partial r}\right) = T\,\psi \qquad (5)$$

with spherical wave solution $\psi = e^{iKr}/r$ of a free quantum wave.
2. Free Quantum Wave in Curved Space-Time of GR
Now let’s switch to the relativistic point of view.
Taking GR into account (e.g. [3]), four dimensions (space and time) have to be considered: $(ct, r, \theta, \phi)$. Here c is the velocity of light and $(r, \theta, \phi)$ denote the spherical coordinates. In the following, we use the covariant and contravariant notation of GR (lower and upper indices). Therefore, the four coordinates can be merged to $x^\mu$, where $\mu = 0, 1, 2, 3$.
Now, the following idea is discussed: embedding the QM formalism of a free wave into the space-time formalism of GR, we can change Equation (5) in complete formal analogy by rephrasing it into

$$-\frac{\hbar^2}{2m_0}\,\Box\,\Psi = \frac{p^\mu p_\mu}{2m_0}\,\Psi \qquad (6)$$

$\Box$ denotes the Laplace operator of a diagonal metric in four dimensions,

$$\Box = \frac{1}{\sqrt{g}}\,\partial_\mu\!\left(\sqrt{g}\,g^{\mu\nu}\,\partial_\nu\right)$$

applied to $\Psi$ [4]. The quantity g denotes the negative determinant of the metric, which is specified below. We will use the so-called Schwarzschild metric (see below).
The right hand side of Equation (6) uses the relativistic momenta [3]

$$p^\mu = \left(\frac{E}{c},\,\mathbf{p}\right) = m_0\gamma\,(c,\,\mathbf{v}) \qquad (7)$$

together with the well-known energy relation $E^2 = c^2p^2 + m_0^2c^4$. The quantity E denotes Einstein's total energy. Moreover, $\gamma = (1 - v^2/c^2)^{-1/2}$ and $\mathbf{p} = \hbar\mathbf{k}$. The quantity $m_0$ denotes the mass at rest and $\mathbf{v}$ ($\mathbf{k}$) the velocity (momentum) vector of the free particle. Using the energy relation we can extract

$$p^\mu p_\mu = \frac{E^2}{c^2} - p^2 = m_0^2c^2 \qquad (8)$$

where $p = |\mathbf{p}|$. From Equation (6) to Equation (8) it follows that $p^2/(2m_0)$ does not now mean the kinetic energy T of a free particle in a flat space of Euclidean geometry (as in Equation (4)) but denotes the kinetic energy of this particle bound in the space-time geometry of GR, where the gravitational potential plays a crucial role. We will prove this fact below.

Immediately, one deduces $p^\mu p_\mu = m_0^2c^2$ from Equation (7). In summary, Equation (6) reads

$$-\frac{\hbar^2}{2m_0}\,\frac{1}{\sqrt{g}}\,\partial_\mu\!\left(\sqrt{g}\,g^{\mu\nu}\,\partial_\nu\Psi\right) = \frac{m_0c^2}{2}\,\Psi \qquad (9)$$
This equation, describing a quantum wave in the curved space-time of GR, is our starting point for further considerations. This quantum wave is not free anymore because it is affected by the non-Euclidean geometry of space-time. We will see below that this is equivalent to a quantum wave described by a Schrödinger equation in Euclidean geometry where Newton's gravitational potential is included (see Equation (17)).
As promised above, the diagonal metric we use is the so-called inverse spherical Schwarzschild metric (e.g. [3])

$$g^{\mu\nu} = \mathrm{diag}\!\left(\frac{1}{1 - r_s/r},\; -\left(1 - \frac{r_s}{r}\right),\; -\frac{1}{r^2},\; -\frac{1}{r^2\sin^2\theta}\right) \qquad (10)$$

$r_s = 2GM/c^2$ is called the Schwarzschild radius.
If $r_s = 0$ one gets the inverse spherical Minkowski metric. The square root of the negative determinant of the metric yields $\sqrt{g} = r^2\sin\theta$. Now Equation (9) can be figured out easily, accounting for Einstein's summation convention. One obtains the following partial differential equation:

$$\frac{1}{1 - r_s/r}\,\frac{1}{c^2}\,\Psi_{tt} - \frac{1}{r^2}\,\frac{\partial}{\partial r}\!\left[r^2\left(1 - \frac{r_s}{r}\right)\Psi_r\right] + \frac{m_0^2c^2}{\hbar^2}\,\Psi = 0 \qquad (11)$$

The subscripts t and r of the wave function denote partial derivatives with respect to time t and coordinate r, respectively. Moreover, it is assumed that $\Psi = \Psi(r, t)$ only.
Initially we would like to mention that in the case $r_s = 0$ Equation (11) yields the Klein-Gordon-Schrödinger equation [5].
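A quick symbolic check (my own sketch using sympy, not part of the paper) confirms the determinant statement above: the Schwarzschild factors cancel, so the determinant equals that of flat Minkowski space in spherical coordinates.

```python
# det(g_mu_nu) for the Schwarzschild metric (x^0 = ct) is -r^4 sin^2(theta),
# so sqrt(-det) = r^2 sin(theta), exactly as in flat spherical Minkowski space.
import sympy as sp

r, theta, r_s = sp.symbols('r theta r_s', positive=True)
f = 1 - r_s / r
g_cov = sp.diag(f, -1 / f, -r**2, -r**2 * sp.sin(theta)**2)

print(sp.simplify(g_cov.det()))   # -> -r**4*sin(theta)**2
```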
In order to solve Equation (11) for $r_s \neq 0$ we choose the product ansatz $\psi(t, r) = f(t)\,R(r)$ and obtain
The LHS depends only on t, the RHS only on r. Therefore we can equate each individual side with a separation constant. We sum up the rest energy $m_0 c^2$ and the binding energy $E_B$ (where $|E_B| \ll m_0 c^2$), yielding the total energy:
From the LHS of Equation (12) we obtain $f(t) \propto e^{-iEt/\hbar}$. Hence, the RHS of Equation (12) yields
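A brief sketch of this separation step, under the standard conventions assumed above: with $\psi(t, r) = f(t)\,R(r)$, the time-dependent side forces
$f(t) \propto e^{-iEt/\hbar}$, $\qquad E = m_0 c^2 + E_B$,
so the separation constant is fixed by the total energy E, and the remaining radial equation determines $R(r)$.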
We consider $r_s/r \ll 1$. This can be called the "Newtonian approximation". The reason for this can be justified as follows: the Schwarzschild radius of Earth amounts to roughly $9\,\mathrm{mm}$ according to Equation (10), and the radius of Earth is on average about $6371\,\mathrm{km}$. One obtains $r_s/r \approx 1.4 \times 10^{-9}$. This should be the scope of application of Equation (14) on Earth for r as well. We therefore neglect the higher-order terms in $r_s/r$ on the LHS of Equation (14) as a first approximation, and on the RHS $(1 - r_s/r)^{-1}$ is excellently approximated by $1 + r_s/r$. The result of this approach is
which can be rewritten by using Equation (13) and Equation (5) as
because $|E_B| \ll m_0 c^2$. Multiplying each side by the appropriate constant factor and using $r_s = 2GM/c^2$ from Equation (10) leads to
where we have moved $m_0\Phi = -GMm_0/r$, with $\Phi$ Newton's gravitational potential of Equation (2), to the left side. Immediately one recognizes that Equation (17) is identical to Equation (3). This means that we have obtained the stationary non-relativistic Schrödinger equation including Newton's gravitational potential.
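Two remarks make this endpoint concrete. First, written out under the conventions above, Equation (17) should take the familiar form (a reconstruction, not a quotation):
$-\frac{\hbar^2}{2m_0}\,\Delta\psi - \frac{GMm_0}{r}\,\psi = E_B\,\psi$.
Second, as a numerical sanity check of the Newtonian approximation, inserting standard values $G \approx 6.674 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$, $M_\oplus \approx 5.97 \times 10^{24}\,\mathrm{kg}$ and $c \approx 3.00 \times 10^{8}\,\mathrm{m/s}$ into $r_s = 2GM/c^2$ gives $r_s \approx 8.9 \times 10^{-3}\,\mathrm{m}$; dividing by $R_\oplus \approx 6.37 \times 10^{6}\,\mathrm{m}$ yields $r_s/r \approx 1.4 \times 10^{-9}$, consistent with the ratio quoted above.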
3. Conclusion
From the considerations above one can conclude that by embedding the Schrödinger equation of a free quantum wave (which is defined in Euclidean space) into the curved space-time of GR (which is non-Euclidean) we obtain the Schrödinger equation of a quantum wave which is subjected to Newton's gravitational potential. Moreover, it has been shown that Newton's potential energy comes from the Schwarzschild metric of GR. The space-time geometry of GR applied to a free quantum wave causes Newton's gravitational force to appear automatically in the Schrödinger equation. In this sense, QM and GR can be harmonized if the "Newtonian approximation" (defined through the ratio of Schwarzschild radius to position coordinate being much smaller than 1) is taken into consideration, and they can be brought together without any difficulty.
I am grateful to M. Faber, F. Laudenbach and F. Hipp for many discussions and I. Glendinning for revising the manuscript.
Conflicts of Interest
The authors declare no conflicts of interest.
[1] Cohen-Tannoudji, C., Diu, B. and Laloë, F. (1977) Quantum Mechanics I and II. John Wiley and Sons, New York.
[2] Giese, E., Zeller, W., Kleinert, S., Meister, M., Tammer, V., Roura, A. and Schleich, W. (2015) The Interface of Gravity and Quantum Mechanics Illuminated by Wigner Phase Space.
[3] Cheng, T.-P. (2000) Relativity, Gravitation and Cosmology. 2nd Edition, Oxford University Press, Oxford.
[4] Spiegel, M.R. (1999) Vector Analysis (Schaum's Outline Series). McGraw-Hill, New York.
[5] Rebhan, E. (2010) Theoretische Physik: Relativistische Quantenmechanik, Quantenfeldtheorie und Elementarteilchentheorie. Spektrum.
|
1509b759f8abece3 | Physical Chemistry. The Particle in a Box I: the Schrödinger Equation in One-dimension
In 1926, the Austrian physicist Erwin Schrödinger (1887-1961) made a fundamental mathematical discovery that had a profound impact on the study of the molecular world (in 1933, Schrödinger was awarded the Nobel Prize in Physics, just 7 years after his breakthrough discovery). He discovered that a state of a quantum system composed of particles (such as electrons and nucleons) can be described by postulating the existence of a function of the particle coordinates and time, called the state function or wave function (\Psi, psi function). This function is a solution of a wave equation: the so-called Schrödinger equation (SE). Although the SE can be solved analytically only for relatively simple cases, the development of computers and numerical methods has made possible the application of the SE to the study of complex molecular systems.
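For orientation, the standard one-dimensional example that gives this series its name (an infinite square well of width L; textbook conventions assumed) is solved by
$-\frac{\hbar^2}{2m}\frac{d^2\psi}{dx^2} = E\psi$, with $\psi(0) = \psi(L) = 0$,
$\psi_n(x) = \sqrt{\frac{2}{L}}\,\sin\!\left(\frac{n\pi x}{L}\right)$, $\qquad E_n = \frac{n^2 h^2}{8mL^2}$, $\qquad n = 1, 2, 3, \dots$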
|
7fd6724ab517a7fc | Unlikely site for historic moment in physics: RDU
[John Archibald] Wheeler struggled to mend a rift in physics between general relativity and quantum mechanics—a rift called time. One day in 1965, while waiting out a layover, Wheeler asked colleague Bryce DeWitt [at UNC Chapel Hill] to keep him company for a few hours. In the [Raleigh-Durham International] terminal, Wheeler and DeWitt wrote down an equation for a wavefunction, which Wheeler called the Einstein-Schrödinger equation, and which everyone else later called the Wheeler-DeWitt equation. (DeWitt eventually called it “that damned equation”)….
“His work on the physics of black holes had led him to suspect that time, deep down, does not exist. Now, at the airport, that damned equation left Wheeler with a nagging hunch that time couldn’t be a fundamental ingredient of reality. It had to be, as Einstein said, a stubbornly persistent illusion….
“In recent years, Stephen Hawking… has been developing an approach known as top-down cosmology…. By applying the laws of quantum mechanics to the universe as a whole, Hawking carries the torch that Wheeler lit that day back at the North Carolina airport….”
— From “Haunted by His Brother, He Revolutionized Physics” by Amanda Gefter at Nautilus magazine (Jan. 16, 2014) |
f9d6a3d1011853b3 | IA Scholar Query: All Labels Are Not Created Equal: Enhancing Semi-Supervision via Label Grouping and Co-Training. https://scholar.archive.org/ Internet Archive Scholar query results feed en info@archive.org Sun, 31 Jul 2022 00:00:00 GMT fatcat-scholar https://scholar.archive.org/help 1440 Industry–Academia Research Collaboration and Knowledge Co-creation: Patterns and Anti-patterns https://scholar.archive.org/work/oth5rxzekbaspdplo7jmtodwqa Increasing the impact of software engineering research in the software industry and the society at large has long been a concern of high priority for the software engineering community. The problem of two cultures, research conducted in a vacuum (disconnected from the real world), or misaligned time horizons are just some of the many complex challenges standing in the way of successful industry–academia collaborations. This article reports on the experience of research collaboration and knowledge co-creation between industry and academia in software engineering as a way to bridge the research–practice collaboration gap. Our experience spans 14 years of collaboration between researchers in software engineering and the European and Norwegian software and IT industry. Using the participant observation and interview methods, we have collected and afterwards analyzed an extensive record of qualitative data. Drawing upon the findings made and the experience gained, we provide a set of 14 patterns and 14 anti-patterns for industry–academia collaborations, aimed to support other researchers and practitioners in establishing and running research collaboration projects in software engineering. Dusica Marijan, Sagar Sen work_oth5rxzekbaspdplo7jmtodwqa Sun, 31 Jul 2022 00:00:00 GMT Foundations for Meaning and Understanding in Human-centric AI https://scholar.archive.org/work/joonfnadkrf2pab5rsubz2f7sq MUHAI is a European consortium funded by the EU Pathfinder program that studies how it is possible to build AI systems that rest on meaning and understanding. We call this kind of AI meaningful AI in contrast to AI that rests exclusively on the use of statistically acquired pattern recognition and pattern completion. Because meaning and understanding are rather vague and overloaded notions there is no obvious research path to achieve it. The consortium has therefore set up a task early on in the project to explore how understanding is being discussed and treated in other human-centred research fields, more specifically in social brain science, social psychology, linguistics, semiotics, economics, social history and medicine. Our explorations have yielded a wealth of insights: about understanding in general and the role of narratives in this process, about possible applications of meaningful AI in a diverse set of human-centred fields, and about the technology gaps that need to be plugged to achieve meaningful AI. This volume summarizes the outcome of these consultations. It has three main parts: I. A general introduction, II. A series of chapters reporting on what understanding means in various human-centered research fields other than AI, and III. A short conclusion identifying key research topics for meaning-based human-centric AI. Steels, Luc (ed.) work_joonfnadkrf2pab5rsubz2f7sq Sun, 19 Jun 2022 00:00:00 GMT China's Foreign Policies Today https://scholar.archive.org/work/dv34lrrbl5ghnoe5m6l47pog5u Since Xi Jinping took power in 2012, China's foreign policy has significantly shifted from a defensive to an assertive approach. 
For decades, Beijing worked to integrate into the liberal international order, presenting itself as a peacefully rising power. By contrast, however, under Xi's leadership, the country is attempting to create a global system that is more favourable to its own interests. The Report examines China's current foreign policy approach, and the drivers behind the country's shift away from tradition. What are the main features of China's foreign policy today? How are decisions being taken, and to what extent do interest groups continue to have a say in decision-making after the recent power centralisation? Axel Berkofsky, Giulia Sciorati work_dv34lrrbl5ghnoe5m6l47pog5u Fri, 17 Jun 2022 00:00:00 GMT The socially constructed and embodied meanings of effectiveness in the lives of physical education teachers: An ethnographic study https://scholar.archive.org/work/graro6wdgbb6bo5xnn3hikcy2q While teacher effectiveness research is well established, much of it centres around linking teaching to pupil achievement and identifying specific teacher characteristics. In physical education, little evidence has been generated through gaining an understanding of teachers' experiences and by listening to their voices. To establish how physical educators have come to conceptualise effectiveness, this inquiry moves away from simply documenting teachers' assertions, by focusing upon its construction through the contexts of their professional and personal lives. In understanding how effectiveness has been socially constructed, it was therefore necessary to use a flexible approach that considered teacher subjectivities. An ethnographic study was conducted in the physical education department at Northton High School* over a period of nine months. Methods were guided by Wolcott's (2008) thinking around experience, enquire and examine, and specific data were generated using participant observation, a field-work diary, biographical semi-structured interviews, and by scrutinising school documentation. The resulting data was subject to thematic analysis, while a series of life-history narratives were used to demonstrate an understanding of how effectiveness came to be conceptualised by the different teachers in the department. Findings highlighted how differing sets of beliefs about the nature of physical education operated amidst several whole school discourses that focused specifically upon the teachers meeting Ofsted criteria for 'Outstanding' teaching, and sustaining high levels of examination success. This led to contestation, co-operation and conflict; and micropolitically, power was used to consolidate and strengthen the whole school discourses, leading to performative and normalised teaching behaviours. To help develop teaching to consistently 'Outstanding' levels, a CPD initiative based upon Ofsted criteria was introduced and used as an instrument to fabricate the Senior Leaders' vision for effectiveness. This no [...] Alan Thomson work_graro6wdgbb6bo5xnn3hikcy2q Thu, 16 Jun 2022 00:00:00 GMT Radiomics and artificial intelligence in prostate cancer: new tools for molecular hybrid imaging and theragnostics https://scholar.archive.org/work/snbe2dneivau7nprivwlyq35vm AbstractIn prostate cancer (PCa), the use of new radiopharmaceuticals has improved the accuracy of diagnosis and staging, refined surveillance strategies, and introduced specific and personalized radioreceptor therapies. 
Nuclear medicine, therefore, holds great promise for improving the quality of life of PCa patients, through managing and processing a vast amount of molecular imaging data and beyond, using a multi-omics approach and improving patients' risk-stratification for tailored medicine. Artificial intelligence (AI) and radiomics may allow clinicians to improve the overall efficiency and accuracy of using these "big data" in both the diagnostic and theragnostic field: from technical aspects (such as semi-automatization of tumor segmentation, image reconstruction, and interpretation) to clinical outcomes, improving a deeper understanding of the molecular environment of PCa, refining personalized treatment strategies, and increasing the ability to predict the outcome. This systematic review aims to describe the current literature on AI and radiomics applied to molecular imaging of prostate cancer. Virginia Liberini, Riccardo Laudicella, Michele Balma, Daniele G. Nicolotti, Ambra Buschiazzo, Serena Grimaldi, Leda Lorenzon, Andrea Bianchi, Simona Peano, Tommaso Vincenzo Bartolotta, Mohsen Farsad, Sergio Baldari, Irene A. Burger, Martin W. Huellner, Alberto Papaleo, Désirée Deandreis work_snbe2dneivau7nprivwlyq35vm Wed, 15 Jun 2022 00:00:00 GMT Combined inorganic and isotopic signatures for the geographical discrimination of olive oil https://scholar.archive.org/work/hasumypuf5bxbc3gxzhiqk2pd4 The globalization of the food industry has raised consumer interest in the geographical origin and the quality of food products. The global increase in food production and consumption, however, has led to fraudulent practices spreading. It threatens both the health of consumers and the economic balance of the food industry, which suffers huge financial loss every year. Olive oil is one of the most adulterated food products. As a result, a large array of analytical strategies was proposed for the geographical authentication of olive oil. The most reliable approaches that have demonstrated promising results for the geographical traceability of food products were based on the multi-elemental and isotopic fingerprinting. Nevertheless, trace elements, initially found at low to critically low concentrations in olive oil, are dissolved in a complex lipid matrix and thus the samples introduction in plasma-based instruments and the precise measurements of chemical components are challenging. This study presents a reliable analytical approach based on a three dimensional geographic information: (1) the mineral composition of the soil through the analysis of trace elements; (2) the geological background through the analysis of Sr isotopic composition; and (3) the pedo-climatic context through the determination of stable isotopes of carbon in olive oils. First, the trace elements were quantified in olive oils from Tunisia, Spain and France with high precision and accuracy by quadrupole ICP-MS following an optimized analytical procedure. The elemental concentrations combined with chemometrics allowed to classify olive oils according to their geographical provenance. Subsequently, an innovative method was developed and successfully applied for the quantitative extraction of Sr from olive oil matrix and accurate measurement of 87Sr/86Sr isotopic ratio by MC-ICP-MS. The conservation of 87Sr/86Sr isotopic ratios during the transfer of Sr from the soil to the plant and during olive oil extraction was demonstrated. The results were [...] Emna Nasr, Olivier F.X. 
Donard, Houyem Abderrazak work_hasumypuf5bxbc3gxzhiqk2pd4 Wed, 15 Jun 2022 00:00:00 GMT Evaluating science outreach: case study in the Wohl Reach Out Lab https://scholar.archive.org/work/hpz6flr46fejpjfpcblrn7v2ue The science sector has inequitable participation from certain socio-demographic groups; women, black and working-class individuals are under-represented. These inequalities are thought to emerge in early life and formal education may exacerbate them. This study evaluates the work of an initiative which aims to redress this inequality: the Wohl Reach Out Lab (WROL). The WROL is a specialist outreach centre at Imperial College London. It was founded to encourage under-represented students to continue participating in science by providing hands- on science experiences. The WROL hosts several outreach programmes, including a partnership with a local school. This study focusses on the school-WROL partnership programme. Using a mixed-methods approach, three questions are answered: 1. What are the desired outcomes of the WROL? 2. Does visiting the WROL increase students' levels of science capital? 3. What are the features and pedagogical practices of the WROL? Data were collected using a combination of methods. Interviews were conducted with visiting teachers and the session leaders to determine what they hoped the WROL would achieve. Questionnaires were used to measure students' levels of science capital both before and after visiting the WROL to assess any resulting changes. Group interviews with students were also used to supplement the questionnaire data. Finally, WROL sessions were observed to reveal common pedagogical practices. This study found that teachers value the WROL as an informal educational setting, because it provides students with non-school experiences of science. Observation showed some practices within the WROL successfully delivered this, by providing non-standard equipment and university mentors. However, this thesis also describes a tension between formal and informal learning in the WROL, where sessions often replicated elements of 'school science', rather than providing students with a new experience. This thesis concludes with recommendations, building on the WROL's existing strengths to make [...] Roberts Zivtins, Robert Winston work_hpz6flr46fejpjfpcblrn7v2ue Wed, 15 Jun 2022 00:00:00 GMT Analysis of Cross-Combinations of Feature Selection and Machine-Learning Classification Methods Based on [18F]F-FDG PET/CT Radiomic Features for Metabolic Response Prediction of Metastatic Breast Cancer Lesions https://scholar.archive.org/work/lo57ol7yrjgjxfomvabmz5llfa This study aimed to identify optimal combinations between feature selection methods and machine-learning classifiers for predicting the metabolic response of individual metastatic breast cancer lesions, based on clinical variables and radiomic features extracted from pretreatment [18F]F-FDG PET/CT images. Methods: A total of 48 patients with confirmed metastatic breast cancer, who received different treatments, were included. All patients had an [18F]F-FDG PET/CT scan before and after the treatment. From 228 metastatic lesions identified, 127 were categorized as responders (complete or partial metabolic response) and 101 as non-responders (stable or progressive metabolic response), by using the percentage changes in SULpeak (peak standardized uptake values normalized for body lean body mass). 
The lesion pool was divided into training (n = 182) and testing cohorts (n = 46); for each lesion, 101 image features from both PET and CT were extracted (202 features per lesion). These features, along with clinical and pathological information, allowed the prediction model's construction by using seven popular feature selection methods in cross-combination with another seven machine-learning (ML) classifiers. The performance of the different models was investigated with the receiver-operating characteristic curve (ROC) analysis, using the area under the curve (AUC) and accuracy (ACC) metrics. Results: The combinations, least absolute shrinkage and selection operator (Lasso) + support vector machines (SVM), or random forest (RF) had the highest AUC in the cross-validation, with 0.93 ± 0.06 and 0.92 ± 0.03, respectively, whereas Lasso + neural network (NN) or SVM, and mutual information (MI) + RF, had the higher AUC and ACC in the validation cohort, with 0.90/0.72, 0.86/0.76, and 87/85, respectively. On average, the models with Lasso and models with SVM had the best mean performance for both AUC and ACC in both training and validation cohorts. Conclusions: Image features obtained from a pretreatment [18F]F-FDG PET/CT along with clinical vaiables could predict the metabolic response of metastatic breast cancer lesions, by their incorporation into predictive models, whose performance depends on the selected combination between feature selection and ML classifier methods. Ober Van Gómez, Joaquin L. Herraiz, José Manuel Udías, Alexander Haug, Laszlo Papp, Dania Cioni, Emanuele Neri work_lo57ol7yrjgjxfomvabmz5llfa Tue, 14 Jun 2022 00:00:00 GMT The Use of the Exploratory Sequential Approach in Mixed-Method Research: A Case of Contextual Top Leadership Interventions in Construction HS https://scholar.archive.org/work/a2nxcjxyifdkvakeqioz5csbzi Quality and rigour remain central to the methodological process in research. The use of qualitative and quantitative methods in a single study was justified here against using a single method; the empirical output from the literature review should direct the current worldview and, subsequently, the methodologies applied in research. It is critical to gather contextual behavioural data from subject matter experts—this helps establish context and confirm the hypotheses arising from the literature, which leads to the refinement of the theory's applicability for developing a conceptual model. This paper identified the top leaders in construction organisations as subject matter experts. Nine semi-structured interviews were conducted, representing the South African construction industry grading. The output of the refined hypothesis was followed by a survey that targeted n = 182 multi-level senior leaders to gather further perspectives and validate the conceptual model. The outcome resulting from the rigorous validation process adopted—the analysis process, which included Spearman rank correlation, ordinal logistic regression and multinomial generalised linear modelling—demonstrated that the lack of H&S commitment in top leadership persists, despite high awareness of the cruciality of H&S in their organisations. Contextual competence, exaggerated by the local setting, is one source of this deficiency. This paper provides guidelines for using the exploratory sequential approach in mixed-method research to effectively deal with contextual issues based on non-parametric modelling data in top leadership H&S interventions. 
Siphiwe Gogo, Innocent Musonda work_a2nxcjxyifdkvakeqioz5csbzi Tue, 14 Jun 2022 00:00:00 GMT Exploring how to improve access to psychological therapies on acute mental health wards from the perspectives of patients, families and mental health staff: qualitative study https://scholar.archive.org/work/oajcfq43wbfwfjbqftbyx3dpia Psychological therapy is core component of mental healthcare. However, many people with severe mental illnesses do not receive therapy, particularly in acute mental health settings. This study identifies barriers to delivering and accessing psychological therapies in acute mental health settings, and is the first to recommend how services can increase access from the perspectives of different stakeholders (staff, patients and carers). Sixty participants with experiences of acute mental health wards (26 staff, 22 patients and 12 carers) were interviewed about barriers to accessing therapy in in-patient settings and how therapies should be delivered to maximise access. Four themes were identified: (a) 'Models of care', including the function of in-patient wards, beliefs about the causes of mental health problems and the importance of strong leadership to support psychosocial interventions; (b) 'Integrated care', including the importance of psychologists being ward-based, as well as having strong links with community teams; (c) 'Acute levels of distress', including factors that aggravate or ameliorate the impact of this on engagement in therapy; and (d) 'Enhancing staff capability and motivation', which is influenced by contextual issues. It is possible to improve access to therapy through strong leadership (that is supportive of talking treatments), flexible delivery of therapy (that considers short admissions) and a whole-systems approach that promotes ward staff understanding of the psychosocial causes of mental illness and staff well-being. It is essential to ensure continuity between in-patient and community therapy services, and for wards to have physical space to carry out therapy. Katherine Berry, Jessica Raphael, Gillian Haddock, Sandra Bucci, Owen Price, Karina Lovell, Richard J Drake, Jade Clayton, Georgia Penn, Dawn Edge work_oajcfq43wbfwfjbqftbyx3dpia Tue, 14 Jun 2022 00:00:00 GMT Hydrological Resources in Transboundary Basins Between Mexico and The United States: El Paso del Norte and The Binational Water Governance https://scholar.archive.org/work/2v2kuf3i3zhnbf6rt3uala7wxi Hace varias generaciones, la región del Paso del Norte era eso: el paso hacia el norte de lo que entonces era aún territorio mexicano. "El Paso" era una referencia a un cambio geográfico, social e, incluso, a una perspectiva diferente del mundo y del futuro. La migración hacia el norte siempre ha formado parte de la conciencia social y del sentido antropológico de los mexicanos, primero bajo dominio español en su estrategia de evangelizar la región norte del territorio y después como país independiente como fenómeno migratorio hacia la búsqueda de mejores oportunidades. 
Hacia finales del siglo pasado, el crecimiento poblacional, el desarrollo tecnológico, la promoción de la migración como política pública, la apertura del comercio exterior y el desarrollo exponencial del sector maquilador provocaron un crecimiento desproporcional y una redefinición de la frontera a como la conocemos el día de hoy: una región rica y poderosa, con un crecimiento descontrolado, desconectada del resto de sus países y enfrentando los niveles de vulnerabilidad ambiental (encabezada por la hídrica, por supuesto) más altos del continente americano. La cuenca del río Bravo es considerada la cuenca con mayor estrés hídrico en el mundo. La lógica fronteriza adquiere variantes únicas que incrementan los problemas de ambas naciones a una escala binacional. La región es reconocida como la quinta economía mundial, con un cruce de un millón de personas y 300 000 automóviles diarios. Al mismo tiempo, las comunidades fronterizas arrojan los más altos índices de inseguridad social en ambos países, con niveles de educación y de calidad de vida por debajo de las medias nacionales. Paralelamente, el nivel de interacción y de integración económica y social es tal que la región fronteriza se entiende más en la lógica fronteriza que en sus respectivos contextos domésticos nacionales. La centralización del lado mexicano, como sistema político operativo, y la descentralización y autonomía del lado de Estados Unidos, ha fomentado la inercia de la integr [...] Alfredo Granados Olivas, Hugo Luis Rojas Villalobos, Luis Carlos Alatorre Cejudo work_2v2kuf3i3zhnbf6rt3uala7wxi Mon, 13 Jun 2022 00:00:00 GMT A Review: Machine Learning for Combinatorial Optimization Problems in Energy Areas https://scholar.archive.org/work/eoc35bzoxrdutfzf6j2vrnqzpa Combinatorial optimization problems (COPs) are a class of NP-hard problems with great practical significance. Traditional approaches for COPs suffer from high computational time and reliance on expert knowledge, and machine learning (ML) methods, as powerful tools have been used to overcome these problems. In this review, the COPs in energy areas with a series of modern ML approaches, i.e., the interdisciplinary areas of COPs, ML and energy areas, are mainly investigated. Recent works on solving COPs using ML are sorted out firstly by methods which include supervised learning (SL), deep learning (DL), reinforcement learning (RL) and recently proposed game theoretic methods, and then problems where the timeline of the improvements for some fundamental COPs is the layout. Practical applications of ML methods in the energy areas, including the petroleum supply chain, steel-making, electric power system and wind power, are summarized for the first time, and challenges in this field are analyzed. Xinyi Yang, Ziyi Wang, Hengxi Zhang, Nan Ma, Ning Yang, Hualin Liu, Haifeng Zhang, Lei Yang work_eoc35bzoxrdutfzf6j2vrnqzpa Mon, 13 Jun 2022 00:00:00 GMT Development and application of a transparent μECoG array and optrode microdrive for combined electrophysiology and optophysiology https://scholar.archive.org/work/atwllegonvbz7hlm6rk5hurftm Conventional extracellular in vivo electrophysiology has advanced our understanding of brain function. Thereby, improvements in electrode design and materials have led to new recoding possibilities. Previously, recordings of exclusively isolated single neurons had led to the neuron doctrine in which the neuron is the structural and functional unit. 
The development of multielectrode devices allowed to record single-neuron ensembles and distributed mesoscopic population activity. Thus, the provided evidence shifted the perspective towards neural networks as functional units creating behavior and cognition. Nevertheless, our understanding of the nervous system is still limited, and the measuring tools' capabilities restrict new insights. Improving neural recording devices but also making them more easily available would facilitate new knowledge. The general limitations of chronic in vivo electrophysiology are device stability and signal stability. The mechanical mismatch of electrode and brain tissue results in an acute and chronic inflammatory response. Neural signal quality degrades over time through electrode movements and the encapsulation by a glial sheath. This suggests that current neural implants require additional innovations that improve chronic recording stability. Advancements in optogenetic tools to manipulate neural activity require neural devices that incorporate simultaneous electrophysiological recordings and optophysiological modulation. In this dissertation, I focus on improving mesoscopic electrocorticography (ECoG) recordings and microscopic single-unit recordings. The main aim is to improve recording devices that are still widely used: platinum μECoG arrays and tetrode microdrives. I aim to improve ECoG signal stability by increasing the structural biocompatibility of the device. Single-unit stability should be achieved by both stable tetrode positions over sessions and precisely advanced tetrodes. Both devices should combine electrophysiological recordings and simultaneous optical modulation o [...] Marcel Brosch, Universitäts- Und Landesbibliothek Sachsen-Anhalt, Martin-Luther Universität, Frank Ohl work_atwllegonvbz7hlm6rk5hurftm Mon, 13 Jun 2022 00:00:00 GMT How to Develop Trust in the Distrusted Banking System of the Kurdistan Region in Iraq https://scholar.archive.org/work/a5wdo365z5dkloxeyjt4g6l3ku This study merged qualitative and quantitative approaches in sequential and equal weight. The primary data collection for this study started with the collection of qualitative data through conducting semi-structured interviews with bank managers and government officials at the ministry of finance in order to identify the main obstacles that they believed faced the current banking system in the KR. Analysis of these results led to the design of a questionnaire survey involving 520 current and potential bank customers to identify the main barriers which cause a continuing lack of trust amongst KR people in the banking system. The study confirmed that the banking system in the KR has either never gained or completely lost the trust of the KR. It has identified several reasons for this including inertia amongst national banks, limited literacy in certain sections of the population and a risk-adverse culture. One of the other major factors preventing the use of banks within the KR is the lack of trust in banks. However, the study has also identified sections of the population who do have a positive interaction with banks, these are mainly younger adults who have been exposed to western banking systems including e-banking. 
Mohammed Mohammed Ameen work_a5wdo365z5dkloxeyjt4g6l3ku Fri, 10 Jun 2022 00:00:00 GMT Science, Education and Innovations in the Context of Modern Problems https://scholar.archive.org/work/l4lmgxr2hzannkk5nz3uaqjcem Conference Proceedings (International) ISSN 2790-0169 E-ISSN 2790-0177 OCLC Number 1322801874 Short Title SEI Abbreviated key-title Sci. educ. innov. context mod. probl. ISBN 978-1451-11-764-6 Editor Nasir Mammadov, Mammad Chairman of Editorial Board Dr. Uma Shankar Yadav (India) Publisher International Meetings and Conferences Research Association E-mail (Submission & Contact) editor@imcra-az.org Topics Science and Social Sciences (no Art and Humanities) Frequency Bi-monthly (6 in a year, from 2021) Number of Regular Issues 6 Issues Number or Special Issues No special issue. Number or articles in Regular Issue 10-40 articles Education and Innovations in the Context of Modern Problems Science work_l4lmgxr2hzannkk5nz3uaqjcem Fri, 10 Jun 2022 00:00:00 GMT Investigation of chemical reactivity by machine-learning techniques https://scholar.archive.org/work/nihx2cehlba6pffgth4oaackg4 The concepts of potential energy surface (PES) and molecular geometry, defined within the Born-Oppenheimer (BO) approximation, are essential for computational chemistry. The PES is a multi-dimensional function of atomic coordinates and can be obtained by the solution of the electronic Schrödinger equation (SE). While estimating individual points on the PES by first-principles methods, such as density functional theory (DFT), for even moderately sized molecular and material systems is computationally expensive, approximate methods allow for simulations of large systems over long time scales. Machine-learned interatomic potentials (MLIPs) have been gaining in importance since, once trained, they hold the promise to be as accurate as the reference ab-initio electronic structure method while having an efficiency on par with empirical force fields. The derivation of a molecular representation is crucial for designing sample-efficient and accurate MLIPs, irrespective of the employed machine learning (ML) algorithm. Here, a novel molecular fingerprint referred to as Gaussian moment (GM) representation is developed. The GM representation is atom-centered, includes both structural and alchemical information of the local atomic neighborhood, and accounts for all essential invariances (translations, rotations, and permutations of like atoms). It is defined by pairwise atomic distance vectors and its runtime and memory complexity scale linearly with the number of atoms in the local neighborhood. Combined with atomistic neural networks (NNs), GM results in the Gaussian moment neural network (GM-NN) approach, which enables the generation of MLIPs with accuracy and efficiency similar to or better than other established ML models. The GM-NN source code is available free of charge from gitlab.com/zaverkin_v/gmnn. Another intriguing aspect of MLIPs is the generation of highly informative training data sets and consequently, uniformly accurate machine-learned PESs, by applying active learning (AL) strategies. The fundamental quantity of AL is the query strategy -an algorithmic criterion for deciding whether a given configuration should be included in the training set or not. This criterion is defined here by employing the uncertainty estimate derived in the optimal experimental design (OED) framework. The proposed AL scheme allows for a more efficient estimation of the uncertainty of atomistic NNs. 
Thus, it allows for a more efficient generation of transferable and uniformly accurate potentials by selecting the most informative or extrapolative configurations. Aside from the conventional MLIPs, which typically aim to predict scalar energies, a methodology for learning the relationship between a structure and the respective tensorial property by atom-centered NNs has been proposed. To learn tensorial properties, specifically, the zero-field splitting (ZFS) tensors, the output of an NN is re-weighted by a tensor that satisfies the symmetry of the former. It has been shown that the proposed methodology can achieve high accuracy and has excellent generalization capability for out-of-sample configurations. Thus, it has been used to study the structural dependence of the ZFS tensor. Moreover, it has been demonstrated that complex processes such as spin-phonon relaxation can be investigated by employing machine-learned surrogate models. Finally, the developed ML approaches have been used to study various surface processes in interstellar environments. Specifically, the adsorption and desorption dynamics of N and H2 on different surfaces have been investigated, providing binding energies, sticking coefficients, and desorption temperatures. The diffusion of a nitrogen atom on the surface of amorphous solid water (ASW) at low temperatures has drawn particular attention. The study requires long time scales, short time steps in direct molecular dynamics (MD), and a very accurate PES. It has been achieved by combining MLIP-driven MD simulations, free energy sampling using well-tempered metadynamics, and kinetic Monte Carlo (kMC) simulations based on the minima and saddle points on the free-energy surface (FES). The study revealed that N atoms, as a paradigmatic case for light and weakly bound adsorbates, can hardly diffuse on bare ASW at 10 K. Surface coverage may change that considerably, increasing the effective diffusion coefficient over 9-12 orders of magnitude.
Peer-reviewed publications This cumulative dissertation summarizes results that have been published in [1]: V. Zaverkin and J. Kästner: Gaussian Moments as Physically Inspired Molecular Descriptors for Accurate and Scalable Machine Learning Potentials. Viktor Zaverkin, Universität Stuttgart work_nihx2cehlba6pffgth4oaackg4 Fri, 10 Jun 2022 00:00:00 GMT Science, Education and Innovations in the Context of Modern Problems (2790-0169) https://scholar.archive.org/work/4xyg3zpegjfa3jhovesb5fpzki Conference Proceedings (International) ISSN 2790-0169 E-ISSN 2790-0177 OCLC Number 1322801874 Short Title SEI Abbreviated key-title Sci. educ. innov. context mod. probl. ISBN 978-1451-11-764-6 Editor Nasir Mammadov, Mammad Chairman of Editorial Board Dr. Uma Shankar Yadav (India) Publisher International Meetings and Conferences Research Association E-mail (Submission & Contact) editor@imcra-az.org Topics Science and Social Sciences (no Art and Humanities) Frequency Bi-monthly (6 in a year, from 2021) Number of Regular Issues 6 Issues Number of Special Issues No special issue. Number of articles in Regular Issue 10-40 articles Education and Innovations in the Context of Modern Problems Science work_4xyg3zpegjfa3jhovesb5fpzki Fri, 10 Jun 2022 00:00:00 GMT A robotic vehicle system for operation with gas pipelines https://scholar.archive.org/work/x2eohnvdxnfpjiys3kq4qsrska The research presented in this dissertation considers the design, implementation and validation of a robotic system applied in constrained gas pipe environments. The dissertation describes a robotic vehicle system with optimised mechanical parts and dimensions, combined with an intelligent control strategy and a cableless communication feature for performance enhancements. The implemented enhancements increase the ability of the robotic system to perform self-navigation and movement in the pipes. The research has focused on solving the navigation problem in pipe configurations. This goal is addressed by using reactive sensors with an advanced fuzzy control technique. In pipe environments, an 'a priori' plan cannot be generated to perform reasonably in the face of uncertainty, nor can all contingencies that may arise be anticipated. Instead, navigation must be carried out based on current information and the system's own states at all times, proceeding in a sensor-driven manner, rather than attempting to impose the execution of a planned method. In addition, the dynamics of the vehicle itself often play an important role in determining which actions may be achieved and which actions are to be avoided. The research identifies a unique fuzzy inference technique that combines information from several different sources to be used to perform the navigation task. A careful evaluation of current state-of-the-art systems revealed the inadequacy of using a cable for data link and power supply purposes. Cable is one of the restrictions that limits the operation of an in-pipe operation device, due to handling problems. A selection of cable-free methods has been examined in this dissertation to offer solutions in the aspect of communication.
An optical method using a laser is determined to be an appropriate approach, and further experimental studies verify its feasibility. The combination of the mentioned abilities has enabled the realisation of a novel robotic system for pipeline operation. Its aim is to increase pipe operati [...] Jiun Keat Ong work_x2eohnvdxnfpjiys3kq4qsrska Thu, 09 Jun 2022 00:00:00 GMT User activity detection using encrypted in-app data https://scholar.archive.org/work/dypczsxusvgwdnfdpdbrkj755m The advancements in the Internet technology and computer networks have led to an increased importance of network traffic classification. Significant amount of attention to network traffic classification has been given from both industry and academia. Network traffic classification has many possibilities to solve personal, business, Internet service provider and government network problems such as anomaly detection, quality of service control, application performance, capacity planning, traffic engineering, trend analysis, interception and intrusion detection. There are different methods to perform network traffic classification. However, it is not always reliable to apply traditional port-based and payload-based methods because many current applications have started to use dynamic port allocation and payload encryption. Recent research initiatives have put significant attention on applying machine learning techniques. To enhance and maintain privacy and security encryption technologies are applied in different levels of the communication process. However, this research shows the possibility to perform network traffic classification even in encrypted domain and infer information of mobile users. Side-channel information such as frame length, inter arrival time, direction (outgoing / incoming) of packets which may leak from encrypted traffic flows are used to perform the traffic classification. The research presented in this thesis focuses on identifying user actions performed on mobile applications. A user's online activities performed on mobile apps are sensitive and contain private information. Rather than identifying coarse-grained activities such as browsing, downloading, uploading etc., identifying fine-grained user activities such as posting a photo on Facebook, posting a video on Facebook, posting a long text on Facebook, posting a short text on Facebook etc., provides more valuable information for an analyst to recognise the users where confidential information is retained. To achieve robustness of the cla [...] Omattage Madushi H. Pathmaperuma work_dypczsxusvgwdnfdpdbrkj755m Thu, 09 Jun 2022 00:00:00 GMT Covid19/IT the digital side of Covid19: A picture from Italy with clustering and taxonomy https://scholar.archive.org/work/vmm2wd7mvfbopcdqijajdg4okq The Covid19 pandemic has significantly impacted on our lives, triggering a strong reaction resulting in vaccines, more effective diagnoses and therapies, policies to contain the pandemic outbreak, to name but a few. A significant contribution to their success comes from the computer science and information technology communities, both in support to other disciplines and as the primary driver of solutions for, e.g., diagnostics, social distancing, and contact tracing. In this work, we surveyed the Italian computer science and engineering community initiatives against the Covid19 pandemic. 
The 128 responses thus collected document the response of such a community during the first pandemic wave in Italy (February-May 2020), through several initiatives carried out by both single researchers and research groups able to promptly react to Covid19, even remotely. The data obtained by the survey are here reported, discussed and further investigated by Natural Language Processing techniques, to generate semantic clusters based on embedding representations of the surveyed activity descriptions. The resulting clusters have been then used to extend an existing Covid19 taxonomy with the classification of related research activities in computer science and information technology areas, summarizing this work contribution through a reproducible survey-to-taxonomy methodology. Vincenzo Bonnici, Giovanni Cicceri, Salvatore Distefano, Letterio Galletta, Marco Polignano, Carlo Scaffidi, Chi-Hua Chen work_vmm2wd7mvfbopcdqijajdg4okq Thu, 09 Jun 2022 00:00:00 GMT |
fb804b21ce00f160 | Planned seminars
Room 6.2.33, Faculty of Sciences of the Universidade de Lisboa
Alberto Saldaña
Alberto Saldaña, Universidad Nacional Autónoma de México
Fractional derivatives are commonly used to model a variety of phenomena, but… what does it mean to have a logarithmic derivative? And what would it be used for?
In this talk we focus on the logarithmic Laplacian, a pseudodifferential operator that appears as a first order expansion of the fractional Laplacian of order 2s as s goes to zero. This operator can also be represented as an integrodifferential operator with a zero order kernel.
We will discuss how this operator can be used to study the behavior of linear and nonlinear fractional problems in the small order limit. This analysis will also reveal a deep and interesting mathematical structure behind the set of solutions of Dirichlet logarithmic problems.
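Schematically, and in the normalization commonly used in the literature (an assumption here, not a statement taken from the abstract), the expansion behind this description reads
$(-\Delta)^s u = u + s\, L_{\Delta} u + o(s)$ as $s \to 0^+$,
where $L_{\Delta}$ is the logarithmic Laplacian, a pseudodifferential operator with Fourier symbol $2\ln|\xi|$, i.e. $\widehat{L_{\Delta} u}(\xi) = 2\ln|\xi|\,\hat{u}(\xi)$.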
Room P3.10, Mathematics Building, Instituto Superior Técnico
Rainer Mandel
Rainer Mandel, Karlsruher Institut für Technologie
We present new existence results for nontrivial solutions of some biharmonic nonlinear Schrödinger equation in $\mathbb{R}^N$ that are based on a constrained minimization approach. Here the main difficulty comes from the fact that spherical rearrangements need not decrease the energy, so that more sophisticated arguments are needed to overcome the lack of compactness. A new and intrinsically motivated tool is given by a class of Gagliardo-Nirenberg inequalities where, essentially, the Laplacian in the classical Gagliardo-Nirenberg inequality is replaced by the Helmholtz operator. Having explained the relevance of such inequalities for our analysis, we comment on their proofs and related questions from Harmonic Analysis. Finally, we shall mention a symmetry-breaking phenomenon related to our results that was recently observed by Lenzmann and Weth. Accordingly, the talk covers topics from the Calculus of Variations as well as Harmonic Analysis or, more specifically, Fourier Restriction Theory.
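To fix ideas, a schematic comparison (exponents, constants and admissible parameter ranges omitted; this is an illustration, not the precise statement from the talk):
$\|u\|_{L^p(\mathbb{R}^N)} \leq C\, \|u\|_{L^2}^{1-\theta}\, \|\Delta u\|_{L^2}^{\theta}$ (classical second-order Gagliardo-Nirenberg inequality),
$\|u\|_{L^p(\mathbb{R}^N)} \leq C\, \|u\|_{L^2}^{1-\theta}\, \|(\Delta + 1)u\|_{L^2}^{\theta}$ (the Helmholtz operator $\Delta + 1$ in place of the Laplacian).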
Room 6.2.33, Faculty of Sciences of the Universidade de Lisboa
Hermano Frid
Hermano Frid, Instituto de Matemática Pura e Aplicada
In this talk we introduce models of short wave-long wave interactions in the relativistic setting. In this context the nonlinear Schrödinger equation is no longer adequate for describing short waves and is replaced by a nonlinear Dirac equation. Two specific examples are considered: the case where the long waves are governed by a scalar conservation law; and the case where the long waves are governed by the augmented Born-Infeld equations in electromagnetism. This is a joint work with JOÃO PAULO DIAS. |
558db06a8d4c57b5 | Increasing the accuracy of quantum-mechanical simulations for strongly correlated functional materials by designing effective Hamiltonians
28211 / Many-particle physics, Nanoporous materials
Supervisor(s): V. Van Speybroeck, F. Verstraete / Advisor(s): A. Hoffman, L. Vanderstraeten, T. Braeckevelt
Background and problem
In principle, the Schrödinger equation allows us to fully predict a system's multidimensional wavefunction and behavior. However, solving this equation exactly is not trivial and even impossible for realistic materials containing more than a handful of degrees of freedom. Density functional theory (DFT), in which the 3D electron density is the central quantity instead of the multidimensional wavefunction, is therefore the most popular quantum mechanical approach in solid state science nowadays. However, the accuracy of a DFT calculation strongly depends on which of the many available exchange-correlation functionals is chosen.1 Furthermore, the currently available functionals all fail in the case of systems exhibiting strong electron correlation. This typically occurs when electrons become localized, density fluctuations vary widely over the system, and bands crossing the Fermi energy have a narrow bandwidth. This is the case, for instance, in transition-metal compounds, rare-earth compounds, and organic conductors. Strong electron correlation should therefore be taken into account by using higher-level simulation techniques, such as the random phase approximation (RPA) or wavefunction-based tensor networks. The latter methods, however, come at a higher computational cost; a viable strategy therefore consists in constructing effective Hamiltonians, where higher-accuracy calculations are conducted in a lower-dimensional space made of the specific electronic bands that are responsible for the strong electron correlations. This hybrid approach relies on the computational efficiency of DFT and combines it with more accurate methods for those states that are responsible for strong electron correlations. A key question is the construction of the effective Hamiltonians for the problematic electron bands.2 The strategy that will be adopted here is to start from the DFT calculations, isolate the bands that are close to the Fermi level and construct maximally localized Wannier functions, which are a set of real-space localized orbitals forming the real-space counterpart of Bloch orbitals.2,3 They have the advantage that they span precisely the required energy range and that they naturally incorporate hybridization and bonding appropriate to their local environment. Using Wannier functions, the effective systems, with substantially fewer degrees of freedom, can be simulated at a higher level of theory, whereas the other bands are still treated with DFT.
In this thesis, it is our aim to develop a methodology to construct effective Hamiltonians that are able to treat the electron correlations with higher accuracy. To this end we will design a hybrid RPA/DFT or tensor/DFT scheme to increase the accuracy of quantum-mechanical simulations for technologically important materials which are prone to strong electron correlation. First, the electron bands of the systems will be calculated via DFT and the relevant bands around the Fermi level that need to be considered in the effective Hamiltonian will be selected. These bands should be isolated and transformed to maximally localized Wannier functions. These Wannier orbitals then define the low-energy subspace of interest, from which in the second stage of this thesis the hopping amplitudes and interaction terms will be computed at the RPA level of theory, as sketched below. Alternatively, the effective Hamiltonian in the reduced subspace may be treated using other tensor network methods that are being developed within the Quantum Group of Prof. Verstraete. Such hybrid DFT/RPA or DFT/tensor-based models show a lot of potential to reach unprecedented accuracy for systems showing strong electron correlations.
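As a toy sketch of this last step, consider assembling and diagonalizing an effective Hamiltonian from Wannier hopping amplitudes. All numbers and names below (the hoppings dictionary, h_of_k) are hypothetical illustrations, not values for MIL-53(Fe) or CsPbI3; in a real workflow the hopping matrices would typically be read from Wannier90 output (e.g. a seedname_hr.dat file) rather than hard-coded:

import numpy as np

# Hypothetical hopping matrices t(R) between two Wannier orbitals on a 1D chain,
# indexed by the lattice vector R (toy values for illustration only).
hoppings = {
    0: np.array([[0.0, 0.5],
                 [0.5, 0.0]]),       # on-site block; must be Hermitian
    1: np.array([[-1.0, 0.2],
                 [0.0, -0.8]]),      # hopping to the neighbouring cell
}
hoppings[-1] = hoppings[1].conj().T  # Hermiticity fixes the hopping to -R

def h_of_k(k):
    """Bloch Hamiltonian H(k) = sum_R exp(i k R) t(R) in the Wannier basis."""
    return sum(np.exp(1j * k * R) * t for R, t in hoppings.items())

# Effective band energies on a k-grid; eigvalsh applies since H(k) is Hermitian.
ks = np.linspace(-np.pi, np.pi, 101)
bands = np.array([np.linalg.eigvalsh(h_of_k(k)) for k in ks])
print(bands.shape)  # (101, 2): two effective bands from two Wannier orbitals

Because the Wannier subspace is small, diagonalizing (or adding interaction terms to) H(k) is cheap, which is what makes it feasible to treat exactly these bands at a higher level of theory such as RPA or tensor networks.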
Figure 1: Atomic structure of MIL-53(Fe) and CsPbI3, the two strongly correlated materials investigated in this thesis.
Two specific systems featuring strong electron correlation will be investigated in this thesis, both shown in the figure. In the first instance, the student will use the hybrid RPA/DFT scheme to determine accurate ground-state energies of the metastable phases of the flexible metal-organic framework MIL-53(Fe).4 This material can undergo a phase transition between a closed and an open pore phase. Therefore, it can potentially be used in sensing, gas storage, and separation applications. To correctly predict under which conditions it is able to breathe, the energies of the different phases should be determined with an error that is as low as possible. DFT fails in this regard, because the presence of various possible spin configurations has a large impact on the electronic properties.
A second system of interest in this thesis proposal is the metal halide perovskite CsPbI3.5 This material shows strong light absorption, long charge-carrier lifetimes, and high carrier mobility, which makes it very interesting for various optical applications such as solar panels and medical scanners. DFT typically fails to determine accurate band gaps of perovskites. As a result, most simulations on this system have been performed at a very high level of theory with high computational cost. By using the hybrid RPA/DFT scheme, the computational load to determine accurate electronic band structures could be severely reduced, allowing us to consider larger perovskite systems and to account for the experimental doping of this system.
This thesis relies on a collaboration between the Center for Molecular Modeling (Van Speybroeck), which has ample expertise in the properties of technologically important materials and their simulation with density functional theory methods, and the Quantum Group (Verstraete), which has ample expertise in tensor networks, the theory of quantum entanglement and its application to condensed matter systems. The prospective student for this topic should have an interest in fundamental aspects of quantum simulations and their application to important materials for engineering applications.
1. Study programme
Master of Science in Engineering Physics [EMPHYS], Master of Science in Physics and Astronomy [CMFYST]
random phase approximation, strongly correlated systems, quantum mechanical calculations, density functional theory, Wannier orbitals
1N. Mardirossian, M. Head-Gordon, Mol. Phys. 115: 2315, 2017.
2N. Marzari, A.A. Mostofi, J.R. Yates, I. Souza, D. Vanderbilt, Rev. Mod. Phys. 84: 1419, 2012.
3G.H. Wannier, Phys. Rev. 117: 432, 1960.
4F. Millange, N. Guillou, R.I. Walton, J.-M. Grenèche, I. Margiolaki, G. Férey, Chem. Commun. 2008: 4732, 2008.
5J.A. Steele, H. Jin, I. Dovgaliuk, R.F. Berger, T. Braeckevelt, H. Yuan, C. Martin, E. Solano, K. Lejaeghere, S.M.J. Rogge, C. Notebaert, W. Vandezande, K.P.F. Janssen, B. Goderis, E. Debroye, Y.-K. Wang, Y. Dong, D. Ma, M. Saidaminov, H. Tan, Z. Lu, V. Dyadkin, D. Chernyshov, V. Van Speybroeck, E.H. Sargent, J. Hofkens, M.B.J. Roeffaers, Science 365: 679, 2019.
Veronique Van Speybroeck
Spectroscopy/Molecular energy levels
Molecular spectroscopy is the study of the interaction of electromagnetic (EM) radiation with matter. It is based on the analysis of EM radiation that is emitted, absorbed, or scattered by molecules, which can give information on:
• chemical analysis (finding a chemical fingerprint, so to speak)
• molecular structure (bond lengths, angles, strengths, energy levels, etc.)
Types of molecular energy
Energy can be stored either as potential energy or kinetic energy, in a variety of ways including:
• Translational energy: small amounts of energy stored as kinetic energy. This is unquantized (can take any value) and hence is not relevant to spectroscopy.
• Rotational energy: kinetic energy associated with the tumbling motion of molecules. This is quantized.
• Vibrational energy: the oscillatory motion of atoms or groups within a molecule (potential energy ↔ kinetic energy exchange). This is quantized.
• Electronic energy: energy stored as potential energy in excited electronic configurations. This is quantized.
This results in a series of molecular energy levels.
Spectroscopy is the measuring of the transitions between levels.
Typical values for energy level separations
Energies (and wavefunctions) for these different levels are obtained from quantum mechanics by solving the Schrödinger equation. Spectroscopy is used to interrogate these different energy levels.
Electromagnetic radiation
Electromagnetic wave
Electromagnetic (EM) radiation consists of photons (elementary particles) which behave as both particles and waves.
The image to the right shows the wave-like character associated with a single photon of EM radiation.
• In the x,z plane there is an oscillating electric field (E)
• In the y,z plane there is an oscillating magnetic field (B)
Both are in phase but perpendicular to each other.
Key equations
• c = speed of light (2.998×10⁸ m s⁻¹)
• λ = wavelength (m)
• ν = frequency (s⁻¹)
• ν̃ = wavenumber (m⁻¹)
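In standard notation, these quantities are related by (photon energy included for completeness):

```latex
c = \lambda \nu, \qquad
\tilde{\nu} = \frac{1}{\lambda} = \frac{\nu}{c}, \qquad
E_{\text{photon}} = h\nu = \frac{hc}{\lambda} = hc\,\tilde{\nu}
```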
Transitions between energy levels
• Absorption spectroscopy: a photon is absorbed ("lost") as the molecule is raised to a higher energy level.
• Emission spectroscopy: a photon is emitted ("created") as the molecule falls back to a lower energy level.
Electromagnetic spectrum
Relevant regions for this course:
• Radio → nuclear spin in magnetic field
• Microwave → rotation
• Infrared → vibration
• Ultraviolet → electronic
Common units in spectroscopy
Wavelength, λ
• S.I. unit = metres (m)
• Other units = micrometer (1 μm = 10⁻⁶ m); nanometer (1 nm = 10⁻⁹ m); Angstrom (1 Å = 10⁻¹⁰ m)
Frequency, ν
• S.I. unit = Hertz (Hz) or s⁻¹
• Other units = megahertz (1 MHz = 10⁶ Hz); gigahertz (1 GHz = 10⁹ Hz)
Energy, E
• S.I. unit = Joules (J)
• For molar energies, multiply by Avogadro's constant (NA) to obtain J mol⁻¹ or kJ mol⁻¹
Wavenumber, ν̃
• S.I. unit = m⁻¹
• Units of cm⁻¹ are most commonly used in spectroscopy
• Molecular spectra are typically recorded as line intensities as a function of frequency, wavelength or wavenumber.
• Remember the importance of using correct units and being able to convert between different ones (see the formulae below).
Unit Conversion: Example
The HCl molecule has a bond dissociation energy of 497 kJ mol⁻¹.
1. Calculate this energy as a wavenumber (units: cm⁻¹)
2. What is the maximum wavelength of light which can photodissociate HCl?
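A worked sketch of both parts; the constants are standard and the rounding is mine:

```python
# Part 1: convert a molar energy to a wavenumber; Part 2: the matching wavelength.
h, c, NA = 6.626e-34, 2.998e8, 6.022e23   # J s, m/s, mol^-1

E = 497e3 / NA                  # J per molecule (~8.25e-19 J)
nu_tilde = E / (h * c)          # wavenumber, m^-1
print(nu_tilde / 100)           # ~4.15e4 cm^-1

lam_max = 1 / nu_tilde          # longest photodissociating wavelength
print(lam_max * 1e9)            # ~241 nm (ultraviolet)
```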
Factors influencing intensity of spectral lines
1. Amount of sample
The intensity of lines on the spectrum will be affected by the amount of sample which light passes through. The intensity of this transmitted light depends on the sample concentration and path length.
• Beer-Lambert Law
2. Population of energy states
A system can undergo a transition from one level, i, to another level, f, but only if it is in the first level i to begin with.
• Boltzmann Distribution
3. Spectroscopic selection rules
A selection rule is a statement about which potential transitions are allowed and which are forbidden. Each spectroscopy has its own selection rules (see later lessons). Not all transitions are allowed even though energy conservation is obeyed.
1. Amount of sample
Absorbance and transmittance
Beer–Lambert Law:
Absorbance (A): A = log₁₀(I₀/I) = εcL
Transmittance (T): T = I/I₀, so that A = −log₁₀ T
• ε = molar absorption coefficient
units of ε: conc⁻¹ × length⁻¹ (usually mol⁻¹ dm³ cm⁻¹)
• I₀ = incident light intensity
• I = transmitted light intensity
• c = sample concentration (in mol dm⁻³)
• L = path length (in cm)
• εmax is the maximum absorption coefficient, and is an indication of the intensity of a transition.
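A minimal numerical illustration of the law; ε, the concentration and the path length below are made-up values chosen only to give a round absorbance:

```python
# Beer-Lambert law: A = epsilon*c*L = log10(I0/I); T = I/I0 = 10**(-A).
epsilon = 1.5e4   # molar absorption coefficient, mol^-1 dm^3 cm^-1
conc = 2.0e-5     # concentration, mol dm^-3
L = 1.0           # path length, cm

A = epsilon * conc * L
T = 10 ** (-A)
print(A, T)       # -> 0.3, ~0.50: about half of the light is transmitted
```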
2. Population of energy states
The continuous thermal agitation that molecules experience at any temperature (greater than zero Kelvin) ensures that they are distributed over all possible energy levels.
• Population of a state = the average number of molecules in a state at any given time.
The mathematical formulation of how to calculate the population of a state was provided by Ludwig Boltzmann in the late 19th century.
The Boltzmann distribution
The Boltzmann distribution defines the relative population of energy states (usually the ratio of excited states to ground state): Ni/N0 = exp(−ΔE/kBT), where ΔE is the energy separation between the two states.
• kB = Boltzmann constant (= R / NA) = 1.381×10⁻²³ J K⁻¹
• T = temperature (Kelvin)
Effect of temperature
The Boltzmann distribution is a continuous function.
There is always a higher population in a state of lower energy than in one of higher energy.
At lower temperatures, the lower energy states are more greatly populated. At higher temperatures, there are more higher energy states populated, but each is populated less.
Effect of energy separation
kBT × NA ≈ 2.5 kJ mol⁻¹ at 300 K. Typical energy-level separations compare with this thermal energy as follows (standard orders of magnitude):
• Rotational: ~0.01–1 kJ mol⁻¹ (comparable to or below kBT·NA, so many levels are populated)
• Vibrational: ~10–40 kJ mol⁻¹ (well above kBT·NA, so mostly the ground state is populated)
• Electronic: ~10²–10³ kJ mol⁻¹ (only the ground state is populated)
Degeneracy = when more than one state has the same energy. With degeneracies included, the population ratio becomes Nf/Ni = (gf/gi) exp(−ΔE/kBT).
• gi and gf = degeneracies of initial and final states
This is very important for rotational energy levels (see later). As a result, the population of an energy state is then a product of the Boltzmann distribution and the degeneracy.
• This means that the ground state may no longer be the most populated state.
Population of Energy Levels: Example
Assuming that the vibrational energy levels of HCl and I2 are equally spaced, with energy separations of 2990.94 and 216.51 cm-1 respectively, calculate for each case the ratio of the number of molecules in the first two vibrational states relative to the ground state at T = 300 K and 800 K.
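A short calculation of the requested ratios; it uses hc/kB ≈ 1.4388 cm K (the second radiation constant), and for equally spaced levels the second-state ratio is simply the square of the first:

```python
import math

# Boltzmann ratio n1/n0 = exp(-dE/(kB*T)) for a spacing dE given in cm^-1.
c2 = 1.4388  # hc/kB in cm K
for name, dE in [("HCl", 2990.94), ("I2", 216.51)]:
    for T in (300.0, 800.0):
        r = math.exp(-c2 * dE / T)
        print(f"{name} at {T:.0f} K: n1/n0 = {r:.3g}, n2/n0 = {r*r:.3g}")
# HCl: ~5.9e-7 (300 K), ~4.6e-3 (800 K); I2: ~0.35 (300 K), ~0.68 (800 K)
```

Note how I2, with its small vibrational spacing, has thermally accessible excited vibrational states even at room temperature, whereas HCl is almost entirely in its ground state.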
3. Spectroscopic selection rules
Each spectroscopy has its own selection rules, which will be covered later in the course. |
ace3ed9da116685e | Open access peer-reviewed chapter
Dissipative Solitons in Fibre Lasers
Written By
Vladimir L. Kalashnikov and Sergey V. Sergeyev
Submitted: April 28th, 2015 Reviewed: October 9th, 2015 Published: March 2nd, 2016
DOI: 10.5772/61713
From the Edited Volume
Fiber Laser
Edited by Mukul Chandra Paul
The interdisciplinary concept of the dissipative soliton is unfolded in connection with ultrafast fibre lasers. The different mode-locking techniques as well as experimental realizations of dissipative soliton fibre lasers are surveyed briefly with an emphasis on their energy scalability. Basic topics of the dissipative soliton theory are elucidated in connection with the concepts of energy scalability and stability. It is shown that the parametric space of the dissipative soliton has reduced dimensionality and a comparatively simple structure, which simplifies the analysis and optimization of ultrafast fibre lasers. The main destabilization scenarios are described, and the limits of energy scalability are connected with the impact of optical turbulence and stimulated Raman scattering. The fast and slow dynamics of vector dissipative solitons are exposed.
• Ultrafast fibre laser
• mode-locking
• dissipative soliton
• non-linear dynamics
• vector solitons
• optical turbulence
• stimulated Raman scattering
1. Introduction
Over the last decades, ultrafast fibre laser technologies have demonstrated remarkable progress. By definition [1–4], these technologies concern generation, manipulation and application of optical pulses from a fibre laser or a laser-amplifier system with (i) peak power P0 exceeding substantially the average laser power Pave and (ii) pulse widths τ much less than the laser round-trip period T. Such a definition can be re-interpreted in terms of laser mode-locking, which means a phase-locked interference of the M laser eigenmodes producing an equidistant train of ultrashort pulses. Then, τ ∝ 1/Mδω (M is the number of locked eigenmodes and δω is the inter-mode frequency interval defined by T ∝ 1/δω) and P0 ∝ M·Pave [5,6]. This means that a mode-locked laser generates a comb of equidistant optical frequencies comprising the broad spectral range Mδω. It is clear that the substantial enhancement of P0 (by a factor of M ∝ 1/τδω ~ 10⁵–10⁶, i.e. up to the over-MW level [7,8]) and the τ-reduction (~T/M, i.e. down to the sub-100 fs level [9]) promise an outlook for different applications [10] including non-linear and ultrasensitive laser spectroscopy [11–14], biomedical applications [15–19], micromachining [20,21], high-speed communication systems [22], metrology [23] and many others [24,25]. The extraordinary peak powers in combination with the drastic pulse width decrease bring high-field physics onto the tabletops of a mid-level university lab [26–29]. Moreover, the over-MHz pulse repetition rates δω provide a signal-rate improvement factor of 10³–10⁴ in comparison with that of classical chirped-pulse amplifiers [26]. As a result, the signal-to-noise ratio is enhanced substantially as well.
Another aspect of ultrafast laser applications is connected with studying non-linear phenomena [30]. Ultrafast lasers became an effective platform for investigation of general non-linear processes such as instabilities and rogue waves [31,32], self-similarity [33] and turbulence [34]. Coherent self-organization in such non-linear systems [35,36] is the keystone of this review, and it will be considered below in detail. But here, we have to point out the multidisciplinary context of our topic. Ultrafast fibre lasers can be treated as an ideal playground for exploring non-linear system phenomenology as a whole [37]. Such a playground spans gravity and cosmology [38], condensed-matter physics and quantum field theory [39–41], biology, neurosciences and informatics [42,43]. The advantage of ultrafast laser technology is that theoretical insights promise to become directly testable and controllable and, on the other hand, the theory can be urged on by new, precisely measurable experimental challenges.
To date, solid-state lasers have allowed generating the shortest pulses with the highest peak powers directly from an oscillator at high repetition rates (δω > 1 MHz) [44–49]. The main advantages of solid-state laser systems are (i) broad gain bands (i.e. very large M) allowing generation of extremely short pulses (τ approaches one optical cycle for such media as Ti:Sp or Cr:chalcogenides), (ii) coverage of the spectral range from visible (Ti:Sp) through infra-red (Ti:Sp, Cr:forsterite and Cr:YAG) to mid-infrared (Cr:chalcogenides) wavelengths, as well as (iii) the possibility of independent and precise dispersion [50] and non-linearity [46,48] control. Nevertheless, fibre lasers have unprecedented prospects [51] due to (i) the possibility of mean power scaling provided by large gain, (ii) high quality of the laser mode, (iii) reduced thermo- and environment-sensitivity, (iv) compactness and integrity of the laser setup. Additionally, one has to point to the broader gain bands of fibre media in comparison with the energy-scalable, thin-disk, solid-state oscillators operating within analogous wavelength ranges [9,52] (the width of a gain band is not a decisive factor per se, because both the pulse width and its spectrum are affected by various factors including higher-order dispersions, non-linearity, etc. [53,54]) and the possibility to break into the deep-UV and mid-IR optical spectral ranges [55,56].
In this review, we will consider the concepts of mode-locking and the dissipative soliton in a nutshell.
2. Mode-Locking
The concept of mode-locking is universal and closely connected with the principle of synchronisation of coupled oscillators [57–64]. A laser is, in fact, an interferometer which possesses a set of eigenmodes (longitudinal modes) separated by δω = 2π/Τ. Simultaneously, it is an active resonator, which means an amplification ∆Α of the mode amplitude Α during the resonator round-trip in the vicinity of the maximum gain frequency ω0: $\Delta A(\omega) \simeq \left(g(\omega_0)-\ell\right)A(\omega) - \alpha\,(\omega-\omega_0)^2 A(\omega)$, where g(ω0) is the gain at the frequency ω0, ℓ is a net-loss coefficient and $\alpha \equiv g(\omega_0)/\delta\Omega^2$ takes into account the frequency dependence of the gain coefficient in the vicinity of ω0, defined by the gain bandwidth δΩ. In such an oversimplified model, only the one mode with the maximum net-gain $\sigma \equiv g(\omega_0)-\ell$ (at ω = ω0) is generated, because the gain coefficient is energy-dependent, that is, it decreases with A (i.e. the gain is saturable, which results in mode competition or mode selection, Figure 1).
Figure 1.
Comb of frequencies generated by a laser with the repetition rate δω. The gain band is centred at ω0. Mode-locking consists in excitation and synchronisation of ω0 ± nδω sidebands.
In the case of active mode-locking, a periodic external modulation with frequency δω excites the ±δω sidebands for each mode in the comb so that the modes A(ω), A(ω ± δω) become coupled. In the framework of our oversimplified model, a steady-state regime ΔA = 0 is described by the following equation in the time domain [59–61]:

$$\sigma A(t) + \alpha \frac{d^2 A}{dt^2} - \nu t^2 A(t) = 0, \qquad (1)$$

which is the classical equation for an oscillator in the parabolic potential defined by ν ∝ δω². This equation has a trivial solution in the form of a Gaussian pulse [59–63]: $A(t)=A_0\exp\left(-2\ln 2\, t^2/\tau^2\right)$, where the pulse amplitude A0 is defined by the condition of energy balance $\sigma \propto \nu\tau^2$ (the saturable gain coefficient is energy-, i.e. $A_0^2$-, dependent) and the pulse width is $\tau \propto (\alpha/\nu)^{1/4}$. In the general case, excitation of A(t) in the form of Hermite–Gaussian solutions of Equation (1) is possible. Since the pulse width is defined by ν so that $\tau \propto 1/\sqrt{\delta\Omega\,\delta\omega}$, the minimum pulse widths of ~1/δΩ are hardly reachable due to the limitation imposed by the δω-value. The situation can be changed in the presence of self-phase modulation (SPM) [65] and dynamic gain saturation; then $\tau \propto 1/\delta\Omega\sigma$ [66].
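To see where these scalings come from, substitute the Gaussian ansatz into Eq. (1) as reconstructed above (a sketch, writing a = 2 ln 2/τ²):

```latex
A(t) = A_0 e^{-a t^2} \;\Rightarrow\; \frac{d^2 A}{dt^2} = \left(4 a^2 t^2 - 2a\right) A ,
\qquad
\sigma A + \alpha \frac{d^2 A}{dt^2} - \nu t^2 A = 0
\;\Longleftrightarrow\;
4\alpha a^2 = \nu , \quad \sigma = 2\alpha a .
```

Hence a = (ν/α)^{1/2}/2, which reproduces τ ∝ (α/ν)^{1/4} and σ = (αν)^{1/2} ∝ ντ².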
Using non-linear processes such as SPM, loss and gain saturation allows generating ultrashort pulses due to the mechanism of so-called passive mode-locking [60]. Periodic perturbations caused by transitions through non-linear laser elements such as a saturable absorber or gain medium enrich the spectrum with new components ω0 ± mδω (m = 1, 2,..., M), which become locked through non-linear interaction [64]:
where a four-wave non-linear process defined by the non-linear susceptibility χ3 mixes the frequencies ω1, ω2, ω3 and ω4 during propagation through a non-linear medium along the z-coordinate ($\Delta k = k(\omega_1)+k(\omega_2)-k(\omega_3)-k(\omega_4)$ is the wave-number mismatch, c is the speed of light, n is the frequency-dependent refractive index).
Both active and passive mode-locking concepts can easily be united from the point of view of space-time duality [64,67–69]. For instance, let us consider the heat diffusion equation:

$$\frac{\partial u}{\partial t} = \sigma u + \alpha \frac{\partial^2 u}{\partial x^2} - \nu x^2 u, \qquad (3)$$

where heat radiated at x = 0 by the source σ diffuses along the x-axis and is absorbed by a cooler with the parabolic 'cooling potential' νx². The replacements t → z and x → t result in an equation for the 'diffusion' of light describing active amplitude mode-locking (see Eq. (1)):

$$\frac{\partial A}{\partial z} = \sigma A + \alpha \frac{\partial^2 A}{\partial t^2} - \nu t^2 A. \qquad (4)$$

Eq. (4) is clearly understandable in the Fourier domain: $\partial \tilde A/\partial z = \sigma\tilde A + \nu\,\partial^2 \tilde A/\partial\omega^2 - \alpha\omega^2 \tilde A$, where the external (i.e. active) modulation defined by the ν-coefficient 'diffuses' (i.e. broadens) the field spectrum $\tilde A$, and such diffusion is compensated by the spectral dissipation defined by the α-coefficient. In the time domain, the spectrum broadening corresponds to a light pulse shortening due to the parabolic potential action, which is balanced by the spectral dissipation causing pulse widening.
The space-time duality can be extended further with the help of the diffraction-dispersion duality:

$$i\frac{\partial A}{\partial z} = -\frac{1}{2k}\frac{\partial^2 A}{\partial x^2} \quad \longleftrightarrow \quad i\frac{\partial A}{\partial z} = \frac{\beta_2}{2}\frac{\partial^2 A}{\partial t^2}, \qquad (5)$$

where k and β2 are the wave number and group-delay dispersion coefficients, respectively. Both processes describe the beam/pulse spreading with propagation, which is accompanied by a phase-profile ϕ distortion, i.e. by the appearance of the chirp $Q \propto d^2\phi/dx^2$ (or $d^2\phi/dt^2$). The active phase modulation from this point of view,

$$\frac{\partial A}{\partial z} = \sigma A + \alpha\frac{\partial^2 A}{\partial t^2} - i\nu t^2 A \qquad (6)$$

(compare with (4)), looking as $\partial\tilde A/\partial z = \sigma\tilde A + i\nu\,\partial^2\tilde A/\partial\omega^2 - \alpha\omega^2\tilde A$ in the Fourier domain, describes a 'diffraction' (dispersion) in the frequency domain inspired by a phase modulator, which is balanced by spectral dissipation. The main difference from (4) is that the phase modulation in (6) distorts the phase and thereby produces a chirp, like the action of a thin lens but in the time domain. In other words, the phase modulation in (6) pushes the spectral components out of the point of stationary phase ∂ϕ/∂t = 0, adding the frequency shift νt (Doppler shift), which enhances the spectral dissipation on the pulse wings and thereby forms a pulse just as the active amplitude modulator does. But the phase profile ϕ(t) is parabolic in this case.
The transition to passive mode-locking looks straightforward, but one has to be careful in this case. The space-time duality suggests a simple way to realize temporal focusing like that in the space domain: a combination of phase modulation ('time lens') from Eq. (6) with dispersion ('time diffraction') from Eq. (5) allows compressing a pulse. Therefore, a replacement of time focusing by time self-focusing (SPM) would provide laser pulse self-trapping like the effect of laser beam self-trapping:

$$i\frac{\partial A}{\partial z} = \frac{\beta_2}{2}\frac{\partial^2 A}{\partial t^2} - \gamma |A|^2 A, \qquad (7)$$

which is the famous non-linear Schrödinger equation describing the propagation of optical solitons in a fibre (β2 < 0 corresponds to anomalous dispersion, γ is the SPM coefficient) [35,70,71].
It is appropriate to mention here that the space-time duality x → t allows extending the physical context of consideration beyond the scope of optics. For instance, A, E = ∫dx |A|², and ϕ can be related to the mean-field amplitude, the number of particles (mass of condensate) and the momentum (wave number) of a Bose–Einstein condensate [39]. Then, it is clear that the dispersion and SPM terms in Eq. (7) describe the kinetic energy and the four-particle interaction potential for a gas of bosons. Such an interpretation opens a road to a quantum theory of solitons [72–74].
Following the same procedure for Eq. (4), describing active amplitude mode-locking, results in the simplest version of an equation for passive mode-locking, the so-called cubic non-linear Ginzburg–Landau equation [35,36,75]:

$$\frac{\partial A}{\partial z} = \sigma A + \alpha\frac{\partial^2 A}{\partial t^2} + \kappa |A|^2 A. \qquad (8)$$

This equation describes the combined action of saturated net-gain (σ), spectral dissipation (α) and non-linear gain (κ). The last term results from loss saturation in a non-linear absorber with a response time much less than the pulse width. As will be shown below, such an assumption is valid for a broad class of fibre mode-locking mechanisms. The physics of passive mode-locking resembles that of active mode-locking: self-focusing in the time domain causes a spectrum broadening which is balanced by spectral dissipation. Loss and energy-dependent gain are required for developing and stabilizing the mode-locking (all these factors are included in the σ-term, which is < 0 for a steady-state pulse). Eqs. (7) and (8) have a similar solution $A(t)\propto \mathrm{sech}(t/\tau)$, but the mathematical structures of these equations differ substantially, which created discrepancies between the concepts of the 'true' soliton [76] and the dissipative soliton (DS, see next section) [36]. Combining Eqs. (7) and (8) gives the famous complex cubic non-linear Ginzburg–Landau equation (cubic CNGLE) [35–37,42]:

$$\frac{\partial A}{\partial z} = \sigma A + \left(\alpha - i\frac{\beta_2}{2}\right)\frac{\partial^2 A}{\partial t^2} + \left(\kappa + i\gamma\right)|A|^2 A, \qquad (9)$$

which is a playground for the study of DSs. Equation (9) allows a number of further generalizations, such as: (i) description of non-distributed evolution due to the dependence of the equation coefficients on z [77]; (ii) generalization of the non-linearity type aimed first of all at an adequate description of different mode-locking mechanisms (see below); (iii) taking into account the higher-order dispersions, i.e. the ω-dependence of β2 [68]; (iv) taking into account the vector nature of light, i.e. the transition to a system of coupled two-component CNGLEs [68,78–81], etc.
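Equations of this type are usually explored numerically by split-step integration: the dissipative/dispersive part is applied in the Fourier domain and the non-linear part in the time domain. The following is a minimal sketch under the sign conventions reconstructed above, with a quintic SAM saturation term (−κζ|A|⁴A, anticipating Section 3) added to keep the pulse from collapsing; all parameter values are illustrative, not taken from any cited simulation.

```python
import numpy as np

# Split-step sketch for the cubic-quintic CNGLE (Eq. (9) plus -kappa*zeta*|A|^4*A).
sigma, alpha, beta2 = -0.01, 0.05, 1.0   # net loss, spectral dissipation, normal GDD
kappa, zeta, gamma = 0.025, 0.1, 1.0     # SAM, SAM saturation, SPM coefficients

N, T, dz = 4096, 400.0, 0.01
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)

# Linear factor in the Fourier domain: d^2/dt^2 -> -(w^2)
expL = np.exp((sigma - (alpha - 0.5j * beta2) * w**2) * dz)

A = np.exp(-t**2).astype(complex)        # weak Gaussian seed
for _ in range(20000):                   # propagate 200 dimensionless units
    A = np.fft.ifft(expL * np.fft.fft(A))
    P = np.abs(A) ** 2
    A *= np.exp((kappa * (1 - zeta * P) + 1j * gamma) * P * dz)

print("peak power:", np.abs(A).max() ** 2)  # a converged DS if the parameters admit one
```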
Now let us consider the mode-locking mechanisms for fibre lasers in more detail. Active mode-locking can be utilized for DS generation from a fibre laser [82–84], but the most widespread mechanism is based on non-linear polarization rotation (NPR), which uses the effect of intensity-dependent polarization mode coupling in a fibre [85–88]. There is a voluminous literature concerning the experimental realization of NPR mode-locking in fibre lasers; therefore, our selection of references is rather subjective and concerns the DS context [95–111].
It is known [70] that an ideal single-mode fibre supports two degenerate orthogonally polarized modes. However, a real fibre has inherent birefringence caused by core asymmetry or mechanical stress (Figure 2).
Figure 2.
Fibre cross-sections with cylindrical symmetry broken by manufacturing process that leads to fibre birefringence.
Figure 3.
Block scheme of a self-amplitude modulator (SAM) based on NPR. Both linear birefringence and NPR contribute to the change of state of polarization (SOP, rotation angle θ) that leads to intensity-dependent transmission of this scheme.
Since SPM as well as cross-phase modulation (XPM) contribute to the refractive index with a strength defined by the field intensity, such a contribution will change the state of polarization (SOP, Figure 3) [60,70,89], which can be described by coupled equations for the two orthogonal (x and y) polarization components [70]:

$$\frac{\partial A_{x,y}}{\partial z} = \pm\, i\frac{\Delta\beta}{2} A_{x,y} + \sigma A_{x,y} + \left(\alpha - i\frac{\beta_2}{2}\right)\frac{\partial^2 A_{x,y}}{\partial t^2} + i\gamma\left(|A_{x,y}|^2 + \frac{2}{3}|A_{y,x}|^2\right)A_{x,y} + \frac{i\gamma}{3} A_{y,x}^{2} A_{x,y}^{*} + \kappa |A_{x,y}|^2 A_{x,y}, \qquad (10)$$

where the dissipative factors from Eq. (9) are taken into account and Δβ = 2π/Lb describes the 'strength' of the linear birefringence (Lb is the beat length). As was shown in [81,91–94,179], the multi-scale averaging technique allows reducing Eq. (10) to a modified scalar non-linear Ginzburg–Landau equation (the so-called sinusoidal Ginzburg–Landau equation), in which the self-amplitude modulation (SAM) term (the last term in Eq. (9)) is replaced by log Q(|A|²)·A, where Q is a complex function defined by the birefringence and the settings of the laser wave plates and polarizer. Such an approach opens a way to the multi-parametric optimization of fibre lasers mode-locked by NPR.
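A toy Jones-calculus illustration of why the scheme in Figure 3 acts as a SAM: the Kerr-induced differential phase between the two polarization components rotates the SOP by an intensity-dependent amount, so the transmission through the output analyzer depends on power. The angles and coefficients below are arbitrary illustrative choices, not values from the cited works:

```python
import numpy as np

# Toy NPR transmission: an elliptical input SOP acquires a power-dependent
# differential nonlinear phase (SPM - XPM) ~ (gamma*L/3)*(Px - Py), which
# rotates the SOP; an analyzer then converts rotation into transmission.
gamma_L = 1.0e-3                         # gamma*L, 1/W
for P in (1.0, 10.0, 100.0, 1000.0):     # input powers, W
    Px, Py = 0.8 * P, 0.2 * P            # elliptical SOP (NPR vanishes at 45 deg)
    dphi = gamma_L * (Px - Py) / 3       # Kerr-induced differential phase
    Ex, Ey = np.sqrt(Px), np.sqrt(Py) * np.exp(1j * dphi)
    Eout = (Ex - Ey) / np.sqrt(2)        # analyzer at -45 degrees
    print(P, abs(Eout) ** 2 / P)         # transmission grows with power (SAM)
```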
Despite its relative simplicity in principle, as well as the possibility of all-fibre integrity of a laser, NPR in the form presented in Figure 3 is too sensitive to the laser setup and uncontrollable perturbations, and requires precise manual tuning. A modified SAM setup, which can utilize both NPR and scalar SPM, is the so-called non-linear optical loop mirror or figure-eight laser (Figure 4) [112,113,265]. In principle, this setup is an all-fibre realization of additive mode-locking [82,114] with inherently adjusted linear optical propagation lengths for the counter-propagating beams. The main control parameter here is the beam-splitting ratio ρ controlling the mutual intensities of the counter-propagating beams.
The unique property of this SAM setup is its ability to utilize different types of non-linearities for mode-locking (e.g. see [115–118]). Different modifications of this mode-locking mechanism have been used in DS fibre lasers [119–125]. Nevertheless, a fibre loop defining SAM remains environment- and tuning-sensitive.
Figure 4.
Block-scheme of SAM based on a non-linear optical loop mirror. Splitter splits input laser beam into two counter-propagating ones (red and green arrows) with some splitting ratio ρ. The beams interfere after round-trip and partially return into a laser. The result of interference is intensity-dependent due to NPR or/and SPM within a loop.
There is a class of alternative approaches utilizing non-fibre, well-controllable non-linearities for mode-locking at the cost of the broken fibre-integrity of a laser. Such an alternative was provided by the development of the highly non-linear semiconductor saturable absorber mirror (SESAM) [126–135]. The point is to put a semiconductor layer into a composite multi-layer mirror with well-controllable spectral characteristics as well as an adjustable intensity concentration of the penetrating field within the semiconductor layer. In fact, it is an advanced non-linear Fabry–Pérot interferometer with a reflectivity coefficient depending on the incident intensity (or energy) [136]. Interaction of light with a semiconductor layer can be characterized roughly as excitation of carriers from the valence band of the semiconductor to its conduction band. Excited carriers thermalize inside the conduction band with a characteristic time of ~100 femtoseconds. This time defines the fastest response of a SESAM to laser radiation. Then, the thermalized carriers can relax into the valence band or intra-band trapping states with characteristic times from picoseconds to nanoseconds. Thus, SAM due to SESAM is slow in comparison with that due to NPR or SPM, because the response times of the latter are defined by intra-atomic polarization dynamics, i.e. these times belong to the femtosecond range. Additionally, the spectral range of the SESAM response is substantially squeezed in comparison with that of pure electronic non-linearities due to the resonant character of the SESAM non-linearity. This can hamper mode-locking within a spectral range exceeding the SESAM bandwidth. But the reverse side of the SESAM-band squeezing is that the non-linear response of SESAM becomes resonantly enhanced. This means that SESAM can provide more easily starting, stable and controllable mode-locking. The key characteristics of SESAM are [127]: the loss saturation fluence $E_s = h\nu/2\sigma_a$ (hν is the photon energy, σa is the absorption cross-section), the modulation depth $\mu_0 = \sigma_a N$ (N is the density of states in the semiconductor), the relaxation (recovery) time Tr, the unsaturable loss, the saturable loss bandwidth and the level of two-photon absorption.
Akin mode-locking methods, providing full fibre-integrity, broadband absorption, sub-picosecond response time and avoiding complex multi-layer mirror fabrication, use nanotube and graphene saturable absorbers [30,137–143] and other low-dimensional structures [144].
From the theoretical point of view, the response of a saturable absorber (SESAM or other quantum-size structures) to a laser field can be very complicated. In principle, one has to take into account the finite loss bandwidth, its dispersion, the dependence of the refractive index on the carrier (or exciton) density (so-called linewidth enhancement), the complex kinetics of excitation and relaxation, etc. However, practice has demonstrated that a simple model of a two-level absorber works well [145]:

$$\frac{\partial \ell}{\partial t} = \frac{\ell_0 - \ell}{T_r} - \frac{\ell\,|A|^2}{E_s}, \qquad (11)$$

with some possible modifications (e.g. see [146]), where ℓ is the instantaneous saturable loss and ℓ0 its unsaturated value. Since DSs, as a rule, have over-picosecond widths (see next section), one may use an adiabatic approximation for (11), so that the SAM coefficient in the last term of Eq. (9) has to be replaced:

$$\kappa|A|^2 A \;\longrightarrow\; \frac{\kappa |A|^2}{1+|A|^2/P_s}\,A, \qquad (12)$$

where $P_s = E_s/T_r$ is the loss saturation power.
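As a check of this adiabatic form, the SAM response can be tabulated directly; the fluence, recovery time and modulation depth below are illustrative placeholders rather than SESAM data from the cited works:

```python
import numpy as np

# Nonlinear (SAM) gain from the adiabatic two-level absorber, Eq. (12):
# g_nl(P) = kappa*P/(1 + P/Ps), saturating towards l0 for P >> Ps.
l0, Es, Tr = 0.05, 1.0e-9, 1.0e-12   # unsaturated loss, fluence (J), recovery time (s)
Ps = Es / Tr                          # loss saturation power: here 1 kW
kappa = l0 / Ps                       # small-signal SAM coefficient

P = np.logspace(0, 7, 8)              # instantaneous powers, W
print(np.column_stack([P, kappa * P / (1 + P / Ps)]))
```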
One may propose a hypothesis that an analogue of Kerr-lens mode-locking, which is a basic mechanism for the generation of femtosecond pulses from solid-state lasers [60,85,147], can be realized in a fibre laser as well. Such an insight is based on the possible enhancement of laser beam spatial trapping induced by non-linearity in a medium with spatially inhomogeneous gain/loss or refractivity [148–152]. A model for the analysis of such phenomena can be based on an extension of the dimensionality of Eq. (9), taking into account the diffraction and the transverse inhomogeneity of the gain, loss or/and refractive index (the last can work as SAM due to the waveguide leaking loss) [153]:

$$\frac{\partial A}{\partial z} = \sigma A + \left(\alpha - i\frac{\beta_2}{2}\right)\frac{\partial^2 A}{\partial t^2} + \frac{i}{2k}\left(\frac{\partial^2}{\partial x^2} + \frac{1}{x}\frac{\partial}{\partial x}\right)A + \left(\kappa + i\gamma\right)|A|^2 A - \varkappa\, x^2 A, \qquad (13)$$

where cylindrical symmetry is assumed, x is the radial coordinate and ϰ is a coefficient (complex in the general case) which describes the transverse inhomogeneity of the fibre. Figure 5 shows the net-gain profiles (a) and the intra-laser pulse energies (b) as functions of the effective aperture size, obtained on the basis of a variational approach for Eq. (13) [153]. The results demonstrate the feasibility in principle of the Kerr-lens mode-locking regime for a DS fibre laser.
Figure 5.
(a) Transverse net-gain profiles for different transverse parabolic distributions of the net-gain coefficient, which can be realized by inhomogeneous doping of the fibre or by the impact of waveguide leaking loss. (b) The dependence of the intra-laser DS energy for a DS Yb-fibre laser with a 14 nm filter bandwidth and an average GDD of 330 fs²/cm [153]. The DS collapses for large energies and cannot start for large aperture sizes (there SAM has inverse sign and continuous-wave generation prevails).
All these mode-locking techniques are realizable for both proper soliton and DS fibre lasers (excluding Kerr-lens mode-locking, which requires sufficiently high pulse energies provided only by a DS laser). Now let us consider the DS fibre lasers proper.
3. DS concept: Theory and experiment
A ‘classical’ soliton can be formally defined as a solution of a non-linear evolution equation belonging to the discrete spectrum of the inverse scattering transform [71,76,154]. The non-linear equations which can be solved by the inverse scattering transform are ‘exactly integrable’. This means that they are akin to linear equations in some sense. In particular, they obey the superposition principle and, as a result, can be canonically quantized [155,156]. One has to note that integrability of a non-linear evolution equation and the non-dissipative (Hamiltonian) character of the latter are not equivalent, because there are both non-integrable Hamiltonian systems and integrable dissipative ones [36]. The point is that the DS concept is not connected with ‘integrability’; therefore, DSs are not ‘true’ solitons in a mathematical sense. However, many properties of DSs, in particular their stable localization, robustness in the processes of scattering and interaction, well-organized internal structure, etc., resemble the properties of ‘true’ solitons. Formally, one may define a DS as a localized and stable structure emergent in a non-linear dissipative system far from thermodynamic equilibrium [36]. DSs are abundant in different natural systems ranging from optics and condensed-matter physics to biology and medicine. In this sense, one may paraphrase that DSs “are around us. In the true sense of the word they are absolutely everywhere” [157]. Therefore, the concept of DS became well established in the last decade [36,37,42,158].
Stability of a DS under the condition of strong non-equilibrium can be achieved only due to a well-organized energy exchange with the environment and subsequent energy redistribution within the DS. This results in energy flux inside a DS and, thereby, in DS phase inhomogeneity [36]. For the simplest case of Eq. (9), which has a DS solution of the form $A(t) = A_0\,\mathrm{sech}(t/\tau)^{1+i\psi}$ ($\psi \equiv Q\tau^2 = \tau^2\, d^2\phi/dt^2$ is a dimensionless chirp parameter) [85,159,160], the DS energy generation profile [36] as well as the spectrum $|\tilde A(\omega)|^2$ are shown in Figure 6 in dependence on β2 (the data are based on the approach of [160]). One can see that the spectrum broadening transforms the action of the spectral dissipation on a pulse 'as a whole' into a well-structured energy exchange: inflow at the pulse centre and outflow on its wings. The key characteristic of the dissipation inhomogeneity is the chirp, i.e. an inhomogeneity of phase. In the absence of chirp, the spectral dissipation acts on the pulse as a whole which, in particular, induces a multi-pulse instability [161]. However, a power-dependent chirp causes inhomogeneity of the energy transfer (Figure 7). Energy flows in within the region closer to the central wavelength, where the gain is maximal. This region is located in the vicinity of the pulse maximum. Energy flows out from the spectrum wings, which are located on the wings of the pulse; that is, the pulse localization is supported by spectral dissipation through the non-linear mechanism of chirping [42,160,162]. One has to note that the direction of the energy fluxes inside a DS depends on parameters and can be inverse to the direction shown in Figures 6,7 (i.e. energy can flow from the wings to the centre). The corresponding structure was named a dissipative anti-soliton [210].
Thus, an additional mechanism of SAM (in addition to mechanisms considered in the previous section) appears, which provides unique robustness of DSs (i.e. DS exists within a broad range of laser parameters [163,164]).
Figure 6.
(a) Profile of energy generation and (b) logarithm of spectral power in dependence on GDD (normal dispersion range) for a DS of [160].
Below, we will consider the chirp as the essential characteristic of a DS [210]. One of the reasons is that the chirp allows a DS to accumulate energy, E ∝ ψ, which means that the DS is energy-scalable [37,164–167]. The latter statement does not mean that a chirp-free pulse is not energy-scalable. However, the energy scalability of such pulses can be provided only by fine-tuned and separate control of SPM and GDD, which can be achieved in solid-state oscillators [26,168] or in large mode area (LMA) fibre lasers [169]. For fibre lasers, such an approach entails issues of full-fibre integrability, higher-order mode control [170,171] and thermo-effects impact [172] (note, however, that it is precisely LMA and photonic-crystal fibres that could realize Kerr-lens mode-locking in a fibre laser [152,153]).
Figure 7.
(a) Profile of energy generation and (b) power in dependence on chirp for a DS of Eq. (9): σ = –0.01, κ = 0.025, α = 0.05, A0 = 1, τ = 1.
In terms of the space-time duality (see above), the mechanism of formation of the time window within which a DS is localized resembles the phenomenon of total internal reflection from 'borders' created by phase discontinuity. Such borders are formally defined by the equality of the wave number of out-/in-going radiation $k(\omega) = \beta_2\omega^2/2$ (the wave number of a dispersive linear wave) and the DS wave number $q = \gamma P_0$: $k(\pm\Delta) = q$, where $P_0 = |A_{max}|^2$ is the DS peak power and the DS spectral width is $\Delta = \sqrt{2\gamma P_0/\beta_2}$ [173]. Since the system is dissipative, the above phase equilibrium has to be supplemented by a loss compensation condition: the spectral loss $\alpha\Delta^2$ has to be compensated by the non-linear gain $\kappa P_0$. The combination of the above criteria gives a definition of the parametric limits for the DS [44,173]:
where E is the DS energy. Eq. (15) is valid for the cubic-quintic CNGLE, in which SAM has the form $(\kappa|A|^2 - \kappa\zeta|A|^4)A$, that is, the non-linear gain is saturable (in contrast to the unsaturable SAM in Eq. (9)). SAM saturability is necessary for DS stabilization [174]. Equality in Eq. (15) corresponds to the DS stability border, where σ = 0 (see Eq. (9)). The asymptotic E → ∞ corresponds to a perfectly energy-scalable DS, or to the phenomenon of dissipative soliton resonance (DSR) [37,44,165–167,175–183], which is sufficiently robust and exists in different SAM environments and even within the anomalous GDD range [178,182]. An important property of DSR is that the DS energy E can be scaled without loss of stability by plain scaling of the laser average power or/and its length L [44,180,184]. The chirp scales with length as well. As a result, the DS peak power and spectral width tend to a constant for fixed parameters of Eq. (9) (i.e. fixed α, β2, σ and κ), and the energy scaling is provided by DS stretching in the time domain.
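The scaling of the control parameter that organizes the master diagram below can be sketched directly from the two balances just quoted (a sketch only; the full Eq. (15) additionally involves the SAM saturation ζ and the energy E):

```latex
\frac{\beta_2 \Delta^2}{2} = \gamma P_0 \;\Rightarrow\; \Delta^2 = \frac{2\gamma P_0}{\beta_2},
\qquad
\alpha \Delta^2 \le \kappa P_0 \;\Rightarrow\;
C \equiv \frac{2\alpha\gamma}{\beta_2 \kappa} \le 1 .
```

The adiabatic theory tightens this bound to C ≤ 2/3 in the E → ∞ (DSR) limit, as discussed below.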
This ideology of energy scaling by pulse stretching goes back to the so-called wave-breaking-free or stretched-pulse fibre lasers, where the propagation within anomalous-dispersion fibre sectors alternates with propagation under normal GDD action [96,101,185–187]. As a result of pulse stretching, the non-linear effects in such systems are reduced, which allows increasing the energy and suppressing noise. As an alternative approach, one can exclude anomalous GDD altogether and realize a so-called similariton regime, when a pulse accumulates an extremely large chirp and, thereby, energy [33,103,188]. However, a self-similar regime is not a soliton-like one in nature; therefore, we will focus on the all-normal-dispersion (ANDi) fibre lasers, which produce DSs possessing a high stability within a broad range of laser parameters [97,101,185,186,189,190]. In Figure 8, the energy-scalable DS lasers are sub-divided into three main types: (1) all-fibre, (2) fibre with a free-space sector and (3) LMA, including rod and photonic-crystal fibre (PCF).
The advantage of the first type of lasers is their integrity, which does not require operational alignment, includes potential compressing and delivering sections, and provides environment insensitivity and easy integrability with fibre-amplifier cascades [200]. The last advantage is especially attractive because it allows direct seeding of the DS into a chirped-pulse amplifier without preliminary pulse stretching. The second type of DS fibre lasers can be considered as a testbed for the development of the first type. No wonder that the results achieved here are more impressive (Figure 8). Lastly, the third type of DS fibre lasers is most akin to the thin-disk solid-state ones, with the simultaneous advantage of broad gain bands. Such lasers provide DS energy scalability by scaling of the laser beam area in combination with scaling of the laser period and average power. Nevertheless, one has to keep in mind that both LMA and PCF technologies have some disadvantages (see above), which make them similar to solid-state lasers.
Figure 8.
Experimental realizations of DSs in fibre lasers (Refs. [7-9,82,83,100,102,105-107,109-111,122,124,125,130,132,135,138,139,140,141,187,191-209,232]; Ref. [111] corresponds to a DS in the anomalous GDD region).
The diversity of the results obtained (Figure 8) needs comprehension from a unified viewpoint; therefore, let us survey briefly some theoretical aspects relevant to DS fibre lasers. There is a vast literature regarding the theory of DSs. Some preliminary systematization can be found in [44]. However, it is necessary at first to declare the stumbling block of this theory: the absence of a unified viewpoint. There exist unbroken walls between the circles of the scientific community exploiting and exploring the DS concept: walls between the solid-state and fibre laser representations of the theory and the condensed-matter one, between numerical and analytical approaches, etc. Briefly and conditionally, the relevant theoretical approaches can be divided into (1) numerical, (2) exact analytical and (3) approximated analytical ones. The last includes the models based on (3.1) perturbative and (3.2) adiabatic models (AM), as well as those based on (3.3) phase-space truncation (i.e. the variational approximation (VA) and the method of moments (MM)).
As was emphasized repeatedly, both linear and non-linear dissipations are crucial for the DS formation. The simplest and most studied models for such a type of phenomena are based on the different versions of CNGLE (e.g. Eq. (9)).
An extensive numerical study of DSs of the cubic-quintic CNGLE has been carried out by N. N. Akhmediev with co-authors [35–37,42,157,166,175,176,178,182]. The simulations have allowed finding the DS stability regions for some two-dimensional projections of the CNGLE parametric space. A summarizing description of the results obtained is presented in [44]. The most impressive results are: (i) the parametric space of the DS has a reduced dimensionality resulting, in particular, in the appearance of DSR; (ii) DSR remains in a model with lumped evolution, which is typical for most fibre lasers; (iii) DSR and, correspondingly, the DS exist within the anomalous GDD region as well. However, the main shortcomings of the numerical approaches are that the parametric space under consideration is not physically relevant, and the true dimensionality of the DS parametric space is not identified. It is clear that only an advanced and self-consistent analytical theory of DSs would provide, in particular, a true representation of the DS parametric space and the DSR conditions.
As was mentioned above, the evolution equations describing DSs are non-integrable. The efforts based on algebraic techniques [62,213,214] aimed at finding generalized DS solutions of the CNGLE have not been successful to date. Nevertheless, a few exact partial DS solutions are known. For instance, the sole known exact analytical DS solution of the cubic-quintic CNGLE is [110,166,176,182,189,211,212]:

$$A(z,t) = \frac{A_0\, e^{i\phi z}}{\left[\cosh(t/\tau) + B\right]^{(1 - i\psi)/2}}, \qquad (16)$$

where A0, B, τ, ψ and ϕ are real constants [189]. This solution belongs to the fixed-point solution class, which means that it exists only if some constraints are imposed on the cubic-quintic CNGLE parameters. Solution (16) provides important insights into the properties of DSs. In particular, a systematic classification of DS spectra (truncated concave, convex, Lorentzian and structured spectra) and DS temporal profiles (from sech-shaped A(t) to tabletop ones) is possible in the framework of the analytical approach. The transition to a DSR regime reveals itself in the 'time-spectral' duality shown in Figure 9 [141,189]. The sense of this 'duality' will be explained below from the point of view of the adiabatic theory of DSs.
Figure 9.
‘Time-spectral duality’ for an energy-scalable DS: bell-like time-profile (blue curve on the left picture) transforms into tabletop one with truncated edges (right) and, inversely, truncated tabletop spectrum (red curve on the left picture) transforms into bell-like one (right) with the DS energy growth.
The crucial shortcoming of the approach based on a few exact DS solutions of the evolution equations is that strict restrictions are imposed on the equation parameters. As a result, the DS cannot be traced within a broad multidimensional parametric range, and the picture obtained is rather sporadic and of interest only in close relation with numerical results and experiment. Some additional information can be obtained on the basis of perturbation theory, which provides a quite accurate approximation for a low-energy DS [215–217].
The most powerful approaches to the theory of DSs have been developed in the framework of approximate techniques (for a review see [44]): AM [165,167,217–221], VA [77,177,222–225] and MM [175,210,222,226]. The most impressive results obtained are: (i) a physically relevant representation of the DS parametric space was revealed (the so-called master diagram, see [44] for a review and Figure 10); (ii) such a representation allows understanding the structural properties of the DS and its energy-scaling laws (i.e. DSR conditions) for different mode-locking techniques; (iii) the DS dynamics and the issue of optimal arrangement of laser elements providing the maximum DS stability and energy have been explored [224,225,227,228]; (iv) a vectorial extension of VA concerning vector DSs (VDSs) was endeavoured [229].
Figure 10.
DS master diagrams for the cubic-quintic CNGLE (black) and the CNGLE with a SAM defined by Eq. (12) (red; in the latter case κ = 1/γPs). The DSR ranges correspond to the so-called positive branch of the DS [44,167], which has the highest DS energy scalability and stability [221]. The dashed curve corresponds to the DS stability border obtained from numerical simulations of the cubic-quintic CNGLE taking into account quantum noise [243]. The dotted blue curve shows the DS border under the effect of SRS [260]. The parametric space shown is the physically relevant parametric space of the DS. Points correspond to different scenarios of DS destabilization (see text).
Both AM and VA demonstrate a two-dimensional representation of the DS parametric space in the form of the master diagram. The dimensionality can grow with complication of the CNGLE non-linearity, when SPM becomes saturable so that the cubic non-linear term in Eq. (9) has to be replaced by $[(\kappa - \kappa\zeta|A|^2) + i(\gamma + \chi|A|^2)]|A|^2A$ [217]. This effect can appear in a fibre laser with NPR (e.g. see [179], where such a completely cubic-quintic CNGLE is connected with the NPR mode-locking technique). In this case, the DS exists in both the normal and anomalous GDD regions [175,182,233].
The master diagram is a manifold of isogains (i.e. curves with σ = const ≤ 0). Figure 10 demonstrates the zero-isogains (σ = 0) corresponding to the DS stability limit (upper curves) as well as the borders between the 'energy-scalable' and 'energy-non-scalable' branches of the DS (lower curves; see [44,167] for a formal definition: the energy-non-scalable branch turns into a solution of Eq. (9) with ζ, χ → 0 (the 'Schrödinger limit' [218]) and is unstable in the absence of dynamic gain saturation, i.e. if σ is not energy-dependent [221]). Figure 10 demonstrates that the saturation of SAM (a so-called reverse saturable absorber provided, for instance, by NPR or graphene [141]; black curves) enhances the DS stability in comparison with an unsaturable SAM (red curves). Since $\lim_{C\to \mathrm{const}} E = \infty$ for a saturable SAM (cubic-quintic CNGLE, black curves in Figure 10; the constant defines the DSR range and its maximum value is 2/3), such a property of the isogain curves corresponds to the DSR phenomenon. Since the stability threshold is defined by the condition $C \equiv 2\alpha\gamma/\beta_2\kappa \le 2/3$, one may conclude that broadening of the spectral filter band (or gain band) enhances the stability against multi-pulsing ($\alpha \propto 1/\delta\Omega^2$, see above) [107,108]. Simultaneously, SPM has to be balanced by GDD ($C \propto \gamma/\beta_2$) which, in combination with the normalization of energy, gives the energy-scaling law E ∝ L along a DSR curve. VA predicts [230]:
which agrees with experimental observations of the linear growth of DS energy with bandwidth [107,108], as well as with the rule E ∝ L, since β2 ∝ L in the framework of the distributed CNGLE.
In the case of an unsaturable SAM corresponding to SESAM, some nanotube and graphene absorbers, Kerr-lensing, etc. (see Eq. (12)), the energy scaling requires scaling of the control parameter C. In this case, the asymptotic energy-scaling law for $E\kappa^2/\gamma\alpha \gg 1$ becomes [44]:
The spectral properties of the DS are described clearly in the framework of AM [44,167]. In the simplest case of the cubic-quintic CNGLE, the DS spectrum $\tilde A(\omega)$ in the limit $|\psi| \gg 1$ is a Lorentzian profile which has a characteristic width ΩL and is truncated at the frequencies ±∆ [44,167,218]:

$$|\tilde A(\omega)|^2 \propto \frac{H(\Delta^2 - \omega^2)}{\omega^2 + \Omega_L^2}, \qquad (19)$$

where H is the Heaviside function. The DS energy is

$$E \propto \frac{1}{\Omega_L}\arctan\!\left(\frac{\Delta}{\Omega_L}\right). \qquad (20)$$
Here, we trace the zero-isogain σ = 0. The DS time profile is defined by an implicit expression:
with the DS width $\tau \propto \sqrt{\Delta^2 + \Omega_L^2}/(\Delta\,\Omega_L)$. Now, there are the following limiting cases:
It is clear that in this 'low-energy' sector ($\Omega_L \gg \Delta$) the DS time profile is bell-like and its spectrum has a tabletop form. In the DSR limit, one has:
that is, a DS in the DSR sector has a flattop temporal profile and a Lorentzian spectrum ($\Omega_L \ll \Delta$). Eqs. (24) demonstrate that the asymptotic growth of the DS energy leads to spectral condensation ($\Omega_L \to 0$) without a parallel temporal thermalization ($\tau \to \infty$), which means an inevitable destabilization of the plain energy scalability [180]. This conclusion does not mean a principal impossibility of DS energy scaling in the framework of the cubic-quintic CNGLE model. For instance, a saturable SPM allows DSs with tabletop profiles and large |dϕ/dt| on the pulse edges. Such a DS possesses enhanced energy scalability and was observed experimentally [231].
4. DS spectrum and stability
As was explained, the dual balances in the frequency domain,

$$k(\pm\Delta) = q, \qquad \alpha\Delta^2 = \kappa P_0,$$

are formative for DS existence and stabilization. No wonder that the spectrum of a DS is a benchmark of its inherent properties.
Figure 11.
Perturbed DS spectrum [241].
Before considering the aspects of the interweaving of the spectral and stability properties of DSs, one has to point to the possibility of multi-wavelength, multi-pulsing DSs provided by DS robustness. As was demonstrated theoretically in [234], multi-DS compounds in a mode-locked laser can be stabilized at multiple frequencies. Experimentally, such multi-frequency DS compounds can be realized by birefringence filters with a periodic (interference-like) dependence of transmission on wavelength under the conditions of a sufficiently broad gain band and powerful pump [235–239] (a multi-porting configuration of a DS laser supports even the simultaneous generation of conventional and dissipative wavelength-separated solitons [240]).
Figure 12.
Exploding DS corresponding to parameters of point A in Figure 10. Left: contour-plot of instant power, right: 3D-graph of instant power in dependence on local time t and propagation distance z (arbitrary units) [243].
As was demonstrated in the previous section, a DS has a non-trivial internal structure due to the energy fluxes inside it. The elements of this structure (internal modes) can be excited, which causes pulsating or chaotic dynamics of the DS with preservation of its temporal and spectral localization [241]. The spectral envelope acquires the shape of a 'glass with boiling water' (Figure 11). The appearance of such perturbations is understandable in the framework of the DS perturbation theory in the spectral domain [242]. One has to note that such perturbations take place inside the DS stability region, where σ < 0 (below the corresponding upper curves in Figure 10). Above the stability border (the 'no DS' region in Figure 10), there are three main destabilization scenarios [243]. For small energies and in the vicinity of the stability border (point A in Figure 10), the DS is exploding (Figure 12), which means its aperiodic disappearance with excitation of continuous waves and subsequent DS recreation [37,244–248]. With the energy growth (point B in Figure 10), rogue DSs develop (Figure 13) [37,249–253]. Such a regime can be interpreted as DS structural chaotization, that is, generation of multiple DSs with strong interactions causing extreme dynamics.
Figure 13.
Rogue DSs corresponding to parameters of point B in Figure 10. Left: contour-plot of instant power, right: 3D-graph of instant power in dependence on local time t and propagation distance z (arbitrary units) [243].
Figure 14.
DS molecule corresponding to parameters of point C in Figure 10. Left: contour-plot of instant power, right: 3D-graph of instant power in dependence on local time t and propagation distance z (arbitrary units) [243].
For sufficiently large energies in the vicinity of the stability border (point C in Figure 10), the typical destabilization scenario is the generation of multiple DSs (Figure 14). The source of this destabilization is the growth of the spectral dissipation caused by DS spectral broadening on approaching the stability border, so that DS splitting becomes energetically advantageous [161]. Moreover, the DS splitting can be enhanced by its phase inhomogeneity, because the gain (energy inflow) is maximal at the points of stationary phase dϕ/dt = 0 [254]. Such a splitting can result in extreme dynamics like solitonic turbulence (Figure 13) [255] or regular multi-DS complexes (Figure 14, so-called DS molecules), which can have non-extreme internal dynamics (soliton gas) and interact with a background (soliton liquid) [256–259]. If the dynamic gain saturation (see [44]) contributes, the multi-DS complexes can evolve slowly into a set of equidistant DSs with a repetition rate that is a multiple of the laser one (harmonic mode-locking) [158,159].
The numerical simulations of the cubic-quintic CNGLE taking into account quantum noise validated the inconsistency of spectral condensation with the absence of temporal thermalization, which breaks the DS energy scalability (see the previous section) [243]. As a result, the DS stability region breaks off abruptly with energy growth (dashed curve in Figure 10), and a multitude of turbulent scenarios of DS evolution develops (Figure 15) [34,243].
Figure 15.
DS turbulent regimes (contour-plot of instant power, arbitrary units) corresponding to the parameters of points D, E, F, G and H in Figure 10 [243].
Serious limitations on the power and energy scalability of DSs in fibre lasers arise from stimulated Raman scattering (SRS) [2,109]. The stability border of the DS under the action of SRS is shown in Figure 10 by the dotted blue curve (the DS is stable to the left of this curve) [260]. As was found, SRS enhances the tendency to multi-pulsing with energy growth, caused by the enhancement of spectral dissipation due to SRS [260]. Simultaneously, the generation of anti-Stokes radiation causes chaotization of the DS dynamics and irregular modulation of the DS temporal and spectral profiles [261] (Figure 16). The DS profile remains localized, but it is strongly cut by colliding dark and grey soliton-like structures [34].
As was shown, the DS dynamics can be regularized by the formation of a dissipative Raman soliton (DRS). A DRS can exist in the form of a DS which is Stokes-shifted due to self-Raman scattering (Figure 17, a) [260] or as a bound DS–DRS complex (Figure 17, b) [262]. In the latter case, stabilization is achieved by feedback, i.e. re-injection of the Stokes signal through a delay line [263]. In the absence of feedback, the Raman pulse is noisy [264] (see the dashed-line spectrum in Figure 17, b).
Figure 16.
Wigner function of turbulent DS in the presence of SRS.
Figure 17.
(a) Wigner function of single DRS [260] and (b) spectrum of bound DS–DRS complex [262].
5. Vector DSs
As was pointed out above, the SOP can play a leading role in fibre laser dynamics. In particular, it can contribute to mode-locking or/and spectral filtering. However, the range of polarization phenomena in a DS fibre laser is essentially broader. As was found, intrinsic fibre birefringence (Figure 2) can lead to DS splitting into two independent SOPs [78]. This phenomenon is used to realize the NPR mode-locking mechanism, where the DS SOP evolves (or remains locked) as a whole during propagation [265–269]. The polarization dynamics can be fast (~T) or slow (>>T) and vary from regular (with possible period multiplication or harmonic mode-locking) to chaotic [270,271]. There is evidence of ultrafast SOP evolution when the SOP changes across the DS profile [272].
The specific multiple-pulse instability of vector dissipative solitons (VDSs) leads to the generation of bound states of DSs with different SOPs (vector soliton molecules), which are locked by a non-linear coupling [273,274] or by group-velocity locking produced by a spectral shift between DSs with different SOPs [275]. As was shown experimentally (Figure 18) [276], the dynamics of VDS molecules can be highly non-trivial and demonstrate both fast and slow periodic switching between fixed SOPs as well as SOP precession, which is especially interesting for fibre laser telecommunications based on polarization multiplexing.
Figure 18.
Polarization dynamics of VDS molecules in an Er-fibre laser mode-locked by carbon nanotubes [276]. The top row shows the evolution of the Stokes parameters; the bottom row reproduces this evolution on the Poincaré sphere (each point corresponds to the SOP after one laser round trip).
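A minimal sketch of how such Poincaré-sphere plots are produced: the Stokes parameters are computed from the two orthogonal field components recorded once per round trip. The drifting-phase data below are synthetic stand-ins, not the measurements of [276].

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                   # number of round trips
# synthetic per-round-trip amplitudes with a slowly drifting relative phase
phi = np.cumsum(0.05 + 0.01 * rng.standard_normal(n))
Ex = np.ones(n, dtype=complex)
Ey = 0.8 * np.exp(1j * phi)

S0 = np.abs(Ex)**2 + np.abs(Ey)**2        # total power
S1 = np.abs(Ex)**2 - np.abs(Ey)**2
S2 = 2 * np.real(Ex * np.conj(Ey))
S3 = -2 * np.imag(Ex * np.conj(Ey))       # sign convention varies by text

# each normalized Stokes vector is one point on the Poincare sphere
s = np.stack([S1, S2, S3]) / S0
print("first point on the sphere:", s[:, 0])
```

A slow drift of phi traces a circle of SOPs on the sphere, which is the kind of precession visible in the bottom row of Figure 18.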
An important breakthrough in the recent theory of VDSs is the demonstration that approaches based on coupled CNGLEs (like Eq. (10)) are insufficient for an adequate description of DS polarization dynamics. It was demonstrated that the polarizability of the active medium contributes substantially to the DS dynamics [277]. As was shown, the SOP-sensitive interaction between a DS and a slowly relaxing active medium, when the birefringence of the fibre-laser elements and the light-induced anisotropy caused by an elliptically polarised pump field are taken into account, changes the SOP on a long time scale, which results in both fast and slowly evolving SOPs of VDSs (Figure 19).
When the pump SOP and SPM are taken into account, the non-trivial contribution of the active-medium kinetics and polarizability produces complex dynamics, including spiral attractors and dynamic chaos (Figure 20) [278]. One may assume that such non-trivial polarization dynamics is of great importance for DS energy scaling, in particular owing to the vector nature of SRS [279]. These topics remain unexplored to date.
Figure 19.
Polarization dynamics of a VDS taking the Er-fibre polarizability into account [277]. Both fast and slow SOP dynamics exist, depending on the strength of the fibre-laser birefringence.
Figure 20.
Dynamics of the active-fibre inversion components, which are polarization-sensitive (n12 and n22) and insensitive (n0), together with the corresponding slow SOP evolution of the VDS [278].
6. Conclusion
The recent progress in the development of ultrafast fibre lasers and the advances in the study of DSs are closely interrelated. DSs have enabled great success in ultrashort-pulse energy scaling, underpinned by their unprecedented stability and robustness. At this moment, it is possible to achieve over-MW peak powers for sub-100 fs pulses directly from a fibre laser at over-MHz repetition rates. New spectral ranges have become reachable owing to the development of mid-IR active fibres and the use of frequency conversion directly inside a laser. The development of new mode-locking techniques, especially those based on SESAMs, graphene and other quantum-confined structures, has improved laser stability, integration and environmental insensitivity. Great advances have also been made in the theory of DSs: new powerful analytical techniques, supported by extensive numerical simulations and experimental progress, have extended the understanding of the fundamental properties of DSs and revealed new prospects for improving the characteristics of ultrafast fibre lasers.
Based on the achieved results, one may outline several unresolved problems. As was found, there are stability limits for DS energy scaling imposed by optical turbulence and SRS; deeper insight into the nature of these phenomena could allow these limits to be overcome without substantial complication of the laser setup. Simultaneously, control of intra-laser spectral conversion is a direct route to broadening the accessible spectral range. Furthermore, the dynamics and properties of VDSs remain scantily explored: recent studies have demonstrated a multitude of polarization phenomena that cannot be grasped within the framework of existing models. In particular, the polarizability and kinetics of an active fibre, in combination with the birefringence of the laser as a whole, can contribute non-trivially to the laser dynamics. As an additional direction of development, one may point to new mode-locking techniques that could improve the DS stability and the degree of integration of a fibre laser, decrease the pulse width and extend the range of accessible pulse repetition rates. Finally, one has to remember that a fibre laser is an ideal playground for the study of complex non-linear phenomena and, undoubtedly, new bridges between different fields of science will be built as ultrafast fibre lasers progress further.
This work was supported by FP7-PEOPLE-2012-IAPP (project GRIFFON, No. 324391).
1. 1. Fermann ME, Hartl I. Ultrafast fiber laser technology. IEEE J Sel Topics in Quantum Electron 2009;15(1):191–206. DOI: 10.1109/JSTQE.2008.2010246.
2. 2. Fermann ME, Galvanauskas A, Sucha G, Harter D. Fiber-lasers for ultrafast optics. Appl Physics B 1997;65:259–75.
3. 3. Fermann ME, Hartl I. Ultrafast fiber lasers. Nature Photonics 2013;7:868–74. DOI: 10.1038/NPHOTON.2013.280.
4. 4. Limpert J, Röser F, Schreiber Th, Tünnermann A. High-power ultrafast fiber laser systems. IEEE J Sel Top Quantum Electron 2006;12(2):233–44. DOI: 10.1109/JSTQE.2006.872729.
5. 5. Lamb WE. Theory of an optical maser. Phys Rev 1964;134:A1429–50.
6. 6. Siegman AE. Lasers. Sausalito: University Science Book; 1986. 1283 p.
7. 7. Lefrançois S, Kieu K, Deng Y, Kafka JD, Wise FW. Scaling of dissipative soliton fiber lasers to megawatt peak powers by use of large-area photonic crystal fiber. Opt Lett 2010;35(10):1569–71.
8. 8. Baumgartl M, Lecaplain C, Hideur A, Limpert J, Tünnermann A. 66 W average power from a microjoule-class sub-100 fs fiber oscillator. Opt Lett 2012;37(10):1640–2. DOI: 10.1364/OL.37.001640.
9. 9. Chong A, Renninger WH, Wise FW. Route to the minimum pulse duration in normal-dispersion fiber lasers. Opt Lett 2008;33(22):2638–40. DOI: 10.1364/OL.33.002638.
10. 10. Sucha G. Overview of industrial and medical applications of ultrashort pulse lasers. In: Fermann ME, Galvanauskas A, Sucha G. (Eds.) Ultrafast Lasers: Technology and Applications. New York: Marcel Dekker, Inc.; 2003. pp. 323–358.
11. 11. Xu C, Wise FW. Recent advances in fibre lasers for nonlinear microscopy. Nature Photonics 2013;7:875–82. DOI: 10.1038/NPHOTON.2013.284.
12. 12. Müller M, Squier J. Nonlinear microscopy with ultrashort pulse lasers. In: Fermann ME, Galvanauskas A, Sucha G. (Eds.) Ultrafast Lasers: Technology and Applications. New York: Marcel Dekker, Inc.; 2003, pp. 661–97.
13. 13. Kalashnikov VL, Sorokin E. Soliton absorption spectroscopy. Phys Rev A 2010;81:033840. DOI: 10.1103/PhysRevA.81.033840.
14. 14. Kalashnikov VL, Sorokin E, Sorokina IT. Chirped dissipative soliton absorption spectroscopy. Opt Express 2011;19(18):17480–92.
15. 15. Kurtz RM, Sarayba MA, Juhasz T. Ultrafast lasers in ophthalmology. In: Fermann ME, Galvanauskas A, Sucha G. (Eds.) Ultrafast Lasers: Technology and Applications. New York: Marcel Dekker, Inc.; 2003, pp. 745–65.
16. 16. Clowes J. Next generation light sources for biomedical applications. Optic Photonic 2011;3(1):36–8.
17. 17. Fujimoto JG, Brezinski M, Drexler W, Hartl I, Kärtner F, Li X, Morgner U. Optical coherence tomography. In: Fermann ME, Galvanauskas A, Sucha G. (Eds.) Ultrafast Lasers: Technology and Applications. New York: Marcel Dekker, Inc.; 2003, pp. 699–743.
18. 18. Drexler W, Fujimoto JG. (Eds.) Optical Coherence Tomography. Berlin: Springer-Verlag; 2008. 1346 p.
19. 19. Lanin AA, Fedotov IV, Sidorov-Biryukov DA, Doronina-Amitonova LV, Ivashkina OI, Zots MA, Sun C-K, Ilday FÖ, Fedotov AB, Anokhin KV, Zheltikov AM. Air-guided photonic-crystal-fiber pulse-compression delivery of multimegawatt femtosecond laser output for nonlinear-optical imaging and neurosurgery. Appl Phys Lett 2012;100:101104. DOI: 10.1063/1.3681777.
20. 20. Osellame R, Cerullo G, Ramponi R. (Eds.) Femtosecond Laser Micromachining. Heidelberg: Springer; 2012. 483 p.
21. 21. Gattass RR, Mazur E. Femtosecond laser micromachining in transparent materials. Nat Photon. 2008;2:219–25. DOI: 10.1038/nphoton.2008.47.
22. 22. Nakazawa M. Ultrahigh bit rate communication system. In: Fermann ME, Galvanauskas A, Sucha G. (Eds.) Ultrafast Lasers: Technology and Applications. New York: Marcel Dekker, Inc.; 2003. pp. 611–660.
23. 23. Udem Th, Holzwarth R, Hänsch TW. Optical frequency metrology. Nature 2002;416:233–7. DOI: 10.1038/416233a.
24. 24. Liu Y, Tschuch S, Rudenko A, Dürr M, Siegel M, Morgner U, Moshammer R, Ullrich J. Strong-field double ionization of Ar below the recollision threshold. Phys Rev Lett 2008;101:053001. DOI: 10.1103/PhysRevLett.101.053001.
25. 25. Sciaini G, Miller RJD. Femtosecond electron diffraction: heralding the era of atomically resolved dynamics. Rep Prog Phys 2011;74:096101. DOI: 10.1088/0034-4885/74/9/096101.
26. 26. Südmeyer T, Marchese SV, Hashimoto S, Baer CRE, Gingras G, Witzel B, Keller U. Femtosecond laser oscillators for high-field science. Nat Photon 2008;2:599–604. DOI: 10.1038/nphoton.2008.194.
27. 27. Krausz F, Ivanov M. Attosecond physics. Rev Mod Phys 2009;81(1):163–234. DOI: 10.1103/RevModPhys.81.163
28. 28. Pfeifer T, Spielmann C, Gerber G. Femtosecond x-ray science. Rep Prog Phys 2006;69(2):443–505. DOI: 10.1088/0034-4885/69/2/R04.
29. 29. Mourou GA, Tajima T, Bulanov SV. Optics in the relativistic regime. Rev Mod Phys 2006;78(2):309–71. DOI: 10.1103/RevModPhys.78.309.
30. 30. Martinez A, Sun Z. Nanotube and graphene saturable absorbers for fibre lasers. Nat Photon 2013;7:842–5.
31. 31. Lecaplain C, Grelu Ph, Soto-Crespo JM, Akhmediev N. Dissipative rogue waves generated by chaotic bunching in a mode-locked laser. Phys Rev Lett 2012;108:233901.
32. 32. Dudley JM, Dias F, Erkintalo M, Genty G. Instabilities, breathers and rogue waves in optics. Nat Photon 2014;8:755–64.
33. 33. Dudley JM, Finot Ch, Richardson DJ, Millot G. Self-similarity in ultrafast nonlinear optics. Nat Phys 2007;3:597–603.
34. 34. Turitsyna EG, Smirnov SV, Sugavanam S, Tarasov N, Shu X, Babin SA, Podivilov EV, Churkin DV, Falkovich G, Turitsyn SK. The laminar–turbulent transition in a fibre laser. Nat Photon 2013;7:783–6. DOI: 10.1038/NPHOTON.2013.246.
35. 35. Akhmediev NN, Ankiewicz A. Solitons: Nonlinear Pulses and Beams. London: Chapman & Hall; 1997.
36. 36. Akhmediev NN, Ankiewicz A. (Eds.) Dissipative Solitons. Berlin: Springer-Verlag; 2005.
37. 37. Grelu Ph, Akhmediev N. Dissipative solitons for mode-locked lasers. Nat Photon 2012;6:84–92. DOI: 10.1038/nphoton.2011.345
38. 38. Faccio D, Belgiorno F, Cacciatori S, Gorini V, Liberati S, Moschella U. (Eds.) Analogue Gravity Phenomenology. Analogue Spacetimes and Horizons, from Theory to Experiment. Heidelberg: Springer; 2013. p. 438. DOI: 10.1007/978-3-319-00266-8.
39. 39. Kevrekidis PG, Frantzeskakis DJ, Carretero-González R. (Eds.) Emergent Nonlinear Phenomena in Bose-Einstein Condensates. Berlin: Springer-Verlag; 2008.
40. 40. Yang Y. Solitons in Field Theory and Nonlinear Analysis. New York: Springer; 2001.
41. 41. Abdulaev FK, Konotop VV. (Eds.) Nonlinear Waves: Classical and Quantum Aspects. Dordrecht: Kluwer Academic Pub.; 2004.
42. 42. Akhmediev NN, Ankiewicz A. (Eds.) Dissipative Solitons: From Optics to Biology and Medicine. Berlin: Springer-Verlag; 2008.
43. 43. Naruse M, (Ed.) Nanophotonic Information Physics. Berlin: Springer-Verlag; 2014.
44. 44. Kalashnikov VL. Chirped-pulse oscillators: route to the energy-scalable femtosecond pulses. In: Al-Khursan A. (Ed.) Solid State Laser. InTech, 2008; pp. 145–184. DOI: 10.5772/37415.
45. 45. Baer CRE, Heckl OH, Saraceno CJ, Schriber C, Kränkel C, Südmeyer T, Keller U. Frontiers in passively mode-locked high-power thin-disk laser oscillators. Opt Express 2012;20:7054–65. DOI: 10.1364/OE.20.007054.
46. 46. Saraceno CJ, Emaury F, Heckl OH, Baer CRE, Hoffmann M, Schriber C, Golling M, Südmeyer Th, Keller U. 275 W average output power from a femtosecond thin disk oscillator operated in a vacuum environment. Opt Express 2012;20:23535–41. DOI: 10.1364/OE.20.023535.
47. 47. Zhang J, Brons J, Lilienfein N, Fedulova E, Pervak V, Bauer D, Sutter D, Wei Zh, Apolonski A, Pronin O, Krausz F. 260-megahertz, megawatt-level thin-disk oscillator. Opt Lett 2015;40:1627–30. DOI: 10.1364/OL.40.001627.
48. 48. Brons J, Pervak V, Fedulova E, Bauer D, Sutter D, Kalashnikov V, Apolonskiy A, Pronin O, Krausz F. Energy scaling of Kerr-lens mode-locked thin-disk oscillators. Opt Lett 2014;39:6442–5. DOI: 10.1364/OL.39.006442.
49. 49. Naumov S, Fernandez A, Graf R, Dombi P, Krausz F, Apolonski A. Approaching the microjoule frontier with femtosecond laser oscillators. New J Phys 2005;7:216. DOI: 10.1088/1367-2630/7/1/216.
50. 50. Fedulova E, Fritsch K, Brons J, Pronin O, Amotchkina T, Trubetskov M, Krausz F, Pervak V. Highly-dispersive mirrors reach new levels of dispersion. Opt Express 2015;23:13788–93. DOI: 10.1364/OE.23.013788.
51. 51. Richardson DJ, Nilsson J, Clarkson WA. High power fiber lasers: current status and future prospects. J Opt Soc Am B 2010;27:B63–B92.
52. 52. Saraceno CJ, Heckl OH, Baer CRE, Schriber C, Golling M, Beil K, Kränkel C, Südmeyer T, Huber G, Keller U. Sub-100 femtosecond pulses from a SESAM modelocked thin disk laser. Appl Phys B 2012;106:559–62. DOI: 10.1007/s00340-012-4900-5.
53. 53. Zhang J, Brons J, Seidel M, Pervak V, Kalashnikov V, Wei Z, Apolonski A, Krausz F, Pronin O. 49-fs Yb:YAG thin-disk oscillator with distributed Kerr-lens mode-locking. In: CLEO/Europe-EQEC Scientific Programme; 21–25 June; Munich, Germany. 2015. p. 166 (PD-A.1 WED).
54. 54. Sorokin E, Kalashnikov VL, Naumov S, Teipel J, Warken F, Giessen H, Sorokina IT. Intra- and extra-cavity spectral broadening and continuum generation at 1.5 \mu m using compact low energy femtosecond Cr:YAG laser. Applied Phys B 2003;77(2–3):197–204.
55. 55. Joly NY, Nold J, Chang W, Hölzer P, Nazarkin A, Wong GKL, Biancalana F, Russell P St J. Bright spatially coherent wavelength-tunable deep-UV laser source using an Ar-filled photonic crystal fiber. Phys Rev Lett 2011;106:203901.
56. 56. Jackson SD. Towards high-power mid-infrared emission from a fiber laser. Nat Photon 2012;28:423–31. DOI: 10.1038/NPHOTON.2012.149.
57. 57. Acebrón JA, Bonilla LL, Pérez Vicente CJ, Ritort F, Spigler R. The Kuramoto model: a simple paradigm for synchronization phenomena. Rev Mod Phys 2005;77:137–85.
58. 58. Pikovsky A, Rosenblum M, Kurths J. Synchronization: a universal concept in nonlinear sciences. Cambridge: Cambridge University Press; 2001.
59. 59. Kuizenga DJ, Siegman AE. FM and AM mode locking of the homogeneous laser - Part I: Theory. IEEE J Quantum Electron 1970;6(11):694–708. DOI: 10.1109/JQE.1970.1076343.
60. 60. Haus HA. Mode-locking of lasers. IEEE J Sel Topic Quantum Electron 2000;6(6):1173–85.
61. 61. Haus HA. Short pulse generation. In: Duling IN, III (Ed.) Compact Sources of Ultrashort Pulses. Cambridge: Cambridge University Press; 1995. pp. 1–56.
62. 62. Kalashnikov VL. Mathematical Ultrashort-Pulse Laser Physics [Internet]. 15/09/2000 [Updated: 29/03/2002]. Available from:
63. 63. Kuizenga DI, Siegman AE. Modulator frequency detuning effects in the FM mode-locked laser. IEEE J Quantum Electron 1970;QE-6:803–8.
64. 64. Akhmanov SA, Vysloukh VA, Chirkin AS. Optics of Femtosecond Laser Pulses. New York: AIP; 1992.
65. 65. Haus HA, Silberberg Y. Laser mode locking with addition of nonlinear index. IEEE J Quantum Electron 1986;QE-22(2):325–31.
66. 66. Kalashnikov VL, Poloyko IG, Mikhaylov VP. Generation of ultrashort pulses in lasers with external frequency modulation. Quantum Electron 1998;28(3):264–8.
67. 67. Kolner BH. Space-time duality and the theory of temporal imaging. IEEE J Quantum Electron 1994;QE-30(8):1951–63.
68. 68. van Howe J, Xu Ch. Ultrafast optical signal processing based upon space-time dualities. IEEE J Lightwave Technology 2006;24(7):2649–62. DOI: 10.1109/JLT.2006.875229.
69. 69. Salem R, Foster MA, Gaeta AL. Application of space-time duality to ultrahigh-speed optical signal processing. Adv Optics Photon 2013;5:274–317. DOI: 10.1364/AOP.5.000274.
70. 70. Agrawal GP. Nonlinear Fiber Optics. Third Edition ed. San Diego: AP; 2001. 466 p.
71. 71. Newell AC. Solitons in Mathematics and Physics. Philadelphia: SIAM; 1985. 246 p.
72. 72. Lai Y, Haus HA. Quantum theory of solitons in optical fibers. I. Time-dependent Hartree approximation. Phys Rev A 1989;40:844–53.
73. 73. Lai Y, Haus HA. Quantum theory of solitons in optical fibers. II. Exact solution. Phys Rev A 1989;40:854–66.
74. 74. Yoon B, Negele JW. Time-dependent approximation for a one-dimensional system of bosons with attractive δ-function interactions. Phys Rev A 1977;16:1451.
75. 75. Haus HA. Theory of mode-locking with a fast saturable absorber. J Appl Phys 1975; 46:3049–58.
76. 76. Ablowitz MJ, Clarkson PA. Solitons, Nonlinear Evolution Equations and Inverse Scattering. Cambridge: Cambridge University Press; 1991. 516 p.
77. 77. Turitsyn SK, Bale BG, Fedoruk MP. Dispersion-managed solitons in fibre systems and lasers. Phys Rep 2012;521:135–203. DOI: 10.1016/j.physrep.2012.09.004.
78. 78. Menyuk CR. Nonlinear pulse propagation in birefringent optical fibers. IEEE J Quantum Electron 1987;23(2):174–6.
79. 79. Menyuk CR. Pulse propagation in an elliptically birefringent Kerr medium. IEEE J Quantum Electron 1989;25(12):2674–82.
80. 80. Hasegawa A. Optical Solitons in Fibers. Berlin: Springer-Verlag; 1990. 79 p.
81. 81. Ding E, Renninger WH, Wise FW, Grelu Ph, Shlizerman E, Kutz JN. High-energy passive mode-locking of fiber lasers. Int J Optics 2012;2012(ID354156):1–17. DOI: 10.1155/2012/354156.
82. 82. Wang R, Dai Y, Yan L, Wu J, Xu K, Li Y, Lin J. Dissipative soliton in actively mode-locked fiber laser. Optics Express 2012;20(6):6406–11.
83. 83. Koliada NA, Nyushkov BN, Ivanenko AV, Kobtsev SM, Harper P, Turitsyn SK, Denisov VI, Pivtsov VS. Generation of dissipative solitons in an actively mode-locked ultralong fibre laser. Quantum Electron 2013;43(2):95–8. DOI: 10.1070/QE2013v043n02ABEH015041.
84. 84. Wang R, Dai Y, Yin F, Xu K, Li J, Lin J. Linear dissipative soliton in an anomalous-dispersion fiber laser. Optics Express 2014;22(24):29314–20. DOI: 10.1364/OE.22.029314.
85. 85. Haus HA, Fujimoto JG, Ippen EP. Analytic theory of additive pulse and Kerr lens mode locking. IEEE J. Quantum Electron 1992;28(10):2086–96.
86. 86. Fermann ME, Andrejco MJ, Silberberg Y, Stock ML. Passive mode locking by using nonlinear polarization evolution in a polarization-maintaining erbium-doped fiber. Optics Lett 1993;18(11):894–6. DOI: 10.1364/OL.18.000894.
87. 87. Hofer M, Ober MH, Haberl F, Fermann ME. Characterization of ultrashort pulse formation in passively mode-locked fiber lasers. IEEE J Quantum Electron 1992;28(3):720–8.
88. 88. Fermann ME. Nonlinear polarization evolution in passively modelocked fiber lasers. In: Duling IN, III (Ed.) Compact sources of ultrashort pulses. Cambridge: Cambridge University Press; 1995. pp. 179–207.
89. 89. Winful HG. Polarization instabilities in birefringent nonlinear media: application to fiber-optic devices. Optics Lett 1986;11(1):33–5.
90. 90. Kalashnikov VL, Kalosha VP, Mikhailov VP. Self-mode locking of continuous-wave solid-state lasers with a nonlinear Kerr polarization modulator. J Opt Soc Am B 1993;10:1443–6.
91. 91. Ding E, Kutz JN. Operating regimes, split-step modeling, and the Haus master mode-locking model. J Opt Soc Am B 2009;26(12):2290–300.
92. 92. Ding E, Shlizerman E, Kutz JN. Generalized master equation for high-energy passive mode-locking: the sinusoidal Ginzburg-Landau equation. IEEE J Quantum Electron 2011;47(5):705–14.
93. 93. Komarov A, Leblond H, Sanchez F. Multistability and hysteresis phenomena in passively mode-locked fiber lasers. Phys Rev A 2005;71(5):053809.
94. 94. Komarov A, Leblond H, Sanchez F. Quintic complex Ginzburg-Landau model for ring fiber lasers. Phys Rev E 2005;72(2):025604.
95. 95. Matsas VJ, Newson TP, Richardson DJ, Payne DN. Selfstarting passively mode-locked fibre ring soliton laser exploiting nonlinear polarization rotation. Electron Lett 1992;28(15):1391–3.
96. 96. Tamura K, Ippen EP, Haus HA, Nelson LE. 77-fs pulse generation from a stretched-pulse mode-locked all-fiber ring laser. Optics Lett 1993;18(13):1080–2.
97. 97. Chong A, Buckley J, Renninger W, Wise F. All-normal-dispersion femtosecond fiber laser. Optics Express 2006;14(21):10096–100.
98. 98. Zhao LM, Tang DY, Wu J. Gain-guided soliton in a positive group-dispersion fiber laser. Optics Lett 2006;31(12):1788–90.
99. 99. Cabasse A, Ortac B, Martel G, Hideur A, Limpert J. Dissipative solitons in a passively mode-locked Er-doped fiber with strong normal dispersion. Optics Express 2008;16(23):19323–9.
100. 100. Kieu K, Renninger WH, Chong A, Wise FW. Sub-100 fs pulses at watt-level powers from a dissipative-soliton fiber laser. Optics Lett 2009;34(5):593–5.
101. 101. Ruehl A, Wandt D, Morgner U, Kracht D. Normal dispersive ultrafast fiber oscillators. IEEE J Sel Topic Quantum Electron 2009;15(1):170–81.
102. 102. Wu X, Tang DY, Zhang H, Zhao LM. Dissipative soliton resonance in an all-normal-dispersion erbium-doped fiber laser. Optics Express 2009;17(7):5580–4.
103. 103. Renninger WH, Chong A, Wise F. Self-similar evolution in an all-normal-dispersion laser. Phys Rev A 2010;82:021805(R).
104. 104. Zhao L, Tang D, Wu X, Zhang H. Dissipative soliton generation in Yb-fiber laser with an invisible intracavity bandpass filter. Optics Lett 2010;35(16):2756–8.
105. 105. Chichkov NB, Hausmann K, Wandt D, Morgner U, Neumann J, Kracht D. High-power dissipative solitons from an all-normal dispersion erbium fiber oscillator. Optics Lett 2010;35(16):2807–9.
106. 106. Baumgartl M, Ortac B, Lecaplain C, Hideur A, Limpert J, Tünnermann A. Sub-80 fs dissipative soliton large-mode-area fiber laser. Optics Lett 2010;35(13):2311–3.
107. 107. Lecaplain C, Baumgartl M, Schreiber T, Hideur A. On the mode-locking mechanism of a dissipative-soliton fiber laser. Optics Express 2011;19(27):26742–51.
108. 108. Zhang Z, Dai G. All-normal-dispersion dissipative soliton Ytterbium fiber laser without dispersion compensation and additional filter. IEEE Photon J 2011;3(6):1023–9. DOI: 10.1109/JPHOT.2011.2170057.
109. 109. Kharenko DS, Podivilov EV, Apolonski AA, Babin SA. 20 nJ 200 fs all-fiber-highly chirped dissipative soliton oscillator. Optics Lett 2012;37(19):4104–6.
110. 110. Li X, Wang Y, Zhao W, Liu X, Wang Y, Tsang YH, Zhang W, Hu X, Yang Zh, Gao C, Li Ch, Shen D. All-fiber dissipative solitons evolution in a compact passively Yb-doped mode-locked fiber laser. J Lightwave Tech 2012;30(15):2502–7.
111. 111. Duan L, Liu X, Mao D, Wang L, Wang G. Experimental observation of dissipative soliton resonance in an anomalous-dispersion fiber laser. Optics Express 2012;20(1):265–70.
112. 112. Doran NJ, Wood D. Nonlinear-optical loop mirror. Optics Lett 1988;13(1):56–8.
113. 113. Ippen EP, Haus HA, Liu LY. Additive pulse modelocking. J Opt Soc Am B 1989;6:1736–45.
114. 114. Duling IN III, Dennis ML. Modelocking of all-fiber lasers. In: Duling IN III. (Ed.) Compact Sources of Ultrashort Pulses. Cambridge: Cambridge University Press; 1995. pp. 140–178.
115. 115. Mark J, Liu LY, Hall KL, Haus HA, Ippen EP. Femtosecond pulse generation in a laser with a nonlinear external resonator. Optics Lett 1989;14(1):48–50.
116. 116. Kalashnikov VL, Kalosha VP, Mikhailov VP, Poloyko IG. Multi-frequency continuous wave solid-state laser. Optics Commun 1995;116(4–6):383–8.
117. 117. Kalashnikov VL, Kalosha VP, Mikhailov VP, Poloyko IG, Demchuk MI. Efficient self-mode locking of continuous-wave solid-state lasers with resonant nonlinearity in an additional cavity. Optics Commun 1994;109:119–25.
118. 118. Kalashnikov VL, Kalosha VP, Mikhailov VP, Poloyko IG, Demchuk MI. Self-mode-locking of cw solid-state lasers with a nonlinear antiresonant ring. Quantum Electron 1994;24(1):35–9.
119. 119. Richardson DJ, Laming RI, Payne DN, Matsas V, Phillips MW. Selfstarting, passively modelocked Erbium fibre laser based on amplifying Sagnac switch. Electronics Lett 1991;27(6):542–3.
120. 120. Nicholson JW, Andrejco M. A polarization maintaining, dispersion managed, femtosecond figure-eight fiber laser. Optics Express 2006;14(18):8160–7.
121. 121. Yun L, Liu X, Mao D. Observation of dual-wavelength dissipative solitons in a figure-eight erbium-doped fiber laser. Optics Express 2012;20(19):20992–7.
122. 122. Wang S-K, Ning Q-Y, Luo A-P, Lin A-B, Luo Z-C, Xu W-C. Dissipative soliton resonance in a passively mode-locked figure-eight fiber laser. Optics Express 2013;21(2):2402–7.
123. 123. Zhao LM, Bartnik AC, Tai QQ, Wise FW. Generation of 8 nJ pulses from a dissipative-soliton fiber laser with a nonlinear optical loop mirror. Optics Lett 2013;38(11):1942–4.
124. 124. Lin H, Guo Ch, Ruan Sh, Yang J. Dissipative soliton resonance in an all-normal-dispersion Yb-doped figure-eight fibre laser with tunable output. Laser Phys Lett 2014;11:085102.
125. 125. Xu Y, Song Y, Du G, Yan P, Guo Ch, Zheng G, Ruan Sh. Dissipative soliton resonance in a wavelength-tunable Thulium-doped fiber laser with net-normal dispersion. IEEE Photonics J 2015;7(3):1502007. DOI: 10.1109/JPHOT.2015.2424855.
126. 126. Keller U. Recent developments in compact ultrafast lasers. Nature 2003;424(14):831–8.
127. 127. Keller U. Semiconductor Nonlinearities for Solid-State Laser Modelocking and Q-switching. In: Garmire E, Kost A. (Eds.) Nonlinear Optics in Semiconductors II (Semiconductors and Semimetals, Vol. 59). San Diego: AP; 1999. pp. 211–86.
128. 128. Keller U, Weingarten KJ, Kaertner FX, Kopf D, Braun B, Jung ID, Fluck R, Hoenninger C, Matuschek N, Aus der Au J. Semiconductor saturable absorber mirrors (SESAM's) for femtosecond to nanosecond pulse generation in solid-state lasers. IEEE J Sel Top Quantum Electron 1996;2(3):435–53.
129. 129. Kaertner F. Lecture Notes: Introduction to Ultrafast Optics, Chapter 8 [Internet]. 2005. Available from:
130. 130. Chong A, Renninger WH, Wise FW. Environmentally stable all-normal-dispersion femtosecond fiber laser. Optics Lett 2008;33(10):1071–3. DOI: 10.1364/OL.33.001071.
131. 131. Cabasse A, Martel G, Oudar JL. High power dissipative soliton in an Erbium-doped fiber laser mode-locked with a high modulation depth saturable absorber mirror. Optics Express 2009;17(12):9537–42.
132. 132. Tang M, Wang H, Becheker R, Oudar J-L, Gaponov D, Godin T, Hideur A. High-energy dissipative solitons generation from a large normal dispersion Er-fiber laser. Optics Lett 2015;40(7):1414–7.
133. 133. Gumenyuk R, Vartianen I, Tuovinen H, Okhotnikov OG. Dissipative dispersion-managed soliton 2 μm thulium/holmium fiber laser. Optics Lett 2011;36(5):609–11.
134. 134. Lecourt J-B, Duterte C, Narbonneau F, Kinet D, Hernandez Y, Giannone D. All-normal dispersion, all-fibered PM laser mode-locked by SESAM. Optics Express 2012;20(11):11918–23. DOI: 10.1364/OE.20.011918.
135. 135. Jiang K, Ouyang C, Wu K, Wong JH. High-energy dissipative soliton with MHz repetition rate from an all-fiber passively mode-locked laser. Optics Commun 2012;285(9):2422–5. DOI: 10.1016/j.optcom.2012.01.033.
136. 136. Poloyko IG, Kalashnikov VL. Semiconductor saturable absorber mirrors as mode-locking device for femtosecond lasers: nonlinear Fabry-Perot resonator approach. Optics Commun 1999;168:167–75.
137. 137. Sun Z, Hasan T, Ferrari AC. Ultrafast lasers mode-locked by nanotubes and graphene. Physica E 2012;44:1082–91. DOI: 10.1016/j.physe.2012.01.012.
138. 138. Zhang H, Tang D, Knize RJ, Zhao L, Bao Q, Loh KP. Graphene mode locked, wavelength tunable, dissipative soliton fiber laser. Appl Phys Lett 2010;96:111112.
139. 139. Zhao LM, Tang DY, Zhang H, Wu X, Bao Q, Loh KP. Dissipative soliton operation of an Ytterbium-doped fiber laser mode locked with atomic multilayer graphene. Optics Lett 2010;35(21):3622–4.
140. 140. Cui YD, Liu XM, Zeng C. Conventional and dissipative solitons in a CFBG-based fiber laser mode-locked with a graphene-nanotube mixture. Laser Phys Lett 2014;11:055106.
141. 141. Cheng Zh, Li H, Shi H, Ren J, Yang Q-H, Wang P. Dissipative soliton resonance and reverse saturable absorption in graphene oxide mode-locked all-normal-dispersion Yb-doped fiber laser. Optics Express 2015;23(6):7000–6. DOI: 10.1364/OE.23.007000.
142. 142. Liu X, Cui Y, Han D, Yao X, Sun Zh. Distributed ultrafast laser. Sci Rep 2014;5:9101. DOI: 10.1038/srep9101.
143. 143. Im JH, Choi SY, Rotermund F, Yeom D-I. All-fiber Er-doped dissipative soliton laser based on evanescent field interaction with carbon nanotube saturable absorber. Optics Express 2010;18(21):22141–6.
144. 144. Du J, Wang Q, Jiang G, Xu Ch, Zhao Ch, Xiang Y, Chen Y, Wen Sh, Zhang H. Ytterbium-doped fiber passively mode locked by few-layer molybdenum disulfide (MoS2) saturable absorber functioned with enhanced field interaction. Sci Rep 2014;4:6346. DOI: 10.1038/srep06346.
145. 145. Paschotta R, Keller U. Passive mode locking with slow saturable absorbers. Appl Phys B 2001;73:653–62. DOI: 10.1007/s003400100726.
146. 146. Haus HA, Silberberg Y. Theory of mode locking of a laser diode with a multiple-quantum-well structure. J Opt Soc Am B 1985;2(7):1237–43.
147. 147. Kaertner F. Lecture Notes: Introduction to Ultrafast Optics. Chapter 7 [Internet]. 2005. Available from:
148. 148. Lam C-K, Malomed BA, Chow KW, Wai PKA. Spatial solitons supported by localized gain in nonlinear optical waveguides. Eur Phys J Special Topics 2009;173:233–43. DOI: 10.1140/epjst/e2009-01076-8.
149. 149. Sakaguchi H, Malomed BA. Stable two-dimensional solitons supported by radially inhomogeneous self-focusing nonlinearity. Optics Lett 2012;37(6):1035–7. DOI: 10.1364/OL.37.001035.
150. 150. Borovkova OV, Kartashov YV, Vysloukh VA, Lobanov VE, Malomed BA, Torner L. Solitons supported by spatially inhomogeneous nonlinear losses. Optics Express 2012;20(3):2657–67. DOI: 10.1364/OE.20.002657.
151. 151. Kartashov YV, Konotop VV, Vysloukh VA. Two-dimensional dissipative solitons supported by localized gain. Optics Lett 2011;36(1):82–4. DOI: 10.1364/OL.36.000082.
152. 152. Kalosha VP, Chen L, Bao X. Feasibility of Kerr-lens mode locking in fiber lasers. In: Vallée R, Piché M, Mascher P, Cheben P, Côté D, LaRochelle S, Schriemer HP, Albert J, Ozaki T. (Eds.) Proc SPIE 7099, Photonics North 2008; 2–4 June 2008; Montréal, Canada. Bellingham: SPIE; 2008. p. 70990S. DOI: 10.1117/12.807415.
153. 153. Kalashnikov V, Apolonski A. Simulation of a Kerr fiber laser. In: Proc. Meeting on Russian Fiber Lasers; 27–30 March; Novosibirsk, Russia. 2012. pp. 113–114.
154. 154. Shaw JK. Mathematical Principles of Optical Fiber communications. Philadelphia: SIAM; 2004. p. 93.
155. 155. Kaup DJ. Exact quantization of the nonlinear Schroedinger equation. J Math Phys 1975;16:2036–41.
156. 156. Thacker HB, Wilkinson D. Inverse scattering transform as an operator method in quantum field theory. Phys Rev D 1979;19:3660–5.
157. 157. Akhmediev NN, Ankiewicz A. Solitons around us: integrable, Hamiltonian and dissipative systems. In: Porsezian K, Kuriakose VC. (Eds.) Optical Solitons: Theoretical and Experimental Challenges. Berlin: Springer-Verlag; 2002. pp. 105–126.
158. 158. Kuszelewicz R, Barbay S, Tissoni G, Almuneau G. Editorial on dissipative optical solitons. Eur Phys JD 2010;59:1–2. DOI: 10.1140/epjd/e2010-00167-7.
159. 159. Martinez OE, Fork RL, Gordon JP. Theory of passively mode-locked lasers for the case of a nonlinear complex-propagation coefficient. J Opt Soc Am B 1985;2(5):753–60. DOI: 10.1364/JOSAB.2.000753.
160. 160. Haus HA, Fujimoto JG, Ippen EP. Structures for additive pulse mode locking. J Opt Soc Am B 1991;8(10):2068–76.
161. 161. Kalashnikov VL, Sorokin E, Sorokina IT. Multipulse operation and limits of the Kerr-lens mode locking stability. IEEE J Quantum Electron 2003;39(2):323–36.
162. 162. Proctor B, Westwig E, Wise F. Characterization of a Kerr-lens mode-locked Ti:sapphire laser with positive group-velocity dispersion. Optics Lett 1993;18(19):1654–6. DOI: 10.1364/OL.18.001654.
163. 163. Chen S, Liu Y, Mysyrowicz A. Unusual stability of one-parameter family of dissipative solitons due to spectral filtering and nonlinearity saturation. Phys Rev A 2010;81:061806(R).
164. 164. Kalashnikov VL. Chirped-pulse oscillators: route to the energy-scalable femtosecond pulses. In: Al-Khursan AH. (Ed.) Solid State Laser. InTech; 2012. pp. 145–184. DOI: 10.5772/37415.
165. 165. Kalashnikov VL, Apolonski A. Chirped-pulse oscillators: a unified standpoint. Phys Rev A 2009;79:043829.
166. 166. Akhmediev N, Soto-Crespo JM, Grelu Ph. Roadmap to ultra-short record high-energy pulses out of laser oscillators. Phys Lett A 2008;372:3124–8. DOI: 10.1016/j.physleta.2008.01.027.
167. 167. Kalashnikov VL. Chirped dissipative solitons. In: Babichev LF, Kuvshinov VI. (Eds.) Nonlinear Dynamics and Applications. Minsk: 2010. pp. 58–67.
168. 168. Brons J, Pervak V, Fedulova E, Bauer D, Sutter D, Kalashnikov V, Apolonskiy A, Pronin O, Krausz F. Energy scaling of Kerr-lens mode-locked thin-disk oscillators. Optics Lett 2014;39(22):6442–5. DOI: 10.1364/OL.39.006442.
169. 169. Hu M-L, Wang Ch-L, Tian Zh, Xing Q-R, Chai L, Wang Ch-Y. Environmentally stable, high pulse energy Yb-doped large-mode-area photonic crystal fiber laser operating in the soliton-like regime. IEEE Photon Technol Lett 2008;20(13):1088–90. DOI: 10.1109/LPT.2008.924300.
170. 170. Ramachandran S, Fini JM, Mermelstein M, Nicholson JW, Ghalmi S, Yan MF. Ultra-large effective-area, higher-order mode fibers: a new strategy for high-power lasers. Laser Photon Rev 2008;2:429–48. DOI: 10.1002/lpor.200810016.
171. 171. Vukovic N, Healy N, Peacock AC. Guiding properties of large mode area silicon microstructured fibers: a route to effective single mode operation. J Opt Soc Am B 2011;28(6):1529–33. DOI: 10.1364/JOSAB.28.001529.
172. 172. Jansen F, Stutzki F, Otto H-J, Eidam T, Liem A, Jauregui C, Limpert J, Tünnermann A. Thermally induced waveguide changes in active fibers. Optics Express 2012;20(4):3997–4008. DOI: 10.1364/OE.20.003997.
173. 173. Kalashnikov VL, Sorokin E. Dissipative Raman soliton. Optics Express 2014;22(24):30118–26. DOI: 10.1364/OE.22.030118.
174. 174. Chernykh AI, Turitsyn SK. Soliton and collapse regimes of pulse generation in passively mode-locking laser systems. Optics Lett 1995;20(4):398–400. DOI: 10.1364/OL.20.000398.
175. 175. Chang W, Ankiewicz A, Soto-Crespo JM, Akhmediev N. Dissipative soliton resonances. Phys Rev A 2008;78:023830. DOI: 10.1103/PhysRevA.78.023830.
176. 176. Chang W, Ankiewicz A, Soto-Crespo JM, Akhmediev N. Dissipative soliton resonances in laser models with parameter management. J Opt Soc Am B 2008;25(12):1972–7.
177. 177. Kalashnikov VL, Apolonski A. Energy scalability of mode-locked oscillators: a completely analytical approach to analysis. Optics Express 2010;18(25):25757–70.
178. 178. Grelu Ph, Chang W, Ankiewicz A, Soto-Crespo JM, Akhmediev N. Dissipative soliton resonance as a guideline for high-energy pulse laser oscillators. J Opt Soc Am B 2010;27(11):2336–41.
179. 179. Ding E, Grelu Ph, Kutz JN. Dissipative soliton resonance in a passively mode-locked fiber laser. Optics Lett 2011;36(7):1146–8.
180. 180. Kharenko DS, Shtyrina OV, Yarutkina IA, Podivilov EV, Fedoruk MP, Babin SA. Generation and scaling of highly-chirped dissipative solitons in an Yb-doped fiber laser. Laser Phys Lett 2012;9(9):662–8. DOI: 10.7452/lapl.201210060.
181. 181. Cheng Zh, Li H, Wang P. Simulation of generation of dissipative soliton, dissipative soliton resonance and noise-like pulse in Yb-doped mode-locked fiber lasers. Optics Express 2015;23(5):5972–81. DOI: 10.1364/OE.23.005972.
182. 182. Chang W, Soto-Crespo JM, Ankiewicz A, Akhmediev N. Dissipative soliton resonances in the anomalous dispersion regime. Phys Rev A 2009;79:033840. DOI: 10.1103/PhysRevA.79.033840.
183. 183. Luo Zh-Ch, Ning Q-Y, Mo H-L, Cui H, Liu J, Wu L-J, Luo A-P, Xu W-Ch. Vector dissipative soliton resonance in a fiber laser. Optics Express 2013;21(8):10199–204. DOI: 10.1364/OE.21.010199.
184. 184. Smirnov SV, Kobtsev SM, Kukarin SV, Turitsyn SK. Mode-locked fibre lasers with high-energy pulses. In: Jakubczak K. (Ed.) Laser Systems for Applications. InTech; 2011. pp. 39–58.
185. 185. Wise FW, Chong A, Renninger WH. High-energy femtosecond fiber lasers based on pulse propagation at normal dispersion. Laser Photon Rev 2008;2(1–2):58–73. DOI: 10.1002/lpor.200710041.
186. 186. Renninger WH, Chong A, Wise FW. Pulse shaping and evolution in normal-dispersion mode-locked fiber lasers. IEEE J Sel Top Quantum Electron 2012;18(1):389–98. DOI: 10.1109/JSTQE.2011.2157462.
187. 187. Nelson LE, Fleischer SB, Lenz G, Ippen EP. Efficient frequency doubling of a femtosecond fiber laser. Optics Lett 1996;21(21):1759–61. DOI: 10.1364/OL.21.001759.
188. 188. Oktem B, Ülgüdür C, Ilday FÖ. Soliton–similariton fibre laser. Nat Photon 2010;4:307–11. DOI: 10.1038/nphoton.2010.33.
189. 189. Renninger WH, Chong A, Wise FW. Dissipative solitons in normal-dispersion fiber lasers. Phys Rev A 2008;77:023814. DOI: 10.1103/PhysRevA.77.023814.
190. 190. Chong A, Renninger WH, Wise FW. Properties of normal-dispersion femtosecond fiber lasers. J Opt Soc Am B 2008;25(2):140–8.
191. 191. Schultz M, Karow H, Prochnow O, Wandt D, Morgner U, Kracht D. Optics Express 2008;16(24):19562–7. DOI: 10.1364/OE.16.019562.
192. 192. Im JH, Choi SY, Rotermund F, Yeom D-I. All-fiber Er-doped dissipative soliton laser based on evanescent field interaction with carbon nanotube saturable absorber. Optics Express 2010;18(21):22141–6. DOI: 10.1364/OE.18.022141.
193. 193. Kieu K, Wise FW. All-fiber normal-dispersion femtosecond laser. Optics Express 2008;16(15):11453–8. DOI: 10.1364/OE.16.011453.
194. 194. Yang H, Wang A, Zhang Zh. Efficient femtosecond pulse generation in an all-normal-dispersion Yb:fiber ring laser at 605 MHz repetition rate. Optics Lett 2012;37(5):954–6. DOI: 10.1364/OL.37.000954.
195. 195. Chichkov NB, Hausmann K, Wandt D, Morgner U, Neumann J, Kracht D. 50 fs pulses from an all-normal dispersion erbium fiber oscillator. Optics Lett 2010;35(18):3081–3. DOI: 10.1364/OL.35.003081.
196. 196. Ruehl A, Kuhn V, Wandt D, Kracht D. Normal dispersion erbium-doped fiber laser with pulse energies above 10 nJ. Optics Express 2008;16(5):3130–5. DOI: 10.1364/OE.16.003130.
197. 197. Lhermite J, Machinet G, Lecaplain C, Boullet J, Traynor N, Hideur A, Cormier E. High-energy femtosecond fiber laser at 976 nm. Optics Lett 2010;35(20):3459–61. DOI: 10.1364/OL.35.003459.
198. 198. Buckley J, Chong A, Zhou Sh, Renninger W, Wise FW. Stabilization of high-energy femtosecond ytterbium fiber lasers by use of a frequency filter. J Opt Soc Am B 2007;24(8):1803–6. DOI: 10.1364/JOSAB.24.001803.
199. 199. Buckley JR, Wise FW, Ilday FÖ, Sosnowski T. Femtosecond fiber lasers with pulse energies above 10 nJ. Optics Lett 2005;30(14):1888–90. DOI: 10.1364/OL.30.001888.
200. 200. Renninger WH, Chong A, Wise FW. Giant-chirp oscillators for short-pulse fiber amplifiers. Optics Lett 2008;33(24):3025–7. DOI: 10.1364/OL.33.003025.
201. 201. Chong A, Renninger WH, Wise FW. All-normal-dispersion femtosecond fiber laser with pulse energy above 20 nJ. Optics Lett 2007;32(16):2408–10. DOI: 10.1364/OL.32.002408.
202. 202. Ortaç B, Lecaplain C, Hideur A, Schreiber T, Limpert J, Tünnermann A. Passively mode-locked single-polarization microstructure fiber laser. Optics Express 2008;16(3):2122–8. DOI: 10.1364/OE.16.002122.
203. 203. Lefrancois S, Sosnowski ThS, Liu Ch-H, Galvanauskas A, Wise FW. Energy scaling of mode-locked fiber lasers with chirally-coupled core fiber. Optics Express 2011;19(4):3464–70. DOI: 10.1364/OE.19.003464.
204. 204. Lecaplain C, Ortaç B, Hideur A. High-energy femtosecond pulses from a dissipative soliton fiber laser. Optics Lett 2009;34(23):3731–3. DOI: 10.1364/OL.34.003731.
205. 205. Lecaplain C, Chédot C, Hideur A, Ortaç B, Limpert J. High-power all-normal-dispersion femtosecond pulse generation from a Yb-doped large-mode-area microstructure fiber laser. Optics Lett 2007;32(18):2738–40. DOI: 10.1364/OL.32.002738.
206. 206. Ortaç B, Schmidt O, Schreiber T, Limpert J, Tünnermann A, Hideur A. High-energy femtosecond Yb-doped dispersion compensation free fiber laser. Optics Express 2007;15(17):10725–32. DOI: 10.1364/OE.15.010725.
207. 207. Lhermite J, Lecaplain C, Machinet G, Royon R, Hideur A, Cormier E. Mode-locked 0.5 μJ fiber laser at 976 nm. Optics Lett 2011;36(19):3819–21. DOI: 10.1364/OL.36.003819.
208. 208. Lecaplain C, Ortaç B, Machinet G, Boullet J, Baumgartl M, Schreiber T, Cormier E, Hideur A. High-energy femtosecond photonic crystal fiber laser. Optics Lett 2010;35(19):3156–8. DOI: 10.1364/OL.35.003156.
209. 209. Ortaç B, Baumgartl M, Limpert J, Tünnermann A. Approaching microjoule-level pulse energy with mode-locked femtosecond fiber lasers. Optics Lett 2009;34(10):1585–7. DOI: 10.1364/OL.34.001585.
210. 210. Ankiewicz A, Devine N, Akhmediev N, Soto-Crespo JM. Dissipative solitons and antisolitons. Phys Lett A 2007;370:454–8. DOI: 10.1016/j.physleta.2007.06.001.
211. 211. van Saarloos W, Hohenberg PC. Fronts, pulses, sources and sinks in generalized complex Ginzburg-Landau equations. Physica D 1992;56:303–67.
212. 212. Soto-Crespo JM, Akhmediev NN, Afanasjev VV, Wabnitz S. Pulse solutions of the cubic-quintic complex Ginzburg-Landau equation in the case of normal dispersion. Phys Rev E 1997;55(4):4783–96.
213. 213. Conte R. (Ed.) The Painlevé Property: One Century Later. New York: Springer-Verlag; 1999. 810 p.
214. 214. Greco AM. (Ed.) Direct and Inverse Methods in Nonlinear Evolution Equations. Berlin: Springer; 2003. 282 p.
215. 215. Kivshar YS, Malomed BA. Dynamics of solitons in nearly integrable systems. Rev Mod Phys 1989;61(4):763–915.
216. 216. Malomed BA, Nepomnyashchy AA. Kinks and solitons in the generalized Ginzburg-Landau equation. Phys Rev A 1990;42:6009.
217. 217. Kalashnikov VL. Chirped dissipative solitons of the complex cubic-quintic nonlinear Ginzburg-Landau equation. Phys Rev E 2009;80:046606.
218. 218. Podivilov E, Kalashnikov VL. Heavily-chirped solitary pulses in the normal dispersion region: new solutions of the cubic-quintic complex Ginzburg-Landau equation. JETP Lett 2005;82(8):467–71.
219. 219. Kalashnikov VL, Podivilov E, Chernykh A, Apolonski A. Chirped-pulse oscillators: theory and experiment. Appl Phys B 2006;83(4):503–10.
220. 220. Ablowitz MJ, Horikis ThP. Solitons in normally dispersive mode-locked lasers. Phys Rev A 2009;79:063845.
221. 221. Kharenko DS, Shtyrina OV, Yarutkina IA, Podivilov EV, Fedoruk MP, Babin SA. Highly chirped dissipative solitons as a one-parameter family of stable solutions of the cubic-quintic Ginzburg-Landau equation. J Opt Soc Am B 2011;28(10):2314–9.
222. 222. Malomed BA. Variational methods in nonlinear fiber optics and related fields. In: Wolf E. (Ed.) Progress in Optics, Vol. 43. North-Holland: Elsevier; 2002. pp. 71–193.
223. 223. Ankiewicz A, Akhmediev N, Devine N. Dissipative solitons with a Lagrangian approach. Optical Fiber Technol 2007;13(2):91–7.
224. 224. Bale BG, Boscolo S, Kutz JN, Turitsyn SK. Intracavity dynamics in high-power mode-locked fiber lasers. Phys Rev A 2010;81:033828.
225. 225. Bale BG, Kutz JN. Variational method for mode-locked lasers. J Opt Soc Am B 2008;25(7):1193–202. DOI: 10.1364/JOSAB.25.001193.
226. 226. Tsoy EN, Ankiewicz A, Akhmediev N. Dynamical models for dissipative localized waves of the complex Ginzburg-Landau equation. Phys Rev E 2006;73:036621.
227. 227. Bale BG, Kutz JN, Chong A, Renninger WH, Wise FW. Spectral filtering for high-energy mode-locking in normal dispersion fiber lasers. J Opt Soc Am B 2008;25(10):1763–70. DOI: 10.1364/JOSAB.25.001763.
228. 228. Bale BG, Boscolo S, Turitsyn SK. Dissipative dispersion-managed solitons in mode-locked lasers. Optics Lett 2009;34(21):3286–8. DOI: 10.1364/OL.34.003286.
229. 229. Ding E, Kutz JN. Stability analysis of the mode-locking dynamics in a laser cavity with a passive polarizer. J Opt Soc Am B 2009;26(7):1400–11. DOI: 10.1364/JOSAB.26.001400.
230. 230. Kalashnikov VL. Dissipative soliton energy scaling. Phys. Rev. A. Forthcoming.
231. 231. Liu X. Pulse evolution without wave breaking in a strongly dissipative dispersive laser system. Phys Rev A 2010;81(5):053819.
232. 232. Shen X, Li W, Zeng H. Polarized dissipative solitons in all-polarization-maintained fiber laser with long-term stable self-started mode-locking. Appl Phys Lett 2014;105:101109.
233. 233. Chen Sh, Liu Y, Mysyrowicz A. Unusual stability of one-parameter family of dissipative solitons due to spectral filtering and nonlinearity saturation. Phys Rev Lett 1997;79:4047.
234. 234. Farnum ED, Kutz JN. Multifrequency mode-locked lasers. J Opt Soc Am B 2008;25(6):1002–10.
235. 235. Zhang H, Tang DY, Wu X, Zhao LM. Multi-wavelength dissipative soliton operation of an erbium-doped fiber laser. Optics Express 2009;17(15):12692–7.
237. 237. Zhang ZX, Xu Z, Zhang L. Tunable and switching dual-wavelength dissipative soliton generation in an all-normal-dispersion Yb-doped fiber laser with birefringence fiber filter. Optics Express 2012;20(24):26736–42.
238. 238. Xu ZW, Zhang ZX. All-normal-dispersion multi-wavelength dissipative soliton Yb-doped fiber laser. Laser Phys Lett 2013;10:085105.
239. 239. Huang S, Wang Y, Yan P, Zhao J, Li H, Lin R. Tunable and switchable multi-wavelength dissipative soliton generation in a graphene oxide mode-locked Yb-doped fiber laser. Optics Express 2014;22(10):11417–26.
240. 240. Mao D, Liu X, Han D, Lu H. Compact all-fiber laser delivering conventional and dissipative solitons. Optics Lett 2013;38(16):3190–3.
241. 241. Kalashnikov VL, Chernykh A. Spectral anomalies and stability of chirped-pulse oscillators. Phys Rev A 2007;75:033820.
242. 242. Kalashnikov VL. Dissipative solitons: perturbations and chaos formation. In: Skiadas CH, Dimotikalis I, Skiadas C. (Eds.) Chaos Theory: Modeling, Simulation and Applications. Singapore: Worlds Scientific Publishing; 2011. pp. 199–206.
243. 243. Kalashnikov VL. Dissipative solitons in presence of quantum noise. Chaotic Model Simulat 2014;(1):29–37.
244. 244. Soto-Crespo JM, Akhmediev N, Ankiewicz A. Pulsating, creeping, and erupting solitons in dissipative systems. Phys Rev Lett 2000;85(14):2937–40.
245. 245. Soto-Crespo JM, Akhmediev N. Exploding soliton and front solutions of the complex cubic–quintic Ginzburg–Landau equation. Math Computers Simulat 2005;69(5–6):526–36. DOI: 10.1016/j.matcom.2005.03.006.
246. 246. Cundiff ST, Soto-Crespo JM, Akhmediev N. Experimental evidence for soliton explosions. Phys Rev Lett 2002;88(7):073903.
247. 247. Cartes C, Descalzi O, Brand HR. Exploding dissipative solitons in the cubic-quintic complex Ginzburg-Landau equation in one and two spatial dimensions. Eur Phys J Special Topics 2014;223:2145–59. DOI: 10.1140/epjst/e2014-02255-2.
248. 248. Cartes C, Descalzi O, Brand HR. Noise can induce explosions for dissipative solitons. Phys Rev E 2012;85:015205(R). DOI: 10.1103/PhysRevE.85.015205.
249. 249. Arecchi FT, Bortolozzo U, Montina A, Residori S. Granularity and inhomogeneity are the joint generators of optical rogue waves. Phys Rev Lett 2011;106:153901.
250. 250. Onorato M, Residori S, Bortolozzo U, Montina A, Arecchi FT. Rogue waves and their generating mechanisms in different physical contexts. Phys Rep 2013;528:47–89.
251. 251. Solli DR, Ropers C, Koonath P, Jalali B. Optical rogue waves. Nature 2007;450:1054–7.
252. 252. Soto-Crespo JM, Grelu Ph, Akhmediev N. Dissipative rogue waves: Extreme pulses generated by passively mode-locked lasers. Phys Rev E 2011;84(1):016604.
253. 253. Zavyalov A, Egorov O, Iliew R, Lederer F. Rogue waves in mode-locked fiber lasers. Phys Rev A 2012;85:013828.
254. 254. Komarov A, Sanchez F. Structural dissipative solitons in passive mode-locked fiber lasers. Phys Rev E 2008;77:066201.
255. 255. Zakharov V, Dias F, Pushkarev A. One-dimensional wave turbulence. Phys Rep 2004;398:1–65.
256. 256. Yun L, Han D. Bound state of dissipative solitons in a nanotube-mode-locked fiber laser. Optics Commun 2014;313:70–3.
257. 257. Amrani F, Haboucha A, Salhi M, Leblond H, Komarov A, Sanchez F. Dissipative solitons compounds in a fiber laser. Analogy with the states of the matter. Appl Phys B 2010;99:107–14. DOI: 10.1007/s00340-009-3774-7.
258. 258. Kalashnikov VL. Dissipative solitons: structural chaos and chaos of destruction. Chaotic Model Simulat 2011;(1):51–9.
259. 259. Zhang L, Pan Zh, Zhuo Zh, Wang Y. Three multiple-pulse operation states of an all-normal-dispersion dissipative soliton fiber laser. Int J Optics 2014;(169379). DOI: 10.1155/2014/169379.
260. 260. Kalashnikov VL, Sorokin E. Dissipative Raman solitons. Optics Express 2014;22(24):30118–26. DOI: 10.1364/OE.22.030118.
261. 261. Kalashnikov VL. Chaotic dissipative Raman solitons. Chaotic Model Simulat 2014;(4):403–10.
262. 262. Babin SA, Podivilov EV, Kharenko DS, Bednyakova AE, Fedoruk MP, Kalashnikov VL, Apolonski A. Multicolour nonlinearly bound chirped dissipative solitons. Nature Commun 2014;5:4653. DOI: 10.1038/ncomms5653.
263. 263. Bednyakova AE, Babin SA, Kharenko DS, Podivilov EV, Fedoruk MP, Kalashnikov VL, Apolonski A. Evolution of dissipative solitons in a fiber laser oscillator in the presence of strong Raman scattering. Optics Express 2013;21(18):20556–64. DOI: 10.1364/OE.21.020556.
264. 264. Kharenko DS, Bednyakova AE, Podivilov EV, Fedoruk MP, Apolonski A, Babin SA. Feedback-controlled Raman dissipative solitons in a fiber laser. Optics Express 2015;23(2):1857–62. DOI: 10.1364/OE.23.001857.
265. 265. Haus JW, Shaulov G, Kuzin EA, Sanchez-Mondragon J. Vector soliton fiber lasers. Optics Lett 1999;24(6):376–8.
266. 266. Barad Y, Silberberg Y. Polarization evolution and polarization instability of solitons in a birefringent optical fiber. Phys Rev Lett 1997;78(17):3290–3.
267. 267. Lei T, Tu Ch, Lu F, Deng Y, Li E. Numerical study on self-similar pulses in mode-locking fiber laser by coupled Ginzburg-Landau equation model. Optics Express 2009;17(2):585–91.
268. 268. Cundiff ST, Collings BC, Akhmediev NN, Soto-Crespo JM, Bergman K, Knox WH. Observation of polarization-locked vector solitons in an optical fiber. Phys Rev Lett 1999;82(20):3988–91.
269. 269. Akhmediev N, Buryak A, Soto-Crespo JM. Elliptically polarised solitons in birefringent optical fibers. Optics Commun 1994;112(5–6):278–82. DOI: 10.1016/0030-4018(94)90631-9.
270. 270. Wu J, Tang DY, Zhao LM, Chan CC. Soliton polarization dynamics in fiber lasers passively mode-locked by the nonlinear polarization rotation technique. Phys Rev E 2006;74:046605. DOI: 10.1103/PhysRevE.74.046605.
271. 271. Zhang H, Tang DY, Zhao LM, Wu X, Tam HY. Dissipative vector solitons in a dispersion-managed cavity fiber laser with net positive cavity dispersion. Optics Express 2009;17(2):455–60.
272. 272. Kong L, Xiao X, Yang Ch. Polarization dynamics in dissipative soliton fiber lasers mode-locked by nonlinear polarization rotation. Optics Express 2011;19(19):18339–44.
273. 273. Mesentsev VK, Turitsyn SK. Stability of vector solitons in optical fibers. Optics Lett 1992;17(21):1497–9.
274. 274. Zhang H, Tang D, Zhao L, Bao Q, Loh KP. Vector dissipative solitons in graphene mode locked fiber lasers. Optics Commun 2010;283:3334–8. DOI: 10.1016/j.optcom.2010.04.064.
275. 275. Luo Zh-Ch, Ning Q-Y, Mo H-L, Cui H, Liu L, Wu L-J, Luo A-P, Xu W-Ch. Vector dissipative soliton resonance in a fiber laser. Optics Express 2013;21(8):10199–204. DOI: 10.1364/OE.21.010199.
276. 276. Tsatourian V, Sergeyev SV, Mou Ch, Rozhin A, Mikhailov V, Rabin B, Westbrook PS, Turitsyn SK. Polarization dynamics of vector soliton molecules in mode locked fibre laser. Sci Rep 2013;3:3154. DOI: 10.1038/srep03154.
277. 277. Sergeyev SV. Fast and slowly evolving vector solitons in mode-locked fibre lasers. Philos Transac Royal Soc A 2014;372:20140006. DOI: 10.1098/rsta.2014.0006.
278. 278. Sergeyev SV, Mou Ch, Turitsyna EG, Rozhin A, Turitsyn SK, Blow K. Spiral attractor created by vector solitons. Light Sci Applic 2014;3:e131. DOI: 10.1038/lsa.2014.12.
279. 279. Lin Q, Agrawal GP. Vector theory of stimulated Raman scattering and its application to fiber-based Raman amplifiers. J Opt Soc Am B 2003;20(8):1616–31.
• The energy-non-scalable branch has two distinguishing characteristics: it turns into a solution of Eq. (9) with ζ, χ → 0 (the ‘Schrödinger limit’ [218]), and it is unstable in the absence of dynamic gain saturation, i.e. if σ is not energy-dependent [221].
Written by Vladimir L. Kalashnikov and Sergey V. Sergeyev
|
a4e93fbee0a3b02a | Israeli brain drain?
University of Southern California professor Arieh Warshel is pictured at an Oct. 9 press conference for his Nobel Prize in chemistry. (USC photo/Gus Ruelas)
What is behind Israel’s recent string of Nobel Prize winners?
It could be that Israelis have a practical way of thinking and strong strategies for solving difficult problems, says Arieh Warshel, who earlier this month was awarded the Nobel Prize in chemistry for his role in developing computer programs that simulate “large and complex chemical systems and reactions.”
An Israeli-American professor at the University of Southern California in Los Angeles, Warshel focused on enzymatic reactions within the all-Jewish team of three researchers sharing the prize. His fellow winners are Michael Levitt, a professor at the Stanford University School of Medicine who holds Israeli, British and American citizenship, and Martin Karplus, a professor at Harvard University and the University of Strasbourg who holds American and Austrian citizenship.
Warshel and Levitt join a long list of recent Israeli Nobel laureates, particularly in chemistry. Dan Shechtman of the Technion-Israel Institute of Technology won the chemistry prize in 2011, and Ada E. Yonath of the Weizmann Institute of Science won in 2009.
In an exclusive interview, Warshel said that his main motivation as a scientist is "to be the first to solve how things are working."
“If the motivation is to make money … those people won’t do original science,” Warshel said, calling this phenomenon “equally bad in Israel and in America.”
The announcement that Israeli citizens working abroad, such as Warshel and Levitt, had won the Nobel Prize sparked a media debate in Israel on the country’s “brain drain,” where promising young professionals leave for better academic and industrial opportunities.
When it comes to governments funding science, Warshel believes that investing in many smaller research projects is better than investing in “one flashy project.” The work of Warshel and his counterparts has long been supported by American federal science grants, but resources are still limited, and in both Israel and the U.S. it has become “less and less likely that the best idea will be funded,” he said.
During the 1960s, in the laboratory of Shneior Lifson at the Weizmann Institute of Science in Rehovot, Warshel and Levitt developed a computer model describing molecules classically, as composed of atoms, and predicted the structure of proteins under various conditions.
According to Warshel, atoms can be described as balls bonded by springs. You can model a molecule by taking actual balls and connecting them with real springs, then follow how the balls, representing the atoms in a molecule, are connected, vibrate and move.
But in a man-made model the balls would soon fall apart under gravity, whereas in molecules the gravitational force is negligible. The alternative is to build a computer model that simulates the behavior of a real molecule. Assuming that the atoms in the molecule behave according to Newton's laws of physics, which are expressed by classical mechanics, and encoding the equations that describe Newtonian motion into the computer program, the behavior of the molecular system can be simulated.
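A minimal sketch of this idea, with the balls and springs in code rather than on a bench: a toy three-atom "molecule" with harmonic bonds, whose Newtonian equations of motion are integrated step by step with the velocity-Verlet scheme. All masses, spring constants and coordinates are arbitrary illustrative values, not a real force field.

```python
import numpy as np

# three "atoms" in a chain, bonded 0-1 and 1-2
pos   = np.array([[0.0, 0.0], [1.1, 0.0], [2.0, 0.3]])   # positions
vel   = np.zeros_like(pos)                                # velocities
mass  = np.array([1.0, 1.0, 1.0])[:, None]
bonds = [(0, 1), (1, 2)]
k, r0 = 100.0, 1.0        # spring constant and rest length
dt    = 0.001             # integration time step

def forces(p):
    """Harmonic bond forces: each spring pulls its atoms toward r0."""
    f = np.zeros_like(p)
    for i, j in bonds:
        d = p[j] - p[i]
        r = np.linalg.norm(d)
        fij = k * (r - r0) * d / r
        f[i] += fij
        f[j] -= fij
    return f

f = forces(pos)
for _ in range(10000):          # velocity-Verlet integration
    vel += 0.5 * dt * f / mass  # half kick
    pos += dt * vel             # drift
    f = forces(pos)
    vel += 0.5 * dt * f / mass  # half kick

print("bond lengths:", [np.linalg.norm(pos[j] - pos[i]) for i, j in bonds])
```

Run long enough, the bond lengths simply oscillate around the rest length: the springs vibrate but nothing falls apart, because gravity is not in the model, just as it is negligible in a real molecule.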
One cannot, however, describe the breaking of a chemical bond with classical mechanics, as a ball and a spring. Warshel’s particular interest has been in modeling enzymatic reactions. Enzyme molecules are complex proteins that exist in most living organisms and engage in catalysis, which often involves the breaking of chemical bonds.
The Schrödinger equation, formulated in the 1920s by the Austrian physicist Erwin Schrödinger, describes how electrons are attracted to the nuclei of atoms. From this development evolved the field of quantum mechanics — an essentially different way of looking at a molecule, from the perspective of subatomic particles, like electrons, that exist inside it.
“There are not only springs of bonds to classical atoms,” Warshel said, “there are also the effects of the charges on classical atoms.”
Quantum-mechanical computer modeling creates a map of an entire environment, depicting where the electrons are likely to be and allowing researchers to predict what may happen next. But using quantum mechanics to calculate and model an entire environment of atoms that interact with one another and with the electrons becomes difficult for medium- or large-sized molecular systems. It would take years to model larger systems in this way, so Warshel and his fellow researchers developed improved computer modeling systems that look at the molecule both in terms of its classical particles (atoms) and its subatomic particles, like electrons.
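Schematically, the hybrid strategy partitions the system and its energy; the expression below is a generic illustration of that multiscale ansatz, not the specific functional form used by the laureates:

```latex
E_{\mathrm{total}} \;=\; E_{\mathrm{QM}}(\text{reactive core})
 \;+\; E_{\mathrm{MM}}(\text{surroundings})
 \;+\; E_{\mathrm{QM/MM}}(\text{coupling})
```

The few atoms where bonds break or form are treated quantum mechanically, the many surrounding atoms are treated as classical balls and springs, and the coupling term feeds the electrostatic and steric influence of the environment into the quantum region.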
“When you do it you actually start to understand how enzymes work,” Warshel explained.
The Swedish Nobel Prize academy called the work by the three scientists “ground-breaking in that they managed to make Newton’s classical physics work side-by-side with the fundamentally different quantum physics. … Previously, chemists had to choose to use either/or.”
There are also practical, real-world implications for the research, both in the commercial world and in medicine, Warshel said.
For example, laundry detergent often has an enzyme that helps it digest dirt from clothes. Hypothetically, the enzyme protein could digest too slowly, or stop working when the temperature rises. “Since this program allows you to understand exactly how [an] enzyme is working, you [could know how to] change some … amino acids in the enzyme, and make it work better,” Warshel said.
There are also enzyme proteins in the body that can mutate and cause cancerous cell division, according to Warshel. “If you understand how they work, you can try to find a way to make the broken enzyme not be so effective. In principle you could look for a drug that when it is bound to the enzyme, it will make it stop working,” he said.
A similar scenario involves HIV. When a new drug that blocks one of the virus’s enzymes is developed, the virus changes its sequence so that the drug no longer binds well. But it is possible to look at the enzymes in the virus and analyze which mutations let the virus evade the drug, and which mutations keep the virus’s normal chemistry going. Based on those two factors, researchers can suggest the next move of the virus. “It’s like playing chess,” Warshel said.
“In the cases where you want to understand how the virus or parts of it change in order to have resistance to drugs, knowing to model both the chemistry and the binding, and also knowing to model how stable the enzyme will be, is useful, and I believe would be more useful in the future,” he said.
According to statistics recently reported by Haaretz, Jews comprise 0.2 percent of the world population, yet 22 percent of all Nobel laureates have been Jewish. Warshel suggests that one factor behind this phenomenon is described in the book “The Chosen Few: How Education Shaped Jewish History,” by Maristella Botticini and Zvi Eckstein, which stipulates that the survival of the Jewish faith — and by extension the survival of the Jewish people — has depended for centuries on the ability to read the Torah, enabling Jews to ultimately broaden their own education and develop practical skills. This strongly differentiated Jews from many other populations which for centuries were generally illiterate.
In Warshel’s estimation, there is yet another theory behind Jewish scholarly and scientific success, one that is simpler and hits closer to home.
“There is the idea of the Jewish mother,” he quipped. |
4003cd9b234d887c | zbMATH Open record oai:zbmath.org:7562697: Lou, Zhaowei; Si, Jianguo; Wang, Shimin, “Invariant tori for the derivative nonlinear Schrödinger equation with nonlinear term depending on spatial variable”, Discrete Contin. Dyn. Syst. 42, No. 9, 4555-4595 (2022). MSC: 37K55; 37J40; 70K43; 35Q55. https://zbmath.org/07562697 |
db0d0ade929d94df | Schrödinger’s equation does not calculate the behavior of quantum particles directly; its solution is a mathematical expression called the wave function, written Ψ (the Greek letter psi). The wave function itself has no direct physical significance: it is a complex quantity, representing the amplitude of the matter wave associated with the particle, and it consists of real and imaginary parts. What does have physical significance is its squared magnitude. This is Born’s interpretation: |ψ(x,t)|², evaluated at a particular point and time, gives the probability of finding the particle there at that time. Since the particle, if it exists, must be somewhere on the x-axis, the total probability must equal one:

∫ ψ*(x,t) ψ(x,t) dx = 1.

This is called the normalization condition, and in what follows all wave functions are assumed to be normalized. Not every function is a realistic description of a physical system; wave functions that cannot be normalized cannot be interpreted as probability amplitudes. Square-integrable functions form the space L², but some non-normalizable functions are still needed for technical and practical reasons, such as the plane-wave solutions of the Schrödinger equation for a free particle. One therefore also speaks of an abstract Hilbert space, a state space in which the choice of representation and basis is left undetermined; a quantum state |Ψ⟩ in any representation is expressed as a vector whose components are indexed by quantum numbers. For a single particle in three dimensions with spin s, for example, using Cartesian coordinates and neglecting other degrees of freedom, one can take the spin quantum number s_z along the z direction together with the position coordinates (x, y, z). The wave function carries crucial information about the electron it is associated with: from it we obtain the electron’s energy, angular momentum, and orbital orientation, in the shape of the quantum numbers n, l, and m_l. Whether the wave function describes an actual physical wave remains a matter of debate: some, including Schrödinger, Bohm and Everett, argued that it must have an objective, physical existence, and the de Broglie–Bohm theory and the many-worlds interpretation take a different view of its physical meaning than the Copenhagen interpretation does. |
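As a minimal numerical illustration of the normalization condition (my own sketch; the Gaussian wave packet and the grid are arbitrary choices), one can scale a wave function so that the integral of |ψ|² over all x equals one:

```python
import numpy as np

# Normalize a Gaussian wave packet so that the integral of |psi|^2 over x
# equals 1; afterwards |psi(x)|^2 can be read as a probability density.

x = np.linspace(-10.0, 10.0, 2001)                 # spatial grid
psi = np.exp(-x**2 / 2.0) * np.exp(1j * 5.0 * x)   # unnormalized complex packet

norm = np.sqrt(np.trapz(np.abs(psi)**2, x))        # sqrt of integral of |psi|^2
psi /= norm                                        # enforce normalization

print(np.trapz(np.abs(psi)**2, x))                 # ~1.0: the particle is somewhere
```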
f5aea5284f208ee6 | What happened before the Big Bang?
Why is there space and matter, and why do we feel things (and have free will)?
We know about the Big Bang. It happened. No arguing there. What that means is that we can pretty reliably use mathematical models to extrapolate current states and trends backwards in time to a point close to the sudden expansion of the Universe.
That point, however, is for all practical purposes still infinitely far removed from what actually set off the Big Bang, inflationary expansion and the creation of all matter and space as we know it. Sure there are some pea-brain ideas about p-branes, circular time and whatnot, but there is still no generally accepted explanatory theory for the actual start of everything from nothing.
We also know there is consciousness. You know it and I know it. That more or less is what consciousness is: sensing, knowing, being aware. That what it is to be conscious is what consciousness is, and we all have it. If you don’t, you’re a zombie, a placeholder, a non-player character. In short, if you’re not conscious, you’re not; and then you don’t know what consciousness is. Otherwise you do.
Some like to claim that “not even science knows what consciousness is”.
Exactly, scientists don’t know what it is, because they are mostly reductionists and emergentists. They are reductionists in that they believe all things 1) are things, and 2) all things can be broken down into their fundamental components (little ‘billiard balls’ like quarks and electrons, or strings, or “fields”). Scientists are looking for that ultimate one field, string or particle, as well as the one ruling set of equations that govern the ultimate underlying field. That ‘field’ and that ‘equation’ are what I am talking about when I talk of consciousness, that which started it all. The One concept at the ultimate beginning. How it started is an unanswerable question. That there ‘suddenly’ was something rather than nothing is impossible for a physical something in physical reality to grasp.
Scientists are emergentists in that, in the reigning paradigm, they for the most part believe non-conscious fields, and their temporarily measured manifestations as particles through their interrelations, create self-awareness – i.e., consciousness, through a magical process called emergence.
Again, they start with a dead building block, claim these building blocks perform a kind of ‘dance’ with other such building blocks, since the particles can somehow sense each other through a field. They then go on to say that ‘consciousness’ arises from the dance, somewhere in the in-between of particles and fields.
That “emergence” is nothing more than saying “Hey, and then there is magic!”.
The moment before the Big Bang is exactly the same thing; the only interesting part of the story gets a hand-waving gesture accompanied by “Oh, our models don’t work there. Our models only work after the creation of time, space, electromagnetism, gravity, matter and quantum mechanical interaction principles.”
Yes, that’s right. The current regime has a lot of practical ideas of how to predict and manipulate the behaviour of matter, but nothing to say of its origins or meaning. Actually, it has even less to say about the start than nothing, since scientists openly admit that their models explicitly break down at the start, as they do inside black holes, and when it comes to consciousness, i.e., more or less everywhere things start to get interesting.
You and I know there is consciousness, but scientists look away and say “Let’s disregard that and focus on something we can measure: particles”. What scientists miss is that particles are just what a consciousness interacts with, what it “sees”, what it chooses to see.
Particles are a social convention between consciousnesses, they are part of a playing field, a rule book: like a computer game or a book. What happens on the screen is not what happens in the CPU or hard drive. The rules for the interactions on the screen, the seeming characters, objects, walls, gravity, explosions, bouncing, etc., exist as rules in a machine very far removed in character from the graphic representation as icons on the screen.
You can think of how the rules in the CPU and computer memory relate to the actions on the screen as a bit closer to how consciousnesses relate to 3D-space, time, and the fundamental natural laws governing our physical experiences and interaction. In that context the brain is just a complicated “rock”, interacting with the underlying stream of consciousness. The resulting whirls, eddies, waves, turbulence, vortices around the ‘brain rock’ form our everyday experiences of consciousness in the physical realm.
Yes, science has come a long way in explaining cosmology, from the big bang, inflation and entropy to quantum mechanics, gravity and the potential heat death end of the universe.
But it has so far given up on creation, consciousness, truth, goodness and beauty. Not to mention (free) will. Sure, they keep inventing reductionist ideas of how beauty and love relate to “fitness” points in evolutionary processes – but that’s not an explanation or an ontological foundation, it’s just a reductionist play with words. Where does that self-awareness of “beauty” or “love” as attractors come from?
Anyway, those details can be quarrelled about for eternity. What I am proposing isn’t for scientists to stop looking. They are finding good stuff, stuff underlying my current mode of thinking. Physicists for example know that there is nothing at the bottom, no ‘billiard balls’, just emptiness, 1-dimensional points, rules of interaction. What we choose to measure is what we see and interact with, the rest remains hidden.
Physicists thus actually know that there is no matter (as we used to think about it). There are no cannonballs dancing, governed by gravity or electromagnetism. Physicists now choose to see reality as consisting of a field or several fields manifesting as particles if we choose to observe the fields in a manner requiring a particle-like result.
To conclude, starting with an explicitly un-explained creation of all from nothing, continuing with fields or matter with an added sprinkle of magical emergence to get to consciousness, is not an explanation at all. It’s just bureaucrats playing with words and symbols. They aren’t even attempting to examine consciousness, which is the only thing we really know there is. And they and us alike know there is no matter if we look closely enough, just rules for our experience of what we’re looking at.
Then why not simply start with the idea that there is a fundamental ground of being, a something that is what it is to sense, to be aware. Perhaps the idea of a circle, the concept of no beginning, no end, but a kind of primordial self-reference, i.e., a closing in on itself. That ontological prime has no explanation as we could ever understand in the meaning of trying to reduce it further.
My ideas are no more incredible or un-explained than the ideas of the Big Bang and emergence of consciousness. It’s just another way of thinking about the building blocks. It’s actually a further reductionist view of the ontological underpinnings on what we actually observe. Since we observe both consciousness and rules for interaction (i.e., fields/particles), but there is no good way to get from particles to consciousness without magically adding consciousness, then why not start with consciousness and thus avoid the whole having-to-add-something-more theatre?
The ideas I have tried to explain in the last three posts are not saying any of the practical science and knowledge about particles, biology, cosmology, the Big Bang and so on are wrong. Not in any way.
I’m just pointing out that science has little intelligible to say about 1) what set off the Big Bang and the creation of space, time and the natural laws as we know them, and 2) consciousness [as well as will], since it is the ultimate ground of being, irreducible into anything else.
Cosmos / Meaning / Love
Where did it all come from? How did it start? Why is there something? Why do we have a sense of self, a sense at all? Why do particles “feel” gravity, or each other? What is that feeling, that sensing? Why is there such an interaction? What is doing the sensing? Why is there separation to begin with, so that different things have something other outside themselves to sense?
Here is my current view, as informed by not least Iain McGilchrist’s awe-inspiring book “The Matter With Things”.
Consciousness came first, some kind of basic awareness. In order to explore the very concept of consciousness it split into self and other. The separation was a prerequisite of perspective, understanding and resonance, i.e., a partial reunification. (Re-)unity required diversity – one of the original coincidences of opposites. Instead of one potentially bored ur-consciousness there were now a multitude that could interact and see what happened.
Why start with consciousness rather than matter? Consciousness simply can’t emerge from matter, other than by adding consciousness through the magic trick of “emergence”. And then you could just as well have started with consciousness from nothing altogether.
Claiming emergence is just a not so clever sleight of hand, it’s basically fake news. There is no explanatory path demonstrating how quarks and electrons orbiting each other would somehow create a field of consciousness between them – unless there was already a conscious between-field. However, starting at the other end of the dualistic divide, just as non-existent but seemingly solid objects can interact on the screen of a computer game, matter can be willed into existence as a sort of playbook for consciousness. Matter is just rules of interaction. It doesn’t really have to exist, only the rules of what happens to it.
Consciousness is pure flow. It’s fluid and wispy by nature, hard to pin down or predict. By adding resistance within the flow, vortices appear, just as in water or air, i.e., semi-permanent patterns against which consciousnesses can explore what they are capable of.
3D-space and time are such vortices, huge ones; and particles, people, planets and stars are smaller, more or less complex, and more or less permanent, vortices; with their ontological base the one consciousness that is unbound by spacetime. The one at the bottom, split into several, which in turn created spacetime and matter to play on. A place to interact, investigate, and explore, a place for becoming.
You can call the unground becoming “God”, “Dao”, “Flow”, “Force” or whatever evokes a sense of ever-evolving ontological base of all existence, that which all we experience springs out from, like mushrooms from mycelium: individual parts, yet clearly integrated in the whole.
Our purpose. “All clear captain, 3D spacetime is up and running. What is our mission?!” We are all here, in space-time, for a reason. That reason, however, is unknown even to the Dao, unknown to God, if you accept the terminology.
This Becoming is always in motion, in eternal creation without repetition; the road ahead thus inherently unknowable. We are the cells, the fruit, the fungi, the insects and animals of this cosmic Gaia, or simply the Cosmos: the One Beautiful, True and Good. And evil.
Yes, (fortunately) there is an opposing force, a resistance, something to exist against. Without division and diversity, without an other, there is just blandness. Without true otherness and free will to oppose, to be evil, there is no meaning:
The one divided and saw that it was good, since the division was evil (“other”). That leads us to the human condition:
Matter is congealed consciousness. A resistance in the very flow itself. This resistance can be like pebbles in a stream, creating interesting patterns in the waterflow. The brain is an unusually complex rock, that gives rise to the extremely intricate vortices we experience as our individual consciousness and awareness.
The brain has a left side that specializes in narrow beam focus on still details, objects (concrete) and words (abstract symbols), in order to use them.
The right side has a broad and vigilant one-to-one approach with reality, and experiences the world as a moving wholeness, with depth in time, space and emotions.
The right keeps track of the overall arc and context, while handing over tasks of calculation and manipulation of objects and symbols to the left.
The left mistakenly thinks it is in control and has the factual grasp and is in charge, rather than its hippie colleague on the right. Whatever the left comes up with in its virtual rendering of the world, its hall of mirrors of symbols and made-up objects (that actually aren’t separate things, but fully part of the whole), the right needs to re-integrate and make sense of.
Both approaches are needed, but the process has to begin and end grounded in the reality-sensing of the right. For the cooperation to work, however, the left needs its hubris, has to be free to think it knows best, that it is the master, the powerful manipulator of words and things, the one that makes anything happen.
The right is right though, whereas the left is out on left field.
The left brain hemisphere is like the “evil” Flint in the ancient Iroquois legend. The left lacks any sense of meaning, of life, of awe, of depth. It (or Flint) can only bounce its symbols around in the hall of mirrors of its own making, thingificating itself, cutting itself off from the world and its connection with the Cosmos, creating monsters and wreaking havoc in the process.
As long as Flint (the left) doesn’t get the upper hand, the right, “He Who Grasps The Sky With Both Hands”, can accommodate and integrate Flint and Flint’s actions into a fuller relation with all. Something is created through the process of division and re-unification, through the interplay of good and evil, into something more true.
Through most of history there was a balance between awe and manipulation, between Flint and his brother, between the left and right brain hemispheres. But since the instrumentalism of the Renaissance took hold, the will to power and lust for manipulation have usurped our natural instincts of reverence and community. As flawed as religion and its false prophets are, religion grounded humanity in traditions of not-knowing, of the coincidence of opposites, of intuitive reason, rather than one-sided “scientific” knowledge of precise cause-and-effect relations between essentially lifeless symbols separated from the true existence.
The scientific revolution and its corollary of secularism and individualism have severed our natural connection with the Cosmos, the Dao, the mycelium from which the individuated consciousnesses are sprung as deeply interconnected mushrooms. When the connection and unity with the whole is lost, life is just a meaningless dance for the deaf and blind, without music or purpose; and death is something to be feared and avoided, rather than a point of new insights and reconnection with the whole.
I don’t believe in “God”. I loathe the perversions of most institutionalized religions. There is no one to pray to but oneself, there is no literal truth to religious texts, there is no heaven or hell, except those of our own making.
What there is is an eternal force, a consciousness, ever flowing, developing, becoming something new. We are all instances of it, from it, through it: temporary individuations, existing for the sake of exploring what becoming means.
When the body dies, the consciousness is re-integrated into something bigger, like a river to a lake, or a water drop to a mountainside, like food in the stomach, it’s reintegrated to simultaneous utter weirdness and total familiarity in ways unfathomable to the little eddies and turbulence my brain gives rise to in the Flow.
Maybe some psychedelic experiences can give a hint of what to expect, maybe meditation or prayers can for some, maybe love or the sublime awe before nature’s immense force can render a semblance of what’s to come.
The Force, the Cosmos, Nature, the Dao is not something separate from us. It’s the ground of all, that which we are sprung from. There is no way to grasp what it is or where it came from. Those concepts don’t even apply. Moreover it is becoming, meaning it doesn’t know itself, it is forever finding out.
That is the mission, the meaning: forever exploring the endless potential of resonating consciousnesses.
When we humans lose track of the mutual becoming, in order to grow our power of taming and manipulating matter, our sense of meaning dissipates, social cohesion dissolves, opposition and violence increases – as if driven by an actual evil force, by Flint with his precious flint arrow, i.e., by the left brain’s hubris and will to usurp the role as master rather than emissary.
With the left in charge, nothing new can be discovered, since it thinks it already knows all. It can only see what it already knows. Any attempts to consider what’s outside its schizophrenic hall of mirrors are struck down with ridicule, force and contempt. That’s where society is heading ever faster, accelerated by dogmatic religions and militant atheists alike. Traditions and rituals that used to ground us in history and communion are seen as unscientific and useless – literally the left brain saying “if I can’t use it to manipulate matter it isn’t important” – not unlike some scientists, wholly enamored by the idea of reductionist methods and emergence eventually explaining consciousness and the Cosmos.
What can be done? Just hope to die and leave Earth before it gets too bad? Try to do something about it, be a lonely right-brainer in a world of aggressive lefts? A hippie among yuppies? Establish a Galt’s Gulch of right brain hemisphere peers? Is there any hope of progression left? The world is on the verge of lighting up in global war once again, mental illness is rampant, people are unhappy, prone to aggression or apathy. The world itself is buckling under reckless industrialization.
Science and western philosophy teaches materialism, strict logic and all the things living is not. Words simply can not capture the essence of experience. Actually, verbalizing kills the flow, immediately re-presenting the depth of life with a still and abstract symbol, with no resemblance of its origin. The good, the true, joy, love, and the living, they only manifest when going with the flow, when not looking at it directly or trying to fixate it in a description, or pin it down to try to use it, to replicate it.
I’m not saying science is wrong. I’m not saying industrialization is wrong. I’m not saying religion, prayers and belief in God is the solution. There is no God as such, there is only us, ultimately together as one, the good, beautiful and true Cosmos itself.
I’m saying the lack of understanding our ontological underpinnings is a problem. I’m saying the mistaken idea of taming matter in order to understand consciousness is leading us astray. I’m saying reductionism and focus on details, with a magic sprinkle of emergentism, will never lead to proper understanding of the whole, and thus never be able to restore meaning.
Meaning is to be found in awe of the Cosmos, yes in ourself, of Nature, of our fundamental interconnectedness, of our never-ending journey of Becoming, together as one, albeit temporarily divided.
How do we get there? One step at a time, one day at a time, one awe at a time, perhaps by noticing and embracing the coincidence of opposites in everything, and by humbly projecting the idea to others. Perhaps by visibly rejecting the idea of simple one-sided truths and facts, and aiming for spreading that idea to just one person.
Listening to and integrating somebody else’s view to something grander is the driving force of the Cosmos. If I can do it once, there might be a chance.
Then just one more
P.S. Love is more or less all we need. Awe of the sublime, appreciation of beauty, truth, humility before the unknowable. Love of self, of ourself, of Dao/Nature/Cosmos/(God).
What life is and why – resonance for-yond the physical
Thoughts on perspective and resonance
This post will give you a feel for the view that mind and consciousness come before space, time and physical objects. I also hope to convey the view and mindset that what might seem like an adversary, a hindrance, something bad or ugly, actually coincides with its more readily benign opposite, and in effect is required for the very existence of its positive dipole.
The practical use of reading the post, apart from a potential window to the soothing truth of mind over matter and a kind of afterlife, might be
• a more conciliatory view of your problems and adversaries, as well as
• how to adopt a mindset that gets you started and keeps you going on long term endeavors.
The entire concept stems from a word cloud meditation I did a few years ago on the topic of “perspective“. I then realized how my driving force, my purpose, my source of delight and awe was finding new insights, new perspectives; finding views orthogonal to my current thinking, i.e., novel vantage points that could unite apparent opposites and shed new light on existence and my place in it.
Resonance and meetings imply otherness and separateness. It’s a second-order form of non-binarity: not either/or or both/and, but both either/or and both/and. The one has to divide to have something other to interact with. The interaction itself is a third form, not the same as the one and the other, but something else, a connexion. The interaction takes the form of resonance between (not entirely) separated entities being other than themselves.
Resonance is the act of non-meeting meeting, i.e., a coincidence of seemingly opposite things, them being other, alien. If wholly different or separated with nothing between them, they can’t interact. There is no connexion. And if entirely the same, if fully fused, they would just be the one being itself, thus not inter-acting. Perspective is achieved through separation and semi-re-connection through resonance.
Spirited away to neutral territory for meeting with the enlightened (endarkened?)
Last week I forgot my laptop when leaving my seat number 1C after returning from Switzerland. Worth it! (as Elon Musk would say).
I’m still hoping to get it back, so perhaps I don’t really think it’s worth it to lose it after all – although my visit to the Gstaad valley was very rewarding. I never met The Gstaad Guy, Constance, though, which was a bummer. I kind of hoped I would run into him.
Hope is however not a strategy, as the host of the event pointed out when I was leaving, so I’ll have to make do even if the manuscript to my own book is lost with the computer. Yup, no recent backup, because my cloud service had been full for quite some time. I thought about backing it up, but also thought there is no chance in Zug I’ll forget my computer somewhere… Well, the Zug’s on me, I guess.
My Gstaad guy – cloak and dagger style
My visit to Switzerland was, as you can guess, very interesting.
I still don’t know what to make of it a week later. Was I celebrated, was I hacked, doxed? Was I lucky? Did I just participate in the beginning of historical greatness and importance? Maybe all of the above? Right now it’s a superposition of contrary possibilities, better not look the horse too carefully in the mouth.
We were slightly over a dozen people, sharing ideas about the state of the world and one’s own place in it. Most of us knew, or knew of, at most a handful of the others beforehand, but most of us clicked effortlessly. We all agreed that the world will work much more differently in 25 years compared to today than today does compared to 25 years ago. Geopolitically, financially, security-wise, regarding machine tools like advanced AI, longevity etc. The pace of change is speeding up.
Some technologies have disappointed over the last 25 years, whereas others have offered astonishment – software from DeepMind that plays go, poker and computer games, or can predict how a protein will fold; or like DALL-E 2 that can create creative art from a relatively simple prompt. Meta has a prompt-to-video product being released soon. And people are already eagerly waiting for prompt-to-private-VR kinds of products, for more or less hedonistic purposes. Recently, Alphabet revealed three new text-prompt-to-video apps. Exciting times indeed.
If you understand Swedish, Ludvig and I also talk about How The World Works in one of the latest episodes of our podcast 25 MINUTER.
I was one of a select few people holding keynote talks at the gathering:
Some talked about hacking into modern technological devices, others about merging with technology, or how, or rather if/when, to treat cancer. I talked about perspectives, about relations, about (seeming) paradoxes and how to resolve them, as well as my own practical application of how to get ahead in life and in investing.
The whole invitation and journey process was quite intriguing:
“Your podcast changed my world view. I’m putting together a group of people to talk about the changing world order. What can I do to make you come and hold a short talk on something you like?”
I accepted and got my boarding passes in the mail. No address. No hotel. No attendance list. No schedule, apart from “your talk will be in the afternoon on the 26th” (which was, however, without warning, right before lunch, moved to “before lunch”, i.e., about 3 minutes’ notice. Well, well, there’s no time like the present…)
This is the talk I gave about the coincidence of opposites:
Over the last few years I’ve changed my view of existence completely. That process started at a bank conference in London back in 2007, with me rekindling my thoughts about leaving the finance industry. The last time before that that I had nurtured such thoughts was in 1999/2000, when I was overworked, exhausted and disillusioned regarding the work hard/play hard lifestyle. Quite paradoxically my planned ‘break in life’ turned out to be joining a hedge fund start-up as a tech analyst in February 2000, right at the peak of an epic tech stock bubble.
Fast forward to 2007/2008 and onward; out of loyalty and nothing better to do, I stuck around at the hedge fund (Futuris/Brummer), planning to quit the day I couldn’t stomach keeping at it for two more years (hat tip to equity analyst and author of City Boy, Geraint Anderson, for the advice). The job was still intellectually rewarding and I liked my colleagues; it’s just that I felt I had more potential than that.
That day eventually occurred in January 2014, right around my 42nd birthday. 42 truly is the answer to everything. Incidentally, my typical equity portfolio at Antiloop, now that I’m back in the hedgehog saddle again after approximately 69 months’ hiatus, consists of 42 different companies.
I started blogging on January 1, 2005, in order to structure my thoughts and keep track of my personal development. It was a kind of precursor to a commonplace book, structured notes of one’s insights and knowledge – like the one Carl von Linné used to categorize all living things in.
Blogging turned into an obsession with learning, with reading other blogs, by people much more gifted, talented or hard working than me. I devoured self-help, personal development and life-style blogs everyday, all day long. By 2013/2014, I eventually realized that I had to make a change in my life, I had to free up time, create a vacuum to see what would fill it.
I thus retired and wrote the book “The Retarded Hedge Fund Manager”, about my experience and lessons learned during my 15 years as a fund manager. That later turned into my online course in value investing, The Finance Course.
I also adopted a rescue dog, a 7-year old, 40+ kg German Shepherd-Doberman mix, and spent every waking and sleeping hour with her, going on long walks, listening to podcasts about science, investing, psychology and philosophy. Slowly a new world view dawned upon me, on existence, on my purpose.
Re-discovering feelings and emotions, the colors of life
When my best friend, my dog Ronja, passed away in the summer of 2019, and my girlfriend left me shortly after, I was forced into ever deeper soul-searching. My losses, and the resulting excruciating pain, stirred something that could not be left alone. I kept scratching for invisible doors in my psyche and eventually found the key to an unexpected awakening.
About the time my mother passed away, about half a year after Ronja, I reached a turning point in my investigations into my own psyche and its place among other self-aware entities. After 35 years as an automaton, there was once again something it was like to be me. I felt joy and sorrow, experienced the full rainbow of existence. Mostly sorrow, as it were.
I was however still years away from replacing my materialistic, non-spiritual slant on existence with my current, still-emerging insights about consciousness being ontologically prior to matter. I owe these new perspectives to Bohm, Feynman, Donald Hoffman, Stanislav Grof, Anirban Bandyopadhyay, Adam West, Philip Ball, Sean Carroll, Andrew Gallimore (Building Alien Worlds) and Iain McGilchrist, among many other authors, podcasters, researchers, philosophers and thinkers.
Before leaving finance in 2014, before Ronja leaving me five years later, I used to think that the universe sprang to life through the Big Bang; that energy condensed into matter, which eventually happened to combine in such complex ways that life emerged, and eventually became aware of itself – matter became conscious.
Talk about stretching credulity to the limit: a universe from nothing, and consciousness from unconscious particles orbiting each other – albeit apparently sensing gravity and charge. Nothing in that story makes sense. Where did all the building blocks come from? In addition, even with that explained, you still had to tack consciousness onto the physicalist view, with or without ideas of emergence. Assuming emergence of consciousness was just as unexplained and magical as simply saying “and then there is consciousness as well”. You could just as well start with consciousness, and claim that it invents rules of engagement called “matter”, space and time.
So, I turned the story on its head. Like the German mathematician Jacobi said: Man muss immer umkehren (one must always invert).
Assuming matter came second, what came first?
Energy and matter did not explode into existence. There is no energy or matter. There is only consciousness – without elongation in time or space. It precedes spacetime. Spacetime, matter and energy are just parts of a neat playbook for the eternal consciousness to explore its own potential. It’s a game where the one original consciousness could divide and recombine, meet itself.
There is matter, obviously; you can feel it, interact with it, manipulate it, pull it with your “manis”, use it; but there is also not any matter, just rules of interaction, which is what we experience as matter in 3D space. It’s like in a computer game: the buildings and characters aren’t there, they just signify rules of interaction. At the bottom there are just numbers (or some such rules) specifying how an interaction will be managed. But the stuff isn’t there, unless there is an event, a meeting, a resonance – a measurement, as the quantum physicists would say.
There simply was no big bang as science knows it, and yet apparently there was something. Not surprisingly, I used to think religion was stupid up until just a few short years ago. I still do, but now I also see how science is as well. The catalyst for my monumental change was several talks with Alexander Bard, as well as reading Maps Of Meaning by Jordan Peterson. I slowly realized that what is said in words rarely is what is being communicated.
They both started out looking for explanations, but lost their way. Now, institutionalized religion is all about manipulating people, and science is all about manipulating matter.
Science doesn’t even begin to try to understand or explain anything anymore – it disregards the only thing we can be absolutely sure of: consciousness. More about that later.
Socialist turned libertarian capitalist turned Hegelian, in a Hegelian dialectic process of back and forth, of both both/and and either/or
• So, I used to be a hardened materialist and capitalist
Twelve years ago, in the spring of 2010, I received the award for The European Hedge Fund Of The Decade. In other words, investing worked out pretty well for me.
But, nevertheless…
Seven years ago, in 2014, I decided to retire from the industry, to focus on more creative endeavors like writing, teaching and podcasting, as well as exploring my own psyche. It’s my podcasting that brought me here to the Gstaad valley today, not my investing prowess or accomplishments.
For about a year now, however, I’ve been a hedge fund manager again – I’m consequently presently “the un-retired HF manager”. Investing simply is such a complex and interesting puzzle… I want to know more, and I want others to know more. The market is reflexive and ever-evolving, never the same and yet still much the same. Plus ça change…, a coincidence of opposites. Further, investing is the act of postponement, of delayed consumption in order to consume more later on – another interdependent dipole. Here I should mention Kriti Sharma as a thinker who has influenced my current world view, where all things are interdependent in such an entangled web that the physicalist and strictly rationalist concept of linear cause and effect has lost its meaning.
Back to the cold and rational industry of investing: Very competent investors have completely opposite views on investing:
• Some say you should not use your brain at all, just buy and hold the average market. The more time you spend holding the more return you can expect to get. That sounds like an impossibly stupid approach. Holdiots
• Others say you need to use all your faculties to the max; work harder and longer than everybody else, know more, build models, talk to managers, clients, suppliers, understand the economy, money flows, investor preferences and positioning. They say you need talent, discipline, stamina, luck and not least patience – and you must be unemotional and consistent in your execution. That sounds like an impossibly ambitious endeavor. Workaholdiots
• Some say to follow the trend, some to do the opposite – going for reversion to the mean.
• Some say forecasting is key, most say forecasts are worthless – since the future is unknowable in any relevant and reliable way
• Some claim robots do a better job, mining historical data for repeating patterns to exploit without emotion – as if there’s nothing new under the sun, and that emotions and intuition suddenly ceased to be valuable decision-making and risk-management tools in complex environments.
They are all wrong. And they are also all correct. It’s an example of the coincidence of opposites. Two opposing truths are often just different aspects, inseparable parts, of a whole.
In investing, as in life, as in hockey, you should skate to where the puck is going to be. You should play the basketball court like a living organism, where you live slightly into the future, knowing beforehand where and when the next pass or shot is coming. You do not wait for a pass, calculate its trajectory and head for the best intersection. You go well before the pass, perhaps 2-3 passes ahead, sensing the entire play, not its components and atomistic agents.
By not looking too intently on the specific players, ball or puck, you see their actual essence and actions more clearly – you sense the total Gestalt of the game, the whole. In a way, you experience the eternal Dao (the way, existence, the eternal flow, the original one consciousness), by looking through the players with your base mind-being, instead of directly at them with your ordinary eyes.
Relaxing control over puck players and passes, you gain control of the game they constitute. The less control equals more control is yet another example of the coincidence of opposites.
How much or how little control? That has the same answer as the question “How long is a rope?”. The resolution of a paradox or the reconciliation of opposing views is best accomplished not by a compromise where all lose, but as a dialectic synthesis, a synthesis found orthogonal to the opposing thesis and antithesis. A new vantage point, a new perspective, another dimension from where seeming opposites are seen as necessary aspects of one and the same. Like you and I and our interaction, e.g.
Coincident opposites, points on a spectrum, superpositioned co-existing adversarial factors, are everywhere:
• The north and south poles of a magnet are another example.
• The base rhythm interrupting the melody of a song while simultaneously emphasizing the melody.
• A melody consisting of notes, but where the pauses are what breathes life into the song
• There is no matter, and yet there obviously is
• There is no Fermi paradox, the “aliens” are everywhere. Of course! It’s a very old universe. Earth is their spaceship and playground, fully sustainable, carrying the sun with it for energy. Not the other way round. And yet the Earth does orbit the sun, as clear as day follows night.
Why is this observation important?
First and foremost, because it’s true. It’s the way reality is constructed. In addition, life makes more sense when one realizes that what might seem as conflicting views really aren’t. Looking for resonance and understanding instead of taking an arbitrary side is more constructive. North or south pole? You simply can’t take one without the other.
It also aligns with what I see as the purpose of existence – exploring the infinite complexity and potential of consciousness. It’s the universe trying to understand itself, the one sensor sensing, dividing, recombining, resonating with a multitude of partly separated instantiations of itself.
The purpose
Why are we experiencing things, why do we partake in events?
I think meetings are everything. Without meetings there is nothing. A conscious being needs to meet other consciousnesses. An isolated consciousness can hardly be said to exist at all. Its existence is based on its resonance with others. These others need to be somewhat, but not too, different, still able to find resonance. But to meet you have to be different from that which is met, implying not meeting, or parting (initially) in order to get to meet and find new resonance – a new and interesting combination, that says something new about what it means to be sensing, to be conscious.
Matter, like consciousness, is also based on relations. An isolated particle is nothing at all. It’s not here in this universe if it doesn’t interact with something, if there is nothing between two particles. Something is needed between them for them to sense each other, be conscious of each other, affect each other, inform each other, find resonance.
The closer we look at matter the less we find, as any physicist can tell you. The better the microscope the less it sees. At the bottom there is just a concept, a rule for its interaction, it’s meeting, with other similar (non-material) entities.
So, from the bottom up, there exists nothing but resonance. Meetings. Whether you’re a materialist and physicalist or more of a mind-first thinker.
There is nothing, and yet there is everything. It’s the ultimate coincidence of opposites. The only thing we actually have evidence for is consciousness. Matter more and more is seen as a playbook for interactions. We are like vortices in water, barely separated patterns in an underlying united whole. We are always just “talking” to ourself. We are vortices in the one underlying whole – the vortices created from resistance within itself.
The eternal dao, the flow, the one, shakes and twirls to see what consciousness is capable of.
For-yond of space and time, the paradoxes of Taoism
All of existence is a paradox; why is there something at all, and why is there consciousness? If there was just one consciousness to start with, it makes a lot of sense if it split into more, to have someone to “talk” to. It really is the only thing life boils down to – having someone to talk to. The rest is maintenance. Why and how (and when) that original spark of consciousness got started is not for us to understand, that was in a place beyond, or rather for-yond space and time.
In “Dream Of Life”, the philosopher and Taoist Alan Watts in just three minutes paints a vivid picture of how an omnipotent but essentially lonely entity eventually would choose to live your exact life, for the thrill of it. Impending death implies appreciating life.
Taoism explores the coincidence of opposites in all kinds of ways. For example it teaches how beauty and pleasure only have meaning relative to ugliness and pain, just as play implies depression, or death implies life.
Tao te ching, the original book on tao (or “dao”, the way), is an excellent example in itself of COO:
Language can never convey reality directly, but is still indispensable for the job – albeit it has to be done between the lines in poetic form. That’s what Lao Tzu managed to do in Dao de jing. He somehow overcame the coincidence of opposites in terms of language, explaining the unexplainable by forcing a relaxation of strict logic and rationalism onto his students.
Words, such as blue, love, pain, jealousy, longing, can’t be read and understood word by word, but the whole idea can take shape all at once, like a stereoscopic image when relaxing your gaze to look through its seemingly meaningless geometric shapes, rather than directly at them.
there is no way of connecting
the shape of a spearmint molecule
to the experience of its smell
We have to realize that we have evolved not to see reality as it is, in order to even begin to ask the right questions. The smell of spearmint, e.g., has no connection with the bundle of electrons and quarks that constitute it. The experience of smelling is ineffable, unexplainable, impossible to convey to another being – there is no way of connecting the shape of a spearmint molecule to the experience of its smell.
Our interactions with each other, and with things like spearmint are just arbitrary rules, a kind of game, to explore what consciousness is. It’s an infinite game. It’d better be. What we experience as the material world are like computer icons – just handy symbols, shorthand for their underlying rules. A person is an icon of something wholly indescribable – a bundle of consciousness – an infinitely complex entity outside time and space.
The point with apparent opposites, with answers lacking questions, with ineffable experiences and incomprehensible mysteries, is for the universe, the one consciousness, to explore all possible combinations and states of consciousness – plausibly and hopefully an endless multitude, lest it, we, “God”, find ourselves trapped in an endlessly repeating hell of loneliness. I’ve been there myself.
My conclusions come from studying quantum mechanics, philosophy and brain science – but the two most important inspirations are Iain McGilchrist and Donald Hoffman. Especially McGilchrist’s latest book: The Matter With Things made everything click into place. But I couldn’t have truly bought into that world view without the help of many deep meditation sessions.
McGilchrist explains how the brain is divided into two quite different halves. The right hemisphere sees reality as it is, as a united whole, an intricate flow. The left sees concepts without time, without life, still pictures, in effect a virtual rendering in a simulation; it’s the only way we can separate out parts from the whole and manipulate them. We make the things, the parts, from something indivisible. The real reality is just a single unified whole without any parts.
The left manages matter by literally making the flow into graspable parts – that aren’t really there. They aren’t there, since reality is a single flow of relations, instantiations of the one, not “things”.
A few more examples of the ubiquitous Coincidence Of Opposites
Egotism is just altruism by another name. But it’s a good name, an instructive name. It hints at the process of how to be successfully egotistical, by acting generously, kindly, with goodness and beauty in mind.
Burning oil is the most environmentally friendly alternative we have at the moment – since it makes solar and wind viable (and enables their construction). Nuclear is much better of course, but we don’t have enough of it yet. As an investor I can’t wait for the transition to nuclear and renewables to come about.
A strong, stable and reliable environment fosters weak and careless people, and creates the breeding ground for societal upheaval – which in turn requires diligent, disciplined and strong individuals and terms of engagement.
Minsky astutely pointed out how stable, low-volatility financial market regimes lead to increased risk-taking, leverage, speculation, and unwarranted euphoric highs; and thus vulnerability and inevitable turbulence, before re-starting the dialectic cycle on lower ground.
Humans keep searching for an answer we don’t want to find. An answer would end the search, the process of learning. We have no use for that thing, the ultimate truth.
Why there are many rather than just one
The universal consciousness split for a reason. It made itself into a multiplicity of otherness in order to get to know its own unknowable potential. Otherness implies sameness; without the idea of “other” there is no “same” – just a one. The other needs to be just about different enough to be interesting, but similar enough to find common ground and resonance, rather than pass right through unseen and unfelt like dark matter.
The universe itself is the ultimate COO: all from nothing, all the time from no-when, from for-yond physical reality. It’s without any reason, purpose or endgame. It’s all for nothing, except for itself, divided into temporary others, some of them otters.
Coincident opposites oscillate, partake in a dance of eternal exploration and creation. Coincide. Most thinking people are engaged in such a dialectic back and forth, of shifting views and ideals, amid hopefully continuously enhanced perspective and understanding: ever more confused at ever higher levels.
I, e.g., was a kind of socialist in my early teens, albeit a quite ignorant one. I simply planned to live on welfare when I grew up (!), since that was quite okay and doable in Sweden in the 1980s. In my first national vote as an 18-year-old in 1990, however, I voted for the conservative, low-tax alternative. I thought that no matter whether I was poor, unemployed, or living on welfare, the state still had no business stealing tax money from unwilling citizens.
Once I got around to reading Atlas Shrugged, by Ayn Rand, I turned into a full-blown zero-tax, anarcho-capitalist libertarian, later even into a geolibertarian, inspired by the amazing book (space opera) Withur We. Not so much anymore. No, I can no longer adhere to easy one-sided ‘truths’. I now see the coincidence of opposites in everything. I am a coincidencian. A dialectician (Hegelian) perhaps – considering both sides, finding not a compromise, but a third view, a conciliatory new perspective, seen perpendicular, orthogonal, to the other two extremes.
Did the statue exist before the artist hacked away at the stone?
The molecules were always there, but apparently the physical material is secondary to the art. Without the artist, no statue. And without the marble, no statue either? Or? When the idea of the statue exists, like with Da Vinci’s statue sketches for Francesco Sforza and a monument for Marshal Trivulzio, maybe the statue exists, just not manifest in matter. Was Da Vinci a highly talented sculptor or none at all? Hard to say without any actual output, but most likely he was excellent in that field as well. Both brilliant and non-existent at the same time. As was Michelangelo’s statue David for a time. It both existed and, more easily grasped, didn’t exist, before he ever put a chisel to a block of marble.
In the same vein, matter does exist. Obviously – at least that thing we call matter in our consensus reality, our set of ‘Monopoly’, the board game called 3D-life. Matter matters; we made the rules that way. So, yes, there is matter. But matter is also just slow-moving energy, a more inert set of rules than that for pure consciousness interactions – the rules made up by us consciousnesses, agreed upon. For now. In this setting, at this level.
Just One: A practical application of the idea of apparent opposites
My nickname is , uncarved, unfinished, even incomplete to the extent that the process hasn’t begun at all. By not commencing, not finishing, all potential is left. By doing nothing, nothing is left undone. Like an unchiseled block of marble. It can be everything, it is everything, every conceivable statue all at once, a superposition of artworks (pre-yond finished) before the decision, the observation that collapses the infinite to the specific, the end set in stone.
It is closely related to Wu Wei, action through inaction. With minimal effort, the process is started in accordance with the natural flow of nature. Just do one little single thing that comes easy, the first step on a possibly long journey (but you’ll only get to know that afterward). Then, like with an involuntary cookie-monster rage, do just one more: take one more step, do one more set at the gym (while fantasizing about going home), just go to the gym one single day (today). Tomorrow is another day; cross that bridge when you get there. Today is today, and today we do one. When tomorrow turns into today, adhere to the principle of doing just one that day, but that’s not for us to think about today. We only think about the one on the present day – then we quit.
The Just One (more) concept is the hedonistic cookie doom loop turned into a virtue. You avoid the psychological burden of thinking about working out the rest of the month, year or life, since you only have one rep, set or gym session to finish. You avoid daunting endgames, long-term objectives, overwhelming target orientation that might stop you from even beginning. Process-orientation is target-less targeting. Not having targets gets you beyond any absolute objectives.
Being process-oriented is more targeted than being object-oriented. Maintaining a process gets you all the way and beyond, rather than stopping at the target or disappointingly missing it, perhaps never starting at all — daunted to apathy at the get go.
The Just One process might seem like interrupting the whole, hacking the flow of the way to the target to pieces. But the single steps of your marathon actually are the ultimate flow and never ending process. The end is not the end, merely a part of an infinite process. COO is about perspective, trying the other side of everything and finding a third, fourth and fifth vantage point, through the dialectic process of idea, anti-idea and new idea.
You are certain you are conscious
You know consciousness exists; that’s one of very few certainties. Matter is another matter. The closer you look at matter, the less you see of it, and the further you get from the everyday idea of matter as something substantial, graspable, manipulable with your hands. The more you (or modern scientists) consider matter, the less you see of its matterness. There is nothing at the bottom of the hierarchy of elementary “particles”; there is just the idea of rules for interaction, numbers (perhaps) representing arbitrary rules of charge and mass… and perhaps consciousness, if you’re still a materialist believing 3D space and time represent the ontological base level, with consciousness bolted on or magically emergent from the dance of particles (rather than consciousness creating the appearance of particles, and the dance being a manifestation of an awareness field).
Can you be certain there is time? Clock-time? Time is not measured by clocks. Clock-time is part of the physical, the made-up matter-space. As a consciousness you know this. For certain.
You know red, spearmint, consciousness, and love first-hand. Those are what’s real, not what material scientists tell you is real, after they have assumed away everything non-material, i.e., everything that counts.
Time in the present is not the same as in retrospect. Real time is how consciousness experiences the flow of interactions. The more fun you have – the more flow you experience in the present, the feeling of perfect harmony between your activity and its results – the faster time goes: hours in a flash. But looking back, in retrospect, your life seems full, meaningful, like you got so much done, like you truly lived.
Boredom is the exact opposite. Hours, minutes and seconds never end, whereas ten years, 25 years, 50 years speed by as if they never happened to you at all. Two coincidences of opposites. Choose yours carefully.
What to do with this insight? What’s the meaning, the purpose, how do I propose you best spend your time?
Relations, meetings, resonance is what it is all about
Meetings are others coming together as one, through resonance, it’s a dance. A dance needs dancers, but it isn’t made of the dancers, the dance is the event, the coming together, the creative relation between others. The dance isn’t in the steps. The dance takes place in the in-between, just like music isn’t made of notes.
Nota bene: for meetings to be worthwhile there needs to be otherness. The dancers need to be unique, interesting, differently experienced – at a minimum having met other people, done other things, developed skills, acquired knowledge – having something to contribute to the relational sonata: goodness, kindness, perspective, beauty.
Relations are thus ontologically prior to what we consider stuff, things, small point-size billiard balls. Starting with little balls or points makes it impossible to get to lines or any other dimension, or to relations, without introducing the concept of relations in addition to the stuff, the relata, that which is related. An infinite number of points never amounts to a line.
It’s consequently unfathomable how to get from little billiard balls to consciousness without introducing consciousness too as an add-on. If you start with unrelated stuff you still have to invent relations to get interactions. But starting with relations without things, relations are already interconnected and their concentrated interactions can be regarded or felt as things, without adding anything extra in terms of little balls of matter.
Taking the meetings-first, mind-over-matter idea seriously
Starting with a wholeness, a one, a single field of consciousness, however that got started, easily explains the condensed points, the nexuses, the hubs, and 2D lines, 3D matter, relations, flow and interference.
Possibly, just calling everything fields of rules for interaction helps resolve the incomprehensible idea of relations without relata.
So, assume in the beginning there was a field. A field that flowed. A relation of relations, a kind of potential for connections. Points of resistance in that flow – internal resistance from itself – cause vortices, more or less impermanent: what we call matter, things, including partly separated consciousnesses.
To hammer home the point of apparent opposites and the unmatterness of matter: even the Matterhorn is just a slow wave. A neutron is too. Over very long stretches of time they too rise and fall, change and perish. A human is similarly semi-permanent, just like a river is; it only stays the same thanks to its constant regeneration, i.e., a kind of permanence is only achieved by not being permanent at all. Matter flows through the persistent pattern that is that person or river. Static, dead stuff-things wither faster than living patterns in a flow. Dead things are unstable and changing exactly due to their static features.
Things exist, are (kind of) permanent only if they are not fixed…
…only if they are a pattern whose constituent parts flow through it. Patterns, like a cloud, can be permanent, but substantial things aren’t. The concepts all tell the same story: flows vs things, relations vs relata, permanence vs fleeting. All are opposites, and therefore not, i.e. the same.
Change is lasting
My path from either/or to the OE may inspire openness. Ten years ago, let’s say in 2012, I was still an either/or kind of thinker. I was all too willing to accept matter and hard facts as truths, willing to accept emergence of, e.g., consciousness from little billiard balls orbiting each other.
That was before I joined the Orthogonal Elves
Now I know matter is secondary, just a fun game, a sandbox for trying out various combinations of resistance to the flow of Dao.
The members of the OE line up their amazing work of geometry, convincingly demonstrating non-locality, non-Euclidean realities, a place outside human concepts of time and space – only the meeting of consciousnesses. These machine elves exist perpendicular to our ordinary physics. Yes, it’s real. As real as consciousness, the only thing we all know is real, but also know can’t be reliably measured with our current 3D space-technology. The OE are all around us, just as could be expected given billions of years of universe and life. Of course!
And just as self-evidently, what we call matter also exists (as well as is not really real, not there independently when we’re not looking).
How to use the insights of COO in everyday life
Sideways – processes vs targets
Most precious values and experiences are best approached sideways, indirectly, obliquely. Orthogonally to the target:
• Pursuing happiness directly doesn’t work. You can’t fall in love by force or exertion. Love and happiness are byproducts, side-effects of meaningful actions, relations, meetings
• Being goal-oriented is a recipe for giving up, or finding an empty pot at the end of the rainbow. Being caught up in the flow of a meaningful process is a delight in itself, both in the present and in retrospect.
The most practical and actionable advice I can give based on the coincidence of opposites is to not look at the target, not stare at the players or the puck/ball, not aim for specifics, but instead do just one of something of value.
• JUST ONE is the non-target targeting technique of getting flow by interrupting the whole and cutting it up into its smallest components
• Just one is an indirect, process-oriented way of achieving your dreams without specifying them, without daunting goals, without the anxiety of taking in the entire journey at its onset.
• Just one is my way of getting things done. Just one more cookie will get you through all of them. Just one (more) works just as well for working out, for having fun, for enjoying every day for itself.
• Just one gets you all the way and beyond. To reach far away lands and beyond, aim as humbly as conceivable. Take just one small step and celebrate the process, not the objective, not some “truth” or some “endgame”
• To be a master, to achieve peak performance, don’t bore yourself with an arbitrary amount of hours of 1-dimensional practice, but aim to find joy in a wide range of activities – you never know what clues the violin holds for your proficiency on the court, or your AI thesis and struggling start-up. Reach widely, albeit one thing at a time, focus on the present step. Fully. Then aim for Range (excellent book by the way).
• Get it without the pressure of going for it. Enjoy the process, revel in the feeling of doing the right thing, not driven by some distant arbitrary, possibly meaningless end purpose.
The coincidence of opposites applied to investing
In investing, the beginning truly is the end. In addition, most investment activities benefit from inversion, turning concepts and causal relationships upside down.
The Ouroboros of investing – my way
In my job as a hedge fund manager and as a teacher of investing in the Finance Course, I often talk about investing in terms of the Ouroboros.
The Ouroboros is a mythical self-devouring snake, basing its existence on itself. Much like the universe seems to do.
An investor has to invest to learn investing
An investor has to know how to take notes before an investment teaches what notes are useful. The notes are needed in order to evaluate and refine his strategy afterward. But you can’t write useful notes until you have experience of the whole process of investing.
Every step builds on the subsequent steps – in a way parallel to the idea of teleology – that the future is drawing us toward it, with pre-designed blueprints of forms and increasing complexity. In that school of thinking, biological creatures are created by cells dividing, multiplying, turning on and off genes depending on what environment the cells find themselves in – but even more so by the shape they are supposed to create, without any apparent way of monitoring that shape. Cause and effect are in effect inverted. As they are in research, analysis and investing.
At my previous hedge fund Futuris, around the time of the Best over a decade awards (we received more than one), I used to say, only half-jestingly, that I’d get the same result being one step behind the market as one step ahead. And I claimed to choose being behind, since it took much less effort.
It’s actually kind of true, if you can do it consistently. However, you can’t, since if you’re behind you have no idea how far behind. But there is still truth to it, to the endless reflexivity of markets – the lack of final truths, lack of an answer, lack of a stable relationship between factors.
You can never put your same foot into the same market twice
The same goes for being too far ahead. If you have to wait for years for your case to manifest itself in market prices, you’re actually wrong. The proof always is in your returns.
The market is BOTH reactive
and proactive
AND neither, i.e., concurrent
“News” is sometimes actual news, sometimes old news, sometimes forward-looking sentiment surveys or forecasts. Market participants are sometimes more forward-looking, sometimes more reactive, sometimes looking further into the future, sometimes less; sometimes risk-averse, sometimes tolerant, patient, sometimes jittery, undecided; both greedy and fearful – and neither.
Legendary investors like Hussman, Dalio, Marks, Buffett claim to only look at the present data, since forecasts that are both correct and unanticipated by the market are too few and far between to rely upon.
It’s true. Very true. And also false and useless. All we have are historic facts, but all that matters is the future. It’s a kind of paradox.
That’s one reason for the wild stock market swings compared to how the economy develops over longer time frames — population growth, UE changes, productivity… the big picture moves slowly and predictably like a supertanker. But lender and borrower sentiment, i.e., access to financing and leverage, decide both short-term growth AND valuation multiples. Hence the big difference.
A successful investor should apply all tools, but without over-reaching and burning out. Intuition, experience, expert pattern recognition, embodied, lived knowledge can guide you to when to go with or against the herd, when to take extra risks or be careful, when to trust momentum, when to stop dancing.
Investing is, just like skating or basketball, an arena where your intuition is much more valuable than mechanical skills.
Mark Spitznagel talks in his book Dao of Capital about losses being the ultimate profit centers for a trader. The losses are lessons for better trades in the future. The losses are your investments in your chance of success. If you accept a certain loss during the lifetime of a trade, you increase your chance of profits by an associated amount.
Conclusions – this is not an exit
There is no final solution, no either/or. There isn’t even a both/and solution – no bipolar, inclusive, final answer. No finish line. This is not an exit.
In all aspects of meaningful activities we’ll have to accept a second order non-binarity: it’s both either/or and both/and, it’s a back and forth hand-over between both aspects, between both brain hemispheres and their respective slants on reality.
The left pauses, analyzes, picks apart, in effect kills the living flow, and separates time and space into stillborn fragments – separated, unrelated points that can’t be put together to a living and flowing whole again. The left makes an effort, strains itself, in order to manipulate the world, going against the natural flow of nature. The left is not the Dao, the left can never know the Dao, can’t even speak about it, since verbalizing is the act of manipulating lifeless concepts, separated from actual life, from the eternal flow of the Dao. But the left still is part of the Dao, part of existence. Naturally.
The right does see the whole, understands depth in time, space, in emotions & relations, but can’t manipulate or use anything – it can’t even talk. It can experience and live, but it can’t even keep itself alive, since it can’t grasp things or communicate verbally. The right cannot say what it understands, but it understands nonetheless. It’s like being the concept of red, or being conscious; you’d know, but there is no way to convey that experience or knowledge.
The hemispheres need each other; they need the constant back and forth between each other’s views. The two halves are both there for a reason: the universe is that way too, a dialectic process, ever creative in a never-ending dance. Remember, there is no brain; it’s just an icon, a symbol, telling us a little, very little, about the consciousness that created that icon, the consciousness that pulls the brain into material existence for some purpose of interaction.
One half is, however, ultimately more correct: the right, the wise master; but without its emissary, the single-minded left, nothing gets done.
The machine elves of the late Terence McKenna, philosopher and psychonaut, can teach you about new perspectives, and make any atheist even see the afterlife. The Voynich manuscript reads as if written by those elves, those artists as I see them – eager to show new things, to surprise, to revel in just enough otherness to not accidentally kill by astonishment. A delicate balance, since things and their relations are infinitely ineffable.
Tao te ching is just as impossible as the Voynich. It is a 2,500-year-old poem, 81 verses with fully digestible wisdom as if written today. Clear & direct, and opaque & vague at the same time.
The unity of opposites makes up the world. Yin and Yang are everywhere, in apparent conflict and opposition, yet clearly outlining and strengthening each other. We should not call one side good and the other bad. There is no point in telling them apart at all, since they cannot exist divided. Nor do they make any sense when separated from one another. Everything is an aspect of the one, of ourself.
It is hard to grasp the ungraspable, the (superficially) paradoxical. Not least because analyzing, picking apart, talking, verbalizing, shuts off the flow of reality, in order to conceptualize, thing-ificate, freeze the flow of integrated reality, thereby disrupting it, killing it, sucking out all depth and meaning, projecting the living flow to a different dimension as far from reality and devoid of life as flickering shadows on a cave wall.
Don’t feel as if you can’t get it, or that I’m the one who doesn’t get it. The realm of the not strictly rational, of a flow without beginning or end, without time and space cannot be experienced directly or explained. It has to be felt in the in-between, through, beyond, for-yond, before the yond.
consciousness undisturbed
free of all troubles
devoid of all experience
is that ideal?
You are the dao. You are part of the whole, connected to all, and all to you. Dao is trust, connectedness, responsibility, kindness, resonance. With trust comes natural cooperation and harmony as in a family. No need for government or religion. Nature flows without effort, every atom in its rightful place.
The closer you get to matter the further you get from seeing or experiencing its matterness. Going at matter directly makes it disappear. The Dao is the same. The left and the right brain hemispheres reflect these two sides of existence; the immaterial dimensionless consciousness without time and space, and the material congealed hubs of interaction, of resistance within the flow itself, the actualized semi-permanent matter.
The more the brain shuts off (its brakes and filters) the more you see: brain activity dimmed to zero, the brain out of the way of the flow that is the dao, consciousness undisturbed, free of all troubles, devoid of all experience. Like a still pool of water, an existing nothingness but with inherent infinite potential of vortices aware of each other.
When nothing is done, when the ‘statue’ is left uncarved, then nothing is not done, and all is possible, nothing undone.
Takeaways
• Relations and flow are real. Things are not. There is just one unity, no true opposites or adversaries, only temporarily separated instances of oneself.
• Let go of the future and the past. You are not there. Take responsibility for the present.
• Look for the other side in everything – immer umkehren (always invert). Try to look through and beyond things and objectives by focusing on the dance of life, an infinite game of enhanced perspective and understanding, of both either/or and both/and. Try just one more vantage point, perpendicular to your current one.
• Do just one (more) in life, in love, at work or workouts – not to get overwhelmed or discouraged from commencing.
• Resonance: first and foremost aim for one more meeting. A meeting of Sheng-Jens.
A Sheng-jen is a person with a refined spirit, who is modest about his place in the world and shows compassion towards others, whatever the level of their wisdom.
The word ‘sheng’ is written with a sign that contains three parts: an ear, a mouth, and the sign for a king or sovereign – someone who listens and speaks beyond the perspective of common men. He who hears heaven understands all. A refined mind. It’s closer to what we call reason than to knowledge. ‘Jen’ means person.
You are always really interacting with another aspect of yourself. And that you, he or she, has their reasons for being the way they are. As do you. Be compassionate. Be a sheng-jen. The power of gentleness is greater than force. Water wears down mountains.
Meetings are everything. Alone is nothing, not even weak. Not even wrong, as Pauli famously quipped.
Consciousness is resonance, an existence nexus (“particle” if you will) that is aware of gravitation or the effects of electrical charge. Sensing is being in resonance, being conscious of something other – just enough separated to make it worthwhile, but still possible – exploring the infinite complexity and potential of consciousness.
The tricky thing in this eternal dance is where value comes in: where do joy, beauty and goodness hide in the three-body problem or the Schrödinger equation? But that’s a future topic.
I am here in Switzerland these days for more meetings, more understanding, a wider perspective. I’m looking for and offering resonance with what is ultimately myself and yourself, creating reality itself through the conjunction and resolution of apparent opposites.
Karl-Mikael Syding
Aspiring Sheng-Jen
P.S. My computer has just been located and shipped back to me
P.P.S. German sociologist Hartmut Rosa on uncontrollability and resonance: |
f005ac1681e48cd6 | University of Thi-Qar
Recent publications
Background: To prospectively investigate the role of fast spin-echo T2-weighted (FSE T2-w) and diffusion-weighted imaging (DWI) in magnetic resonance imaging (MRI) for detecting spine bone marrow changes in postmenopausal women with osteoporosis (OP). A total of 101 postmenopausal women, mean age 60.97 ± 7.41 (range 52–68) years, who underwent dual-energy X-ray absorptiometry of the spine, were invited to this study and divided into three bone density groups (normal, osteopenic, and osteoporotic) based on T-score. An MRI scan with both FSE T2-w and DWI of the vertebral body was then performed to calculate the signal-to-noise ratio (SNR) and apparent diffusion coefficient (ADC). Finally, MRI findings were compared between the three groups and correlated with bone marrow density. Results: The osteoporotic group showed significantly lower mean ADC values compared to the osteopenic and normal groups (0.58 ± 0.02 vs. 0.36 ± 0.05 vs. 0.24 ± 0.06 × 10⁻³ mm²/s, p < 0.001). Accordingly, a significant positive correlation was found between T-scores and ADC values (r = 0.652, p < 0.001). The mean SNR in FSE T2-w images for the normal, osteopenic, and osteoporotic groups was 5.61 ± 0.32, 5.48 ± 0.55, and 6.63 ± 0.67, respectively. No significant correlation was found between the mean SNR and T-score for all groups (r = −0.304, p > 0.05). Conclusions: DWI can be used as a noninvasive, quantitative, and valuable technique for OP evaluation, while routine MRI needs further investigation before it can be established as a reliable diagnostic indicator for OP.
In this work, the pulsed laser deposition (PLD) technique was used to deposit nanoparticles of pure titanium oxide (TiO2) onto a glass substrate at temperatures ranging from 100 to 400 degrees Celsius. The experiment used a frequency-doubled Nd:YAG laser with a wavelength of 532 nanometers and an average laser energy of 800 millijoules. To explore the optical properties, transmittance spectrometry measurements were carried out for both visible and ultraviolet wavelengths. The optical transmittance was found to exceed 80 percent, which indicates suitability for solar cell applications. Several optical constants, including the refractive index, the absorption coefficient, and the attenuation coefficient, were determined. The refractive index was found to be 2.49 at a substrate temperature of 400 degrees Celsius and a wavelength of 550 nanometers. Additionally, the density of the titanium dioxide coating was calculated to be 3.6881 grams per cubic centimeter. A numerical equation was used to ascertain the connection between density and substrate temperature. It is an empirical equation that may be employed to calculate the density of the deposited material. This equation was produced via a theoretical computer program and is specific to the results obtained in this research.
Existing implementations of file systems often seem to be made on an ad hoc and implicit basis. This paper aims to enhance the organization and retrieval of files by modifying the traditional hierarchical file system to improve built-in query support and bulk metadata updates supported at the file system level. We introduce tags in a hierarchy of file collections and use links to allow file retrieval from multiple paths, as files exist in multiple directories simultaneously. Through a series of modest changes to the hierarchical file system, we propose a novel Linked Tree Tags (LTTs) model. These changes include using multiple tags instead of names, collections instead of directories, exposing a query language at the Application Programming Interface (API) level, and allowing controlled file links. We assess our model's expressive capability and demonstrate that LTTs overcome traditional file systems' limits and provide users with the means to manage their files easily. A minimal sketch of the core idea follows.
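As a rough illustration of the model described above, here is a minimal in-memory sketch in Python. The names (`TagStore`, `File`, `query`) are hypothetical and not the paper's actual API; the point is only to show files identified by multiple tags and reachable through several tag combinations, in the spirit of the paper's collections and links.

```python
# Hypothetical sketch of the Linked Tree Tags (LTT) idea: files carry
# multiple tags instead of a single name, and a query over tags replaces
# path lookup, so one file is reachable from several "paths".
from dataclasses import dataclass, field

@dataclass
class File:
    content: bytes
    tags: set = field(default_factory=set)  # multiple tags instead of one name

class TagStore:
    def __init__(self):
        self.files = []

    def add(self, content: bytes, *tags: str) -> File:
        f = File(content, set(tags))
        self.files.append(f)
        return f

    def query(self, *required: str):
        """Return every file carrying all of the required tags."""
        need = set(required)
        return [f for f in self.files if need <= f.tags]

store = TagStore()
store.add(b"...", "thesis", "draft", "2023")
store.add(b"...", "thesis", "figures")

print(len(store.query("thesis")))           # 2 - both files reachable here
print(len(store.query("thesis", "draft")))  # 1 - narrower "path"
```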
Advances in Web 2.0 technologies have led to the widespread assimilation of electronic commerce platforms as an innovative shopping method and an alternative to traditional shopping. However, due to pro-technology bias, scholars focus more on adopting technology, and slightly less attention has been given to the impact of electronic word of mouth (eWOM) on customers' intention to use social commerce. This study addresses the gap by examining the intention through exploring the effect of eWOM on males' and females' intentions and identifying the mediation of perceived crowding. To this end, we adopted a dual-stage multi-group structural equation modeling and artificial neural network (SEM-ANN) approach. We successfully extended the eWOM concept by integrating negative and positive factors and perceived crowding. The results reveal the causal and non-compensatory relationships between the constructs. The variables supported by the SEM analysis are adopted as the ANN model's input neurons. According to the natural significance obtained from the ANN approach, males' intentions to accept social commerce are related mainly to helping the company, followed by core functionalities. In contrast, females are highly influenced by technical aspects and mishandling. The ANN model predicts customers' intentions to use social commerce with an accuracy of 97%. We discuss the theoretical and practical implications of increasing customers' intention toward social commerce channels among consumers based on our findings.
Lung cancer is a serious threat to human health, with millions dying because of its late diagnosis. The computerized tomography (CT) scan of the chest is an efficient method for early detection and classification of lung nodules. The requirement for high accuracy in analyzing CT scan images is a significant challenge in detecting and classifying lung cancer. In this paper, a new deep fusion structure based on long short-term memory (LSTM) is introduced, applied to the texture features computed from lung nodules through new volumetric grey-level co-occurrence matrices (GLCMs), classifying the nodules into benign, malignant, and ambiguous. Also, an improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules. WSA-Otsu thresholding can overcome the fixed-threshold and time-requirement restrictions of previous thresholding methods. Extended experiments assess this fusion structure by considering 2D-GLCMs based on 2D slices and approximating the proposed 3D-GLCM computations with volumetric 2.5D-GLCMs. The proposed methods are trained and assessed on the LIDC-IDRI dataset. The accuracy, sensitivity, and specificity obtained for 2D-GLCM fusion are 94.4%, 91.6%, and 95.8%, respectively. For 2.5D-GLCM fusion, the accuracy, sensitivity, and specificity are 97.33%, 96%, and 98%, respectively. For 3D-GLCM, the accuracy, sensitivity, and specificity of the proposed fusion structure reach 98.7%, 98%, and 99%, respectively, outperforming most state-of-the-art counterparts. The results and analysis also indicate that the WSA-Otsu method requires a shorter execution time and yields a more accurate thresholding process.
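For readers unfamiliar with the texture features involved, here is a hedged sketch of a plain 2D grey-level co-occurrence matrix, the building block the paper extends to volumetric 2.5D/3D variants. The function name and parameters are illustrative; the paper's exact construction and the LSTM fusion are not reproduced.

```python
# Illustrative 2D GLCM: count co-occurrences of quantized grey levels at a
# fixed pixel offset, normalize to a joint probability, then derive classic
# Haralick-style texture features such as contrast and energy.
import numpy as np

def glcm_2d(img: np.ndarray, levels: int = 8, offset=(0, 1)) -> np.ndarray:
    q = (img.astype(float) / (img.max() + 1e-9) * (levels - 1)).astype(int)
    dr, dc = offset
    M = np.zeros((levels, levels))
    rows, cols = q.shape
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            M[q[r, c], q[r + dr, c + dc]] += 1
    return M / M.sum()  # joint probability of grey-level pairs

img = np.random.randint(0, 256, (64, 64))   # stand-in for a nodule patch
P = glcm_2d(img)
i, j = np.indices(P.shape)
contrast = ((i - j) ** 2 * P).sum()          # local intensity variation
energy = (P ** 2).sum()                      # texture uniformity
```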
The drawbacks of the "take-or-pay" conception led to the search for alternatives to mitigate its effects. This paper proposes an intelligent multistage control system, consisting of two phases, to control the energy sources' operation in a connected-mode microgrid consisting of renewable energy sources and a diesel generator. The first phase is a forecasting process to predict the day's required fuel amount, thus dispensing with the take-or-pay method that forces the purchase of redundant quantities of fuel periodically. This stage depends on the deep neural network long short-term memory method. The second phase is energy management, which finds an optimum energy source schedule relying on a model-free strategy using reinforcement learning to achieve minimum energy cost. The proposed methodology is verified using an improved PSO. Simulations and theoretical calculations reveal that the proposed scheme is very successful at decreasing energy consumption costs, in addition to meeting the preferences of users.
Remote sensing image registration can benefit from a machine learning method based on the likelihood of predicting semantic spatial position distributions. Semantic segmentation of images has been revolutionized due to the accessibility of high-resolution remote sensing images and the advancement of machine learning techniques. This system captures the semantic distribution location of the matching reference picture, mapped using learning-based algorithms. The affine invariant is utilized to determine the semantic template's barycenter position and the pixel's center, which changes the semantic border alignment problem into a point-to-point matching issue for the machine learning-based semantic pattern matching (ML-SPM) model. The first step examines how various factors, such as template radius, training label filling form, or loss function combination, affect matching accuracy. In the second step, the matching of sub-images (MSI) is compared using heatmaps created from the expected similarity between the images' cropped sub-images. Images with radiometric discrepancies are matched with excellent accuracy by the approach. SAR-optical image matching has never been easier, and now even large-scale scenes can be registered using this approach, which is a significant advance over previous methods. Optical satellite imaging or multi-sensor stereogrammetry can be combined with both forms of data to enhance geolocation.
The landscape of the Mesopotamian floodplain is mainly structured by channel processes, including the formation of levees, meanders, scrollbars, oxbow lakes, crevasse splays, distributary channels, inter-distributary bays, and marshes. Moreover, several human-made features also form and shape this landscape, such as canals, roads, trenches, farms, and settlement sites ranging in size from villages to cities. A significant part of the Mesopotamian floodplain is covered by marshes, especially the southern region. These marshlands have thrived for thousands of years and are well known for their sustainable biodiversity and ecosystem. However, after the deliberate draining of the marshes in the 1990s, the areas have become dry and only small areas of shallow water and narrow strips of vegetation remain. Several kinds of archaeological landscape features have appeared on the surface and can be clearly identified in both ground surveys and with the use of remote sensing tools. This paper aims to determine the type and nature of the preserved archaeological features that appear in the landscape of the dried marshes and whether they are different from other features elsewhere in the Mesopotamian floodplain. An intensive ground survey was carried out in a selected area of the dried marshland, resulting in the identification of six types of archaeological features: settlement sites, rivers, canals, farms, grooves, and roads (hollow ways). These features used to be covered by bodies of deep water and dense zones of vegetation (reeds and papyrus).
Oxysterols are cholesterol metabolites generated in the liver and other peripheral tissues as a mechanism of removing excess cholesterol. Oxysterols have a wide range of biological functions, including the regulation of sphingolipid metabolism, platelet aggregation, and apoptosis. However, it has been found that metabolites derived from cholesterol play essential functions in cancer development and immunological suppression. In this regard, research indicates that 27-hydroxycholesterol (27-HC) might act as an estrogen, promoting the growth of estrogen receptor (ER) positive breast cancer cells. The capacity of cholesterol to dynamically modulate signaling molecules inside the membrane and particular metabolites serving as signaling molecules are two possible contributory processes. 27-HC is a significant metabolite produced mainly through the CYP27A1 (Cytochrome P450 27A1) enzyme. 27-HC maintains cholesterol balance biologically by promoting cholesterol efflux via the liver X receptor (LXR) and suppressing de novo cholesterol production through the Insulin-induced Genes (INSIGs). It has been demonstrated that 27-HC is able to function as a selective ER regulator. Moreover, enhanced 27-HC production is in favor of the growth of end-stage malignancies in the brain, thyroid organs, and colon, as shown in breast cancer, probably due to pro-survival and pro-inflammatory signaling induced by unbalanced levels of oxysterols. However, the actual role of 27-HC in cancer promotion and progression remains debatable, and many studies are warranted to be performed to unravel the precise function of these molecules. This review article will summarize the latest evidence on the deleterious or beneficial functions of 27-HC in various types of cancer, such as breast cancer, prostate cancer, colon cancer, gastric cancer, ovarian cancer, endometrial cancer, lung cancer, melanoma, glioblastoma, thyroid cancer, adrenocortical cancer, and hepatocellular carcinoma.
This work studies optical absorption in zinc-blende boron-containing quantum dot (QD) structures. Eight structures are studied; two of them are the ternary BInP/GaP and BInP/BP. The others are BGaAsP/BP, BAlAsP/BAs, BInAsP/InP, BGaInAs/GaAs, BGaInP/BP, and BInAsP/GaP. The emission wavelengths of the structures cover a broad spectral range from UV to near-infrared. The structures with BAs and BP barriers emit at 227 and 292 nm. The structures BInAsP/InP and BGaInAs/GaAs have peak absorptions at 870 nm and 920 nm wavelengths, while the ternary and quaternary structures with GaP barriers are at 720 and 1200 nm. The structures with a GaP barrier are important for silicon device technology. The absorption peaks are arranged such that the smallest energy difference between the transition subbands corresponds to a higher absorption peak and is associated with a wide bandgap energy difference between the barrier and the QD. For a boron increment of 0.005 in the QD region, the peak absorption of BInP/GaP and BInP/BP in the TE mode has a wide red-shift (170 nm) in the peak wavelength. BGaAsP/BP has an absorption peak four orders of magnitude higher than BAlAsP/BP. For the BInAsP/InP QD structure, the absorption spectrum is increased by more than four times under a 0.0001-mole-fraction increment of boron.
The present investigation was designed to study the prevalence of cryptosporidiosis in colorectal cancer patients compared to healthy subjects. This descriptive case-control study was performed on 174 subjects, including 87 healthy people and 87 patients with colorectal cancer attending general hospitals in Lorestan Province, Western Iran, during October 2019–August 2020. A fresh stool specimen was collected from each subject in a sterile labeled container. The collected stool samples were concentrated using the sucrose flotation method and then prepared for Ziehl-Neelsen staining for microscopic examination. All samples were also tested using nested PCR assays amplifying the 18S rRNA gene for the presence of Cryptosporidium DNA. Demographic and possible risk factors such as age, gender, residence, agricultural activity, history of contact with livestock, consumption of unwashed fruits/vegetables, and hand washing before eating were investigated in all the studied subjects using a questionnaire. Of the 87 patients with colorectal cancer, 37 (42.5%) had Cryptosporidium infection. A significant difference (p < 0.001) in the prevalence of Cryptosporidium spp. infections between the case and control (11, 12.6%) groups was observed. We found that cryptosporidiosis was not linked with age, gender, hand washing, agricultural activity, or history of contact with livestock in the colorectal patients. However, residence in urban areas was significantly associated with the prevalence of cryptosporidiosis. The 18S rRNA gene of Cryptosporidium was successfully amplified by nested PCR in 48 samples. Based on the obtained findings, Cryptosporidium spp. infections were observed significantly more frequently in the patients with colorectal cancer in comparison with the healthy individuals. It is suggested to carry out similar studies in various parts of Iran with larger sample sizes and further parasitological tests.
Aggressive, unexpected, and catastrophic changes in the environment, induced or impacted by the cultivation of land, crops, and cattle, are known as agricultural disasters. In agriculture, the volume of data, its unpredictability, its processing, and data management standards for interoperability are significant concerns. While natural catastrophes are still a considerable problem, the enormous amount of data available has opened up new avenues for coping. Accordingly, big data analytics has profoundly changed the way people respond to disasters in the agriculture sector. In this paper, the data handling model using big data analytics (DHM-BDA) explores the role of big data in managing agricultural disasters and highlights the technical status of delivering practical and efficient disaster management solutions. DHM-BDA is used to address the essential sources of big data, including climatic causes and associated successes, and developing technological problems in different disaster management phases. In addition, it aids in the monitoring, mitigation, alleviation, and acceptance of agricultural catastrophes and in the process of recovery and rebuilding. The simulation findings show that the suggested model achieves a prediction ratio of 98.9%, a decision-making level of 97.8%, data management of 96.5%, a production ratio of 95.6%, and a risk reduction ratio of 97.1% compared to other existing approaches.
Sleep scoring is one of the primary tasks for the classification of sleep stages in Electroencephalogram (EEG) signals. Manual visual scoring of sleep stages is time-consuming as well as being dependent on the experience of a highly qualified sleep expert. This paper aims to address these issues by developing a new method to automatically classify sleep stages in EEG signals. In this research, a robust method has been presented based on a clustering approach, coupled with probability distribution features, to identify six sleep stages with the use of EEG signals. Using this method, each 30-second EEG signal is first segmented into small epochs and then each epoch is divided into 60 sub-segments. Each sub-segment is decomposed into five levels by using a discrete wavelet transform (DWT) to obtain the approximation and detail coefficients. The wavelet coefficients of each level are clustered using the k-means algorithm. Subsequently, features are extracted based on the probability distribution of each wavelet coefficient. The extracted features are then forwarded to the least squares support vector machine classifier (LS-SVM) to identify sleep stages. Comparisons with several existing methods are also made in this study. The proposed method for the classification of the sleep stages achieves an average accuracy rate of 97.4%. It can be an effective tool for sleep stage classification and can be useful for doctors and neurologists in diagnosing sleep disorders.
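A rough sketch of this pipeline follows, under stated assumptions: `pywt` supplies the 5-level DWT, scikit-learn's `KMeans` clusters each coefficient band, and a histogram over cluster labels stands in for the probability-distribution features. scikit-learn has no LS-SVM, so `SVC` is used as a stand-in classifier; the paper's exact feature definitions are not reproduced.

```python
# Hedged sketch: DWT decomposition -> k-means clustering of coefficients ->
# cluster-label distribution as features -> SVM classifier (LS-SVM stand-in).
import numpy as np
import pywt
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def epoch_features(epoch: np.ndarray, k: int = 4) -> np.ndarray:
    feats = []
    coeffs = pywt.wavedec(epoch, "db4", level=5)   # [cA5, cD5, ..., cD1]
    for c in coeffs:
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(c.reshape(-1, 1))
        hist = np.bincount(labels, minlength=k) / len(labels)  # distribution
        feats.extend(hist)
    return np.asarray(feats)

# Toy usage: random "30-second epochs" with fake stage labels (0..5).
X = np.array([epoch_features(np.random.randn(3000)) for _ in range(20)])
y = np.random.randint(0, 6, 20)
clf = SVC(kernel="rbf").fit(X, y)
```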
Pancreatic and islet cell transplantation are considered surgical therapeutic modalities for type 1 diabetes mellitus with or without end-stage renal disease. The pancreatic transplant can be performed alone or together with a kidney transplant, simultaneously or at different times. It has contributed to an improved quality of life in those patients. Pancreatic transplantation and islet cell transplantation provide different degrees of insulin independence. Although the latter needs less monitoring, it is more expensive and tedious. The experiences in the Middle East and North African countries with both procedures are young but maturing. They need more scheduled national and/or regional programs to provide diverse options for their citizens.
In this article, new exact solitary wave solutions for the generalized nonlinear Schrödinger equation with a parabolic nonlinear (NL) law are obtained, employing the improved tanh(Γ(ϖ))–coth(Γ(ϖ)) function technique and the combined tan(Γ(ϖ))–cot(Γ(ϖ)) function technique. The offered techniques are novel and are used here for the first time. Different collections of hyperbolic and trigonometric function solutions are acquired, relying on a map between the considered equation and an auxiliary ODE. Several hyperbolic and trigonometric forms of solutions are obtained, based on diverse restrictions between the parameters involved in the equations and the integration constants that appear in the solutions. A few significant ones among the reported solutions are plotted to convey the physical utility and peculiarity of the considered model using mathematical software. The main point of this work is that one can visualize and extend existing knowledge to overcome the limitations of the most common techniques for solving ODEs and PDEs. The obtained solutions are verified with Maple software and found to be correct. The proposed methodology for solving the metamaterials model is effectual, unpretentious, expedient, and manageable. Finally, the existence of the obtained solutions under certain conditions is also analyzed.
Interest is increasing in certain parts of the world in replacing synthetic dyes with dyes from natural sources, particularly from plants. Although textile dyers have used various groups of natural dyes, microscopists generally have restricted their use to anthocyanins. Recently, however, another class of plant-based dyes has found some favor, the betacyanins. Betacyanins are a group of red and violet betalain dyes found only in certain plants of the order Caryophyllales and in Basidiomycetes mushrooms. Although the chemical structures of betacyanins are known, little use has been made of that information to understand or predict their behavior with biomedical specimens. We investigated two common, widely distributed betacyanin-containing plants, edible beets (Beta vulgaris) and wild pokeweed (Phytolacca americana). Aqueous alcoholic extracts were made from beet root and pokeweed berries, adjusted to pH 4.1 or 5.3 and used together with Harris’ hematoxylin to stain histological sections. We used a methanolic extract of pokeweed berries, pH 3.0, to stain cultured mycological specimens. Both extracts produced satisfactory staining that was equivalent to that of eosin Y, although the colors were more muted with the beet root extract. Epithelial cytoplasm, muscle, collagen and erythrocytes were well demonstrated. Betanin is the predominant component of beet root extract; it possesses one delocalized positive charge and three carboxylic acid substituents. The dyes are weak acids and the carboxylate anions are more diffuse than for eosin Y; this produces weaker bonding to tissue cations. The principal colored component of pokeweed berries, prebetanin, possesses a sulfonic acid group as well as carboxylic acids, which favors acid dyeing and more intense coloration. Both dyes show potential for hydrogen bonding and to a much lesser extent for some types of van der Waals forces. Complex formation with metals such as aluminum to create a nuclear stain is not likely with beet root dyes nor is it possible with pokeweed dyes. Betacyanins are suitable for staining microscopy preparations in place of other red acid dyes such as eosin. Of the two dyes tested here, prebetanin from pokeweed berries was superior to betanin from red beet roots. These berries are widely distributed and readily collected; the extraction procedure is simple and does not require expensive solvents.
Machine learning models have been effectively applied to predict certain variables in several engineering applications where the variable is highly stochastic in nature and too complex to identify using classical mathematical models. Therefore, this study investigates the capability of various machine learning algorithms in predicting the power production of a reservoir located in China using data from 1979 to 2016. In this study, different supervised and unsupervised machine learning algorithms are proposed: artificial neural network (ANN), AutoRegressive Integrated Moving Average (ARIMA) and support vector machine (SVM). Three different scenarios are examined: scenario 1 (SC1), used to predict daily power generation; scenario 2 (SC2), used to predict monthly power generation; and scenario 3 (SC3), used to predict hydropower generation (HPG) seasonally. Statistical analysis and pre-processing techniques were applied to the raw data before developing the models. Five statistical indexes are employed to evaluate the performances of the various models developed. The results indicate that the proposed models can be used to predict HPG efficiently and could be an effective method for energy decision-makers. The sensitivity analyses identified the most effective models for predicting HPG for the three scenarios using graphical distribution data (Taylor diagram). Regarding the uncertainty analysis, 95PPU and d-factors were adopted to measure the uncertainties of the best ANN and SVM models. The results show that the value of 95PPU for all models falls into the range between 80% and 100%. As for the d-factor, all values in all scenarios are less than one.
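As a minimal illustration of one such supervised setup, the sketch below trains an SVR on lagged values of a synthetic series and evaluates it on a chronological hold-out. The data, lag count, and split are illustrative assumptions, not the study's configuration (which used ANN, ARIMA, and SVM on reservoir data from 1979–2016).

```python
# Illustrative one-step-ahead forecasting of a generation-like series:
# build lagged features, split chronologically, fit a support vector
# regressor, and score on the held-out tail.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 1000)) + 0.1 * rng.standard_normal(1000)

lags = 7                                    # e.g., one week of daily history
X = np.array([series[i - lags:i] for i in range(lags, len(series))])
y = series[lags:]

split = int(0.8 * len(X))                   # chronological train/test split
model = SVR(kernel="rbf", C=10.0).fit(X[:split], y[:split])
print(mean_absolute_error(y[split:], model.predict(X[split:])))
```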
Electroencephalography (EEG) is a complex signal that may require several years of training, advanced signal processing, and feature extraction methodologies to interpret correctly. Recently, many methods have been used to extract and classify EEG data. This study reviews 62 papers that used EEG signals to detect driver drowsiness, published between January 2018 and 2022. We extract trends and highlight interesting approaches from this large body of literature to inform future research and formulate recommendations. To find relevant papers published in scientific journals, conferences, and electronic preprint repositories, researchers searched major databases covering the domains of science and engineering. For each investigation, many data items about (1) the data, (2) the channels used, (3) the extraction and classification procedure, and (4) the outcomes were extracted. These items were then analyzed one by one to uncover trends. Our analysis reveals that the amount of EEG data used across studies varies. We saw that more than half the studies used simulated driving experiments. About 21% of the studies used support vector machines (SVM), while 19% used convolutional neural networks (CNN). Overall, we can conclude that drowsiness and fatigue impair driving performance, resulting in drivers who are more exposed to risky situations.
This paper shows how to use the fractional Sumudu homotopy perturbation technique (SHP) with the Caputo fractional operator (CF) to solve time-fractional linear and nonlinear partial differential equations. The Sumudu transform (ST) and the homotopy perturbation technique (HP) are combined in this approach. The fractional derivative is defined in the Caputo sense. In general, the method is straightforward to execute and yields good results. Some examples are offered to demonstrate the technique's validity and use.
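For reference, the Caputo fractional derivative the paper relies on is standardly defined, for n − 1 < α < n, by

\[ {}^{C}D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)} \int_0^t \frac{f^{(n)}(\tau)}{(t-\tau)^{\alpha+1-n}}\, d\tau, \]

which reduces to the ordinary n-th derivative as α → n. This is the textbook definition, not a formula taken from the paper itself.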
1,209 members
Amin H. Al-khursan
• Faculty of Science
Samir M Abdulalmohsin
• Faculty of Science
Hussein Togun
• Biomedical Engineering Department
Hussien Al-Hmood
• Electrical and Electronics Engineering (EEE) Department
Ahmed Hasan Mohammed
• Faculty of Science
Thi Qar, Iraq |
7c5f31d6f5889191 | Stanford Encyclopedia of Philosophy
Relational Quantum Mechanics
First published Mon Feb 4, 2002; substantive revision Wed Jan 2, 2008
1. Introduction
Quantum theory is our current general theory of physical motion. The theory is the core component of the momentous change that our understanding of the physical world has undergone during the first decades of the 20th century. It is one of the most successful scientific theories ever: it is supported by vast and unquestionable empirical and technological effectiveness and is today virtually unchallenged. But the interpretation of what the theory actually tells us about the physical world raises a lively debate, which has continued with alternating fortunes, from the early days of the theory in the late twenties, to nowadays. The relational interpretations are a number of reflections by different authors, which were independently developed, but converge in indicating an interpretation of the physical content of the theory. The core idea is to read the theory as a theoretical account of the way distinct physical systems affect each other when they interact (and not of the way physical systems "are"), and the idea that this account exhausts all that can be said about the physical world. The physical world is thus seen as a net of interacting components, where there is no meaning to the state of an isolated system. A physical system (or, more precisely, its contingent state) is reduced to the net of relations it entertains with the surrounding systems, and the physical structure of the world is identified as this net of relationships.
The possibility that the physical content of an empirically successful physical theory could be debated should not surprise: examples abound in the history of science. For instance, the great scientific revolution was fueled by the grand debate on whether the effectiveness of the Copernican system could be taken as an indication that the Earth was not in fact at the center of the universe. In more recent times, Einstein's celebrated first major theoretical success, special relativity, consisted to a large extent just in understanding the physical meaning (simultaneity is relative) of an already existing effective mathematical formalism (the Lorentz transformations). In these cases, as in the case of quantum mechanics, a very strictly empiricist position could have circumvented the problem altogether, by reducing the content of the theory to a list of predicted numbers. But perhaps science can offer us more than such a list; and certainly science needs more than such a list to find its way.
The difficulty in the interpretation of quantum mechanics derives from the fact that the theory was first constructed for describing microscopic systems (atoms, electrons, photons) and the way these interact with macroscopic apparatuses built to measure their properties. Such interactions are denoted as "measurements". The theory consists in a mathematical formalism, which allows probabilities of alternative outcomes of such measurements to be calculated. If used just for this purpose, the theory raises no difficulty. But we expect the macroscopic apparatuses themselves — in fact, any physical system in the world — to obey quantum theory, and this seems to raise contradictions in the theory.
1.1 The Problem
In classical mechanics, a system S is described by a certain number of physical variables. For instance, an electron is described by its position and its spin (intrinsic angular momentum). These variables change with time and represent the contingent properties of the system. We say that their values determine, at every moment, the "state" of the system. A measurement of a system's variable is an interaction between the system S and an external system O, whose effect on O depends on the actual value q of the variable (of S) which is measured. The characteristic feature of quantum mechanics is that it does not allow us to assume that all variables of the system have determined values at every moment (this irrespective of whether or not we know such values). It was Werner Heisenberg who first realized the need to free ourselves from the belief that, say, an electron has a well determined position at every time. When it is not interacting with an external system that can detect its position, the electron can be "spread out" over different positions. In the jargon of the theory, one says that the electron is in a "quantum superposition" of two (or many) different positions. It follows that the state of the system cannot be captured by giving the value of its variables. Instead, quantum theory introduces a new notion of "state" of a system, which is different from a list of values of its variables. Such a new notion of state was developed in the work of Erwin Schrödinger in the form of the "wave function" of the system, usually denoted by Ψ. Paul Adrien Maurice Dirac gave a general abstract formulation of the notion of quantum state, in terms of a vector Ψ moving in an abstract vector space. The time evolution of the state Ψ is deterministic and is governed by the Schrödinger equation. From the knowledge of the state Ψ, one can compute the probability of the different measurement outcomes q. That is, the probability of the different ways in which the system S can affect a system O in an interaction with it. The theory then prescribes that at every such ‘measurement’, one must update the value of Ψ, to take into account which of the different outcomes has happened. This sudden change of the state Ψ depends on the specific outcome of the measurement and is therefore probabilistic. It is called the "collapse of the wave function".
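For orientation, these two evolution rules can be written compactly (standard textbook formulas, not specific to the relational interpretation):

\[ i\hbar \, \frac{\partial}{\partial t} \Psi = \hat{H} \Psi, \qquad P(q) = |\langle q | \Psi \rangle|^2 . \]

The first is the deterministic Schrödinger evolution; the second is the Born rule giving the probability of the outcome q, after which the collapse replaces Ψ by the state corresponding to the observed outcome.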
The problem of the interpretation of quantum mechanics takes then different forms, depending on the relative ontological weight we choose to assign to the wave function Ψ or, respectively, to the sequence of the measurement outcomes q, q′, q″, …. If we take Ψ as the "real" entity which fully represents the actual state of affairs of the world, we encounter a number of difficulties. First, we have to understand how Ψ can change suddenly in the course of a measurement: if we describe the evolution of two interacting quantum systems in terms of the Schrödinger equation, no collapse happens. Furthermore, the collapse, seen as a physical process, seems to depend on arbitrary choices in our description and shows a disturbing amount of nonlocality. But even if we can circumvent the collapse problem, the more serious difficulty of this point of view is that it appears to be impossible to understand how specific observed values q, q′, q″, … can emerge from the same Ψ. A better alternative is to take the observed values q, q′, q″, … as the actual elements of reality, and view Ψ just as a bookkeeping device, determined by the actual values q, q′, q″, … that happened in the past. From this perspective, the real events of the world are the "realization" (the "coming to reality", the "actualization") of the values q, q′, q″, … in the course of the interaction between physical systems. This actualization of a variable q in the course of an interaction can be denoted as the quantum event q. An example of a quantum event is the detection of an electron in a certain position. The position variable of the electron assumes a determined value in the course of the interaction between the electron and an external system and the quantum event is the "manifestation" of the electron in a certain position. Quantum events have an intrinsically discrete ("quantized") granular structure.
The difficulty of this second option is that if we take the quantum nature of all physical systems into account, the statement that a certain specific event q "has happened" (or, equivalently that a certain variable has or has not taken the value q) can be true and not-true at the same time. To clarify this key point, consider the case in which a system S interacts with another system (an apparatus) O, and exhibits a value q of one of its variables. Assume that the system O obeys the laws of quantum theory as well, and use the quantum theory of the combined system formed by O and S in order to predict the way this combined system can later interact with a third system O′. Then quantum mechanics forbids us to assume that q has happened. Indeed, as far as its later behavior is concerned, the combined system S+O may very well be in a quantum superposition of alternative possible values q, q′, q″, …. This "second observer" situation captures the core conceptual difficulty of the interpretation of quantum mechanics: reconciling the possibility of quantum superposition with the fact that the observed world is characterized by uniquely determined events q, q′, q″, …. More precisely, it shows that we cannot disentangle the two: according to the theory an observed quantity (q) can be at the same time determined and not determined. An event may have happened and at the same time may not have happened.
2. Relational view of quantum states
The way out from this dilemma suggested by the relational interpretations is that the quantum events, and thus the values of the variables of a physical system S, namely the q's, are relational. That is, they do not express properties of the system S alone, but rather refer to the relation between two systems.
The best developed of these interpretations is relational quantum mechanics (Rovelli 1996, 1997). For a detailed and critical account of this view of quantum theory, see (van Fraassen 2008) and (Bitbol 2007). The central tenet of relational quantum mechanics is that there is no meaning in saying that a certain quantum event has happened or that a variable of the system S has taken the value q: rather, there is meaning in saying that the event q has happened or the variable has taken the value q for O, or with respect to O. The apparent contradiction between the two statements that a variable has or hasn't a value is resolved by indexing the statements with the different systems with which the system in question interacts. If I observe an electron at a certain position, I cannot conclude that the electron is there: I can only conclude that the electron as seen by me is there. Quantum events only happen in interactions between systems, and the fact that a quantum event has happened is only true with respect to the systems involved in the interaction. The unique account of the state of the world of the classical theory is thus fractured into a multiplicity of accounts, one for each possible "observing" physical system. In the words of (Rovelli 1996): "Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".
This relativisation of actuality is viable thanks to a remarkable property of the formalism of quantum mechanics. John von Neumann was the first to notice that the formalism of the theory treats the measured system (S) and the measuring system (O) differently, but the theory is surprisingly flexible on the choice of where to put the boundary between the two. Different choices give different accounts of the state of the world (for instance, the collapse of the wave function happens at different times); but this does not affect the predictions on the final observations. Von Neumann only described a rather special situation, but this flexibility reflects a general structural property of quantum theory, which guarantees the consistency among all the distinct "accounts of the world" of the different observing systems. The manner in which this consistency is realized, however, is subtle.
What appears with respect to O as a measurement of the variable q (with a specific outcome), appears with respect to O′ simply as the establishing of a correlation between S and O (without any specific outcome). As far as the observer O is concerned, a quantum event has happened and a property q of a system S has taken a certain value. As far as the second observer O′ is concerned, the only relevant element of reality is that a correlation is established between S and O. This correlation will manifest itself only in any further observation that O′ would perform on the S+O system. Up to the time in which it physically interacts with S+O, the system O′ has no access to the actual outcomes of the measurements performed by O on S. This actual outcome is real only with respect to O (Rovelli 1996, pp. 1650-52). Consider for instance a two-state system O (say, a light-emitting diode, or l.e.d., which can be on or off) interacting with a two-state system S (say, the spin of an electron, which can be up or down). Assume the interaction is such that if the spin is up (down) the l.e.d. goes on (off). To start with, the electron can be in a superposition of its two states. In the account of the state of the electron that we can associate with the l.e.d., a quantum event happens in the interaction, the wave function of the electron collapses to one of two states, and the l.e.d. is then either on or off. But we can also consider the l.e.d./electron composite system as a quantum system and study the interactions of this composite system with another system O′. In the account associated to O′, there is no event and no collapse at the time of the interaction, and the composite system is still in the superposition of the two states [spin up/l.e.d. on] and [spin down/l.e.d. off] after the interaction. It is necessary to assume this superposition because it accounts for measurable interference effects between the two states: if quantum mechanics is correct, these interference effects are truly observable by O′. So, we have two discordant accounts of the same events. Can the two discordant accounts be compared and does the comparison lead to contradiction? They can be compared, because the information on the first account is stored in the state of the l.e.d. and O′ has access to this information. Therefore O and O′ can compare their accounts of the state of the world.
However, the comparison does not lead to contradiction because the comparison is itself a physical process that must be understood in the context of quantum mechanics. Indeed, O′ can physically interact with the electron and then with the l.e.d. (or, equivalently, the other way around). If, for instance, he finds the spin of the electron up, quantum mechanics predicts that he will then consistently find the l.e.d. on (because in the first measurement the state of the composite system collapses on its [spin up/l.e.d. on] component). That is, the multiplicity of accounts leads to no contradiction precisely because the comparison between different accounts can only be a physical quantum interaction. This internal self-consistency of the quantum formalism is general, and it is perhaps its most remarkable aspect. This self consistency is taken in relational quantum mechanics as a strong indication of the relational nature of the world.
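Schematically (our rendering of the example above, in standard notation, with α and β the superposition amplitudes): writing the electron spin states as |↑⟩, |↓⟩ and the l.e.d. states as |on⟩, |off⟩, the interaction yields, in the account of O′,

\[ \big( \alpha\,|{\uparrow}\rangle + \beta\,|{\downarrow}\rangle \big) \otimes |\mathrm{off}\rangle \;\longrightarrow\; \alpha\,|{\uparrow}\rangle|\mathrm{on}\rangle + \beta\,|{\downarrow}\rangle|\mathrm{off}\rangle , \]

an entangled superposition that can show interference; in the account of O itself, the state after the interaction is instead one of the two factorized alternatives |↑⟩|on⟩ or |↓⟩|off⟩, with probabilities |α|² and |β|².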
In fact, one may conjecture that this peculiar consistency between the observations of different observers is the missing ingredient for a reconstruction theorem of the Hilbert space formalism of quantum theory. Such a reconstruction theorem is still unavailable: On the basis of reasonable physical assumptions, one is able to derive the structure of an orthomodular lattice containing subsets that form Boolean algebras, which "almost", but not quite, implies the existence of a Hilbert space and its projectors' algebra (see the entry Quantum Logic and Quantum Probability.) Perhaps an appropriate algebraic formulation of the condition of consistency between subsystems could provide the missing hypothesis to complete the reconstruction theorem.
Bas van Fraassen has given an extensive critical discussion of this interpretation; he has also suggested an improvement, in the form of an additional postulate weakly relating the description of the same system given by different observers (van Fraassen 2008). Michel Bitbol has analyzed the relational interpretation of quantum mechanics from a Kantian perspective, substituting functional reference frames for physical (or naturalized) observers (Bitbol 2007).
3. Correlations
The conceptual relevance of correlations in quantum mechanics — a central aspect of relational quantum mechanics — is emphasized by David Mermin, who analyses the statistical features of correlation (Mermin 1998) and arrives at views close to the relational ones. Mermin points out that a theorem on correlations in Hilbert space quantum mechanics is relevant to the problem of what exactly quantum theory tells us about the physical world. Consider a quantum system S with internal parts s, s′,…, that may be considered as subsystems of S, and define the correlations among subsystems as the expectation values of products of subsystems' observables. It can be proved that, for any resolution of S into subsystems, the subsystems' correlations determine uniquely the state of S. According to Mermin, this theorem highlights two major lessons that quantum mechanics teaches us: first, the relevant physics of S is entirely contained in the correlations both among the s, s′,… themselves (internal correlations) and between the s, s′,… and other systems (external correlations); second, correlations may be ascribed physical reality whereas, according to well-known ‘no-go’ theorems, the quantities that are the terms of the correlations cannot (Mermin 1998).
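In formulas (a standard rendering of the definition, assuming the usual density-operator formalism): for observables A_s and A_{s′} of subsystems s and s′, the correlations are the expectation values

\[ \mathrm{corr}(A_s, A_{s'}) \;=\; \mathrm{Tr}\!\big( \rho_S \, A_s \otimes A_{s'} \big) , \]

and the theorem states that the collection of all such numbers, for all resolutions of S into subsystems, fixes the state ρ_S uniquely.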
4. Self-reference and self-measurement
From a relational point of view, the properties of a system exist only in reference to another system. What about the properties of a system with respect to itself? Can a system measure itself? Is there any meaning to the correlations of a system with itself? Implicit in the relational point of view is the intuition that a complete self-measurement is impossible. It is this impossibility that forces all properties to be referred to another system. The issue of self-measurement has been analyzed in detail in two remarkable works, from very different perspectives but with similar conclusions, by Marisa Dalla Chiara and by Thomas Breuer.
4.1 Logical aspect of the measurement problem
Marisa Dalla Chiara (1977) has addressed the logical aspect of the measurement problem. She observes that the problem of self-measurement in quantum mechanics is strictly related to the self-reference problem, which has an old tradition in logic. From a logical point of view the measurement problem of quantum mechanics can be described as a characteristic question of "semantical closure" of a theory. To what extent can quantum mechanics apply consistently to the objects and the concepts in terms of which its metatheory is expressed? Dalla Chiara shows that the duality in the description of state evolution, encoded in the ordinary (i.e. von Neumann's) approach to the measurement problem, can be given a purely logical interpretation: "If the apparatus observer O is an object of the theory, then O cannot realize the reduction of the wave function. This is possible only to another O′, which is ‘external’ with respect to the universe of the theory. In other words, any apparatus, as a particular physical system, can be an object of the theory. Nevertheless, any apparatus which realizes the reduction of the wave function is necessarily only a metatheoretical object" (Dalla Chiara 1977, p. 340). This observation is remarkably consistent with the way in which the state vector reduction is justified within the relational interpretation of quantum mechanics. When the system S+O is considered from the point of view of O′, the measurement can be seen as an interaction whose dynamics is fully unitary, whereas from the point of view of O the measurement breaks the unitarity of the evolution of S. The unitary evolution does not break down through mysterious physical jumps, due to unknown effects, but simply because O is not giving a full dynamical description of the interaction. O cannot have a full description of the interaction of S with himself (O), because his information is correlation information and there is no meaning in being correlated with oneself. If we include the observer into the system, then the evolution is still unitary, but we are now dealing with the description of a different observer.
4.2 Impossibility of complete self-measurement
As is well known, from a purely logical point of view self-reference properties in formal systems impose limitations on the descriptive power of the systems themselves. Thomas Breuer has shown that, from a physical point of view, this feature is expressed by the existence of limitations in the universal validity of physical theories, no matter whether classical or quantum (Breuer 1995). Breuer studies the possibility for an apparatus O to measure its own state. More precisely, of measuring the state of a system containing an apparatus O. He defines a map from the space of all sets of states of the apparatus to the space of all sets of states of the system. Such a map assigns to every set of apparatus states the set of system states that is compatible with the information that — after the measurement interaction — the apparatus is in one of these states. Under reasonable assumptions on this map, Breuer is able to prove a theorem stating that no such map can exist that can distinguish all the states of the system. An apparatus O cannot distinguish all the states of a system S containing O. This conclusion holds irrespective of the classical or quantum nature of the systems involved, but in the quantum context it implies that no quantum mechanical apparatus can measure all the quantum correlations between itself and an external system. These correlations are only measurable by a second external apparatus, observing both the system and the first apparatus.
5. Other relational views
5.1 Quantum reference systems
A relational view of quantum mechanics has been proposed also by Gyula Bene (1997). Bene argues that quantum states are relative in the sense that they express a relation between a system to be described and a different system, containing the former as a subsystem and acting for it as a quantum reference system (here the system is contained in the reference system, while in Breuer's work the system contains the apparatus). Consider again a measuring system (O) that has become entangled with a measured system (S) during a measurement. Once again, the difficulty of quantum theory is the apparent contradiction between the fact that the quantity q of the system assumes an observed value in the measurement and the fact that the composite S+O system still has to be considered to be in a superposition state if we want to predict properly the outcome of measurements on the S+O system. This apparent contradiction is resolved by Bene by relativizing the state not to an observer, as in the relational quantum mechanics sketched in Section 2, but rather to a relevant composite system. That is: there is a state of the system S relative to S alone, and a state of the system S relative to the S+O composite system. (Similarly, there is a state of the system O relative to itself alone, and a state of the system O relative to the S+O ensemble.) The ensemble with respect to which the state is defined is called by Bene the quantum reference system. The state of a system with respect to a given quantum reference system correctly predicts the probability distributions of any measurement on the entire reference system. This dependence of the states of quantum systems on different quantum systems that act as reference systems is viewed as a fundamental property that holds no matter whether a system is observed or not.
5.2 Sigma algebra of interactive properties
Similar views have been expressed by Simon Kochen in unpublished but rather well-known notes (Kochen, 1979, preprint). In Kochen's words: "The basic change in the classical framework which we advocate lies in dropping the assumption of the absoluteness of physical properties of interacting systems… Thus quantum mechanical properties acquire an interactive or relational character." Kochen uses a σ-algebra formalism. Each quantum system has an associated Hilbert space. The properties of the system are established by its interaction with other quantum systems, and these properties are represented by the corresponding projection operators on the Hilbert space. These projectors are elements of a Boolean σ-algebra, determined by the physics of the interaction between the two systems. Suppose a quantum system S can interact with quantum systems Q, Q′,…. In each case, S will acquire an interaction σ-algebra of properties, σ(Q), σ(Q′), …; the interaction between S and Q may be finer grained than the interaction between S and Q′, and thus interaction σ-algebras may have non-trivial intersections. The family of all Boolean σ-algebras forms a category, with the sets of the projectors of each σ-algebra as objects. In Kochen's words: "Just as the state of a composite system does not determine states of its components, conversely, the states of the… correlated systems do not determine the state of the composite system […] We thus resolve the measurement problem by cutting the Gordian knot tying the states of component systems uniquely to the state of the combined system." This is very similar in spirit to the Bene approach and to Rovelli's relational quantum mechanics, but the precise technical relation between the formalisms utilized in these approaches has not yet been analysed in full detail.
Further approaches at least formally related to Kochen's have been proposed by Healey (1989), who also emphasises an interactive aspect of his approach, and by Dieks (1989). See also the entry on Modal Interpretations of Quantum Mechanics.
5.3 Quantum theory of the universe
Relational views on quantum theory have been defended also by Lee Smolin (1995) and by Louis Crane (1995) in a cosmological context. If one is interested in the quantum theory of the entire universe, then, by definition, an external observer is not available. Breuer's theorem shows then that a quantum state of the universe, containing all correlations between all subsystems, expresses information that is not available, not even in principle, to any observer. In order to write a meaningful quantum state, argue Crane and Smolin, we have to divide the universe in two components and consider the relative quantum state predicting the outcomes of the observations that one component can make on the other.
5.4 Relation with Everett's relative-state interpretation
Relational ideas underlie also the interpretations of quantum theory inspired by the work of Everett. Everett's original work (Everett 1975) relies on the notion of "relative state" and has a marked relational tone (see the entry on quantum mechanics: Everett's relative-state formulation of). In the context of Everettian accounts, a state may be taken as relative either (more commonly) to a "world", or "branch", or (sometimes) to the state of another system (see for instance Saunders 1996, 1998). While the first variant (relationalism with respect to branches) is far from the relational views described here, the second variant (relationalism with respect to the state of a system) is closer.
However, it is different to say that something is relative to a system or that something is relative to a state of a system. Consider for instance the situation described in the example of Section 2: according to the relational interpretation, after the first measurement the quantity q has one given value, and only one, for O, while in Everettian terms the quantity q has a value for one state of O and a different value for another state of O, and the two are equally real. In Everett, there is an ontological multiplicity of realities, which is absent in the relational point of view, where physical quantities are uniquely determined once two systems are given.
The difference derives from a very general interpretational difference between Everettian accounts and the relational point of view. Everett (at least in its widespread version) takes the state Ψ as the basis of the ontology of quantum theory. The overall state Ψ includes different possible branches and different possible outcomes. On the other hand, the relational interpretation takes the quantum events q, that is, the actualizations of values of physical quantities, as the basic elements of reality (see Section 1.1 above) and such q's are assumed to be univocal. The relational view avoids the traditional difficulties in taking the q's as univocal simply by noticing that a q does not refer to a system, but rather to a pair of systems.
For a comparison between the relational interpretation and other current interpretations of quantum mechanics, see Rovelli 1996.
6. Some consequences of the relational point of view
A number of open conceptual issues in quantum mechanics appear in a different light when seen in the context of a relational interpretation of the theory. In particular, the Einstein-Podolsky-Rosen (EPR) correlations have a substantially different interpretation within the perspective of the relational interpretation of quantum mechanics. Laudisa (2001) has argued that the non-locality implied by the conventional EPR argument turns out to be frame-dependent, and this result supports the "peaceful coexistence" of quantum mechanics and special relativity. More radically, Rovelli and Smerlak (2006) argue that these correlations do not entail any form of "non-locality", when viewed in the context of this interpretation, essentially because no quantum event relative to an observer happens at a spacelike separation from this observer. The abandonment of strict Einstein realism implied by the relational stance makes it possible to reconcile quantum mechanics, completeness, and locality.
Also, the relational interpretation allows one to give a precise definition of the time (or, better, the probability distribution of the time) at which a measurement happens, in terms of the probability distribution of the correlation between system and apparatus, as measurable by a third observer (Rovelli 1998).
Finally, it has been suggested in (Rovelli 1997) that the relationalism at the core of quantum theory pointed out by the relational interpretations might be connected with the spatiotemporal relationalism that characterizes general relativity. Quantum mechanical relationalism is the observation that there are no absolute properties: properties of a system S are relative to another system O with which S is interacting. General relativistic relationalism is the well known observation that there is no absolute localization in spacetime: localization of an object S in spacetime is only relative to the gravitational field, or to any other object O, to which S is contiguous. There is a connection between the two, since interaction between S and O implies contiguity and contiguity between S and O can only be checked via some quantum interaction. However, because of the difficulty of developing a consistent and conceptually transparent theory of quantum gravity, so far this suggestion has not been developed beyond the stage of a simple intuition.
7. Conclusion
Relational interpretations of quantum mechanics propose a solution to the interpretational difficulties of quantum theory based on the idea of weakening the notions of the state of a system and of an event, as well as the idea that a system, at a certain time, may simply have a certain property. The world is described as an ensemble of events ("the electron is at the point x") which happen only relative to a given observer. Accordingly, the state and the properties of a system are relative to another system only. There is a wide diversity in style, emphasis, and language among the authors we have mentioned. Indeed, most of the works mentioned have developed independently from each other. But it is rather clear that there is a common idea underlying all these approaches, and the convergence is remarkable.
Werner Heisenberg first recognized that the electron does not have a well defined position when it is not interacting. Relational interpretations push this intuition further, by stating that, even when interacting, the position of the electron is only determined in relation to a certain observer, or to a certain quantum reference system, or similar.
In physics, the move of deepening our insight into the physical world by relativizing notions previously used as absolute has been applied repeatedly and very successfully. Here are a few examples. The notion of the velocity of an object has been recognized as meaningless, unless it is indexed with a reference body with respect to which the object is moving. With special relativity, simultaneity of two distant events has been recognized as meaningless, unless referred to a specific state of motion of something. (This something is usually denoted as "the observer" without, of course, any implication that the observer is human or has any other peculiar property besides having a state of motion. Similarly, the "observer system" O in quantum mechanics need not be human or have any other property besides the possibility of interacting with the "observed" system S.) With general relativity, the position in space and time of an object has been recognized as meaningless, unless it is referred to the gravitational field, or to some other dynamical physical entity. The move proposed by the relational interpretations of quantum mechanics has strong analogies with these, but is, in a sense, a longer jump, since all physical events and the entirety of the contingent properties of any physical system are taken to be meaningful only as relative to a second physical system. The claim of the relational interpretations is that this is not an arbitrary move. Rather, it is a conclusion which is difficult to escape, following from the observation — explained above in the example of the "second observer" — that a variable (of a system S) can have a well determined value q for one observer (O) and at the same time fail to have a determined value for another observer (O′).
This way of thinking about the world certainly has weighty philosophical implications. The claim of the relational interpretations is that it is nature itself that is forcing us to this way of thinking. If we want to understand nature, our task is not to frame nature into our philosophical prejudices, but rather to learn how to adjust our philosophical prejudices to what we learn from nature.
Related Entries
properties | quantum mechanics | quantum mechanics: action at a distance in | quantum mechanics: collapse theories | quantum mechanics: Everett's relative-state formulation of | quantum mechanics: modal interpretations of | quantum theory: measurement in | quantum theory: quantum entanglement and information | quantum theory: quantum logic and probability theory |
97bff736d7fb05ec | Quantum Pendulum
For an idealized classical pendulum consisting of a point mass \(m\) attached to a massless rigid rod of length \(L\) attached to a stationary pivot, in the absence of friction and air resistance, the energy is given by

\[ E = \tfrac{1}{2} m L^2 \dot\theta^2 - m g L \cos\theta , \]

where \(\theta\) is the angular displacement from the vertical direction. The oscillation is presumed to occur between the limits \(\pm\theta_0\), where \(\theta_0 < \pi\) to avoid the transition to a spherical pendulum. The exact solution for this classical problem is known (see, for example, [1]) and turns out to be very close to the behavior of a linear oscillator, for which \(\sin\theta\) can be approximated by \(\theta\). The natural frequency of oscillation is given by the series

\[ \omega = \omega_0 \left( 1 - \frac{\theta_0^2}{16} + \frac{11\,\theta_0^4}{3072} - \cdots \right) , \]

where \(\omega_0 = \sqrt{g/L}\), the limiting linear approximation for the natural frequency (a result of great historical significance).
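As a quick sanity check of this series against the exact elliptic-integral result (a sketch we add here; the amplitude value is arbitrary):

```python
import numpy as np
from scipy.special import ellipk

theta0 = 0.5                                    # amplitude in radians (illustrative)
# Exact ratio omega/omega0 = pi / (2 K), with SciPy's ellipk taking the
# parameter m = k**2 for modulus k = sin(theta0/2).
exact = np.pi / (2 * ellipk(np.sin(theta0 / 2) ** 2))
series = 1 - theta0**2 / 16 + 11 * theta0**4 / 3072
print(exact, series)                            # both approximately 0.9845
```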
The nonlinear pendulum can be formulated as a quantum-mechanical problem represented by the Schrödinger equation

\[ -\frac{\hbar^2}{2 m L^2} \, \frac{d^2 \psi(\theta)}{d\theta^2} - m g L \cos\theta \, \psi(\theta) = E \, \psi(\theta) , \]

where \(\psi(\theta)\) is required to be a single-valued, \(2\pi\)-periodic function of \(\theta\). This has the form of Mathieu's differential equation, and its solutions are even and odd Mathieu functions of the form \(\mathrm{ce}(a, q, \theta/2)\) and \(\mathrm{se}(a, q, \theta/2)\) [2]. However, we describe a more transparent solution, which uses the Fourier series used to compute the Mathieu functions.
Accordingly, the solution of the Schrödinger equation is represented by a Fourier expansion

\[ \psi(\theta) = \frac{1}{\sqrt{2\pi}} \sum_{n=-N}^{N} c_n \, e^{i n \theta} . \]

This can be put in a more compact form:

\[ \psi(\theta) = \sum_{n} c_n \, \phi_n(\theta) . \]

The matrix elements of the Hamiltonian are given by

\[ H_{mn} = \frac{\hbar^2 n^2}{2 m L^2} \, \delta_{mn} - \frac{m g L}{2} \left( \delta_{m,n+1} + \delta_{m,n-1} \right) \]

in terms of a set of normalized basis functions

\[ \phi_n(\theta) = \frac{e^{i n \theta}}{\sqrt{2\pi}}, \qquad n = 0, \pm 1, \pm 2, \ldots . \]
The built-in Mathematica function Eigensystem is then applied to compute the eigenvalues and eigenfunctions of the truncated Hamiltonian matrix, which are then displayed in the graphic.
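For readers without Mathematica, the same truncated-matrix diagonalization can be sketched in Python/NumPy. We use the dimensionless form \(H_{mn} = n^2 \delta_{mn} - (q/2)(\delta_{m,n+1} + \delta_{m,n-1})\), with energies in units of \(\hbar^2/2mL^2\); the cutoff N and the coupling q below are illustrative values, not taken from the Demonstration.

```python
import numpy as np

# Truncated Fourier-basis Hamiltonian of the quantum pendulum, in units of
# hbar^2 / (2 m L^2); q collects the pendulum parameters (assumed value here).
N = 20                                   # basis: exp(i n theta), n = -N..N
q = 5.0                                  # dimensionless cos(theta) coupling
n = np.arange(-N, N + 1, dtype=float)
H = np.diag(n ** 2)                      # kinetic term, diagonal in n
H += np.diag(np.full(2 * N, -q / 2), 1)  # cos(theta) couples n to n +/- 1
H += np.diag(np.full(2 * N, -q / 2), -1)

evals, evecs = np.linalg.eigh(H)         # analogue of Mathematica's Eigensystem
print(evals[:5])                         # lowest quantum-pendulum energy levels
```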
Contributed by: S. M. Blinder (January 2019)
Open content licensed under CC BY-NC-SA
[1] A. Beléndez, C. Pascual, D. I. Méndez, T. Beléndez and C. Neipp, "Exact Solution for the Nonlinear Pendulum," Revista Brasileira de Ensino de Física, 29(4), 2007 pp. 645–648. doi:10.1590/S1806-11172007000400024.
[2] T. Pradhan and A. V. Khare, "Plane Pendulum in Quantum Mechanics," American Journal of Physics, 41(1), 1973 pp. 59–66. doi:10.1119/1.1987121.
|
cd70a561dc140575 | Contextual emergence
From Scholarpedia
Harald Atmanspacher and Peter beim Graben (2009), Scholarpedia, 4(3):7997. doi:10.4249/scholarpedia.7997
Curator: Harald Atmanspacher
Contextual emergence characterizes a specific kind of relationship between different domains of scientific descriptions of particular phenomena. Although these domains are not ordered strictly hierarchically, one often speaks of lower and higher levels of description, where lower levels are considered as more fundamental in a certain sense. As a rule, phenomena at higher levels of description are more complex than phenomena at lower levels. This increasing complexity depends on contingent conditions, so-called contexts, that must be taken into account for an appropriate description.
Moving up or down in the hierarchy of descriptions also decreases or increases the amount of symmetries relevant at the respective level. A (hypothetical) description at a most fundamental level would have no broken symmetry, meaning that such a description is invariant under all conceivable transformations. This would amount to a description completely free of contexts: everything is described by one (set of) fundamental law(s). Indeed, this is sometimes called (the dream of) a "theory of everything", but it is equally correct to call it – literally – a "theory of nothing". The consequence of complete symmetry is that there are no distinguishable phenomena. Broken symmetries provide room for contexts and, thus, "create" phenomena.
Contextual emergence utilizes lower-level features as necessary (but not sufficient) conditions for the description of higher-level features. As will become clear below, it can be viably combined with the idea of multiple realization, a key issue in supervenience (Kim 1992, 1993), which poses sufficient but not necessary conditions at the lower level. Both contextual emergence and supervenience are interlevel relations more specific than a patchwork scenario as in radical emergence and more flexible than a radical reduction where everything is already contained at a lower (or lowest) level.
Contextual emergence was introduced by Bishop and Atmanspacher (2006) as a structural relation between different levels of description. As such, it belongs to the class of synchronic types of emergence (Stephan 1999). It does not address questions of diachronic emergence, referring to how new qualities arise dynamically, as a function of time. Contextual emergence also differs from British emergentism from Mill to Broad. An informative discussion of various types of emergence versus reductive interlevel relations is due to Beckermann et al. (1992); see also Gillett (2002).
The conceptual scheme
The basic idea of contextual emergence is to establish a well-defined interlevel relation between a lower level \(L\) and a higher level \(H\) of a system. This is done by a two-step procedure that leads in a systematic and formal way (1) from an individual description \(L_i\) to a statistical description \(L_s\) and (2) from \(L_s\) to an individual description \(H_i\). This scheme can in principle be iterated across any connected set of descriptions, so that it is applicable to any case that can be formulated precisely enough to be a sensible subject of a scientific investigation.
The essential goal of step (1) is the identification of equivalence classes of individual states that are indistinguishable with respect to a particular ensemble property. This step implements the multiple realizability of statistical states in \(L_s\) (which will be the basis for individual states in \(H_i\)) by individual states in \(L_i\ .\) The equivalence classes at \(L\) can be regarded as cells of a partition. Each cell is the support of a (probability) distribution representing a statistical state, encoding limited knowledge about individual states.
The essential goal of step (2) is the assignment of individual states at level \(H\) to coextensional statistical states at level \(L\ .\) This is impossible without additional information about the desired level-\(H\) description. In other words, it requires the choice of a context setting the framework for the set of observables (properties) at level \(H\) that is to be constructed from level \(L\ .\) The chosen context provides conditions that can be implemented as stability criteria at level \(L\ .\) It is crucial that such stability conditions cannot be specified without knowledge about the context at level \(H\ .\) In this sense the context yields a top-down constraint, or downward confinement (sometimes misleadingly called downward causation).
The notion of stability induced by context is of paramount significance for contextual emergence. Roughly speaking, stability refers to the fact that some system is robust under (small) perturbations: for instance, (small) perturbations of a homeostatic or equilibrium state are damped out by the dynamics, so that the initial state is (asymptotically) retained. The more complicated notion of a stable partition of a state space is based on the idea of coarse-grained states, i.e. cells of a partition whose boundaries are (approximately) maintained under the dynamics.
Stability criteria guarantee that the statistical states of \(L_s\) are based on a robust partition so that the emergent observables in \(H_i\) are well-defined. (For instance, if a partition is not stable under the dynamics of the system at \(L_i\ ,\) the assignment of states in \(H_i\) will change over time and, thus, will be ill-defined.) Implementing a contingent context of \(H_i\) as a stability criterion in \(L_i\) yields a proper partitioning for \(L_s\ .\) In this way, the lower-level state space is endowed with a new, contextual topology (see Atmanspacher (2007) and Atmanspacher and Bishop (2007) for more details).
From a slightly different perspective, the context selected at level \(H\) decides which details in \(L_i\) are relevant and which are irrelevant for individual states in \(H_i\ .\) Differences among all those individual states at \(L_i\) that fall into the same equivalence class at \(L_s\) are irrelevant for the chosen context. In this sense, the stability condition determining the contextual partition at \(L_s\) is also a relevance condition.
The interplay of context and stability across levels of description is the core of contextual emergence. Its proper implementation requires an appropriate definition of individual and statistical states at these levels. This means in particular that it would not be possible to construct emergent observables in \(H_i\) from \(L_i\) directly, without the intermediate step to \(L_s\ .\) And it would be equally impossible to construct these emergent observables without the downward confinement arising from higher-level contextual constraints.
In this spirit, bottom-up and top-down strategies are interlocked with one another in such a way that the construction of contextually emergent observables is self-consistent. Higher-level contexts are required to implement lower-level stability conditions leading to proper lower-level partitions, which in turn are needed to define those lower-level statistical states that are co-extensional (not necessarily identical!) with higher-level individual states and their associated observables.
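A toy illustration of the two steps (our own example, not from the original article): take the doubling map on the unit interval as a lower-level dynamics, coarse-grain it into two cells, and check that the induced higher-level transition statistics are well defined; the stability criterion here is the Markov property of the partition.

```python
import numpy as np

# Lower-level dynamics L_i and a candidate two-cell coarse-graining L_i -> L_s.
f = lambda x: (2.0 * x) % 1.0          # doubling map on [0, 1)
cell = lambda x: 0 if x < 0.5 else 1   # equivalence classes (partition cells)

rng = np.random.default_rng(0)
xs = rng.random(100_000)

# Transition statistics between cells: a stable (here: generating, Markov)
# partition yields time-independent higher-level transition probabilities,
# so the induced higher-level description H_i is well defined.
T = np.zeros((2, 2))
for x in xs:
    T[cell(x), cell(f(x))] += 1
T /= T.sum(axis=1, keepdims=True)
print(T)   # approximately [[0.5, 0.5], [0.5, 0.5]] for this partition
```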
Example: From mechanics to thermodynamics
As an example, consider the transition from classical point mechanics via statistical mechanics to thermodynamics (Bishop and Atmanspacher 2006). Step (1) in the discussion above is here the step from point mechanics to statistical mechanics, essentially based on the formation of an ensemble distribution. Particular properties of a many-particle system are defined in terms of a statistical ensemble description (e.g., as moments of a many-particle distribution function) which refers to the statistical state of an ensemble (\(L_s\)) rather than the individual states of single particles (\(L_i\)).
An example of an observable associated with the statistical state of a many-particle system is its mean kinetic energy, which can be calculated from the Maxwell-Boltzmann distribution of the momenta of all N particles. The expectation value of the kinetic energy is defined as the limit of its mean value for infinite N.
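For concreteness, the standard calculation reads (with \(k_B\) the Boltzmann constant):

\[ \left\langle \frac{p^2}{2m} \right\rangle = \int \frac{p^2}{2m} \, \frac{e^{-p^2/2mk_BT}}{(2\pi m k_B T)^{3/2}} \, d^3p = \frac{3}{2}\, k_B T , \]

a property of the statistical state \(L_s\); as the next paragraph stresses, this does not amount to identifying temperature with a mechanical observable.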
Step (2) is the step from statistical mechanics to thermodynamics. Concerning observables, this is the step from the expectation value of a momentum distribution of a particle ensemble (\(L_s\)) to the temperature of the system as a whole (\(H_i\)). In many standard philosophical discussions this step is mischaracterized by the false claim that the thermodynamic temperature of a gas is identical with the mean kinetic energy of the molecules which constitute the gas. A proper discussion of the details was not available for a long time and has been achieved by Haag et al. (1974) and Takesaki (1970) in the framework of quantum field theory.
The main conceptual point in step (2) is that thermodynamic observables such as temperature presume thermodynamic equilibrium as a crucial assumption serving as a contextual condition. It is formulated in the zeroth law of thermodynamics and not available at the level of statistical mechanics. The very concept of temperature is thus foreign to statistical mechanics and pertains to the level of thermodynamics alone. (Needless to say, there are more thermodynamic observables in addition to temperature. Note that even a feature as fundamental as irreversibility in thermodynamics depends crucially on the context of thermal equilibrium.)
The context of thermal equilibrium (\(H_i\)) can be recast in terms of a class of distinguished statistical states (\(L_s\)), the so-called Kubo-Martin-Schwinger (KMS) states. These states are defined by the KMS condition which characterizes the (structural) stability of a KMS state against local perturbations. (More precisely, this includes stationarity, ergodicity, and mixing; compare Atmanspacher and beim Graben 2007). Hence, the KMS condition implements the zeroth law of thermodynamics as a stability criterion at the level of statistical mechanics. (The second law of thermodynamics expresses this stability in terms of a maximization of entropy for thermal equilibrium states. Equivalently, the free energy of the system is minimal in thermal equilibrium.)
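In formulas, the standard form of the condition (quoted for orientation, with \(\hbar = 1\)): a state \(\omega\) is KMS at inverse temperature \(\beta = 1/k_B T\) with respect to the time evolution \(\alpha_t\) if, for all observables \(A\) and \(B\),

\[ \omega\big( A \, \alpha_{t+i\beta}(B) \big) = \omega\big( \alpha_t(B) \, A \big) , \]

understood via analytic continuation of \(t\) into the strip \(0 \leq \mathrm{Im}\, t \leq \beta\).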
Statistical KMS states induce a contextual topology in the state space of statistical mechanics (\(L_s\)) which is basically a coarse-grained version of the topology of \(L_i\ .\) This means nothing else than a partitioning of the state space into cells, leading to statistical states (\(L_s\)) that represent equivalence classes of individual states (\(L_i\)). They form ensembles of states that are indistinguishable with respect to their mean kinetic energy and can be assigned the same temperature (\(H_i\)). Differences between individual states at \(L_i\) falling into the same equivalence class at \(L_s\) are irrelevant with respect to a particular temperature at \(H_i\ .\)
While step (1) formulates statistical states from individual states at the mechanical level of description, step (2) provides individual thermal states from statistical mechanical states. Along with this step goes a definition of new, emergent thermal observables that are coextensive, but not identical, with mechanical observables. All this is guided by, and impossible without, the explicit use of the context of thermal equilibrium.
The example of the relation between mechanics and thermodynamics is particularly valuable for the discussion of contextual emergence because it illustrates the two essential construction steps in great detail. There are other examples in physics and chemistry which can be discussed in terms of contextual emergence: emergence of polymeric and other complex fluids from linear thermodynamics (Grmela and Öttinger 1997, Öttinger and Grmela 1997), emergence of geometric optics from electrodynamics (Primas 1998), emergence of electrical engineering concepts from electrodynamics (Primas 1998), emergence of chirality as a classical observable from quantum mechanics (Bishop 2005, Bishop and Atmanspacher 2006, Gonzalez et al. 2019), emergence of diffusion and friction of quantum particles in a thermal medium (de Roeck and Fröhlich 2011, Fröhlich et al. 2011), emergence of hydrodynamic properties from many-particle theory (Bishop 2008).
More examples from the sciences can be found in the readable monograph by Chibbaro et al. (2014). A comprehensive monograph on levels of coarse-graining in non-equilibrium systems and techniques to construct them is due to Öttinger (2005, especially Secs. 6-8). Recently, Bishop (2019) has offered an account of contextual emergence in its historical context and outlined novel future implications.
Applications in cognitive neuroscience
If descriptions at \(L\) and \(H\) are well established, as is the case in the preceding example, formally precise interlevel relations can be set up fairly straightforwardly. The situation becomes more challenging, though, when no such established descriptions are available, e.g. in cognitive neuroscience or consciousness studies, where relations between neural and mental descriptions are considered. Even there, contextual emergence has proven viable for the construction of emergent mental states (e.g., the identification of neural correlates of conscious states). That brain activity provides necessary but not sufficient conditions for mental states, which is a key feature of contextual emergence, becomes increasingly clear even among practicing neuroscientists, see for instance the article by Frith (2011).
Hodgkin-Huxley dynamics
A basic element of theoretical and computational neuroscience is the system of Hodgkin-Huxley equations for the generation and propagation of action potentials (Hodgkin and Huxley 1952). The Hodgkin-Huxley equations form a system of four ordinary nonlinear differential equations: one electric conductance equation for transmembrane currents, and three master equations describing the opening kinetics of sodium and potassium ion channels. At a higher-level description of ion channel functioning, these equations characterize a deterministic dynamical system. However, at a lower-level description, the presence of master equations within the Hodgkin-Huxley system indicates a stochastic approach in terms of transition probabilities of Markov processes.
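At the deterministic higher level, the system can be sketched in a few lines of Python (a minimal Euler integration with standard squid-axon textbook parameters; neither the code nor the parameter values are taken from the article):

```python
import numpy as np

# Minimal Hodgkin-Huxley integration: one membrane equation plus three
# gating ("master") variables n, m, h. Standard textbook parameters.
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3           # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4                 # reversal potentials, mV

def rates(V):
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    return an, bn, am, bm, ah, bh

dt, T = 0.01, 50.0                               # time step and duration, ms
V, n, m, h = -65.0, 0.32, 0.05, 0.6              # near-resting initial state
for step in range(int(T / dt)):
    an, bn, am, bm, ah, bh = rates(V)
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    I_ext = 10.0 if step * dt > 5.0 else 0.0     # step current, uA/cm^2
    V += dt * (I_ext - I_ion) / C
    n += dt * (an * (1 - n) - bn * n)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
print(f"final V = {V:.1f} mV")
```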
A closer inspection of the Hodgkin-Huxley equations (beim Graben 2016) reveals that the dynamics of neuronal action potentials is actually contextually emergent over (at least) three levels of description. At the first and lowest level, ion channels must be treated as macro-molecular quantum objects that are governed by a many-particle Schrödinger equation. This Schrödinger equation describes a highly entangled state of electrons and atomic nuclei as a whole, which does not allow an interpretation in terms of molecular structures such as an ion channel with a pore that is either closed or open. The molecular structure of an ion channel is contextually emergent through the Born-Oppenheimer approximation (cf. Primas 1998, Bishop and Atmanspacher 2006), separating electronic and nucleonic wave functions. After that separation, the electronic quantum dynamics becomes constrained to a (relatively) rigid nucleonic frame that now possesses a classical spatial structure.
At a second level, the fluctuations of the spatial structure of an ion channel must be treated as a stochastic process. Under the respective stability conditions for such processes (stationarity, ergodicity, mixing; compare Atmanspacher and beim Graben 2007) a continuous master equation for the molecular configurations can be derived (van Kampen 1992). Finally, at the third level, a contextual coarse-graining of configuration space into four closed and one open state (here for the potassium channel), yields the master equations of the Hodgkin-Huxley system as a contextually emergent description.
Macrostates in neural systems
Contextual emergence addresses both the construction of a partition at a lower-level description and the application of a higher-level context to do this in a way adapted to a specific higher-level description. Two alternative strategies to construct \(H_i\)-states ("neural macrostates") from \(L_i\)-states ("neural microstates") have been proposed previously: one by Amari and collaborators and another one by Crutchfield and collaborators.
Amari and colleagues (Amari 1974, Amari et al. 1977) proposed to identify neural macrostates based on two criteria: (i) the structural stability of microstates as a necessary lower-level condition, and (ii) the decorrelation of microstates as a sufficient higher-level condition. The required macrostate criteria, however, do not exploit the dynamics of the system in the direct way which a Markov partition allows. A detailed discussion of contextual emergence in Amari's approach is due to beim Graben et al. (2009).
Mental states from neurodynamics
For the contextual emergence of mental states from neural states, the first desideratum is the specification of proper levels \(L\) and \(H\ .\) With respect to \(L\ ,\) one needs to specify whether states of neurons, of neural assemblies or of the brain as a whole are to be considered; and with respect to \(H\) a class of mental states reflecting the situation under study needs to be defined. In a purely theoretical approach, this can be tedious, but in empirical investigations the experimental setup can often be used for this purpose. For instance, experimental protocols include a task for subjects that defines possible mental states, and they include procedures to record brain states.
The following discussion will first address a general theoretical scenario (developed by Atmanspacher and beim Graben 2007) and then a concrete experimental example (worked out by Allefeld et al. 2009). Both are based on the so-called state space approach to mental and neural systems, see Fell (2004) for a brief introduction.
The first step is to find a proper assignment of \(L_i\) and \(L_s\) at the neural level. A good candidate for \(L_i\) are the properties of individual neurons. Then the first task is to construct \(L_s\) in such a way that statistical states are based on equivalence classes of those individual states whose differences are irrelevant with respect to a given mental state at level \(H\ .\) This reflects that a neural correlate of a conscious mental state can be multiply realized by "minimally sufficient neural subsystems correlated with states of consciousness" (Chalmers 2000).
In order to identify such a subsystem, we need to select a context at the level of mental states. As one among many possibilities, one may use the concept of "phenomenal families" (Chalmers 2000) for this purpose. A phenomenal family is a set of mutually exclusive phenomenal (mental) states that jointly partition the space of mental states. Starting with something like creature consciousness, that is, being conscious versus not being conscious, one can define increasingly refined levels of phenomenal states of background consciousness (awake, dreaming, sleep, anesthesia, ...), wake consciousness (perceptual, cognitive, affective, ...), perceptual consciousness (visual, auditory, tactile, ...), visual consciousness (color, form, location, ...), and so on.
Selecting one of these levels provides a context which can then be implemented as a stability criterion at \(L_s\ .\) In cases like the neural system, where complicated dynamics far from thermal equilibrium are involved, a powerful method to do so uses the neurodynamics itself to find proper statistical states. The essential point is to identify a partition of the neural state space whose cells are robust under the dynamics. This guarantees that individual mental states \(H_i\ ,\) defined on the basis of statistical neural states \(L_s\ ,\) remain well-defined as the system develops in time. The reason is that differences between individual neural states \(L_i\) belonging to the same statistical state \(L_s\) remain irrelevant as the system develops in time.
The construction of statistical neural states is strikingly analogous to what leads Butterfield (2012) to the notion of meshing dynamics. In his terminology, \(L\)-dynamics and \(H\)-dynamics mesh if coarse graining and time evolution commute. From the perspective of contextual emergence, meshing is guaranteed by the stability criterion induced by the higher-level context. In this picture, meshing translates into the topological equivalence of the two dynamics.
For multiple fixed points, their basins of attraction represent proper cells, while chaotic attractors need to be coarse-grained by so-called generating partitions. From experimental data, both can be numerically determined by partitions leading to Markov chains. These partitions yield a rigorous theoretical constraint for the proper definition of stable mental states. The formal tools for the mathematical procedure derive from the fields of ergodic theory (Cornfeld et al. 1982) and symbolic dynamics (Marcus and Lind 1995), and are discussed in some detail in Atmanspacher and beim Graben (2007) and Allefeld et al. (2009).
A pertinent example for the application of contextual emergence to experimental data is the relation between mental states and EEG dynamics. In a recent study, Allefeld et al. (2009) tested the method using data from the EEG of subjects with sporadic epileptic seizures. This means that the neural level is characterized by brain states recorded via EEG, while the context of normal and epileptic mental states essentially requires a bipartition of that neural state space.
The data analytic procedure rests on ideas by Gaveau and Schulman (2005), Froyland (2005), and Deuflhard and Weber (2005). It starts with a (for instance) 20-channel EEG recording, giving rise to a state space of dimension 20, which can be reduced to a lower number by restricting to principal components (PC). On the resulting low-dimensional state space, a homogeneous grid of cells is imposed in order to set up a Markov transition matrix T reflecting the EEG dynamics on a fine-grained auxiliary partition.
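The following sketch illustrates the generic pipeline (dimension reduction, grid partition, transition counts) on synthetic data; the channel count, grid size, and the noisy oscillator standing in for an EEG recording are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 20-channel signal standing in for an EEG recording:
# a slow two-dimensional oscillation mixed into 20 channels plus noise.
t = np.arange(50_000) * 0.01
latent = np.stack([np.sin(t), np.cos(0.7 * t)], axis=1)
mixing = rng.normal(size=(2, 20))
data = latent @ mixing + 0.1 * rng.normal(size=(len(t), 20))

# Reduce the 20-dimensional state space to the leading principal components.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pcs = centered @ vt[:2].T                      # keep 2 PCs

# Impose a homogeneous grid on the PC space and count transitions.
n_bins = 10
edges = [np.linspace(pcs[:, k].min(), pcs[:, k].max(), n_bins + 1)
         for k in range(2)]
ix = np.stack([np.clip(np.digitize(pcs[:, k], edges[k]) - 1, 0, n_bins - 1)
               for k in range(2)], axis=1)
cells = ix[:, 0] * n_bins + ix[:, 1]           # flattened cell index

T = np.zeros((n_bins**2, n_bins**2))
for a, b in zip(cells[:-1], cells[1:]):
    T[a, b] += 1
row_sums = T.sum(axis=1, keepdims=True)
T = np.divide(T, row_sums, out=np.zeros_like(T), where=row_sums > 0)
print("transition matrix on the fine-grained auxiliary partition:", T.shape)
```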
The eigenvalues of T express relaxation time scales for the dynamics which can be ordered by size. Gaps between successive relaxation times indicate groupings referring to mental states defined by partitions of neural states of increasing refinement. The first group is often sufficient for the distinction of "target" mental states.
The eigenvectors corresponding to the eigenvalues of T span an eigenvector space, in which the measured PC-compactified states form a simplex. For instance, three leading eigenvalues allow a representation of neural states in a two-dimensional eigenvector space which yields a 2-simplex with 3 vertices (a triangle). Classifying the measured neural states according to their distance from the vertices of the simplex then leads to three clusters of neural data. They can be coded and identified in the PC-state space (Allefeld and Bialonski 2007), where the clusters appear as non-intersecting convex sets distinguishing one normal state and one seizure state (composed of two substates). For details see Allefeld et al. (2009, Sec. V).
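A toy version of the spectral step (an eigenvalue gap followed by clustering in the space of leading eigenvectors) can be sketched on a synthetic metastable Markov matrix. The block structure, cluster count, and the crude vertex search below are assumptions for illustration; they do not reproduce the analysis of the Allefeld et al. (2009) data.

```python
import numpy as np

rng = np.random.default_rng(1)

# A 30-state Markov matrix with 3 metastable blocks: transitions mostly
# stay within a block and are rare between blocks.
n = 30
T = np.full((n, n), 1e-4)
for lo in (0, 10, 20):
    T[lo:lo + 10, lo:lo + 10] += rng.random((10, 10))
T /= T.sum(axis=1, keepdims=True)

# Eigenvalues ordered by modulus: a gap after the third signals 3 macrostates.
evals, evecs = np.linalg.eig(T)
order = np.argsort(-np.abs(evals))
print("leading |eigenvalues|:", np.round(np.abs(evals[order][:5]), 4))

# Each fine-grained state gets coordinates from the 2nd and 3rd eigenvectors;
# the three clusters sit near the vertices of a triangle (a 2-simplex).
coords = np.real(evecs[:, order[1:3]])
v0 = np.argmax(np.linalg.norm(coords, axis=1))
v1 = np.argmax(np.linalg.norm(coords - coords[v0], axis=1))
v2 = np.argmax(np.linalg.norm(coords - coords[v0], axis=1)
               + np.linalg.norm(coords - coords[v1], axis=1))
vertices = coords[[v0, v1, v2]]

# Classify states by their nearest simplex vertex.
labels = np.argmin(np.linalg.norm(coords[:, None] - vertices[None], axis=2),
                   axis=1)
print("cluster labels:", labels)               # recovers the 3 blocks
```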
Finally, the result of the partitioning can be inspected in the originally recorded time series to check whether mental states are reliably assigned to the correct episodes in the EEG dynamics. The study by Allefeld et al. (2009) shows perfect agreement between the distinction of normal and epileptic states and the bipartition resulting from the spectral analysis of the neural transition matrix.
Another EEG-segmentation algorithm that utilizes the recurrence structure of multivariate time series has been suggested by beim Graben and Hutt (2013, 2015). Their recurrence structure analysis (RSA) partitions the state space into clusters of recurrent, and therefore overlapping, balls obtained from the recurrence plot of the dynamical system (Eckmann et al. 1987). Different choices of the radius r of the balls lead to potentially different segmentations of the time series from the corresponding partitions. An optimal choice of r, however, will ideally reflect the dwell times within metastable states and the transitions between them (beim Graben et al. 2016). This can be described by a Markov chain with one distinguished transient state and other states representing the metastable states in the dynamics.
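A bare-bones recurrence matrix, the object underlying RSA, can be computed as follows. The two-level toy signal and the ball radius are invented for illustration, and the optimization step of beim Graben and Hutt is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy trajectory that dwells near two quasi-stable levels (0 and 1).
x = np.concatenate([0.0 + 0.05 * rng.normal(size=300),
                    1.0 + 0.05 * rng.normal(size=300),
                    0.0 + 0.05 * rng.normal(size=300)])
states = x[:, None]                  # one-dimensional state space

# Recurrence matrix: R[i, j] = 1 if states i and j lie within radius r.
r = 0.2
R = (np.abs(states - states.T) < r).astype(int)

# Time points within the same metastable regime recur with each other,
# while points in different regimes do not.
print("recurrence rate within regime 1:", R[:300, :300].mean())
print("recurrence rate across regimes :", R[:300, 300:600].mean())
```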
The deviation of a given contextual segmentation from the optimal segmentation can be assessed using a utility function whose maximization leads to a contextually emergent brain microstate segmentation of the EEG. Applying this technique to EEG data from anesthetized ferrets and to event-related brain potentials from human language-processing experiments revealed good correlation with mental states.
Philosophy of mind
According to Dennett (1989), the intentional stance can be applied to the prediction of any system's behavior that is too complex to be treated as either a physical or a designed system. Intentional systems in this sense are systems whose behavior is predictable upon ascribing beliefs and desires to their internal states. Examples of intentional systems range from thermostats and chess computers via "magnetic snakes" (Snezhko et al. 2006) to "true believers", e.g. human beings.
In order to make meaningful predictions of a system, several necessary and sufficient conditions on the system's dynamics must be fulfilled (beim Graben 2014). First of all, the system's dynamics must be non-trivial, thus excluding most kinds of linear systems with periodic oscillations or damped relaxations. The class of putative intentional systems can be embedded into an "intentional hierarchy" ranging from the general case of nonlinear nonequilibrium dissipative systems to more specific intentional systems and "true believers" as a subclass.
Being a physical system is necessary for being a nonlinear dissipative nonequilibrium system; being a nonlinear dissipative nonequilibrium system is necessary for being an intentional system; and being an intentional system is necessary for being a true believer. Moreover, sufficient conditions within the intentional hierarchy implement contextual stability conditions.
The most general case corresponds to the transition from equilibrium thermodynamics to fluid dynamics: The phenomenal laws of fluid dynamics (the Navier-Stokes equations) emerge from statistical mechanics under the assumption of "local equilibrium". At the next level, several sufficient boundary conditions must be selected to give rise to processes of self-organization, nicely illustrated by means of "magnetic snakes". Then, a rationality constraint is imposed for optimal dissipation of pumped energy (Tschacher and Haken 2007). Finally, "true believers" are contextually emergent as intentional systems that are stable under mutual adoption of the intentional stance.
Symbol grounding
Another application of contextual emergence refers to the symbol grounding problem posed by Harnad (1990). The key issue of symbol grounding is the problem of assigning meaning to symbols on purely syntactic grounds, as proposed by cognitivists such as Fodor and Pylyshyn (1988). This entails the question of how conscious mental states can be characterized by their neural correlates, see Atmanspacher and beim Graben (2007). Viewed from a more general perspective, symbol grounding has to do with the relation between analog and digital systems, the way in which syntactic digital symbols are related to the analog behavior of a system they describe symbolically.
An instructive example for this distinction is given by dynamical automata (Tabor 2002, Carmantini et al. 2017). These are piecewise linear (globally nonlinear) time-discrete maps over a two-dimensional state space which assume their interpretation as symbolic computers through a rectangular partition of the unit square. Interestingly, a single point trajectory, i.e. the evolution of microstates (at a lower-level description), is not fully interpretable as symbolic computation. Therefore, one has to consider (higher-level) macrostates, based on ensembles of state space points (or probability distributions of points) that evolve under the dynamics.
Beim Graben and Potthast (2012) showed that only uniform probability distributions with rectangular support exhibit a stable dynamics that is interpretable as computation. Thus, the huge space of possible probability distributions must be contextually restricted to the subclass of uniform probability distributions in order to obtain meaningfully grounded symbolic processes. In this sense, symbol grounding is contextually emergent.
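To make this concrete, here is a sketch using the baker's map as a stand-in for a dynamical automaton (the actual maps studied by Tabor and by beim Graben and Potthast differ in detail). A uniform distribution with rectangular support, represented simply by its supporting rectangle, evolves into another such rectangle, so the symbolic readout remains well defined; a rectangle straddling the partition boundary would have to be split, which is exactly where naive symbolic interpretability breaks down.

```python
# Baker's map on the unit square: (x, y) -> (2x mod 1, (y + floor(2x)) / 2).
# A rectangle contained in one vertical half maps again onto a rectangle,
# so uniform distributions with such rectangular support stay in that class.

def baker_rectangle(rect):
    """Evolve a rectangle (x0, x1, y0, y1) lying inside one vertical half."""
    x0, x1, y0, y1 = rect
    s = int(2 * x0)                  # symbol: 0 = left half, 1 = right half
    assert int(2 * x1 - 1e-12) == s, "rectangle must not straddle x = 1/2"
    return s, (2 * x0 - s, 2 * x1 - s, (y0 + s) / 2, (y1 + s) / 2)

rect = (0.1, 0.1125, 0.3, 0.6)       # macrostate: uniform density on this box
for _ in range(4):
    symbol, rect = baker_rectangle(rect)
    print("emitted symbol", symbol,
          "-> new support", tuple(round(v, 4) for v in rect))
```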
Mental causation
It is a long-standing philosophical puzzle how the mind can be causally relevant in a physical world: the problem of mental causation (for reviews see Robb and Heil 2009 and Harbecke 2008, Ch. 1). The question of how mental phenomena can be causes is of high significance for an adequate comprehension of scientific disciplines such as psychology and cognitive neuroscience. Moreover, mental causation is crucial for our everyday understanding of what it means to be an agent in a natural and social environment. Without the causal efficacy of mental states the notion of agency would be nonsensical.
One of the reasons why the causal efficacy of the mental has appeared questionable is that a horizontal (intralevel, diachronic) determination of a mental state by prior mental states seems to be inconsistent with a vertical (interlevel, synchronic) determination of that mental state by neural states. In a series of influential papers and books, Kim has presented his much discussed supervenience argument (also known as exclusion argument), which ultimately amounts to the dilemma that mental states either are causally inefficacious or they hold the threat of overdetermining neural states. In other words: either mental events play no horizontally determining causal role at all, or they are causes of the neural bases of their relevant horizontal mental effects (Kim 2003).
The interlevel relation of contextual emergence yields a quite different perspective on mental causation. It dissolves the alleged conflict between horizontal and vertical determination of mental events as ill-conceived (Harbecke and Atmanspacher 2012). The key point is a construction of properly defined mental states from the dynamics of an underlying neural system. This can be done via statistical neural states based on a proper partition, such that these statistical neural states are coextensive (but not necessarily identical) with individual mental states.
This construction implies that the mental dynamics and the neural dynamics, related to each other by a so-called intertwiner, are topologically conjugate (Atmanspacher and beim Graben 2007). Given properly defined mental states, the neural dynamics gives rise to a mental dynamics that is independent of those neurodynamical details that are irrelevant for a proper construction of mental states.
As a consequence, (i) mental states can indeed be causally and horizontally related to other mental states, and (ii) they are neither causally related to their vertical neural determiners nor to the neural determiners of their horizontal effects. This makes a strong case against a conflict between a horizontal and a vertical determination of mental events and resolves the problem of mental causation in a deflationary manner. Vertical and horizontal determination do not compete, but complement one another in a cooperative fashion. Both together deflate Kim's dilemma and reflate the causal efficacy of mental states. Our conclusions match with and refine the notion of proportionate causation introduced by Yablo (1992); see also Bishop (2012).
In this picture, mental causation is a horizontal relation between previous and subsequent mental states, although its efficacy is actually derived from a vertical relation: the downward confinement of (lower-level) neural states originating from (higher-level) mental constraints. This vertical relation is characterized by an intertwiner, a mathematical mapping, which must be distinguished from a causal before-after relation. For this reason, the terms downward causation or top-down causation (Ellis 2008) are infelicitous choices for addressing a downward confinement by contextual constraints.
Dual-aspect monism
Within the tradition of dual-aspect thinking, one can distinguish two different, in a sense opposing, base conceptions. In one of them, psychophysically neutral elementary entities are composed into sets of such entities, and depending on the composition these sets acquire mental or physical properties. The other base conception refers to a psychophysically neutral domain which does not consist of elementary entities waiting to be composed, but is conceived as one overarching whole that is to be decomposed. In contrast to the atomistic picture of compositional dual-aspect monism, the holistic picture of the decompositional variant is strongly reminiscent of the fundamental insight of entanglement in quantum physics.
The contextual emergence of both the mental and the material from a psychophysically neutral whole requires a fresh look at the conceptual framework, both technically and in terms of the underlying metaphysics. At the technical level, we do now refer to the contextual emergence of multiplicity from unity, fine grains from coarse grains, rather than the other way around (Atmanspacher 2017). The basic idea here is that a "primordial" decomposition of an undivided whole generates (under a particular context) different domains that give rise to differentiations, e.g. the mind-matter distinction.
In the decompositional variety of dual-aspect monism, refinement by symmetry breakdown is conceptually prior to its opposite of generalization, where the restoration of symmetries generates equivalence classes of increasing size. The basic undivided, psychophysically neutral reality is the trivial partition where nothing is distinguished. There is full symmetry and, hence, the corresponding reality is "ineffable", or "discursively inaccessible". Successive decompositions give rise to more and more refined partitions, where symmetries are broken and equivalence classes become smaller and smaller. Phenomenal families of mental states (Chalmers 2000) illustrate this for the mental domain.
At the metaphysical level, the mental and the physical remain epistemic, but the undivided whole is added as an ontic dimension. This reminds one of Plato's ideas or Kant's things-in-themselves, which are empirically inaccessible in principle and, in this sense, scientifically mute. Indeed, an undivided whole cannot be further characterized without introducing distinctions that break up the wholeness. Yet it provides one asset in the metaphysics of the mind-matter problem that no other philosophical position provides: the emergence of mind-matter correlations as a direct and immediate consequence.
Philosophy of science
Stochastic and deterministic descriptions
Determinism is often understood as a feature of ontic descriptions of states and observables, whereas stochasticity refers to epistemic descriptions (Atmanspacher 2002). Mathematical models of classical point mechanics are the most common examples of deterministic descriptions, and three properties of these descriptions are particularly important (Bishop 2002): (1) differential dynamics, (2) unique evolution, and (3) value determinateness. (1) means essentially that the system's evolution obeys a differential equation (or some similar algorithm) in a space of ontic states. (2) says that for given initial and boundary conditions there is a unique trajectory. (3) assumes that any state can be described with arbitrarily small (non-zero) error.
These three points are not independent of each other but define a hierarchy for the contextual emergence of deterministic descriptions (Bishop and beim Graben 2016). Assuming (1) as a necessary condition for determinism, (2) can be proven under the sufficient condition that the trajectories created by a vector field obeying (1) pass through points whose distance is stable under small perturbations. Assuming (2) for almost every initial condition as a necessary condition of determinism defines a phase flow with weak causality. In order to prove (3) one needs strong causality as a sufficient condition.
For a weakly causal system violating (3), trajectories may exponentially diverge, as in chaotic systems. In this situation, dilation techniques (e.g., Gustafson 2002) can lead to contextually emergent stochasticity in two steps. In the first step, a coarse-graining yields a Markov process. If this process is mixing such that it approaches an equilibrium distribution (Misra et al. 1979), the deterministic dynamics is a Kolmogorov-flow, thereby implementing microscopic chaos as a stability condition (Bishop and beim Graben 2016).
Interestingly, the converse is also possible. For a continuous stochastic process which fulfills the Markov criterion, the master equation approach leads to a deterministic "mean-field equation" (van Kampen 1992). Bishop and beim Graben (2016) showed that this situation is analogous to the paradigmatic example of the contextual emergence of thermal equilibrium states where thermal KMS macrostates are almost pure, and hence almost dispersion-free.
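The passage from a stochastic Markov process to its deterministic mean-field equation can be illustrated with a logistic birth-death process; the rates and system size below are arbitrary choices for a sketch, not taken from the cited literature.

```python
import numpy as np

rng = np.random.default_rng(3)

# Birth-death process with birth rate b*n and death rate d*n^2/N.
# Its mean-field limit (van Kampen) is dx/dt = b*x - d*x^2 with x = n/N.
b, d, N = 1.0, 1.0, 1000
n, t, t_end = 10, 0.0, 10.0
while t < t_end and n > 0:
    rate_birth, rate_death = b * n, d * n * n / N
    total = rate_birth + rate_death
    t += rng.exponential(1.0 / total)            # Gillespie waiting time
    n += 1 if rng.random() < rate_birth / total else -1

# Deterministic mean-field solution by Euler integration.
x, dt = 10 / N, 0.001
for _ in range(int(t_end / dt)):
    x += dt * (b * x - d * x * x)

print("stochastic endpoint n/N:", n / N)   # fluctuates around b/d = 1.0
print("mean-field endpoint x  :", x)       # converges to b/d = 1.0
```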
Reproducibility and relevance
Reproducibility is one of the pillars of scientific methodology, yet it becomes particularly difficult in interdisciplinary research where the results to be reproduced typically refer to more than one single level of description of the system considered. In such cases it is mandatory to distinguish the relevant attributes or observables of the system, depending on its description. Usually, different descriptive levels go along with different degrees of granularity. While lower-level descriptions address systems in terms of micro-properties (position, momentum, etc.), other, more global, macro-properties are more suitably taken into account for higher-level descriptions.
This observation led van Fraassen (1980) to the notion of explanatory relativity, where explanations are not only relationships between theories and facts; they are three-place relations between theories, facts, and contexts. The relevance of an explanation is determined by contexts that have to be selected, and are not themselves part of a scientific description.
Explanatory relativity backed up by relevance criteria can vitally serve the discussion of reproducibility across scientific disciplines. Features that are relevant for a proper explanation of some observation should have a high potential to be also relevant for the robust reproduction of that observation. But which properties of systems and their descriptions may be promising candidates for the application of such relevance criteria? One option to highlight relevance criteria is to consider the "granularity" (coarseness) of a description, which usually changes across disciplines.
The transformation between descriptive levels and their associated granularities is possible by the interlevel relation of contextual emergence (Atmanspacher et al. 2014). It yields a formally sound and empirically applicable procedure to construct level-specific criteria for relevant observables across disciplines. Relevance criteria merged with contextual emergence challenge the old idea of one fundamental ontology from which everything else derives. At the same time, the scheme of contextual emergence is specific enough to resist the backlash into a relativist patchwork of unconnected model fragments.
Relative onticity
Contextual emergence was originally conceived as a relation between levels of description, not levels of nature: It addresses questions of epistemology rather than ontology. In agreement with Esfeld (2009), who advocated that ontology needs to regain more significance in science, it would be desirable to know how ontological considerations might be added to the picture that contextual emergence provides. Bishop and Ellis (2020) sketched some ideas in this spirit.
A network of descriptive levels of varying degrees of granularity raises the question of whether descriptions with finer grains are more fundamental than those with coarser grains. The majority of scientists and philosophers of science in the past tended to answer this question affirmatively. As a consequence, there would be one fundamental ontology, preferentially that of elementary particle physics, to which the terms at all other descriptive levels can be reduced.
But this reductive credo also produced critical assessments and alternative proposals. A philosophical precursor of trends against a fundamental ontology is Quine's (1969) ontological relativity. Quine argued that if there is one ontology that fulfills a given descriptive theory, then there is more than one. It makes no sense to say what the objects of a theory are, beyond saying how to interpret or reinterpret that theory in another theory. Putnam (1981, 1987) later developed a related kind of ontological relativity, first called internal realism, later sometimes modified to pragmatic realism.
On the basis of these philosophical approaches, Atmanspacher and Kronz (1999) suggested how to apply Quine's ideas to concrete scientific descriptions, their relationships with one another, and with their referents. One and the same descriptive framework can be construed as either ontic or epistemic, depending on which other framework it is related to: bricks and tables will be regarded as ontic by an architect, but they will be considered highly epistemic from the perspective of a solid-state physicist.
Coupled with the implementation of relevance criteria due to contextual emergence (Atmanspacher 2016), the relativity of ontology must not be confused with dropping ontology altogether. The "tyranny of relativism" (as some have called it) can be avoided by identifying relevance criteria to distinguish proper context-specific descriptions from less proper ones. The resulting pluralistic picture (Dale 2008, Abney et al. 2014, Horst 2016) is more subtle and more flexible than an overly bold reductive fundamentalism, and yet it is more restrictive and specific than a patchwork of arbitrarily connected model fragments.
1. The combination of contextual emergence with supervenience can be seen as a program that comes conspicuously close to plain reduction. However, there is a subtle difference between the ways in which supervenience and emergence are in fact implemented. (In a related sense, Butterfield (2011a,b) has argued that emergence, supervenience and even reduction are not mutually incompatible.) While supervenience refers to states, the argument by emergence refers to observables. The important selection of a higher-level context leads to a stability criterion for states, but it is also crucial for the definition of the set of observables with which lower-level macrostates are to be associated.
2. An alternative to contextual emergence is the construction of macrostates within an approach called computational mechanics (Shalizi and Crutchfield 2001). A key notion in computational mechanics is the notion of a "causal state". Its definition is based on the equivalence class of histories of a process that are equivalent for predicting the future of the process. Since any prediction method induces a partition of the state space of the system, the choice of an appropriate partition is crucial. If the partition is too fine, too many (irrelevant) details of the process are taken into account; if the partition is too coarse, not enough (relevant) details are considered. As described in detail by Shalizi and Moore (2003), it is possible to determine partitions leading to causal states. This is achieved by minimizing their statistical complexity, the amount of information which the partition encodes about the past. Thus, the approach uses an information-theoretic criterion rather than a stability criterion to construct a proper partition for macrostates. Causal states depend on the "subjectively" chosen initial partition but are then "objectively" fixed by the underlying dynamics. This has been expressed succinctly by Shalizi and Moore (2003): Nature has no preferred questions, but to any selected question it has a definite answer. (A toy reconstruction of causal states is sketched below, after these notes.)
3. Statistical neural states are multiply realized by individual neural states, and they are coextensive with individual mental states; see also Bechtel and Mundale (1999), who proposed precisely the same idea. There are a number of reasons, beyond the scope of this article, to distinguish this coextensivity from an identity relation. For details see Harbecke and Atmanspacher (2012).
4. The reference to phenomenal families a la Chalmers must not be misunderstood to mean that contextual emergence provides an option to derive the appearance of phenomenal experience from brain behavior. The approach addresses the emergence of mental states still in the sense of a third-person perspective. "What it is like to be" in a particular mental state, i.e. its qualia character, is not addressed at all.
5. Besides the application of contextual emergence under well-controlled experimental conditions, it may be useful also for investigating spontaneous behavior. If such behavior together with its neural correlates is continuously monitored and recorded, it is possible to construct proper partitions of the neural state space. Mapping the time intervals of these partitions to epochs of corresponding behavior may facilitate the characterization of typical paradigmatic behavioral patterns.
6. It is an interesting consequence of contextual emergence that higher-level descriptions constructed on the basis of proper lower-level partitions are compatible with one another. Conversely, improper partitions yield, in general, incompatible descriptions (beim Graben and Atmanspacher 2006). As ad-hoc partitions usually will not be proper partitions, corresponding higher-level descriptions will generally be incompatible. This argument was proposed (Atmanspacher and beim Graben 2007) for an informed discussion of how to pursue "unity in a fragmented psychology", as Yanchar and Slife (1997) put it. In a similar vein, Spivey (2018) studied the influence of (improper) partitions on cognitive categorization processes modeled with cellular automata.
7. For additional directions of research that utilize ideas pertaining to contextual emergence in cognitive science and psychology see Tabor (2002), Dale and Spivey (2005), Jordan and Ghin (2006), Abney et al. (2014), or Moyal et al. (2020). They are similar in spirit, but differ in their scope and details. Applications of synergetics to cognitive science (Haken 2004) and brain science (Haken 2008) offer additional interesting parallels. The concept of closed-loop neuroscience (El Hady 2016) also utilizes the combination of bottom-up and top-down arguments in the study of multilevel systems.
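As announced in note 2, here is a stripped-down sketch of causal-state reconstruction: histories with (approximately) the same predictive distribution are merged into equivalence classes. The golden-mean process, the history length, and the tolerance are choices made for this illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

# Golden-mean process: emit 0 or 1, but never two 1s in a row.
seq, state = [], 0
for _ in range(100_000):
    s = int(rng.random() < 0.5) if state == 0 else 0
    seq.append(s)
    state = s                       # after a 1, the next symbol must be 0
seq = np.array(seq)

# Empirical next-symbol counts for each length-2 history.
hist = {}
for a, b, c in zip(seq[:-2], seq[1:-1], seq[2:]):
    counts = hist.setdefault((int(a), int(b)), [0, 0])
    counts[c] += 1
pred = {h: c[1] / sum(c) for h, c in hist.items()}
print("P(next = 1 | history):", {h: round(p, 3) for h, p in pred.items()})

# Merge histories whose predictive probabilities agree within a tolerance;
# the resulting equivalence classes are the estimated causal states.
tol, states = 0.02, []
for h, p in pred.items():
    for st in states:
        if abs(pred[st[0]] - p) < tol:
            st.append(h)
            break
    else:
        states.append([h])
print("estimated causal states:", states)
```

For the golden-mean process the reconstruction finds two causal states: the histories (0,0) and (1,0) are merged (next symbol free), while (0,1) stands alone (next symbol forced to 0).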
D.H. Abney, R. Dale, J. Yoshimi, C.T. Kello, K. Tylen, R. Fusaroli (2014): Joint perceptual decision-making: A case study in explanatory pluralism. Frontiers in Psychology 5 (April), 330.
C. Allefeld, H. Atmanspacher, J. Wackermann (2009): Mental states as macrostates emerging from EEG dynamics. Chaos 19, 015102.
C. Allefeld, S. Bialonski (2007): Detecting synchronization clusters in multivariate time series via coarse-graining of Markov chains. Physical Review E 76, 066207.
S.-I. Amari (1974): A method of statistical neurodynamics. Kybernetik 14, 201–215.
S.-I. Amari, K. Yoshida, K.-I. Kanatani (1977): A mathematical foundation for statistical neurodynamics. SIAM Journal of Applied Mathematics 33(1), 95–126.
H. Atmanspacher (2002): Determinism is ontic, determinability is epistemic. In Between Chance and Choice, ed. by H. Atmanspacher and R. Bishop, Imprint Academic, Exeter, pp. 49-74.
H. Atmanspacher (2007): Contextual emergence from physics to cognitive neuroscience. Journal of Consciousness Studies 14(1/2), 18–36.
H. Atmanspacher (2016): Relevance criteria for reproducibility: The contextual emergence of granularity. In Reproducibility - Principles, Problems, Practices, Prospects, ed. by H. Atmanspacher and S. Maasen, Wiley, New York, pp. 525-538.
H. Atmanspacher (2017): Contextual emergence in decompositional dual-aspect monism. Mind and Matter 15, 111-129.
H. Atmanspacher, L. Bezzola, G. Folkers, and P.A. Schubiger (2014): Relevance relations for the concept of reproducibility. Journal of the Royal Society Interface 11(94), 20131030.
H. Atmanspacher and R.C. Bishop (2007): Stability conditions in contextual emergence. Chaos and Complexity Letters 2, 139–150.
H. Atmanspacher and P. beim Graben (2007): Contextual emergence of mental states from neurodynamics. Chaos and Complexity Letters 2, 151–168.
H. Atmanspacher and F. Kronz (1999): Relative onticity. In Quanta, Mind and Matter, ed. by H. Atmanspacher, A. Amann, and U. Müller-Herold, Kluwer, Dordrecht, pp.273-294.
W. Bechtel and J. Mundale (1999): Multiple realizability revisited: Linking cognitive and neural states. Philosophy of Science 66, 175--207.
A. Beckermann, H. Flohr, J. Kim (1992): Emergence or Reduction? de Gruyter, Berlin.
R.C. Bishop (2002): Deterministic and indeterministic descriptions. In Between Chance and Choice, ed. by H. Atmanspacher and R. Bishop, Imprint Academic, Exeter, pp. 5-31.
R.C. Bishop (2005): Patching physics and chemistry together. Philosophy of Science 72, 710–722.
R.C. Bishop (2008): Downward causation in fluid convection. Synthese 160, 229--248.
R.C. Bishop (2012): Excluding the causal exclusion argument against non-reductive physicalism. Journal of Consciousness Studies 19(5-6), 57-74.
R.C. Bishop (2019): The Physics of Emergence, Morgan & Claypool, Bristol.
R.C. Bishop and H. Atmanspacher (2006): Contextual emergence in the description of properties. Foundations of Physics 36, 1753–1777.
R.C. Bishop and P. beim Graben (2016): Contextual emergence of deterministic and stochastic descriptions. In From Chemistry to Consciousness. The Legacy of Hans Primas, ed. by H. Atmanspacher and U. Müller-Herold, Springer, Berlin, pp. 95-110.
R.C. Bishop and G.F.R. Ellis (2020): Contextual emergence of physical properties. Foundations of Physics 50, 481-510.
J. Butterfield (2012): Laws, causation and dynamics at different levels. Interface Focus 2, 101--114.
J. Butterfield (2011a): Emergence, reduction and supervenience: A varied landscape. Foundations of Physics 41, 920--960.
J. Butterfield (2011b): Less is different: emergence and reduction reconciled. Foundations of Physics 41, 1065--1135.
G.S. Carmantini, P. beim Graben, M. Desroches, S. Rodrigues (2017): A modular architecture for transparent computation in recurrent neural networks. Neural Networks 85, 85-105.
D. Chalmers (2000): What is a neural correlate of consciousness? In Neural Correlates of Consciousness, ed. by T. Metzinger, MIT Press, Cambridge, pp.17–39.
S. Chibbaro, L. Rondoni, A. Vulpiani (2014): Reductionism, Emergence and Levels of Reality, Springer, Berlin.
I.P. Cornfeld, S.V. Fomin, Ya.G. Sinai (1982): Ergodic Theory, Springer, Berlin, pp.250–252, 280–284.
R. Dale (2008): The possibility of a pluralist cognitive science. Journal of Experimental and Theoretical Artificial Intelligence 20, 155--179.
R. Dale, M. Spivey (2005): From apples and oranges to symbolic dynamics: A framework for conciliating notions of cognitive representations. Journal of Experimental and Theoretical Artificial Intelligence 17, 317–342.
D.C. Dennett (1989): The Intentional Stance, MIT Press, Cambridge.
W. de Roeck, J. Fröhlich (2011): Diffusion of a massive quantum particle coupled to a quasi-free thermal medium. Communications in Mathematical Physics 293, 361--398.
P. Deuflhard, M. Weber (2005): Robust Perron cluster analysis in conformation dynamics. Linear Algebra and its Applications 398, 161–184.
J.-P. Eckmann, S.O. Kamphorst, D. Ruelle (1987): Recurrence plots of dynamical systems. Europhysics Letters 4(9), 973-977.
G.F.R. Ellis (2008): On the nature of causation in complex systems. Transactions of the Royal Society of South Africa 63, 69--84.
M. Esfeld (2009): Hypothetical metaphysics of nature. In The Significance of the Hypothetical in the Natural Sciences, ed. by M. Heidelberger and G. Schiemann, deGruyter, Berlin, pp. 341-364.
J. Fell (2004): Identifying neural correlates of consciousness: The state space approach. Consciousness and Cognition 13, 709–729.
J. Fodor and Z.W. Pylyshyn (1988): Connectionism and cognitive architecture: A critical analysis. Cognition 28, 3--71.
C.D. Frith (2011): What brain plasticity reveals about the nature of consciousness: Commentary. Frontiers in Psychology 2, doi: 10.3389/fpsyg.2011.00087.
J. Fröhlich, Z. Gang, A. Soffer (2011): Some Hamiltonian models of friction. Journal of Mathematical Physics 52, 83508/1--13.
G. Froyland (2005): Statistically optimal almost-invariant sets. Physica D 200, 205–219.
B. Gaveau, L.S. Schulman (2005): Dynamical distance: coarse grains, pattern recognition, and network analysis. Bulletin de Sciences Mathematiques 129, 631–642.
C. Gillett (2002): The varieties of emergence: Their purposes, obligations and importance. Grazer Philosophische Studien 65, 95–121.
J.C.M. Gonzalez, S. Fortin, O. Lombardi (2019): Why molecular structure cannot be strictly reduced to quantum mechanics. Foundations of Chemistry 21, 31-45.
P. beim Graben (2014): Contextual emergence of intentionality. Journal of Consciousness Studies 21(5-6), 75-96.
P. beim Graben (2016): Contextual emergence in neuroscience. In Closed Loop Neuroscience, ed. by A. El Hady, Elsevier, Amsterdam, pp. 171-184.
P. beim Graben, H. Atmanspacher (2006): Complementarity in classical dynamical systems. Foundations of Physics 36, 291–306.
P. beim Graben, A. Barrett, H. Atmanspacher (2009): Stability criteria for the contextual emergence of macrostates in neural networks. Network: Computation in Neural Systems 20, 177-195.
P. beim Graben, A. Hutt (2013): Detecting recurrence domains of dynamical systems by symbolic dynamics. Physical Review Letters, 110(15), 154101.
P. beim Graben, A. Hutt (2015): Detecting event-related recurrences by symbolic analysis: Applications to human language processing. Philosophical Transactions of the Royal Society London A373, 20140089.
P. beim Graben, R. Potthast (2012): Implementing Turing machines in dynamic field architectures. In Proceedings of AISB12 World Congress 2012 - Alan Turing 2012, ed. by M. Bishop and Y.-J. Erden, pp. 36-40
P. beim Graben, K.K. Sellers, F. Fröhlich, A. Hutt (2016): Optimal estimation of recurrence structures from time series. Europhysics Letters 114, 38003.
M. Grmela, H.C. Öttinger (1997): Dynamics and thermodynamics of complex fluids I. Development of a general formalism. Physical Review E 56, 6620-6632.
K. Gustafson (2002): Time-space dilations and stochastic-deterministic dynamics. In Between Chance and Choice, ed. by H. Atmanspacher and R. Bishop, Imprint Academic, Exeter, pp. 115-148.
R. Haag, D. Kastler, E.B. Trych-Pohlmeyer (1974): Stability and equilibrium states. Communications in Mathematical Physics 38, 173–193.
A. El Hady, ed. (2016): Closed Loop Neuroscience, Elsevier, Amsterdam.
H. Haken (2004): Synergetic Computers and Cognition, Springer, Berlin.
H. Haken (2008): Brain Dynamics, Springer, Berlin.
J. Harbecke (2008): Mental Causation. Investigating the Mind's Powers in a Natural World, Ontos, Frankfurt.
J. Harbecke and H. Atmanspacher (2012): Horizontal and vertical determination of mental and neural states. Journal of Theoretical and Philosophical Psychology 32, 161-179.
S. Harnad (1990): The symbol grounding problem. Physica D 42, 335--346.
A.L. Hodgkin, A.F. Huxley (1952): A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology 117, 500-544.
S. Horst (2016): Cognitive Pluralism, MIT Press, Cambridge.
J.S. Jordan and M. Ghin (2006): (Proto-) consciousness as a contextually emergent property of self-sustaining systems. Mind and Matter 4(1), 45–68.
J. Kim (1992): Multiple realization and the metaphysics of reduction. Philosophy and Phenomenological Research 52, 1–26.
J. Kim (1993): Supervenience and Mind, Cambridge University Press, Cambridge.
J. Kim (2003): Blocking causal drainage and other maintenance chores with mental causation. Philosophy and Phenomenological Research 67, 151--176.
D. Lind, B. Marcus (1995): Symbolic Dynamics and Coding, Cambridge University Press, Cambridge.
B. Misra, I. Prigogine, M. Courbage (1979): From deterministic dynamics to probabilistic descriptions. Proceedings of the National Academy of Sciences of the USA 76(8), 3607-3611.
R. Moyal, T. Fekete, S. Edelman (2020): Dynamical emergence theory: A computational account of phenomenal consciousness. Minds and Machines 30, 1-21.
H.C. Öttinger (2005): Beyond Equilibrium Thermodynamics, Wiley, New York.
H.C. Öttinger, M. Grmela (1997): Dynamics and thermodynamics of complex fluids II. Illustrations of a general formalism. Physical Review E 56, 6633-6655.
H. Primas (1998): Emergence in the exact sciences. Acta Polytechnica Scandinavica 91, 83–98.
H. Putnam (1981): Reason, Truth and History, Cambridge University Press, Cambridge.
H. Putnam (1987): The Many Faces of Realism, Open Court, La Salle.
W.V.O. Quine (1969): Ontological relativity. In Ontological Relativity and Other Essays, Columbia University Press, New York, pp. 26-68.
D. Robb and J. Heil (2009): Mental causation. In The Stanford Encyclopedia of Philosophy, ed. by E. N. Zalta, Summer 2009 ed.
C.R. Shalizi, J.P. Crutchfield (2001): Computational mechanics: Pattern and prediction, structure and simplicity. Journal of Statistical Physics 104, 817–879.
C.R. Shalizi, C. Moore (2003): What is a macrostate? Subjective observations and objective dynamics. Preprint available at LANL cond-mat/0303625.
A. Snezhko, I.S. Aranson, W.-K. Kwok (2006): Surface wave assisted self-assembly of multidomain magnetic structures. Physical Review Letters 96, 078701.
M. Spivey (2018): Discovery in complex adaptive systems. Cognitive Systems Research 51, 40-55.
A. Stephan (1999): Emergenz. Von der Unvorhersagbarkeit zur Selbstorganisation, Dresden University Press, Dresden.
W. Tabor (2002): The value of symbolic computation. Ecological Psychology 14, 21--51.
M. Takesaki (1970): Disjointness of the KMS states of different temperatures. Communications in Mathematical Physics 17, 33–41.
W. Tschacher, H. Haken (2007): Intentionality in non-equilibrium systems? The functional aspects of self-organized pattern formation. New Ideas in Psychology 25, 1-15.
B. van Fraassen (1980): The Scientific Image, Clarendon, Oxford.
N.G. van Kampen (1992): Stochastic Processes in Physics and Chemistry, Elsevier, Amsterdam.
S. Yablo (1992): Mental causation. Philosophical Review 101, 245--280.
S.C. Yanchar, B.D. Slife (1997): Pursuing unity in a fragmented psychology: Problems and prospects. Review of General Psychology 1, 235–255.
Internal references
• Brian Marcus and Susan Williams (2008) Symbolic dynamics. Scholarpedia, 3(11):2923.
Conditions for entanglement to exist
1. Jul 18, 2007 #1
I am interested in the conditions necessary for entangled states to be created. Unfortunately I only have access to introductory QM texts, and they talk about how entanglement can exist between particles etc, but make no mention of the creation of entangled states (and the conditions required for this).
I have also read somewhere that for entanglement to exist, the particles need to be created by the same source, and conserve angular momentum.
Is this correct and all that is required, or if not, what conditions are required to create entangled states?
Thanks for your help
3. Jul 18, 2007 #2
Any interaction between two unentangled (sub-)systems generally makes them entangled to a certain extent.
I think this also applies to what you have in mind.
Maybe you could explain your point of view a little more and check if it is compatible with what I said.
4. Jul 18, 2007 #3
Thanks for your reply,
One of the examples given to explain an effect of entanglement is to do with the polarisations of two photons emitted from a source (travelling in opposite directions, although clearly this is just used to emphasize the point), and when you determine the polarisation of one of them, the state of polarisation of the other comes into existence.
But surely this doesn't happen for ANY two photons created anywhere, and independently, does it? So what conditions are necessary for this to be the case? In other words, what conditions are necessary for two photons (or any particles) to become entangled?
Also, you mentioned about two unentangled sub systems becoming entangled (to a certain extent) when an interaction takes place, can you expand on this?
5. Jul 19, 2007 #4
In an unentangled system, measurement probabilities on one part A are independent of probabilities on part B.
But if parts A and B interact, then after some time this independence disappears.
This is easily seen from the solution of the coupled time-dependent Schrödinger equation:
i h df(a,b)/dt = (Ha + Hb + V) f(a,b)
where V represents the interaction.
If at an initial time f(a,b)=g(a)*h(b), the system is unentangled, and products of probabilities apply for the whole system.
If V=0, no interaction, the absence of entanglement will continue indefinitely.
However, if V =/= 0, the system will -generally- become entangled and the probabilities on each part of the system will not be independent anymore.
However, in special cases and at special times the system may come back to an unentangled state. That's what I think, more or less intuitively, because the interaction can lead to an oscillatory behaviour of the entanglement.
It would be interesting to define a quantity that would represent the entanglement. It would be zero for unentangled systems and different from zero for entangled systems. I don't know if such a measure of entanglement should have an upper limit for some kind of "maximal entanglement".
With such a measure defined, one could solve the Schrödinger equation and see the evolution of the entanglement measure. Maybe we could see some oscillation.
What do you think?
I have often asked about "entanglement measures" but had few answers.
I found some literature on that but was not satisfied (maybe not suitable for my hobby time!).
Could such an "entanglement measure" not be defined from elementary probability theory?
Last edited: Jul 19, 2007
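A minimal numerical sketch of the kind of entanglement measure discussed above, assuming the standard choice for pure states (the von Neumann entropy of the reduced density matrix) together with a toy two-qubit coupling invented here to exhibit the conjectured oscillation:

```python
import numpy as np
from scipy.linalg import expm

# Two qubits starting unentangled in |00>, coupled by H = g * sx (x) sx.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = 1.0 * np.kron(sx, sx)                      # coupling strength g = 1
psi0 = np.array([1, 0, 0, 0], dtype=complex)   # product state |00>

def entanglement_entropy(psi):
    """Von Neumann entropy of the reduced state of qubit A, in bits."""
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    rho_A = np.trace(rho, axis1=1, axis2=3)    # partial trace over qubit B
    evals = np.linalg.eigvalsh(rho_A)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

for t in np.linspace(0.0, np.pi, 9):
    psi_t = expm(-1j * H * t) @ psi0           # Schrödinger evolution
    print(f"t = {t:5.2f}   entanglement = {entanglement_entropy(psi_t):.3f}")
```

For this interaction the entropy indeed oscillates between 0 (product state) and 1 bit (maximal entanglement for two qubits), in line with the intuition above.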
6. Jul 20, 2007 #5
Thanks lalbatros,
It is interesting you talk about an oscillating entanglement behaviour, I had not really thought about this, only varying degrees of entanglement dependent on the conditions of the system, and (generally) decreasing with time.
It certainly would be useful to have a quantity we could deal with, but to use it in the Schrödinger equation, would this indicate that it would have some relationship to the energy of the system?
Are there any books you could recommend which deal with entanglement, either qualitatively or quantitatively (or preferably both)?
Thanks again
7. Jul 21, 2007 #6
By far the most "normal" behaviour is a decrease of entanglement.
It is difficult to keep a system entangled, since interactions with the outside world will entangle the system with the world, which amounts to blurring the initial entanglement.
There are certainly many books and many papers dealing with the subject.
I have not really found something like a review paper or a textbook dealing with the subject, but I will look for it. Maybe I'll have some luck ...
Wave Packet Defocusing Due to a Highly Disordered Bathymetry
Address for correspondence: André Nachbin, IMPA, Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro, RJ, Brazil, CEP 22460-320; e-mail:
Slowly modulated water waves are considered in the presence of a strongly disordered bathymetry. Previous work is extended to the case where the random bottom irregularities are not smooth and are allowed to be of large amplitude. Through the combination of a conformal mapping and a multiple-scales asymptotic analysis it is shown that large variations of a disordered bathymetry can affect the nonlinearity coefficient of the resulting damped nonlinear Schrödinger equations. In particular it is shown that as the bathymetry fluctuation level increases the critical point (separating the focusing from the defocusing region) moves to the right, hence enlarging the region where the dynamics is of a defocusing character. |
Blog Hole Memory Rescue and Lost Papers that were Really Lost
There is more than one path to classical mechanics.
So much to do, so little time. My own lost paper work (i.e. the translation of some of Hilbert's old papers that are not available in English) is commencing at a snail's pace, but at Kingsley Jones' blog we can learn about some papers that were truly lost and misplaced, and that he only recovered because, throughout all the moves and ups and downs of life, his parents had been hanging on to copies of the unpublished pre-prints. Kingsley's post affected me on a deeper level than the usual blog fare, because this is such a parent thing to do. Having (young) kids myself, I know exactly the emotional tug to never throw away anything they produce, even if they have seemingly moved on and abandoned it. On the other hand, the recollection of how he found these papers when going through his parents' belongings after they passed away brings into sharp relief the fact that I have already begun this process for my father, who has Alzheimer's. So many of his things (such as his piano sheet music) are now just stark reminders of all the things he can no longer do.
On a more upbeat note: The content of these fortuitously recovered papers is quite remarkable. They expand on a formalism that Steven Weinberg developed, one that essentially allows you to continuously deform quantum mechanics, making it ever less quantum. In the limit, you end up with a wave equation that is equivalent to the Hamiltonian extremal principle, i.e. you recover classical mechanics and have a "Schrödinger equation" that always fully satisfies the Ehrenfest theorem. In this sense, this mechanism is another route to Hamiltonian mechanics. The anecdote of Weinberg's reaction when he learned about this news is priceless.
Ehrenfest's theorem is, in a manner of speaking, common sense mathematically formulated: the QM expectation values of a system should obey classical mechanics in the classical limit. Within the normal QM framework this usually works, but the problem is that sometimes it does not, as every QM textbook will point out (e.g. these lecture notes). Ironically, at the time of writing, the Wikipedia entry on the Ehrenfest theorem does not contain this key fact, which makes it rather miss the point (just another example that one cannot blindly trust Wikipedia content). The above linked lecture notes illustrate this with a simple harmonic oscillator example and make this observation:
".... according to Ehrenfest’s theorem, the expectation values of position for this cubic potential will only agree with the classical behaviour insofar as the dispersion in position is negligible (for all time) in the chosen state."
So in a sense, this is what this "classic Schrödinger equation" accomplishes: a wave equation that always produces this necessary constraint in the dispersion. Another way to think about this is by invoking the analogy between Feynman's path integral and the classical extremal principle. Essentially, as the parameter lambda shrinks for Kingsley's generalized Schrödinger equation, the paths will be forced ever closer to the classically allowed extremal trajectory.
A succinct summation of the key math behind these papers can be currently found in Wikipedia, but you had better hurry, as the article is marked for deletion by editors following rather mechanistic notability criteria, by simply counting how many times the underlying papers were cited.
Unfortunately, the sheer number of citations is not a reliable measure with which to judge quality. A good example of this is the Quantum Discord research that is quite en vogue these days. It has recently been taken to task on R.R. Tucci's blog. Ironically, amongst many other aspects, it seems to me that Kingsley's approach may be rather promising to better understand decoherence, and possibly even put some substance to the Quantum Discord metric.
Please Clarification Needed!
1. Oct 25, 2004 #1
As I understand the wave particle duality of quantum mechanics, quanta exhibit both wave and particle properties. They:
(i) exhibit wave properties before being measured, i.e. the position of a single quantum object is spread out over space, and the wave amplitude of this object gives the probability of it being found in a specific position. This lack of definite position before being measured (it can only be calculated in terms of probability) means the quantum object exhibits wave properties.
(ii) exhibit particle properties once the measurement act has taken place, i.e. the position of a single quantum object is then localised, and the precision in its position reduces the ability to predict other quantities (the Uncertainty Principle).
The initial problem I had with the wave particle duality is that when I thought of quantum objects as being waves (in the traditional meaning of the word which I had learned at school, that is, a wave is a disturbance that transfers energy without transferring matter) I thought of it as not being a physical entity, i.e. it is not a tangible thing. Of course, this is not a problem with the particle part of the duality, because particles are actually tangible things.
However, as I now see it, quantum objects are neither waves nor particles in the traditional sense of each word; they exhibit behaviours of both types but are still something physical, that is, something tangible (and not something such as a wave, which cannot really be described as a "thing", excuse the unscientific term).
Could anybody clarify if this is the correct interpretation? Am I right? Thanks to anybody who takes some time to help explain this concept.
3. Oct 25, 2004 #2
Anybody willing to answer?
4. Oct 25, 2004 #3
I think you seem to understand it pretty well.
I wouldn't use the term "wave amplitude" though. The "wavefunction" is a function [tex]\psi[/tex] that takes a position [tex]\vec x[/tex] and a time [tex]t[/tex] to a complex number [tex]\psi(\vec x,t)[/tex]. The wavefunction is the mathematical representation of a particle. How it changes with time is specified by the Schrödinger equation. The interpretation of the wavefunction is that [tex]|\psi(\vec x,t)|^2[/tex] is a probability density, i.e. a number that you have to multiply with a volume to get a probability. That probability is the probability that a measurement at the specified time would find the particle in a volume of the specified size around the specified position. (Actually that's only an approximation. The correct way to get the probability is to integrate the probability density over that region of space).
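A small numerical check of that interpretation, as a sketch with an arbitrarily chosen normalized Gaussian wavefunction (all numbers here are made up for illustration):

```python
import numpy as np

# Normalized 1-D Gaussian wavefunction psi(x) at a fixed time.
sigma, x0 = 1.0, 0.0
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = (np.pi * sigma**2) ** -0.25 * np.exp(-(x - x0) ** 2 / (2 * sigma**2))

density = np.abs(psi) ** 2                   # probability density |psi|^2
print("total probability:", density.sum() * dx)        # ~ 1.0

# Probability of finding the particle in the region 0.5 <= x <= 1.5:
mask = (x >= 0.5) & (x <= 1.5)
print("P(0.5 <= x <= 1.5):", density[mask].sum() * dx)
```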
5. Oct 25, 2004 #4
OK, and does the sum over histories approach then refer to the moment before an actual measurement is taken? I suppose that once the actual particle has been localised the particle is not taking all the possible paths anymore.
6. Oct 25, 2004 #5
Think of it as a sum over all different ways something can "happen" between two measurements. Suppose that a measurement reveals that a particle is localized near x at time 0 and another position measurement is done at time t>0. The probability that the result will be that the particle is localized near y is given by a sum (actually a really crazy kind of integral) over the probability amplitudes (complex numbers) associated with each path through spacetime that connects the two events (x,0) and (y,t).
What's important to realize is that even after the particle has been detected at y it would be wrong to assume that it took only one of those paths to (y,t). It would be more correct to say it took all of them.
This is the kind of stuff that's explained very well in Feynman's book "QED: The strange theory of light and matter". The book is thin, cheap and easy to read. I think you should get it if you don't have it already.
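A toy numerical illustration of this sum over paths for a free particle, in made-up units with m = hbar = 1 (the endpoints, time slicing, and path sampling are arbitrary illustrative choices): paths close to the straight classical line add up coherently, while strongly wiggly paths largely cancel.

```python
import numpy as np

rng = np.random.default_rng(5)

# Free particle from (x=0, t=0) to (x=1, t=1), time sliced into N steps.
N, m, hbar = 8, 1.0, 1.0
dt = 1.0 / N
t_nodes = np.linspace(0.0, 1.0, N + 1)
x_classical = t_nodes                    # straight line = classical path

def action(path):
    """Discretized free-particle action: sum of (m/2) * velocity^2 * dt."""
    v = np.diff(path) / dt
    return 0.5 * m * np.sum(v**2) * dt

def mean_amplitude(spread, n_paths=10_000):
    """Average exp(iS/hbar) over paths = classical line + random wiggles."""
    total = 0.0 + 0.0j
    for _ in range(n_paths):
        path = x_classical.copy()
        path[1:-1] += spread * rng.normal(size=N - 1)   # endpoints fixed
        total += np.exp(1j * action(path) / hbar)
    return total / n_paths

for spread in (0.05, 0.2, 0.8):
    print(f"wiggle size {spread:4.2f}: "
          f"|mean amplitude| = {abs(mean_amplitude(spread)):.3f}")
# Small wiggles: nearly equal phases, coherent addition (|mean| near 1).
# Large wiggles: scattered phases, destructive interference (|mean| near 0).
```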
Saturday, May 22, 2010
There is not the slightest doubt that Hawking owes his enormous popularity outside the world of physics to the fact that his name is inevitably linked to the existence of those strange objects scientists have come to call black holes. From a black hole, it is claimed, nothing escapes, not even light. But Hawking's fame rests on his demonstration that something does come out: a radiation that bears his name!
And just as the famous paradox of Schrödinger's cat has worn out brains and pages in attempts at its resolution, the physicist Susskind has his own cat, though he has conceived it as an elephant that is thrown into a black hole and needs to be rescued.
How can it be saved if nothing escapes from a black hole? Susskind does so by appealing to the possibility, available in quantum physics, of being in two places at once without ceasing to be what one is. In this case, without ceasing to be an elephant.
At the same time, the paradox of the doubly ubiquitous elephant (something that bursts our everyday logic) gives rise to an interesting body of theory called the "Theory of Everything", and more concretely to a branch that has sprung from it: quantum gravity, which has become a determined theoretical struggle to unite two branches of physics divorced until now, quantum mechanics and general relativity, the greatest pillars of modern physics, including the physics of the birth of the Universe.
The science magazine New Scientist published this interesting account by Amanda Gefter, which gives us the background to this attempt at theoretical speculation:
The elephant and the event horizon
(26 October 2006, by Amanda Gefter, Magazine issue 2575)
“ What happens when you throw an elephant into a black hole? It sounds like a bad joke, but it's a question that has been weighing heavily on Leonard Susskind's mind. Susskind, a physicist at Stanford University in California, has been trying to save that elephant for decades. He has finally found a way to do it, but the consequences shake the foundations of what we thought we knew about space and time. If his calculations are correct, the elephant must be in more than one place at the same time.
In everyday life, of course, locality is a given. You're over there, I'm over here; neither of us is anywhere else. Even in Einstein's theory of relativity, where distances and timescales can change depending on an observer's reference frame, an object's location in space-time is precisely defined. What Susskind is saying, however, is that locality in this classical sense is a myth. Nothing is what, or rather, where it seems.
This is more than just a mind-bending curiosity. It tells us something new about the fundamental workings of the universe. Strange as it may sound, the fate of an elephant in a black hole has deep implications for a "theory of everything" called quantum gravity, which strives to unify quantum mechanics and general relativity, the twin pillars of modern physics. Because of their enormous gravity and other unique properties, black holes have been fertile ground for researchers developing these ideas.
It all began in the mid-1970s, when Stephen Hawking of the University of Cambridge showed theoretically that black holes are not truly black, but emit radiation. In fact they evaporate very slowly, disappearing over many billions of years. This "Hawking radiation" comes from quantum phenomena taking place just outside the event horizon, the gravitational point of no return. But, Hawking asked, if a black hole eventually disappears, what happens to all the stuff inside? It can either leak back into the universe along with the radiation, which would seem to require travelling faster than light to escape the black hole's gravitational death grip, or it can simply blink out of existence.
Trouble is, the laws of physics don't allow either possibility. "We've been forced into a profound paradox that comes from the fact that every conceivable outcome we can imagine from black hole evaporation contradicts some important aspect of physics," says Steve Giddings, a theorist at the University of California, Santa Barbara.
Researchers call this the black hole information paradox. It comes about because losing information about the quantum state of an object falling into a black hole is prohibited, yet any scenario that allows information to escape also seems in violation. Physicists often talk about information rather than matter because information is thought to be more fundamental.
In quantum mechanics, the information that describes the state of a particle can't slip through the cracks of the equations. If it could, it would be a mathematical nightmare. The Schrödinger equation, which describes the evolution of a quantum system in time, would be meaningless because any semblance of continuity from past to future would be shattered and predictions rendered absurd. "All of physics as we know it is conditioned on the fact that information is conserved, even if it's badly scrambled," Susskind says.
For three decades, however, Hawking was convinced that information was destroyed in black hole evaporation. He argued that the radiation was random and could not contain the information that originally fell in. In 1997, he and Kip Thorne, a physicist at the California Institute of Technology in Pasadena, made a bet with John Preskill, also at Caltech, that information loss was real. At stake was an encyclopedia - from which they agreed information could readily be retrieved. All was quiet until July 2004, when Hawking unexpectedly showed up at a conference in Dublin, Ireland, claiming that he had been wrong all along. Black holes do not destroy information after all, he said. He presented Preskill with an encyclopedia of baseball.
What inspired Hawking to change his mind? It was the work of a young theorist named Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey. Maldacena may not be a household name, but he contributed what some consider to be the most ground-breaking piece of theoretical physics in the last decade. He did it using string theory, the most popular approach to understanding quantum gravity.
In 1997, Maldacena developed a type of string theory in a universe with five large dimensions of space and a contorted space-time geometry. He showed that this theory, which includes gravity, is equivalent to an ordinary quantum field theory, without gravity, living on the four-dimensional boundary of that universe. Everything happening on the boundary is equivalent to everything happening inside: ordinary particles interacting on the surface correspond precisely to strings interacting on the interior.
This is remarkable because the two worlds look so different, yet their information content is identical. The higher-dimensional strings can be thought of as a "holographic" projection of the quantum particles on the surface, similar to the way a laser creates a 3D hologram from the information contained on a 2D surface. Even though Maldacena's universe was very different from ours, the elegance of the theory suggested that our universe might be something of a grand illusion - an enormous cosmic hologram (New Scientist, 27 April 2002, p 22).
The holographic idea had been proposed previously by Susskind, one of the inventors of string theory, and by Gerard 't Hooft of the University of Utrecht in the Netherlands. Each had used the fact that the entropy of a black hole, a measure of its information content, was proportional to its surface area rather than its volume. But Maldacena showed explicitly how a holographic universe could work and, crucially, why information could not be lost in a black hole.
According to his theory, a black hole, like everything else, has an alter ego living on the boundary of the universe. Black hole evaporation, it turns out, corresponds to quantum particles interacting on this boundary. Since no information loss can occur in a swarm of ordinary quantum particles, there can be no mysterious information loss in a black hole either. "The boundary theory respects the rules of quantum mechanics," says Maldacena. "It keeps track of all the information."
Of course, our universe still looks nothing like the one in Maldacena's theory. The results are so striking, though, that physicists have been willing to accept the idea, at least for now. "The opposition, including Hawking, had to give up," says Susskind. "It was so mathematically precise that for most practical purposes all theoretical physicists came to the conclusion that the holographic principle and the conservation of information would have to be true."
All well and good, but a serious problem remains: if the information isn't lost in a black hole, where is it? Researchers speculate that it is encoded in the black hole radiation (see "Black hole computers"). "The idea is that Hawking radiation is not random but contains subtle information on the matter that fell in," says Maldacena.
Susskind takes it a step further. Since the holographic principle leaves no room for information loss, he argues, no observer should ever see information disappear. That leads to a remarkable thought experiment.
Which brings us back to the elephant. Let's say Alice is watching a black hole from a safe distance, and she sees an elephant foolishly headed straight into gravity's grip. As she continues to watch, she will see it get closer and closer to the event horizon, slowing down because of the time-stretching effects of gravity in general relativity. However, she will never see it cross the horizon. Instead she sees it stop just short, where sadly Dumbo is thermalised by Hawking radiation and reduced to a pile of ashes streaming back out. From Alice's point of view, the elephant's information is contained in those ashes.
Inside or out?
There is a twist to the story. Little did Alice realise that her friend Bob was riding on the elephant's back as it plunged toward the black hole. When Bob crosses the event horizon, though, he doesn't even notice, thanks to relativity. The horizon is not a brick wall in space. It is simply the point beyond which an observer outside the black hole can't see light escaping. To Bob, who is in free fall, it looks like any other place in the universe; even the pull of gravity won't be noticeable for perhaps millions of years. Eventually as he nears the singularity, where the curvature of space-time runs amok, gravity will overpower Bob, and he and his elephant will be torn apart. Until then, he too sees information conserved.
Neither story is pretty, but which one is right? According to Alice, the elephant never crossed the horizon; she watched it approach the black hole and merge with the Hawking radiation. According to Bob, the elephant went through and floated along happily for eons until it turned into spaghetti. The laws of physics demand that both stories be true, yet they contradict one another. So where is the elephant, inside or out?
The answer Susskind has come up with is - you guessed it - both. The elephant is both inside and outside the black hole; the answer depends on who you ask. "What we've discovered is that you cannot speak of what is behind the horizon and what is in front of the horizon," Susskind says. "Quantum mechanics always involves replacing 'and' with 'or'. Light is waves or light is particles, depending on the experiment you do. An electron has a position or it has a momentum, depending on what you measure. The same is happening with black holes. Either we describe the stuff that fell into the horizon in terms of things behind the horizon, or we describe it in terms of the Hawking radiation that comes out."
Wait a minute, you might think. Maybe there are two copies of the information. Maybe when the elephant hits the horizon, a copy is made, and one version comes out as radiation while the other travels into the black hole. However, a fundamental law called the no-cloning theorem precludes that possibility. If you could duplicate information, you could circumvent the uncertainty principle, something nature forbids. As Susskind puts it, "There cannot be a quantum Xerox machine." So the same elephant must be in two places at once: alive inside the horizon and dead in a heap of radiating ashes outside.
The implications are unsettling, to say the least. Sure, quantum mechanics tells us that an object's location can't always be pinpointed. But that applies to things like electrons, not elephants, and it usually spans tiny distances, not light years. It is the large scale that makes this so surprising, Susskind says. In principle, if the black hole is big enough, the two versions of the same elephant could be separated by billions of light years. "People always thought quantum ambiguity was a small-scale phenomenon," he adds. "We're learning that the more quantum gravity becomes important, the more huge-scale ambiguity comes into play."
All this amounts to the fact that an object's location in space-time is no longer indisputable. Susskind calls this "a new form of relativity". Einstein took factors that were thought to be invariable - an object's length and the passage of time - and showed that they were relative to the motion of an observer. The location of an object in space or in time could only be defined with respect to an observer, but its location in space-time was certain. Now that notion has been shattered, says Susskind, and an object's location in space-time depends on an observer's state of motion with respect to a horizon.
What's more, this new type of "non-locality" is not just for black holes. It occurs anywhere a boundary separates regions of the universe that can't communicate with each other. Such horizons are more common than you might think. Anything that accelerates - the Earth, the solar system, the Milky Way - creates a horizon. Even if you're out running, there are regions of space-time from which light would never reach you if you kept speeding up. Those inaccessible regions are beyond your horizon.
As researchers forge ahead in their quest to unify quantum mechanics and gravity, non-locality may help point the way. For instance, quantum gravity should obey the holographic principle. That means there might be redundant information and fewer important dimensions of space-time in the theory. "This has to be part of the understanding of quantum gravity," Giddings says. "It's likely that this black hole information paradox will lead to a revolution at least as profound as the advent of quantum mechanics."
This paradox will lead to a revolution as profound as the birth of quantum mechanics
That's not all. The fact that space-time itself is accelerating - that is, the expansion of the universe is speeding up - also creates a horizon. Just as we could learn that an elephant lurked inside a black hole by decoding the Hawking radiation, perhaps we might learn what's beyond our cosmic horizon by decoding its emissions. How? According to Susskind, the cosmic microwave background that surrounds us might be even more important than we think. Cosmologists study this radiation because its variations tell us about the infant moments of time, but Susskind speculates that it could be a kind of Hawking radiation coming from our universe's edge. If that's the case, it might tell us something about the elephants on the other side of the universe.
Black hole computers
Seth Lloyd of the Massachusetts Institute of Technology believes that this phenomenon can be used to get information out of a black hole. His model, first suggested by Gary Horowitz of the University of California, Santa Barbara, and Juan Maldacena of the Institute for Advanced Study in Princeton, New Jersey, shows that when an in-falling Hawking particle interacts with matter inside the black hole, it sends information about the matter to its partner outside the black hole. If this scheme works, black holes could conceivably be used as quantum computers.
According to Leonard Susskind of Stanford University, however, it makes no sense to talk about the location of information independent of an observer. To an outside observer, information never falls into the black hole in the first place. Instead, it is heated and radiated back out before ever crossing the horizon. The quantum computer model, he says, relies on the old notion of locality. "The location of a bit becomes ambiguous and observer-dependent when gravity becomes important," he says. So the idea of a black hole computer remains controversial….”
Quantum mechanics from theory to technology
It is a popular assumption that the macrophysical world is governed by classical, deterministic physics, while the microphysical world is governed by modern physics, which is statistical in nature. For everyday phenomena, a basic understanding of classical mechanics should therefore suffice, while the quantum mechanical issues may be left to the particularly interested. This is, however, only partly true. Fundamental everyday issues like heat conduction and heat capacity are classically incomprehensible – and even microelectronics engineering has started to require in-depth understanding of quantum mechanics, due to ever-shrinking feature sizes and new technologies taking advantage of quantum mechanical effects. This article gives a basic introduction to quantum mechanics and presents a few of the microelectronics technologies which take advantage of quantum mechanical effects.
Understanding quantum mechanics may be difficult, especially due to its counter-intuitive nature. From everyday life, we are used to phenomena occurring predictably, in a deterministic and not in a statistical way. The famous quotation of Albert Einstein, “God does not play dice”, refers to his unwillingness to accept quantum mechanics.
The beginning of quantum mechanics can be traced back to the observation, made by Max Planck, that the radiation from a black body is quantized, with photons of energy E = hf (where h is the Planck constant and f is the frequency of the electromagnetic wave). The frequency of the electromagnetic wave – not its intensity – is what determines the energy of the photons. Increasing the intensity only means increasing the number of discrete photons.
Later on, de Broglie showed that the same formula could be applied to electrons too. The formula can also be written as p = ħk (where p is the momentum of the particle, ħ = h/2π is the reduced Planck constant, and k is the wavenumber, i.e. the spatial angular frequency). This led to a new insight: particles could be modeled as waves – just as waves could be modeled as particles. The foundations were laid for a totally new type of mechanics.
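As a quick numerical illustration of the relation (a minimal Python sketch; the 10 eV kinetic energy is just an assumed example):

```python
import numpy as np

# de Broglie wavelength lambda = h / p for an electron, with the momentum
# derived from a non-relativistic kinetic energy E = p^2 / (2m).
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
q = 1.602176634e-19     # J per eV

E_eV = 10.0                          # assumed example: a 10 eV electron
p = np.sqrt(2 * m_e * E_eV * q)      # momentum, kg*m/s
print(h / p)                         # ~3.9e-10 m, comparable to atomic spacings
```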
This might already sound very theoretical and abstract, and indeed it is. At the same time, it affects some of the most basic phenomena that surround us. Solid state materials, like for example metals, should be well-known to everybody. The discipline of metallurgy has been known for thousands of years, but the insight into how solid state materials obtain their properties remained out of reach until modern times. Just think about pseudoscience like alchemy and the quest for the “Philosopher’s Stone”.
How can we explain phenomena like electrical conductivity, heat capacity and heat conductivity in solid state materials? Why are some materials insulators while others are semiconductors or metals? As we will see in this article, these problems can only be solved by using the concepts of modern physics.
It could fill an entire textbook to go through all the mathematical equations and derivations leading up to the basic insights of quantum mechanics – and most readers would lose track along the way. I will instead try to give an explanation that I think is intuitive enough for the average reader to follow.
The free electron gas
J. J. Thomson discovered the electron in 1897, and since then there was little doubt that it was this little particle that provided metals with their excellent electrical and thermal conductivities. The first metal theory emerged only three years after the discovery: it was Paul Drude who provided a model with a simple classical kinetic theory for a gas of electrons. He imagined the most loosely bound electrons in the metal, the valence electrons, flying around freely inside the metal, just like gas molecules in a box. In this model, the electrons were not interacting with each other, but exchanged kinetic energy by colliding with the heavy nuclear cores.
The free electron theory explains the electrical conductivity of metals well, and can be used for deriving Ohm’s law for an electrical resistor. However, it is unable to explain why the electrons make only a negligible contribution to the thermal capacity of metals – while at the same time making a major contribution to their thermal conductivity. The theory is also unable to explain why some solid state materials are metals, while others are semiconductors or insulators.
Although the free electron theory was a major step towards a better understanding, there were still many strange experimental results that it could not account for.
The next big step came when Sommerfeld introduced quantum statistics for the electrons, with the free electron gas modeled as what we today would call a Fermi gas. The last bits of the puzzle finally came into place when Bloch corrected the model by taking into consideration that the electrons were moving in a periodic potential – given by the periodic structure of the crystal lattice. We will come back to all of this.
Phonons – collective lattice vibrations
In the case of electrons, we have seen that these can – similar to photons – be modeled as both particles and waves. However, it does not stop there. To fully explain the properties of lattice structures, like their heat capacity, even the harmonic vibrations of the lattice structure itself have to be complemented with a particle model – the phonon.
In metals, the nuclear cores are arranged in a crystal structure. There are both attractive and repulsive forces between the atoms, and a first-order approximation could be to model the nuclear cores as a collection of balls bound together with elastic springs (see Figure 1).
Figure 1. The lattice vibrations modeled as a classical mechanical system of balls and elastic springs.
By setting up the differential equations for such a system, we can see that the nuclear cores have kinetic energy and potential energy in three dimensions each. According to classical Boltzmann statistics, each degree of freedom should correspond to an average energy of ½kT, where k is the Boltzmann constant and T is the temperature, measured in kelvin.
We consider each atom as a three-dimensional harmonic oscillator. The oscillator has kinetic energy and potential energy at the same time, and such a three-dimensional oscillator should therefore provide 2 × 3 degrees of freedom. In total, each atom should therefore provide an average thermal energy of 2 × 3 × ½kT = 3kT. The free electrons, by contrast, which only have the three dimensions of kinetic energy, and no dimensions of potential energy, should only provide 1 × 3 × ½kT = 3/2 kT.
For one mole of a material, the thermal energy should therefore be Avogadro's number, N, multiplied by the energy contributions from both the atoms and the electrons, i.e. N·(3kT + 3/2·kT). The heat capacity, being the derivative of this energy with respect to temperature, should then be the temperature-independent constant N·(3k + 3/2·k).
But instead of this, all observations showed that the heat capacity was much lower than N·(3k + 3/2·k) at low temperatures – stabilizing at around 3Nk at high temperatures. What was the reason behind the temperature dependency of the heat capacity, and why was there no significant contribution from the electrons?
We obtain the answer by substituting the classical harmonic oscillator representation of the nuclear cores with a quantum representation. The lattice vibrations are now modeled as a sum of so-called “eigenmodes”, where each mode is a quantum harmonic oscillator. A quantum harmonic oscillator has energy E = (n + ½)ħω₀, where ω₀ is the angular frequency of the individual eigenmode and ħ is the reduced Planck constant. The number n is an integer, and this integer tells us how many “phonons” there are in each eigenmode.
A phonon is a quasiparticle serving to quantize the energy in the lattice vibrations. Within quantum mechanics, the phonons belong to the bosons, i.e. particles which have an integer spin and do not follow the Pauli exclusion principle. While the molecules of a gas follow the Boltzmann distribution, the bosons follow the so-called Bose-Einstein distribution. For the Bose-Einstein distribution, a material-dependent critical temperature must be reached before the number of phonons in the different eigenmodes grows linearly with temperature.
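A minimal sketch of the Bose-Einstein occupation of a single eigenmode (assuming units where ħ = k = 1 and an arbitrary example frequency):

```python
import numpy as np

# Bose-Einstein occupation <n> = 1 / (exp(hbar*w0 / (k*T)) - 1) of one eigenmode,
# in units where hbar = k = 1. Modes with hbar*w0 >> k*T are "frozen out".
w0 = 1.0  # assumed example mode frequency

def occupation(T):
    return 1.0 / (np.exp(w0 / T) - 1.0)

for T in (0.1, 0.5, 1.0, 5.0, 10.0):
    print(f"T = {T:5.1f}  ->  <n> = {occupation(T):10.4f}")
# At low T the occupation is exponentially small; at high T it approaches
# k*T / (hbar*w0), recovering the classical linear-in-T energy per mode.
```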
Using the Bose-Einstein statistics, we can finally explain why some of the “degrees of freedom” are frozen out at low temperatures. Figure 2 gives an example of electrons and phonons interacting inside of the crystal lattice.
Figure 2. Example of interaction between phonons and electrons in the crystal lattice.
When it comes to the electrons, there is another principle – already briefly mentioned – that has to be introduced: the Pauli exclusion principle. It states that no two fermions can occupy the same quantum state (and electrons are fermions); since an electron can have two spin orientations, each orbital state can hold at most two electrons. Unlike the bosons, which will all be packed together in the ground state when the temperature approaches zero, the electrons must be arranged, two by two, in the lowest available energy states.
At moderate temperatures, the Fermi energy (the highest energy the electrons may have at absolute zero) is much higher than kT. This means that only the electrons in states close to the Fermi level will obtain a thermal energy that is sufficiently high for jumping up to an unoccupied state. The rest of the electrons will remain in their ground states, so only a few electrons are thermally excited at moderate temperatures. This is contrary to the phonons, which are bosons that do not obey the Pauli principle – and where every single phonon may be thermally excited by very small amounts of thermal energy. Because of this, the heat capacity at moderate temperatures is dominated by the phonons – and the electron contribution to the heat capacity is therefore negligible.
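The Fermi-Dirac distribution makes this concrete (a minimal sketch in units where k = 1, with an illustrative Fermi energy far above kT):

```python
import numpy as np

# Fermi-Dirac occupation f(E) = 1 / (exp((E - E_F) / (k*T)) + 1), with k = 1.
E_F, T = 5.0, 0.1   # assumed Fermi energy and temperature, E_F >> k*T

def fermi(E):
    return 1.0 / (np.exp((E - E_F) / T) + 1.0)

for E in (0.0, 4.5, 4.9, 5.0, 5.1, 5.5):
    print(f"E = {E:.1f}  ->  f = {fermi(E):.4f}")
# Only states within a few k*T of E_F have occupations differing from 0 or 1,
# so only those electrons can absorb thermal energy.
```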
Electrons – from particles to waves
With the Schrödinger equation for a free electron, a particle is modeled as a complex wave function. The time independent Schrödinger equation is written as:
H Ψ = E Ψ,
where Ψ is the wavefunction, E is the energy eigenvalue, and H is the Hamiltonian operator of the system.
The squared magnitude of the wave function can be interpreted as the probability density function of the particle. If we modify the wave function in such a way that the particle is moving in a periodic potential, given by the attractive forces of the nuclear cores in the material, then we get a more realistic model for the electrons inside a lattice structure.
We can do this by modulating the periodic potential into the wave function. Such a wave function is called a Bloch function. The Bloch function is written as:

Ψ_k(r) = e^(ik·r) u(r),

where r is the position in the crystal lattice, k is the wavenumber, and u(r) is a function with the same spatial periodicity as the lattice.
By replacing the general wave function with a Bloch function and solving the Schrödinger equation, we can extract a number of energy intervals that the electron is allowed to be within. It then becomes clear that there are certain energies for which no wave-like solutions exist for the particle. The electrons become located in certain “bands”, with forbidden energy gaps between the bands. On top of an energy gap, we have a so-called “conduction band”, and at the bottom of the gap we have a so-called “valence band”.
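A minimal numerical sketch of how such bands and gaps emerge (assuming ħ = m = 1, a simple cosine potential, and a small plane-wave basis; all parameter values are illustrative):

```python
import numpy as np

# 1D band structure in a plane-wave basis, for V(x) = 2*V0*cos(2*pi*x/a),
# in units where hbar = m = 1.
a, V0, nG = 1.0, 0.5, 7                                  # lattice constant, potential, basis size
G = 2 * np.pi / a * np.arange(-(nG // 2), nG // 2 + 1)   # reciprocal lattice vectors

def bands(k):
    H = np.diag(0.5 * (k + G)**2)        # kinetic energy on the diagonal
    for i in range(nG - 1):              # V couples plane waves differing by one G
        H[i, i + 1] = H[i + 1, i] = V0
    return np.linalg.eigvalsh(H)

ks = np.linspace(-np.pi / a, np.pi / a, 101)   # first Brillouin zone
E = np.array([bands(k)[:3] for k in ks])       # lowest three bands
print("gap at zone edge:", E[0, 1] - E[0, 0])  # forbidden gap, ~2*V0 for weak V
```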
When the band structure has come into place, it is easy to explain the difference in electrical conductivity between the different materials (See Figure 3):
Figure 3. Metals, semiconductors and insulators are distinguished from each others by their band structures.
In a metal, the valence band has not been completely filled up with electrons, or the valence band and the conduction band have overlapping energies. Very little energy is therefore required to move an electron from one place in the valence band to another – or from the valence band to the conduction band.
In an insulator or a semiconductor, the entire valence band is filled up with electrons. This can only happen if the number of valence electrons per atom is an even integer (as the Pauli principle only allows two electrons, of opposite spin, per orbital state). If the energy gap up to the conduction band is large, the material becomes an insulator. If the gap is narrow, some of the electrons can obtain sufficient thermal energy to jump into the conduction band and become conducting electrons, just like in a metal. These materials we call semiconductors.
We have to distinguish between direct semiconductors and indirect semiconductors. The difference is given by their band structure. In a direct semiconductor, the smallest band gap lies between band edges at the same wavenumber. A photon may therefore directly create an electron-hole pair where the electron and the hole have opposite wavenumbers. In an indirect semiconductor, like silicon or germanium, the smallest energy gap is between band edges at different wavenumbers. An electron-hole pair can therefore only be created by also creating a phonon to conserve momentum. See Figure 4.
Figure 4. Difference between direct (left) and indirect (right) semiconductors.
Doping and PN Junctions
By introducing impurities into a lattice structure in a controlled way, we can fundamentally change the electrical properties of the material. This technique is called doping and the introduced impurities are called dopants.
All atoms strive to obtain a noble gas structure, i.e. to have eight electrons in the outer electron shell. In a silicon crystal structure, each of the four valence electrons of an atom makes a covalent bond to an electron of a neighbouring atom. In this way, the desired noble gas structure is obtained for all the atoms in the lattice. However, if a dopant atom with five valence electrons is introduced into the structure, only four out of the five electrons will engage in covalent bonds with its neighbours, and the last electron will remain loosely bound. Such an impurity atom is called an N-type dopant.
The loosely bound electron of the N-type impurity atom occupies a localized impurity state, located right below the conduction band – to which it is easily thermally excited, whereupon it can start moving around freely, similar to a free particle.
Similarly, introducing an impurity atom with only three valence electrons will prevent a neighbouring atom from obtaining the desired noble gas structure – since only three out of its four valence electrons may form covalent bonds. Such an impurity atom is called a P-type dopant. The missing electron becomes a “hole” which can be filled – and hence move around in the lattice – when an electron belonging to a neighbouring covalent bond comes and takes its place.
Similarly to the loosely bound electron of the N-type dopant, the “hole” of the P-type dopant becomes a localized impurity state, but this time it will be located right above the valence band. An electron in the valence band can now easily be excited into this localized state, leaving a vacancy in the valence band. This vacancy can now move around in the valence band, similar to a free electron in the conduction band.
For an undoped semiconductor, the Fermi level of the material – also called the chemical potential – is located in the middle of the band gap. Introducing dopants into the material means shifting its chemical potential towards the impurity states. When N-doped and P-doped materials are joined, the two different chemical potentials must be aligned, and this gives rise to a “band bending” throughout the P-N junction between the materials. A contact potential between the P and N side is thus established, preventing charges from diffusing from one side to the other. See Figure 5.
Figure 5. Alignment of Fermi levels across a PN junction.
By applying an external voltage on the P- and N-side, the shape of the barrier can be modified. Depending on the polarity of the voltage, the barrier can either be increased, i.e. widening the depletion region in the junction, or it can be lowered, hence allowing the diffusion current to pass through the junction. The PN diode hence acts as a current rectifier.
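This rectifying behavior is summarized by the ideal-diode (Shockley) equation; a minimal sketch with an assumed saturation current:

```python
import numpy as np

# Ideal-diode equation I = I_s * (exp(V / V_T) - 1); V_T = k*T/q is the
# thermal voltage, about 25.9 mV at room temperature.
k, q, T = 1.380649e-23, 1.602176634e-19, 300.0
V_T = k * T / q
I_s = 1e-12   # assumed saturation current, amperes

def diode_current(V):
    return I_s * (np.exp(V / V_T) - 1.0)

for V in (-0.5, 0.0, 0.3, 0.6):
    print(f"V = {V:+.1f} V  ->  I = {diode_current(V):.3e} A")
# Reverse bias leaks only ~I_s; forward bias grows exponentially.
```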
Tunnel diodes
According to classical physics, a free particle will always bounce back when it hits a barrier with a potential energy higher than the kinetic energy of the particle. However, in quantum physics, there is a probability that the particle will “tunnel” through the barrier and continue moving as a free particle on the other side of the barrier. The probability depends strongly on the height and width of the barrier. Tunnel diodes take advantage of this effect.
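The strong dependence on barrier height and width can be sketched with the standard exponential (WKB-style) estimate for a rectangular barrier (a minimal sketch assuming ħ = m = 1 and illustrative barrier parameters):

```python
import numpy as np

# Tunneling through a rectangular barrier of height V0 and width a, for a
# particle of energy E < V0, in units where hbar = m = 1.
V0, E = 2.0, 1.0
kappa = np.sqrt(2 * (V0 - E))       # decay constant inside the barrier

for a in (0.5, 1.0, 2.0, 4.0):
    T_est = np.exp(-2 * kappa * a)  # dominant exponential factor of the transmission
    print(f"width a = {a:.1f}  ->  T ~ {T_est:.2e}")
# Doubling the barrier width squares the already small transmission probability.
```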
With only a lightly doped silicon substrate, the wavefunctions of the impurity holes and electrons do not overlap each other, and the impurity states remain localized and are therefore allowed to have the same energy levels. With a heavily doped substrate, the impurity states start overlapping each other, and they are therefore forced by the Pauli principle to become part of the band structure. When heavily doped N-type and P-type materials are joined, the upper part of the P-side valence band will now overlap with the lower part of the N-side conduction band. The heavy doping also leads to a very narrow depletion zone of only a few nanometres. This means that a small forward bias may allow charges to tunnel through the depletion zone and recombine. However, as the voltage continues to increase, the overlapping regions will start shrinking and the tunnelling recombination current will start decreasing. The tunnel diode therefore exhibits negative differential resistance in a certain voltage range. This makes the tunnel diode excellent for implementing high frequency oscillators and switching circuits with hysteresis. See Figure 6.
Figure 6. Characteristics of a normal diode (left), compared to a tunnel diode (right).
EEPROM and Flash Technology
EEPROM and Flash technology are other inventions which take direct advantage of the tunnel effect. In a normal NMOS transistor, a voltage is applied on the gate in order to raise the surface potential of the substrate beneath the gate until an inversion layer forms, creating a conductive channel. However, instead of using a normal NMOS transistor with one single gate, a dual-gate transistor is introduced, with the second gate floating between the first gate and the NMOS channel. A high voltage is applied on the first gate, creating an electric field that is strong enough to cause tunneling through the silicon oxide, with electrons passing from the NMOS channel onto the floating gate. When the high voltage is turned off, the accumulated charges remain trapped on the floating gate, right above the NMOS channel – shielding the channel from the voltage applied on the normal gate and therefore keeping the transistor permanently turned off.
A programmed floating-gate transistor is comparable to a fuse that has been burned off, and it can therefore be used for permanently storing information. The accumulated electrons on the floating gate can eventually be removed by applying a reversed high voltage – making such circuits erasable, and hence reprogrammable.
Light emitting diodes
Silicon is an indirect semiconductor. The energy released through the recombination of electrons and holes must therefore be used to create phonons to conserve momentum, and the recombination energy gets lost in lattice vibrations.
In direct semiconductors, like gallium arsenide, electrons and holes can recombine with all the energy going into one single photon. A forward biased P-N junction will therefore emit photons of energy corresponding to the bandgap of the semiconductor.
Quantum dot lasers and quantum computing
For a totally free particle, the energy range is continuous – since all possible energies are allowed. For a particle moving in a periodic potential, we have already seen that a band structure is implied, where certain energies are forbidden. But there is more to it than that: if we assume that the particle is not completely free, but rather confined within a certain volume, then the allowed energies are discretized into the energies of the standing waves that can be created within the box, as the probability of finding the particle far outside the box goes to zero. Figure 7 shows examples of standing waves for a particle within a potential well, with the different harmonics corresponding to different energy levels.
Figure 7. Examples of standing waves for a particle trapped within a potential well. The waves are in the middle with the boundaries of the well to the left and the right. Due to the tunnelling effect, the waves are able to extend into the “walls” of the potential well.
By creating patterns of P- and N-doped structures, junctions are created where the already explained band bending causes potential wells to establish. As the electrons and holes lose energy due to interaction with phonons (the quantized lattice vibrations), they might get trapped within the boundaries of the potential wells. Their energies will be discretized due to the structure of the potential well, and they will continue to lose energy until they are located in the lowest available energy levels.
A quantum dot is a potential well where the particle is confined in all three dimensions – i.e. the height, width and length of the potential well are shorter than the de Broglie wavelength of an electron. The available standing wave energies in such a quantum dot will depend strongly on the shape of the dot. The band gap of the quantum dot is also modified, as the energies at the edges of the valence band and the conduction band spread out into a number of discrete states; the stronger the confinement, the wider the effective gap. For such quantum dots, it can be shown that the highest density of available energy states lies around the ground state energies. The more compact the dot becomes, the more concentrated are the available states. See Figure 8.
Figure 8. Schematic diagram illustrating the representation of the electronic density of states depending on dimensionality.
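A minimal sketch of this level discretization, using the textbook particle-in-a-3D-box energies (assuming an infinite well; the dot sizes are illustrative):

```python
import numpy as np

# Levels of a particle in a 3D box: E = (hbar*pi)^2 / (2*m*L^2) * (nx^2 + ny^2 + nz^2).
# Shrinking L pushes all levels up and spreads them apart, as in a quantum dot.
hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
q = 1.602176634e-19      # J per eV

def levels_eV(L, nmax=2):
    E0 = (hbar * np.pi)**2 / (2 * m_e * L**2) / q
    return sorted(E0 * (nx**2 + ny**2 + nz**2)
                  for nx in range(1, nmax + 1)
                  for ny in range(1, nmax + 1)
                  for nz in range(1, nmax + 1))

for L in (20e-9, 10e-9, 5e-9):   # cubic dots of 20, 10 and 5 nm
    lows = ", ".join(f"{E:.3f}" for E in levels_eV(L)[:3])
    print(f"L = {L * 1e9:4.0f} nm  ->  lowest levels (eV): {lows}")
```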
If the quantum dot is implemented in a direct semiconductor, like gallium arsenide, the trapped holes and electrons will settle in the ground states and emit photons of well-controlled wavelengths when recombining. A lot of research effort is currently put into developing compact and controlled quantum dots. Laser devices based on quantum dots are now finding commercial applications within medicine, display technologies, spectroscopy and telecommunications.
Although still only at the theoretical stage, quantum dots are also promising for the future technology of quantum computing. In quantum computing, the different states of a particle in a potential well are used for encoding data. Instead of coding a plain zero or a one, the wavefunction of an individual electron can be put in a superposition of both – a quantum bit.
Moore’s law has remained valid throughout the last decades. Transistor scaling has reached a level where transistor lengths of 14 nanometres have become normal – and the efforts continue towards even smaller dimensions. With transistor dimensions approaching the wavelength of ionizing radiation, the art of electronics engineering becomes the art of quantum mechanical engineering. The quantum mechanical effects provide a lot of new challenges to overcome – like ever-increasing leakage currents due to tunnel effects – but the solutions also give rise to new and advanced technologies. The ones mentioned in this article are just a few examples of well-established technologies. Even more sophisticated technologies, like quantum computing, have been theoretically described decades ago. The adventure of quantum electronics has just begun, so we will surely see even more inventions as the processing technology matures.
Olav Torheim,
PhD in Microelectronics
by Brian Tomasik
First published: 21 Feb. 2017; last update: 7 Oct. 2017
This piece sketches my (perhaps inadequate) understanding of type-F monism in the philosophy of mind. I argue that the view is ultimately either a form of type-A physicalism or a form of property dualism (broadly construed). The reasons to reject property dualism also apply to property-dualist interpretations of type-F monism. In addition, type-F monism relies on categorical properties, whose existence I'm skeptical of.
Note: This piece is a work in progress. I'm not an expert on type-F monism or the debate over dispositional monism.
Note: This piece uses the terminology of Chalmers (2003). See that article for definitions of type-A through type-F views on consciousness.
In my opinion, if you're not a type-A physicalist or an idealist regarding consciousness, then you're ultimately a property dualist, whether you admit it or not. I'll ignore idealism in this piece and focus on the contrast between type-A physicalism and property dualism. Type-A physicalists maintain that there is only physics, and phenomenal experience is an attribution we make, not a "real thing" in an ontological sense. Property dualism is the alternative view: that phenomenal experience is a "real thing" in some form. In my opinion, this is the fundamental distinction in philosophy of mind, with specific views on consciousness just being variants of these two stances. In particular, I believe that type-F monism ultimately reduces to either type-A physicalism (in its more sensible interpretations) or to property dualism (in its less sensible interpretations).
Type-A construals of type-F views
Neutral monism is one type-F view. While the views of neutral monists are generally not equivalent to type-A physicalism, at least some quotations from neutral monists can be (mis?)interpreted as expressing the same idea as type-A physicalism. One example is the following quote from Bertrand Russell, summarizing William James:
This idea of merely "calling" some things mental and others not based on their relations or functions is a type-A view. And in my opinion, it doesn't matter if we call the stuff of the world "physical" or "neutral"—it amounts to the same thing—so we may as well regard such a view as type-A physicalism. Of course, I haven't read James or Russell myself and am probably quoting them out of context.
Property-dualist construals of type-F views
Type-F views as Chalmers (2003) defines them seem to me to be property dualism in disguise. Why? Because like property dualism, type-F views maintain that there's a difference between physics and phenomenal experience, where phenomenal experience has properties/aspects/natures beyond the structural/functional dynamics of physics. This means that there are phenomenal properties that are not just the structural/functional properties of physics, which meets a broad definition of property dualism. (For similar reasons, I maintain that type-B and type-D views are ultimately property dualism as well, broadly understood.) Indeed, Chalmers (2003) says of type-F monism:
it can be seen as a sort of dualism. The view acknowledges phenomenal or protophenomenal properties as ontologically fundamental, and it retains an underlying duality between structural-dispositional properties (those directly characterized in physical theory) and intrinsic protophenomenal properties (those responsible for consciousness). One might suggest that while the view arguably fits the letter of materialism, it shares the spirit of antimaterialism.
Arvan (2013) also uses the label of "dualism" to describe a type-F view.
Ordinary epiphenomenalist property dualism might be pictured as follows: We have a physical world of objects with relations to one another, and then an extra property of consciousness dangles (or "supervenes") on that physical system.
The way I picture type-F monism is that it wants to preserve the yellow star (qualia) while trying to sneak it in somewhere else. Where else can we put the yellow star? Well, we could stick it inside the entities that fundamental physics describes:
This way we can keep our qualia (the yellow stuff) without having it dangling off to the side. We can instead claim that it fits more naturally into our ontology because it's the essence physics is made of.
Shortly after writing this piece, I discovered that Goff (2017) proposes the same analogy (coloring in a picture) to describe type-F panpsychism: "All we get from physics is this big black-and-white abstract structure, which we must somehow colour in with intrinsic nature. We know how to colour in one bit of it: the brains of organisms are coloured in with experience. How to colour in the rest? The most elegant, simple, sensible option is to colour in the rest of the world with the same pen."
What's wrong with property-dualist type-F views?
Some of the below points are also discussed here.
Doesn't explain our belief in consciousness
A big problem with ordinary property dualism is that it fails to explain why our physical brains believe themselves to be conscious. If consciousness is epiphenomenal, there must be a different reason within physics that explains why we believe we have phenomenal consciousness (Yudkowsky 2008, Tomasik 2014).
With type-F monism, consciousness is not technically epiphenomenal, since physics is consciousness, but consciousness seems to still be effectively epiphenomenal in the sense that it, by itself, doesn't explain why our neurons form connections and fire in patterns that correspond to a physical belief in our own consciousness. How would the yellow stuff in the type-F picture cause my neurons to wire in the right, complex patterns to express a physical belief in my own consciousness? Unless the yellow stuff has a complex physics-guiding algorithm built into it (in which case it's part of the dispositional nature of physics, not just the categorical nature of physics), it seems to be a brute coincidence that our physical brains correctly believe themselves to be conscious.
Chalmers (1996) seems to agree (p. 181):
Even if [type-B and type-F]a views salvage a sort of causal relevance for consciousness, they still lead to explanatory irrelevance, as explanatory relevance must be supported by conceptual connections. Even on these views, one can give a reductive explanation of phenomenal judgments but not of consciousness itself, making consciousness explanatorily irrelevant to the judgments. There will be a processing explanation of the judgments that does not invoke or imply the existence of experience at any stage; the presence of any further "metaphysically necessary" connection or intrinsic phenomenal properties will be conceptually quite independent of anything that goes into the explanation of behavior.
Combination problem
Chalmers (2003) believes that the so-called combination problem for panpsychism "is easily the most serious problem for the type-F monist view." I won't recapitulate the debate on this topic (most of which I'm unfamiliar with) other than to add my voice to the sentiment that the combination of micro-experiences into macro-experiences seems about as mysterious to me as just postulating a property-dualist form of consciousness that supervenes on ordinary physics. If we can get a macro-consciousness property from micro-consciousnesses, why not just get a macro-consciousness property from ordinary physics?
Who said anything about categorical properties?
Type-F monism tries to squeeze consciousness into the categorical properties of physics that are said to exist over and above dispositional properties, i.e., the essence of physics that exists beyond its structural/functional behavior. This is said to be one of the selling points of the type-F view, since it solves two mysteries for the price of one: (1) the existence of qualia and (2) the fundamental nature of physical stuff.
The problem is, I don't see a reason to believe in categorical properties, using the same logic by which I don't see a reason to believe in a property-dualist view of consciousness: If categorical properties did exist, how would we ever discover them? Because they're by definition not functional, there could be no causal interactions that would result in our physical brains believing they exist.
Maybe one could argue for the existence of barebones entities on a priori grounds. For example, if you want to use math to describe physical structure, you might postulate sets. The mathematical definition of a binary relation is a set of ordered pairs. And an ordered pair like (1, 2) can be defined as {{1}, {1, 2}}. And natural numbers can themselves be defined using sets. So we can express relations using sets, which can be seen as barebones "entities" in an ontology, but they're not the kinds of entities that have any content like phenomenality to them. They're more like placeholders, or arbitrary variable names in a programming language, that merely help to represent the structure. Postulating ontological primitives that have non-trivial essential properties seems to me to violate Occam's razor. All our beliefs are structural/functional, because it's only the structure/function of physics that can produce changes in our neural patterns. Nothing is gained by postulating extra properties—whether phenomenal properties or more general categorical properties.
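As a toy illustration of structure built from placeholder entities, here is a hypothetical Python sketch of Kuratowski's pair encoding (the names are mine, not from any source):

```python
# Encoding ordered pairs, and hence relations, purely as sets (Kuratowski's
# definition), with frozensets standing in for contentless "barebones" entities.
def pair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

# A binary relation as a set of ordered pairs: "<" on {1, 2, 3}.
less_than = {pair(x, y) for x in (1, 2, 3) for y in (1, 2, 3) if x < y}

print(pair(1, 2) == pair(2, 1))   # False: order is recovered from structure alone
print(pair(1, 2) in less_than)    # True
```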
I haven't read much about dispositional monism, so I won't commit to it just yet. My view is something like this: We use empirical observations to derive physical theories. Then the "properties" of things are exactly what's described by those theories. For example, the properties of a quantum system are just whatever the system does when you apply the Schrödinger equation (or whatever the true laws of physics are) to it. The dispositions of an entity can be "read off" of the equations of physics by seeing how the object will behave in such-and-such conditions. We can make statements about counterfactuals by imagining a world with such-and-such initial conditions and then running physical laws forward on that world.
C. B. Martin and Pythagoreanism
Esser (2013), p. 14 discusses C. B. Martin's view on categorical properties:
Purely categorical properties are intrinsically “incapable of affecting or being affected by anything else” (Martin, 2008, p. 66). They are undetectable and give us no reason to believe they exist. Postulating that they serve as grounds or bases for dispositions (or supposing that dispositions supervene on categorical properties) requires introducing a new and mysterious sort of relation into one’s ontology.
I share these critiques. But Esser (2013) continues (pp. 14-15):
pure dispositionalism becomes in the context of physics an exercise in describing causal relations solely in language of mathematics. Martin warns: “This way lies Pythagoreanism” – in other words, the temptation to think reality can somehow be purely formal or quantitative (Martin, 2008, p. 74).
But what exactly is wrong with this Pythagorean view? I haven't read Martin's 2008 book, but I did read an older article, Martin (1997), on which the relevant chapter from his book is based. In my opinion, Martin (1997) didn't really argue against Pythagoreanism so much as express the intuition that it leaves something out. For example, he calls Pythagoreanism "an unacceptably empty desert" (p. 195) and says (p. 222): "This unfettered deontologizing which results in a world of pure number seems as clear a reductio as in philosophy."
Martin (1997) adds (p. 215):
Dispositionalists believe that all that appears to be qualitatively intrinsic to things just reduces to capacities/dispositions for the formation of other capacities/dispositions for the formation of other capacities/dispositions for the formation of . . . . And, of course, the manifestations of any disposition can only be further dispositions for . . . . This image appears absurd even if one is a realist about capacities/dispositions. It is like a promissory note that may be actual enough but if it is for only another promissory note which is . . . , that is entirely too promissory.
I don't see the dispositionalist view as absurd as long as there's still some barebones way in which the structure hangs together. What we don't need are meaningful properties in addition to structure/relations. What would it even mean for there to be a substantive property that wasn't dispositional? For example, suppose you thought that the spin of an electron was a real thing over and above the ways it affected small-scale physics and chemistry. What would you even be saying? How would such an attribution of a non-placeholder "spin" property to an electron help? Of course, I also don't know what it even means to say that a physical structure/relation "exists"; in general, the field of metaphysics is beyond my ken. But at least I'm trying to minimize the number of mysteries in my ontology by only having structure/relations rather than also categorical properties.
Martin (1997) recognizes (p. 215) my argument that if a property doesn't play a functional role, it's not clear why we should believe in it: "Your knowledge of the existence of physical x or mental y has to involve the causal set of dispositions of x and y to affect your belief that x or that y. That set of dispositions is specifically operative on that occasion for that belief." But then (if I read him correctly) he rejects what I see as the approach, consistent with Occam's razor, of denying that there are further properties if they don't contribute to our belief in them: "This, however, does not establish (contra Shoemaker) that the content of what you know or believe is nothing more than the set of causal dispositions or functions that make you or would make you believe x or y."b
There seems to be a connection between people's psychological need for categorical properties in fundamental physics and people's psychological need for phenomenal properties in the philosophy of mind. Indeed, Martin (1997) uses the phrase "physical qualia" to describe categorical properties (p. 193). And he says (p. 223): "When we consider the need for physical qualia (that is, qualities), even in the finest interstices of nature, largely unregarded and unknown, among them should belong the qualia (qualities) required for the sensing and feeling parts of physical nature."
My guess is that the human brain naturally thinks in dualist/essentialist terms (Tomasik 2013), which leads to a doubling of one's ontology relative to what actually exists. An essentialist ontology contains (1) the structural/functional aspect of a thing that does all the work and (2) the "essence" of the thing, which does nothing other than making us feel better about the way the ontology looks.
a. Actually, in this passage, Chalmers (1996) refers to "type-C' positions". I think Chalmers (1996) uses the label "type C'" to refer to the same view as what Chalmers (2003) calls "type F". My guess is that Chalmers just changed the label for this view over time? Or is there any substantive difference? I don't have the full text of Chalmers (1996) and have not checked this point. (back)
b. I should clarify that my view is not that I should only believe things exist if they causally affect my brain state. I believe that regions of spacetime exist outside my past and future light cones, because the existence of such regions is predicted by a simple set of theories/equations that correctly predict my own experiences. It's actually simpler (in a minimum-description-length kind of sense) to postulate that a massive multiverse exists of which I observe tiny pieces than to postulate that only the things I observe exist, because the equations for physics as a whole can be relatively simple. In contrast, rules for specifying what I observe at any given moment, without a broader context of the rest of the universe, are fairly complicated, because such rules presumably need to hard-code the data I observe at each time slice, which requires far more bits than are required to write the laws of physics. Or at least, one would need to compute the laws of physics anyway to know what to show me at any given moment and then restrict reality down to just what I myself observe. (Incidentally, this is also why I reject solipsistic idealism—the idea that all that exists is my own conscious state at any given time.) (back)
Time dependent transformations
1. Jun 21, 2004 #1
Hey people,
I just finished reading a chapter in a book on quantum mechanics that has deeply disturbed me. The chapter was about symmetry in quantum mechanics. It was divided in two basic parts: time dependent and time independent transformations.
Time independent transformations were quite tractable in the sense that nothing special or surprising happened. The Schrödinger equation was left unchanged.
For time dependent transformations, a new term had to be added to the Hamiltonian in order to preserve the form of the equation. The new Hamiltonian was the same as the old one transformed, plus a term involving a first-order time derivative of the transformation operator. How can that be? Aren't all inertial frames of reference equivalent? Is Schrödinger's equation not covariant?
I know nothing of relativistic QM, so sorry if this question offends those who do know something about it.
3. Jun 21, 2004 #2
No, Schroedinger's equation is not covariant. Schroedinger originally tried for a covariant equation and wound up with the equation now called Klein-Gordon; this describes a boson and had, in the understanding of the 1920s, the problem of being unstable to negative energy solutions. So Schroedinger abandoned it and developed his non-relativistic equation.
The problem of a covariant equation that describes fermions like the electron was famously solved by Dirac.
4. Jun 22, 2004 #3
I'm a little confused here. According to Jackson (Classical Electrodynamics, Wiley), the Schrödinger equation is invariant under the Galilean group of transformations (!!??). However, according to Sakurai (Modern Quantum Mechanics), the new Hamiltonian is the old one transformed, plus a term involving a first-order derivative of the transformation operator (which is first order in time, and thus not zero).
Furthermore, I have read in other books comments on the new Hamiltonian like, "this additional term (the one involving the first-order time derivative) plays the same role in QM as the forces of inertia do in classical physics" (???!!!).
How can that be!! That something that depends linearly on time affects the equations of motion!! No principle of relativity!?
Thanks for replying and sorry for the stubbornness!! Ciao
5. Jun 22, 2004 #4
Sure, Schroedinger is Galilean invariant. So is Newton's mechanics. Schroedinger splits time and space: position is an operator but time is just a parameter. This is non-covariant from the get-go.
Special relativistic covariance is symmetry under the Poincare group.
I'm not prepared to discuss your other issues off the top of my head. I'll study Sakurai a little first.
6. Jun 23, 2004 #5
Time dependent xforms (TDX) are 1. used a lot in classical and quantum physics, and 2. often mathematically difficult and opaque.
In classical mechanics, TDX are used widely: think of body-centered coordinates and the theory of the spinning top; think of transforming away Coriolis forces by appropriate rotating coordinates – the most general form are the equations of motion in GR, identical to the most general equations derived from a Lagrangian in generalized coordinates. Necessarily, such xforms change the energy/Hamiltonian, if for no other reason than that the KE must change.
In QM, time dependent xforms are often used in QFT – the Heisenberg picture, the interaction picture – and in magnetic resonance work – the so-called Bloch equations in rotating frames. These, by the way, are quite analogous to time dependent contact transformations in classical mechanics. (See Goldstein's Classical Mechanics, for example.)
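A quick numerical check of the transformed-Hamiltonian rule H' = U H U† + iħ (dU/dt) U† for a two-level system in a rotating frame (a minimal sketch assuming ħ = 1; all parameter values are illustrative):

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli matrices
sz = np.array([[1, 0], [0, -1]], dtype=complex)

hbar = 1.0
delta, rabi, w_frame = 0.7, 0.3, 1.1
H = 0.5 * delta * sz + 0.5 * rabi * sx           # lab-frame Hamiltonian

def U(t):                                        # time-dependent frame rotation
    return expm(-1j * w_frame * t * sz / 2)

def Udot(t, eps=1e-6):                           # numerical dU/dt
    return (U(t + eps) - U(t - eps)) / (2 * eps)

def H_prime(t):                                  # H' = U H U^dag + i*hbar*(dU/dt) U^dag
    Ut = U(t)
    return Ut @ H @ Ut.conj().T + 1j * hbar * Udot(t) @ Ut.conj().T

dt, steps = 1e-4, 10000
psi = np.array([1, 0], dtype=complex)            # lab-frame state
psi_rot = U(0) @ psi                             # rotated-frame state
for n in range(steps):
    psi = expm(-1j * H * dt / hbar) @ psi
    psi_rot = expm(-1j * H_prime((n + 0.5) * dt) * dt / hbar) @ psi_rot

print(np.allclose(psi_rot, U(steps * dt) @ psi, atol=1e-3))  # True: frames agree
```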
These topics are scattered all over the literature, and can probably be found in advanced QM texts. Do a Google search, and you will find more than you want to know.
BTW, Dirac's major insight in deriving the Dirac equation was to realize that relativistic form invariance required first-order spatial derivatives if the equation was to be first order in the time derivative.
Reilly Atkinson
Particle in a box
From Wikipedia, the free encyclopedia
Some trajectories of a particle in a box according to Newton's laws of classical mechanics (A), and according to the Schrödinger equation of quantum mechanics (B–F). In (B–F), the horizontal axis is position, and the vertical axis is the real part (blue) and imaginary part (red) of the wavefunction. The states (B,C,D) are energy eigenstates, but (E,F) are not.
In quantum mechanics, the particle in a box model (also known as the infinite potential well or the infinite square well) describes a particle free to move in a small space surrounded by impenetrable barriers. The model is mainly used as a hypothetical example to illustrate the differences between classical and quantum systems. In a classical system, for example a ball trapped inside a large box, the particle can move at any speed within the box and it is no more likely to be found at one position than another. However, when the well becomes very narrow (on the scale of a few nanometres), quantum effects become important. The particle may only occupy certain positive energy levels. Likewise, it can never have zero energy, meaning that the particle can never "sit still". Additionally, it is more likely to be found at certain positions than at others, depending on its energy level. The particle may never be detected at certain positions, known as spatial nodes.
The particle in a box model is one of the very few problems in quantum mechanics which can be solved analytically, without approximations. Due to its simplicity, the model allows insight into quantum effects without the need for complicated mathematics. It serves as a simple illustration of how energy quantization (energy levels), found in more complicated quantum systems such as atoms and molecules, comes about. It is one of the first quantum mechanics problems taught in undergraduate physics courses, and it is commonly used as an approximation for more complicated quantum systems.
One-dimensional solution
The barriers outside a one-dimensional box have infinitely large potential, while the interior of the box has a constant, zero potential.
The simplest form of the particle in a box model considers a one-dimensional system. Here, the particle may only move backwards and forwards along a straight line with impenetrable barriers at either end.[1] The walls of a one-dimensional box may be visualised as regions of space with an infinitely large potential energy. Conversely, the interior of the box has a constant, zero potential energy.[2] This means that no forces act upon the particle inside the box and it can move freely in that region. However, infinitely large forces repel the particle if it touches the walls of the box, preventing it from escaping. The potential energy in this model is given as
$V(x) = \begin{cases} 0, & x_c - \frac{L}{2} < x < x_c + \frac{L}{2}, \\ \infty, & \text{otherwise}, \end{cases}$
where L is the length of the box, xc is the location of the center of the box and x is the position of the particle within the box. Simple cases include the centered box (xc = 0) and the shifted box (xc = L/2).
Position wave function
In quantum mechanics, the wavefunction gives the most fundamental description of the behavior of a particle; the measurable properties of the particle (such as its position, momentum and energy) may all be derived from the wavefunction.[3] The wavefunction can be found by solving the Schrödinger equation for the system,
$i\hbar\frac{\partial}{\partial t}\psi(x,t) = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}\psi(x,t) + V(x)\,\psi(x,t),$
where $\hbar$ is the reduced Planck constant, $m$ is the mass of the particle, $i$ is the imaginary unit and $t$ is time.
Inside the box, no forces act upon the particle, which means that the part of the wavefunction inside the box oscillates through space and time with the same form as a free particle:[1][4]
$\psi(x,t) = \left[A\sin(kx) + B\cos(kx)\right]e^{-i\omega t},$
where $A$ and $B$ are arbitrary complex numbers. The frequency of the oscillations through space and time is given by the wavenumber $k$ and the angular frequency $\omega$ respectively. These are both related to the total energy of the particle by the expression
$E = \hbar\omega = \frac{\hbar^2 k^2}{2m},$
which is known as the dispersion relation for a free particle.[1] Here one must notice that now, since the particle is not entirely free but under the influence of a potential (the potential V described above), the energy of the particle given above is not the same thing as $E = \frac{p^2}{2m}$, where p is the momentum of the particle, and thus the wavenumber k above actually describes the energy states of the particle, not the momentum states (i.e. it turns out that the momentum of the particle is not given by $p = \hbar k$). In this sense, it is quite dangerous to call the number k a wavenumber, since it is not related to momentum like "wavenumber" usually is. The rationale for calling k the wavenumber is that it enumerates the number of crests that the wavefunction has inside the box, and in this sense it is a wavenumber. This discrepancy can be seen more clearly below, when we find out that the energy spectrum of the particle is discrete (only discrete values of energy are allowed) but the momentum spectrum is continuous (momentum can vary continuously) and, in particular, the relation $E = \frac{p^2}{2m}$ for the energy and momentum of the particle does not hold. As said above, the reason this relation between energy and momentum does not hold is that the particle is not free: there is a potential V in the system, and the energy of the particle is $E = T + V$, where T is the kinetic and V the potential energy.
Initial wavefunctions for the first four states in a one-dimensional particle in a box
The size (or amplitude) of the wavefunction at a given position is related to the probability of finding a particle there by $P(x,t) = |\psi(x,t)|^2$. The wavefunction must therefore vanish everywhere beyond the edges of the box.[1][4] Also, the amplitude of the wavefunction may not "jump" abruptly from one point to the next.[1] These two conditions are only satisfied by wavefunctions with the form
$\psi_n(x,t) = A\sin\!\left(k_n\left(x - x_c + \tfrac{L}{2}\right)\right)e^{-i\omega_n t},$
where[5]
$k_n = \frac{n\pi}{L},$
where n is a positive integer (1, 2, 3, 4, ...). For a shifted box (xc = L/2), the solution is particularly simple, $\psi_n(x,t) = A\sin(k_n x)\,e^{-i\omega_n t}$. The simplest solutions, $k_n = 0$ or $A = 0$, both yield the trivial wavefunction $\psi = 0$, which describes a particle that does not exist anywhere in the system.[6] Negative values of $n$ are neglected, since they give wavefunctions identical to the positive-$n$ solutions except for a physically unimportant sign change.[6] Here one sees that only a discrete set of energy values and wavenumbers k are allowed for the particle. Usually in quantum mechanics it is also demanded that the derivative of the wavefunction, in addition to the wavefunction itself, be continuous; here this demand would lead to the only solution being the constant zero function, which is not what we desire, so we give up this demand (as this system with infinite potential can be regarded as a nonphysical abstract limiting case, we can treat it as such and "bend the rules"). Note that giving up this demand means that the wavefunction is not a differentiable function at the boundary of the box, and thus it can be said that the wavefunction does not solve the Schrödinger equation at the boundary points $x = x_c - \frac{L}{2}$ and $x = x_c + \frac{L}{2}$ (but does solve it everywhere else).
Finally, the unknown constant $A$ may be found by normalizing the wavefunction so that the total probability density of finding the particle in the system is 1. It follows that
$|A| = \sqrt{\frac{2}{L}}.$
Thus, A may be any complex number with absolute value $\sqrt{2/L}$; these different values of A yield the same physical state, so $A = \sqrt{2/L}$ can be selected to simplify.
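As a quick numerical check of these formulas, the following sketch (the parameters are illustrative choices, not from the cited references) evaluates $E_n = \hbar^2 k_n^2 / 2m$ and confirms the normalization for a shifted box of width 1 nm:

```python
import numpy as np

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg

def energy(n, L, m=M_E):
    """E_n = (n*pi*hbar/L)^2 / (2m) for the n-th level of a box of width L."""
    return (n * np.pi * HBAR / L) ** 2 / (2 * m)

def psi(n, x, L):
    """Normalized eigenfunction for a box spanning 0 <= x <= L (x_c = L/2)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

L = 1e-9                        # a 1 nm box
x = np.linspace(0, L, 100_001)
for n in (1, 2, 3):
    norm = np.trapz(psi(n, x, L) ** 2, x)   # should come out ~1
    print(f"n={n}: E_n = {energy(n, L):.3e} J, integral of |psi|^2 dx = {norm:.6f}")
```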
It is expected that the eigenvalues, i.e., the energy $E_n$ of the box, should be the same regardless of its position in space, but $\psi_n(x,t)$ changes. Notice that $k_n\left(x_c - \tfrac{L}{2}\right)$ represents a phase shift in the wave function. This phase shift has no effect when solving the Schrödinger equation, and therefore does not affect the eigenvalue.
Momentum wave function
The momentum wavefunction is proportional to the Fourier transform of the position wavefunction. With $k = p/\hbar$ (note that the parameter k describing the momentum wavefunction below is not exactly the special $k_n$ above, linked to the energy eigenvalues), the momentum wavefunction is given by a sinc-type expression,
where sinc is the cardinal sine function, sinc(x) = sin(x)/x. For the centered box (xc = 0), the solution is real and particularly simple, since the phase factor on the right reduces to unity. (With care, it can be written as an even function of p.)
It can be seen that the momentum spectrum in this wave packet is continuous, and one may conclude that for the energy state described by the wavenumber kn, the momentum can, when measured, also attain other values beyond $\pm\hbar k_n$.
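This continuous momentum spectrum can be illustrated numerically. The rough sketch below (in natural units, not taken from the cited texts) Fourier-transforms a box eigenfunction directly and measures how much probability lies outside $\pm\hbar k_n$:

```python
import numpy as np

HBAR, L, n = 1.0, 1.0, 2       # natural units; second box level
x = np.linspace(0, L, 4001)
psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# phi(p) ~ (2*pi*hbar)^(-1/2) * integral of psi(x) exp(-i p x / hbar) dx
p = np.linspace(-60, 60, 1201)
kernel = np.exp(-1j * np.outer(p, x) / HBAR)
phi = np.trapz(kernel * psi, x, axis=1) / np.sqrt(2 * np.pi * HBAR)
prob = np.abs(phi) ** 2

k_n = n * np.pi / L
inside = np.abs(p) <= HBAR * k_n
frac_outside = 1 - np.trapz(prob[inside], p[inside]) / np.trapz(prob, p)
print(f"hbar*k_n = {HBAR * k_n:.3f}; probability with |p| > hbar*k_n = {frac_outside:.3f}")
```

The printed fraction is strictly positive: a momentum measurement on an energy eigenstate can return values other than $\pm\hbar k_n$, exactly as stated.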
Hence, it also appears that, since the energy is $E_n = \frac{\hbar^2 k_n^2}{2m}$ for the nth eigenstate, the relation $E = \frac{p^2}{2m}$ does not strictly hold for the measured momentum p; the energy eigenstate is not a momentum eigenstate, and, in fact, not even a superposition of two momentum eigenstates, as one might be tempted to imagine from equation (1) above: peculiarly, it has no well-defined momentum before measurement!
Position and momentum probability distributions
In classical physics, the particle can be detected anywhere in the box with equal probability. In quantum mechanics, however, the probability density for finding a particle at a given position is derived from the wavefunction as $P(x) = |\psi(x)|^2$. For the particle in a box, the probability density for finding the particle at a given position depends upon its state, and is given by
$P_n(x) = \frac{2}{L}\sin^2\!\left(k_n\left(x - x_c + \tfrac{L}{2}\right)\right).$
Thus, for any value of n greater than one, there are regions within the box for which $P_n(x) = 0$, indicating that spatial nodes exist at which the particle cannot be found.
In quantum mechanics, the average, or expectation value of the position of a particle is given by
$\langle x \rangle = \int_{-\infty}^{\infty} x\,|\psi(x)|^2\,dx.$
For the steady state particle in a box, it can be shown that the average position is always $x_c$, regardless of the state of the particle. For a superposition of states n and m, the expectation value of the position will change based on the cross term, which oscillates in time as $\cos((\omega_m - \omega_n)t)$.
The variance in the position is a measure of the uncertainty in position of the particle:
$\mathrm{Var}(x) = \frac{L^2}{12}\left(1 - \frac{6}{n^2\pi^2}\right).$
The probability density for finding a particle with a given momentum is derived from the wavefunction as $P(p) = |\phi(p)|^2$. As with position, the probability density for finding the particle at a given momentum depends upon its state, and is given by the squared modulus of the momentum wavefunction above,
where, again, $k = p/\hbar$. The expectation value for the momentum is then calculated to be zero, and the variance in the momentum is calculated to be:
$\mathrm{Var}(p) = \left(\frac{\hbar n \pi}{L}\right)^2.$
The uncertainties in position and momentum ($\Delta x$ and $\Delta p$) are defined as being equal to the square root of their respective variances, so that:
$\Delta x\,\Delta p = \frac{\hbar}{2}\sqrt{\frac{n^2\pi^2}{3} - 2}.$
This product increases with increasing n, having a minimum value for n = 1. The value of this product for n = 1 is about equal to $0.568\,\hbar$, which obeys the Heisenberg uncertainty principle, stating that the product will be greater than or equal to $\hbar/2$.
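The following sketch (an independent numerical check, with $\hbar = 1$) reproduces the uncertainty product both from the integrals and from the closed form above:

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 200_001)
for n in (1, 2, 3):
    psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
    mean_x = np.trapz(x * psi**2, x)                     # = L/2
    dx = np.sqrt(np.trapz((x - mean_x)**2 * psi**2, x))  # sqrt of Var(x)
    dp = n * np.pi / L                                   # sqrt of Var(p), hbar = 1
    closed_form = 0.5 * np.sqrt(n**2 * np.pi**2 / 3 - 2)
    print(f"n={n}: numeric dx*dp = {dx * dp:.5f}, closed form = {closed_form:.5f}")
# n=1 gives ~0.56786, comfortably above the Heisenberg bound of 0.5.
```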
Another measure of uncertainty in position is the information entropy of the probability distribution Hx:[7]
$H_x = -\int_{-\infty}^{\infty} |\psi(x)|^2 \ln\!\left(x_0\,|\psi(x)|^2\right) dx,$
where x0 is an arbitrary reference length.
Another measure of uncertainty in momentum is the information entropy of the probability distribution Hp:
$H_p = -\int_{-\infty}^{\infty} |\phi(p)|^2 \ln\!\left(p_0\,|\phi(p)|^2\right) dp,$
where p0 is an arbitrary reference momentum. The integral is difficult to express analytically for general n, but it approaches a finite limiting value, involving Euler's constant γ, as n approaches infinity.[7][8] The quantum mechanical entropic uncertainty principle states that for $x_0\,p_0 = \hbar$,
$H_x + H_p \ge \ln(e\,\pi) \approx 2.14473\ldots$
In the limit $n \to \infty$, the sum of the position and momentum entropies attains its limiting value, which satisfies the quantum entropic uncertainty principle.
Energy levels
The energy of a particle in a box (black circles) and a free particle (grey line) both depend upon wavenumber in the same way. However, the particle in a box may only have certain, discrete energy levels.
The energies which correspond with each of the permitted wavenumbers may be written as[5]
$E_n = \frac{n^2\pi^2\hbar^2}{2mL^2} = \frac{n^2 h^2}{8mL^2}.$
The energy levels increase with $n^2$, meaning that high energy levels are separated from each other by a greater amount than low energy levels are. The lowest possible energy for the particle (its zero-point energy) is found in state 1 (n = 1), which is given by[9]
$E_1 = \frac{\pi^2\hbar^2}{2mL^2} = \frac{h^2}{8mL^2}.$
The particle, therefore, always has a positive energy. This contrasts with classical systems, where the particle can have zero energy by resting motionlessly. This can be explained in terms of the uncertainty principle, which states that the product of the uncertainties in the position and momentum of a particle is limited by
$\Delta x\,\Delta p \ge \frac{\hbar}{2}.$
It can be shown that the uncertainty in the position of the particle is proportional to the width of the box.[10] Thus, the uncertainty in momentum is roughly inversely proportional to the width of the box.[9] The kinetic energy of a particle is given by $E = \frac{p^2}{2m}$, and hence the minimum kinetic energy of the particle in a box is inversely proportional to the mass and the square of the well width, in qualitative agreement with the calculation above.[9]
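A short numerical illustration of this inverse-square scaling (the box widths below are arbitrary choices for the sketch):

```python
H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def ground_state_energy(L, m=M_E):
    """Zero-point energy E_1 = h^2 / (8 m L^2) of an infinite well of width L."""
    return H**2 / (8 * m * L**2)

for L_nm in (0.5, 1.0, 2.0, 10.0):
    E1 = ground_state_energy(L_nm * 1e-9)
    print(f"L = {L_nm:4.1f} nm  ->  E_1 = {E1 / EV:.4f} eV")
# Doubling the width divides E_1 by four: the inverse-square dependence in the text.
```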
Higher-dimensional boxes
(Hyper)rectangular walls
The wavefunction of a 2D well with nx=4 and ny=4
If a particle is trapped in a two-dimensional box, it may freely move in the $x$- and $y$-directions, between barriers separated by lengths $L_x$ and $L_y$ respectively. For a centered box, the position wave function may be written including the length of the box as $\psi_n(x,t,L)$. Using a similar approach to that of the one-dimensional box, it can be shown that the wavefunctions and energies for a centered box are given respectively by
$\psi_{n_x,n_y}(x,y,t) = \psi_{n_x}(x,t,L_x)\,\psi_{n_y}(y,t,L_y), \qquad E_{n_x,n_y} = \frac{\hbar^2 k_{n_x,n_y}^2}{2m},$
where the two-dimensional wavevector is given by
$\mathbf{k}_{n_x,n_y} = k_{n_x}\hat{x} + k_{n_y}\hat{y} = \frac{n_x\pi}{L_x}\hat{x} + \frac{n_y\pi}{L_y}\hat{y}.$
For a three dimensional box, the solutions are
$\psi_{n_x,n_y,n_z}(x,y,z,t) = \psi_{n_x}(x,t,L_x)\,\psi_{n_y}(y,t,L_y)\,\psi_{n_z}(z,t,L_z),$
where the three-dimensional wavevector is given by:
$\mathbf{k}_{n_x,n_y,n_z} = \frac{n_x\pi}{L_x}\hat{x} + \frac{n_y\pi}{L_y}\hat{y} + \frac{n_z\pi}{L_z}\hat{z}.$
In general, for an n-dimensional box, the solutions are products of the one-dimensional solutions along each axis, with energies that add.
The 1-dimensional momentum wave functions may likewise be represented by $\phi_n(p,t,L)$, and the momentum wave function for an n-dimensional centered box is then the product of the one-dimensional momentum wave functions.
An interesting feature of the above solutions is that when two or more of the lengths are the same (e.g. $L_x = L_y$), there are multiple wavefunctions corresponding to the same total energy. For example, the wavefunction with $n_x = 2, n_y = 1$ has the same energy as the wavefunction with $n_x = 1, n_y = 2$. This situation is called degeneracy, and for the case where exactly two degenerate wavefunctions have the same energy that energy level is said to be doubly degenerate. Degeneracy results from symmetry in the system. For the above case two of the lengths are equal, so the system is symmetric with respect to a 90° rotation.
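The degeneracy pattern is easy to enumerate; the sketch below tabulates the dimensionless energies $n_x^2 + n_y^2$ for a square box:

```python
from collections import defaultdict

# Dimensionless energies E(nx, ny) = nx^2 + ny^2 (in units of h^2 / (8 m L^2))
# for a square box with Lx = Ly = L.
levels = defaultdict(list)
for nx in range(1, 5):
    for ny in range(1, 5):
        levels[nx**2 + ny**2].append((nx, ny))

for E in sorted(levels):
    states = levels[E]
    tag = f"{len(states)}-fold degenerate" if len(states) > 1 else "non-degenerate"
    print(f"E = {E:2d}: {states} ({tag})")
# E = 5 collects (1, 2) and (2, 1): the doubly degenerate pair discussed above.
```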
More complicated wall shapes
The wavefunction for a quantum-mechanical particle in a box whose walls have arbitrary shape is given by the Helmholtz equation subject to the boundary condition that the wavefunction vanishes at the walls. These systems are studied in the field of quantum chaos for wall shapes whose corresponding dynamical billiard tables are non-integrable.
Because of its mathematical simplicity, the particle in a box model is used to find approximate solutions for more complex physical systems in which a particle is trapped in a narrow region of low electric potential between two high potential barriers. These quantum well systems are particularly important in optoelectronics, and are used in devices such as the quantum well laser, the quantum well infrared photodetector and the quantum-confined Stark effect modulator. It is also used to model a lattice in the Kronig–Penney model and for a finite metal with the free electron approximation.
Conjugated polyenes
β-carotene is a conjugated polyene
Conjugated polyene systems can be modeled using particle in a box.[11] The conjugated system of electrons can be modeled as a one-dimensional box with length equal to the total bond distance from one terminus of the polyene to the other. In this case each pair of electrons in each π bond corresponds to one energy level. The energy difference between two energy levels, nf and ni, is:
$\Delta E = \frac{(n_f^2 - n_i^2)\,h^2}{8mL^2}.$
The difference between the ground state energy, n, and the first excited state, n+1, corresponds to the energy required to excite the system. This energy has a specific wavelength, and therefore color of light, related by
$\lambda = \frac{hc}{\Delta E}.$
A common example of this phenomenon is in β-carotene.[11] β-carotene (C40H56)[12] is a conjugated polyene with an orange color and a molecular length of approximately 3.8 nm (though its chain length is only approximately 2.4 nm).[13] Due to β-carotene's high level of conjugation, electrons are dispersed throughout the length of the molecule, allowing one to model it as a one-dimensional particle in a box. β-carotene has 11 carbon-carbon double bonds in conjugation;[12] each of those double bonds contains two π-electrons, therefore β-carotene has 22 π-electrons. With two electrons per energy level, β-carotene can be treated as a particle in a box at energy level n=11.[13] Therefore, the minimum energy needed to excite an electron to the next energy level, n=12, can be calculated as follows[13] (recalling that the mass of an electron is 9.109 × 10−31 kg[14]).
Using the previous relation of wavelength to energy, and recalling both Planck's constant h and the speed of light c, the corresponding wavelength is $\lambda = hc/\Delta E$; a numerical sketch of this arithmetic follows.
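A minimal numerical sketch of the two-step calculation just described, assuming the ~3.8 nm molecular length quoted above as the box length (the shorter ~2.4 nm conjugated-chain length would give a correspondingly shorter wavelength):

```python
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
M_E = 9.1093837015e-31   # electron mass, kg

L = 3.8e-9               # assumed box length: the ~3.8 nm molecular length above
n_i, n_f = 11, 12        # highest filled level -> first empty level (22 pi electrons)

dE = (n_f**2 - n_i**2) * H**2 / (8 * M_E * L**2)
wavelength = H * C / dE
print(f"dE = {dE:.3e} J, lambda = {wavelength * 1e9:.0f} nm")
# ~2100 nm with this box length: infrared, as the text concludes.
```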
This indicates that β-carotene primarily absorbs light in the infrared spectrum; it would therefore appear white to a human eye. However, the observed wavelength is 450 nm,[15] indicating that the particle in a box is not a perfect model for this system.
Quantum well laser
The particle in a box model can be applied to quantum well lasers, which are laser diodes consisting of one semiconductor "well" material sandwiched between two other semiconductor layers of different material. Because the layers of this sandwich are very thin (the middle layer is typically about 100 Å thick), quantum confinement effects can be observed.[16] The idea that quantum effects could be harnessed to create better laser diodes originated in the 1970s. The quantum well laser was patented in 1976 by R. Dingle and C. H. Henry.[17]
Specifically, the quantum well's behavior can be represented by the particle in a finite well model. Two boundary conditions must be selected. The first is that the wave function must be continuous. Often, the second boundary condition is chosen to be that the derivative of the wave function must be continuous across the boundary, but in the case of the quantum well the masses are different on either side of the boundary. Instead, the second boundary condition is chosen to conserve particle flux: $\frac{1}{m^*}\frac{\partial\psi}{\partial x}$ must be continuous across the boundary, which is consistent with experiment. The finite well particle in a box problem must be solved numerically, resulting in wave functions that are sine functions inside the quantum well and exponentially decaying functions in the barriers.[18] This quantization of the energy levels of the electrons allows a quantum well laser to emit light more efficiently than conventional semiconductor lasers.
Due to their small size, quantum dots do not showcase the bulk properties of the specified semiconductor but rather show quantised energy states.[19] This effect is known as quantum confinement and has led to numerous applications of quantum dots such as the quantum well laser.[19]
Researchers at Princeton University have recently built a quantum well laser which is no bigger than a grain of rice.[20] The laser is powered by a single electron which passes through two quantum dots; a double quantum dot. The electron moves from a state of higher energy to a state of lower energy while emitting photons in the microwave region. These photons bounce off mirrors to create a beam of light: the laser.[20]
The quantum well laser is heavily based on the interaction between light and electrons. This relationship is a key component in quantum mechanical theories, including the de Broglie wavelength and the particle in a box. The double quantum dot allows scientists to gain full control over the movement of an electron, which consequently results in the production of a laser beam.[20]
Quantum dots
Quantum dots are extremely small semiconductors (on the scale of nanometers).[21] They display quantum confinement in that the electrons cannot escape the “dot”, thus allowing particle-in-a-box approximations to be applied.[22] Their behavior can be described by three-dimensional particle-in-a-box energy quantization equations.[22]
The energy gap of a quantum dot is the energy gap between its valence and conduction bands. This energy gap is equal to the band gap of the bulk material plus the energy equation derived from particle-in-a-box, which gives the energy for electrons and holes.[22] This can be seen in the following equation, where $m_e^*$ and $m_h^*$ are the effective masses of the electron and hole, $a$ is the radius of the dot, and $h$ is Planck's constant:[22]
$E = E_{\text{band gap}} + \frac{h^2}{8a^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right).$
Hence, the energy gap of the quantum dot is inversely proportional to the square of the “length of the box,” i.e. the radius of the quantum dot.[22]
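A rough numerical sketch of this size dependence, using the confinement formula above with illustrative CdSe-like parameters (the bulk gap and effective masses below are assumed, textbook-order values, not authoritative constants, and the simple model omits the Coulomb correction):

```python
import numpy as np

HBAR = 1.054571817e-34
H = 6.62607015e-34
C = 2.99792458e8
M_E = 9.1093837015e-31
EV = 1.602176634e-19

# Illustrative CdSe-like parameters (assumed values for the sketch):
E_BULK = 1.74 * EV       # bulk band gap
ME_EFF = 0.13 * M_E      # electron effective mass
MH_EFF = 0.45 * M_E      # hole effective mass

def dot_gap(a):
    """Bulk gap plus the confinement term; hbar^2 pi^2 / (2 a^2) == h^2 / (8 a^2)."""
    confinement = (HBAR**2 * np.pi**2 / (2 * a**2)) * (1 / ME_EFF + 1 / MH_EFF)
    return E_BULK + confinement

for r_nm in (1.0, 2.0, 3.0):
    E = dot_gap(r_nm * 1e-9)
    print(f"a = {r_nm} nm: gap = {E / EV:.2f} eV, lambda = {H * C / E * 1e9:.0f} nm")
# Smaller dot -> larger gap -> shorter emitted wavelength, the trend stated above;
# the crude model overestimates the shift for the very smallest dots.
```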
Manipulation of the band gap allows for the absorption and emission of specific wavelengths of light, as energy is inversely proportional to wavelength.[21] The smaller the quantum dot, the larger the band gap and thus the shorter the wavelength absorbed.[21][23]
Different semiconducting materials are used to synthesize quantum dots of different sizes and therefore emit different wavelengths of light.[23] Materials that normally emit light in the visible region are often used and their sizes are fine-tuned so that certain colors are emitted.[21] Typical substances used to synthesize quantum dots are cadmium (Cd) and selenium (Se).[21][23] For example, when the electrons of two nanometer CdSe quantum dots relax after excitation, blue light is emitted. Similarly, red light is emitted in four nanometer CdSe quantum dots.[24][21]
Quantum dots have a variety of functions including but not limited to fluorescent dyes, transistors, LEDs, solar cells, and medical imaging via optical probes.[21][22]
One function of quantum dots is their use in lymph node mapping, which is feasible due to their unique ability to emit light in the near infrared (NIR) region. Lymph node mapping allows surgeons to track if and where cancerous cells exist.[25]
Quantum dots are useful for these functions due to their emission of brighter light, excitation by a wide variety of wavelengths, and higher resistance to light than other substances.[25][21]
Relativistic Effects
The probability density does not go to zero at the nodes if relativistic effects are taken into account via the Dirac equation.[26]
See also
1. ^ a b c d e Davies, p.4
2. ^ Actually, any constant, finite potential can be specified within the box. This merely shifts the energies of the states by that constant amount.
3. ^ Davies, p. 1
4. ^ a b Bransden and Joachain, p. 157
5. ^ a b Davies p. 5
6. ^ a b Bransden and Joachain, p.158
7. ^ a b Majernik, Vladimir; Richterek, Lukas (1997-12-01). "Entropic uncertainty relations for the infinite well". J. Phys A. 30 (4). Bibcode:1997JPhA...30L..49M. doi:10.1088/0305-4470/30/4/002. Retrieved 11 February 2016.
8. ^ Majernik, Vladimir; Majernikova, Eva (1998-12-01). "The momentum entropy of the infinite potential well". J. Phys A. 32 (11). Bibcode:1999JPhA...32.2207M. doi:10.1088/0305-4470/32/11/013. Retrieved 11 February 2016.
9. ^ a b c Bransden and Joachain, p. 159
10. ^ Davies, p. 15
11. ^ a b Todd Wimpfheimer, A Particle in a Box Laboratory Experiment Using Everyday Compounds, Journal of Laboratory Chemical Education, Vol. 3 No. 2, 2015, pp. 19-21. doi:10.5923/j.jlce.20150302.01.
12. ^ a b Pubchem. "beta-carotene | C40H56 - PubChem". Retrieved 2016-11-10.
13. ^ a b c Sathish, R. K.; Sidharthan, P. V.; Udayanandan, K. M. "Particle in a Box- A Treasure Island for Undergraduates".
14. ^ P.J. Mohr, B.N. Taylor, and D.B. Newell, "The 2014 CODATA Recommended Values of the Fundamental Physical Constants". This database was developed by J. Baker, M. Douma, and S. Kotochigova. Available: [1]. National Institute of Standards and Technology, Gaithersburg, MD 20899.
15. ^ β-Carotene (accessed Nov 8, 2016).
16. ^ Zory, Peter (1993). Quantum Well Lasers. San Diego: Academic Press Unlimited.
17. ^ U.S. Patent #3,982,207, issued September 21, 1976, Inventors R. Dingle and C. H. Henry ,"Quantum Effects in Heterostructure Lasers", filed March 7, 1975.
18. ^ Miller, David (1995). Burstein, Elias; Weisbuch, Claude, eds. Confined Electrons and Photons: New Physics and Applications. New York: Plenum Press. pp. 675–702.
19. ^ a b Miessler, G. L. (2013). Inorganic chemistry (5 ed.). Boston: Pearson. pp. 235–236. ISBN 978-0321811059.
20. ^ a b c Zandonella, Catherine. "Rice-sized laser, powered one electron at a time, bodes well for quantum computing". Princeton University. Princeton University. Retrieved 8 November 2016.
21. ^ a b c d e f g h Rice, C.V.; Griffin, G.A. (2008). "Simple Syntheses of CdSe Quantum Dots". Journal of Chemical Education. 85 (6): 842. Bibcode:2008JChEd..85..842R. doi:10.1021/ed085p842. Retrieved 5 November 2016.
22. ^ a b c d e f "Quantum Dots : a True "Particle in a Box" System". PhysicsOpenLab. 20 November 2015. Retrieved 5 November 2016.
23. ^ a b c Overney, René M. "Quantum Confinement" (PDF). University of Washington. Retrieved 5 November 2016.
24. ^ Zahn, Dietrich R.T. "Surface and Interface Properties of Semiconductor Quantum Dots by Raman Spectroscopy" (PDF). Technische Universität Chemnitz. Retrieved 5 November 2016.
25. ^ a b Bentolila, Laurent A.; Ebenstein, Yuval (2009). "Quantum Dots for In Vivo Small-Animal Imaging". Journal of Nuclear Medicine. 50 (4): 493–496. doi:10.2967/jnumed.108.053561. Retrieved 5 November 2016.
26. ^ Alberto, P; Fiolhais, C; Gil, V M S (1996). "Relativistic particle in a box" (PDF). European Journal of Physics. 17: 19–24. Bibcode:1996EJPh...17...19A. doi:10.1088/0143-0807/17/1/004.
• Bransden, B. H.; Joachain, C. J. (2000). Quantum mechanics (2nd ed.). Essex: Pearson Education. ISBN 0-582-35691-1.
• Davies, John H. (2006). The Physics of Low-Dimensional Semiconductors: An Introduction (6th reprint ed.). Cambridge University Press. ISBN 0-521-48491-X.
This page is based on the copyrighted Wikipedia article "Particle in a box"; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA |
a89a416354f157d9 | Noether's theorem
First page of Emmy Noether's article "Invariante Variationsprobleme" (1918), where she proved her theorem.
Noether's (first)[1] theorem states that every differentiable symmetry of the action of a physical system has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918,[2] although a special case was proven by E. Cosserat & F. Cosserat in 1909.[3] The action of a physical system is the integral over time of a Lagrangian function (which may or may not be an integral over space of a Lagrangian density function), from which the system's behavior can be determined by the principle of least action.
Noether's theorem is used in theoretical physics and the calculus of variations. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g. systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Basic illustrations and background
As an illustration, if a physical system behaves the same regardless of how it is oriented in space, its Lagrangian is symmetric under continuous rotations: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that are symmetric.
As another example, if a physical process exhibits the same outcomes regardless of place or time, then its Lagrangian is symmetric under continuous translations in space and time respectively: by Noether's theorem, these symmetries account for the conservation laws of linear momentum and energy within this system, respectively.
As a final example, if the behavior of a physical system does not change upon spatial or temporal reflection, then its Lagrangian has reflection symmetry and time reversal symmetry respectively: Noether's theorem says that these symmetries result in the conservation laws of parity and entropy,[citation needed] [discuss] respectively.
Noether's theorem is important, both because of the insight it gives into conservation laws, and also as a practical calculational tool. It allows investigators to determine the conserved quantities (invariants) from the observed symmetries of a physical system. Conversely, it allows researchers to consider whole classes of hypothetical Lagrangians with given invariants, to describe a physical system. As an illustration, suppose that a physical theory is proposed which conserves a quantity X. A researcher can calculate the types of Lagrangians that conserve X through a continuous symmetry. Due to Noether's theorem, the properties of these Lagrangians provide further criteria to understand the implications and judge the fitness of the new theory.
There are numerous versions of Noether's theorem, with varying degrees of generality. The original version applied only to ordinary differential equations (used for describing distinct particles) and not partial differential equations (used for describing fields). The original versions also assume that the Lagrangian depends only upon the first derivative, while later versions generalize the theorem to Lagrangians depending on the nth derivative.[disputed ] There are natural quantum counterparts of this theorem, expressed in the Ward–Takahashi identities. Generalizations of Noether's theorem to superspaces also exist.[citation needed]
Informal statement of the theorem
All fine technical points aside, Noether's theorem can be stated informally:
If a system has a continuous symmetry property, then there are corresponding quantities whose values are conserved in time.[4]
A more sophisticated version of the theorem involving fields states that:
To every differentiable symmetry generated by local actions, there corresponds a conserved current.
The word "symmetry" in the above statement refers more precisely to the covariance of the form that a physical law takes with respect to a one-dimensional Lie group of transformations satisfying certain technical criteria. The conservation law of a physical quantity is usually expressed as a continuity equation.
The formal proof of the theorem utilizes the condition of invariance to derive an expression for a current associated with a conserved physical quantity. In modern (since ca. 1980[5]) terminology, the conserved quantity is called the Noether charge, while the flow carrying that charge is called the Noether current. The Noether current is defined up to a solenoidal (divergenceless) vector field.
In the context of gravitation, Felix Klein's statement of Noether's theorem for action I stipulates for the invariants:[6]
If an integral I is invariant under a continuous group Gρ with ρ parameters, then ρ linearly independent combinations of the Lagrangian expressions are divergences.
Historical context
A conservation law states that some quantity X in the mathematical description of a system's evolution remains constant throughout its motion — it is an invariant. Mathematically, the rate of change of X (its derivative with respect to time) is zero:
$\frac{dX}{dt} = 0.$
Such quantities are said to be conserved; they are often called constants of motion (although motion per se need not be involved, just evolution in time). For example, if the energy of a system is conserved, its energy is invariant at all times, which imposes a constraint on the system's motion and may help in solving for it. Aside from insights that such constants of motion give into the nature of a system, they are a useful calculational tool; for example, an approximate solution can be corrected by finding the nearest state that satisfies the suitable conservation laws.
The earliest constants of motion discovered were momentum and energy, which were proposed in the 17th century by René Descartes and Gottfried Leibniz on the basis of collision experiments, and refined by subsequent researchers. Isaac Newton was the first to enunciate the conservation of momentum in its modern form, and showed that it was a consequence of Newton's third law. According to general relativity, the conservation laws of linear momentum, energy and angular momentum are only exactly true globally when expressed in terms of the sum of the stress–energy tensor (non-gravitational stress–energy) and the Landau–Lifshitz stress–energy–momentum pseudotensor (gravitational stress–energy). The local conservation of non-gravitational linear momentum and energy in a free-falling reference frame is expressed by the vanishing of the covariant divergence of the stress–energy tensor. Another important conserved quantity, discovered in studies of the celestial mechanics of astronomical bodies, is the Laplace–Runge–Lenz vector.
In the late 18th and early 19th centuries, physicists developed more systematic methods for discovering invariants. A major advance came in 1788 with the development of Lagrangian mechanics, which is related to the principle of least action. In this approach, the state of the system can be described by any type of generalized coordinates q; the laws of motion need not be expressed in a Cartesian coordinate system, as was customary in Newtonian mechanics. The action is defined as the time integral I of a function known as the Lagrangian L,
$I = \int L(\mathbf{q}, \dot{\mathbf{q}}, t)\,dt,$
where the dot over q signifies the rate of change of the coordinates q,
$\dot{\mathbf{q}} = \frac{d\mathbf{q}}{dt}.$
Hamilton's principle states that the physical path q(t)—the one actually taken by the system—is a path for which infinitesimal variations in that path cause no change in I, at least up to first order. This principle results in the Euler–Lagrange equations,
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{\mathbf{q}}}\right) = \frac{\partial L}{\partial \mathbf{q}}.$
Thus, if one of the coordinates, say qk, does not appear in the Lagrangian, the right-hand side of the equation is zero, and the left-hand side requires that
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_k}\right) = \frac{dp_k}{dt} = 0,$
where the momentum
$p_k = \frac{\partial L}{\partial \dot{q}_k}$
is conserved throughout the motion (on the physical path).
Thus, the absence of the ignorable coordinate qk from the Lagrangian implies that the Lagrangian is unaffected by changes or transformations of qk; the Lagrangian is invariant, and is said to exhibit a symmetry under such transformations. This is the seed idea generalized in Noether's theorem.
Several alternative methods for finding conserved quantities were developed in the 19th century, especially by William Rowan Hamilton. For example, he developed a theory of canonical transformations which allowed changing coordinates so that some coordinates disappeared from the Lagrangian, as above, resulting in conserved canonical momenta. Another approach, and perhaps the most efficient for finding conserved quantities, is the Hamilton–Jacobi equation.
Mathematical expression
Simple form using perturbations
The essence of Noether's theorem is the generalization of the case of ignorable coordinates just outlined.
One can assume that the Lagrangian L defined above is invariant under small perturbations (warpings) of the time variable t and the generalized coordinates q. One may write
$t \rightarrow t' = t + \delta t, \qquad \mathbf{q} \rightarrow \mathbf{q}' = \mathbf{q} + \delta\mathbf{q},$
where the perturbations δt and δq are both small, but variable. For generality, assume there are (say) N such symmetry transformations of the action, i.e. transformations leaving the action unchanged, labelled by an index r = 1, 2, 3, …, N.
Then the resultant perturbation can be written as a linear sum of the individual types of perturbations,
$\delta t = \sum_r \varepsilon_r T_r, \qquad \delta\mathbf{q} = \sum_r \varepsilon_r \mathbf{Q}_r[\mathbf{q}, t],$
where εr are infinitesimal parameter coefficients corresponding to each generator: $T_r$ of time evolution, and $\mathbf{Q}_r$ of the generalized coordinates.
For translations, Qr is a constant with units of length; for rotations, it is an expression linear in the components of q, and the parameters make up an angle.
Using these definitions, Noether showed that the N quantities
$I_r = \left(\frac{\partial L}{\partial \dot{\mathbf{q}}}\cdot\dot{\mathbf{q}} - L\right)T_r - \frac{\partial L}{\partial \dot{\mathbf{q}}}\cdot\mathbf{Q}_r$
(which have the dimensions of [energy]·[time] + [momentum]·[length] = [action]) are conserved (constants of motion).
Time invariance
For illustration, consider a Lagrangian that does not depend on time, i.e., that is invariant (symmetric) under changes t → t + δt, without any change in the coordinates q. In this case, N = 1, T = 1 and Q = 0; the corresponding conserved quantity is the total energy H,[7]
$H = \frac{\partial L}{\partial \dot{\mathbf{q}}}\cdot\dot{\mathbf{q}} - L.$
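As a quick sanity check of this statement, the following SymPy sketch (assuming a recent SymPy) builds $L = \tfrac12 m\dot{x}^2 - V(x)$, forms $H = \dot{x}\,\partial L/\partial\dot{x} - L$, and verifies that $dH/dt$ vanishes once the Euler–Lagrange equation is substituted:

```python
import sympy as sp

t, m = sp.symbols('t'), sp.symbols('m', positive=True)
x, V = sp.Function('x'), sp.Function('V')

L = sp.Rational(1, 2) * m * x(t).diff(t)**2 - V(x(t))

# Conserved quantity for time-translation symmetry: H = q' * dL/dq' - L
H = x(t).diff(t) * sp.diff(L, x(t).diff(t)) - L

# Euler-Lagrange equation d/dt (dL/dq') - dL/dq = 0, solved for x''(t)
el = sp.diff(sp.diff(L, x(t).diff(t)), t) - sp.diff(L, x(t))
xdd = sp.solve(el, x(t).diff(t, 2))[0]

dHdt_on_shell = sp.simplify(sp.diff(H, t).subs(x(t).diff(t, 2), xdd))
print(dHdt_on_shell)   # prints 0: energy is conserved on shell
```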
Translational invariance
Consider a Lagrangian which does not depend on an ("ignorable", as above) coordinate qk; so it is invariant (symmetric) under changes qk → qk + δqk. In that case, N = 1, T = 0, and Qk = 1; the conserved quantity is the corresponding momentum pk,[8]
$p_k = \frac{\partial L}{\partial \dot{q}_k}.$
In special and general relativity, these apparently separate conservation laws are aspects of a single conservation law, that of the stress–energy tensor,[9] that is derived in the next section.
Rotational invariance
The conservation of the angular momentum L = r × p is analogous to its linear momentum counterpart.[10] It is assumed that the symmetry of the Lagrangian is rotational, i.e., that the Lagrangian does not depend on the absolute orientation of the physical system in space. For concreteness, assume that the Lagrangian does not change under small rotations of an angle δθ about an axis n; such a rotation transforms the Cartesian coordinates by the equation
$\mathbf{r} \rightarrow \mathbf{r} + \delta\theta\,\hat{\mathbf{n}}\times\mathbf{r}.$
Since time is not being transformed, T = 0. Taking δθ as the ε parameter and the Cartesian coordinates r as the generalized coordinates q, the corresponding Q variables are given by
$\mathbf{Q} = \hat{\mathbf{n}}\times\mathbf{r}.$
Then Noether's theorem states that the following quantity is conserved,
$\frac{\partial L}{\partial \dot{\mathbf{r}}}\cdot(\hat{\mathbf{n}}\times\mathbf{r}) = \mathbf{p}\cdot(\hat{\mathbf{n}}\times\mathbf{r}) = \hat{\mathbf{n}}\cdot(\mathbf{r}\times\mathbf{p}) = \hat{\mathbf{n}}\cdot\mathbf{L}.$
In other words, the component of the angular momentum L along the n axis is conserved.
If n is arbitrary, i.e., if the system is insensitive to any rotation, then every component of L is conserved; in short, angular momentum is conserved.
Field theory version
Although useful in its own right, the version of Noether's theorem just given is a special case of the general version derived in 1915. To give the flavor of the general theorem, a version of the Noether theorem for continuous fields in four-dimensional space–time is now given. Since field theory problems are more common in modern physics than mechanics problems, this field theory version is the most commonly used version (or most often implemented) of Noether's theorem.
Let there be a set of differentiable fields $\varphi$ defined over all space and time; for example, the temperature would be representative of such a field, being a number defined at every place and time. The principle of least action can be applied to such fields, but the action is now an integral over space and time,
$\mathcal{S} = \int \mathcal{L}\left(\varphi, \partial_\mu\varphi, x^\mu\right) d^4x$
(the theorem can actually be further generalized to the case where the Lagrangian depends on up to the nth derivative using jet bundles)
A continuous transformation of the fields $\varphi$ can be written infinitesimally as
$\varphi \rightarrow \varphi + \varepsilon\Psi,$
where $\Psi$ is in general a function that may depend on both $x^\mu$ and $\varphi$. The condition for $\Psi$ to generate a physical symmetry is that the action $\mathcal{S}$ is left invariant. This will certainly be true if the Lagrangian density $\mathcal{L}$ is left invariant, but it will also be true if the Lagrangian changes by a divergence,
$\mathcal{L} \rightarrow \mathcal{L} + \varepsilon\,\partial_\mu\Lambda^\mu,$
since the integral of a divergence becomes a boundary term according to the divergence theorem. A system described by a given action might have multiple independent symmetries of this type, indexed by $r = 1, 2, \ldots, N$, so the most general symmetry transformation would be written as
$\varphi \rightarrow \varphi + \varepsilon_r\Psi_r,$
with the consequence
$\mathcal{L} \rightarrow \mathcal{L} + \varepsilon_r\,\partial_\mu\Lambda_r^\mu.$
For such systems, Noether's theorem states that there are N conserved current densities
$j_r^\nu = \Lambda_r^\nu - \frac{\partial\mathcal{L}}{\partial\varphi_{,\nu}}\cdot\Psi_r$
(where the dot product is understood to contract the field indices, not the $\nu$ index or the $r$ index).
In such cases, the conservation law is expressed in a four-dimensional way,
$\partial_\nu j^\nu = 0,$
which expresses the idea that the amount of a conserved quantity within a sphere cannot change unless some of it flows out of the sphere. For example, electric charge is conserved; the amount of charge within a sphere cannot change unless some of the charge leaves the sphere.
For illustration, consider a physical system of fields that behaves the same under translations in time and space, as considered above; in other words, $\mathcal{L}\left(\varphi, \partial_\mu\varphi, x^\mu\right)$ is constant in its third argument. In that case, N = 4, one for each dimension of space and time. An infinitesimal translation in space, $x^\mu \rightarrow x^\mu + \varepsilon_r\delta_r^\mu$ (with $\delta$ denoting the Kronecker delta), affects the fields as $\varphi(x^\mu) \rightarrow \varphi(x^\mu - \varepsilon_r\delta_r^\mu)$: that is, relabelling the coordinates is equivalent to leaving the coordinates in place while translating the field itself, which in turn is equivalent to transforming the field by replacing its value at each point with the value at the point "behind" it which would be mapped onto it by the infinitesimal displacement under consideration. Since this is infinitesimal, we may write this transformation as
$\Psi_r = -\frac{\partial\varphi}{\partial x^r}.$
The Lagrangian density transforms in the same way, $\mathcal{L}(x^\mu) \rightarrow \mathcal{L}(x^\mu - \varepsilon_r\delta_r^\mu)$, so
$\Lambda_r^\mu = -\delta_r^\mu\,\mathcal{L},$
and thus Noether's theorem corresponds to the conservation law for the stress–energy tensor Tμν,[9] where we have used $\mu$ in place of $r$. To wit, by using the expression given earlier, and collecting the four conserved currents (one for each $\mu$) into a tensor $T$, Noether's theorem gives
$T_\mu{}^\nu = -\delta_\mu^\nu\,\mathcal{L} + \frac{\partial\mathcal{L}}{\partial\varphi_{,\nu}}\cdot\varphi_{,\mu}, \qquad T_\mu{}^\nu{}_{,\nu} = 0$
(note that a dummy index was relabelled at an intermediate step to avoid a conflict). (However, note that the $T$ obtained in this way may differ from the symmetric tensor used as the source term in general relativity; see Canonical stress–energy tensor.)
The conservation of electric charge, by contrast, can be derived by considering Ψ linear in the fields φ rather than in the derivatives.[11] In quantum mechanics, the probability amplitude ψ(x) of finding a particle at a point x is a complex field φ, because it ascribes a complex number to every point in space and time. The probability amplitude itself is physically unmeasurable; only the probability p = |ψ|2 can be inferred from a set of measurements. Therefore, the system is invariant under transformations of the ψ field and its complex conjugate field ψ* that leave |ψ|2 unchanged, such as
$\psi \rightarrow e^{i\theta}\psi, \qquad \psi^* \rightarrow e^{-i\theta}\psi^*,$
a complex rotation. In the limit when the phase θ becomes infinitesimally small, δθ, it may be taken as the parameter ε, while the Ψ are equal to $i\psi$ and $-i\psi^*$, respectively. A specific example is the Klein–Gordon equation, the relativistically correct version of the Schrödinger equation for spinless particles, which has the Lagrangian density
$\mathcal{L} = \partial_\nu\psi\,\partial^\nu\psi^* - m^2\psi\psi^*.$
In this case, Noether's theorem states that the conserved (∂⋅j = 0) current equals (up to an overall sign convention)
$j^\nu = i\left(\psi\,\partial^\nu\psi^* - \psi^*\,\partial^\nu\psi\right),$
which, when multiplied by the charge on that species of particle, equals the electric current density due to that type of particle. This "gauge invariance" was first noted by Hermann Weyl, and is one of the prototype gauge symmetries of physics.
One independent variable
Consider the simplest case, a system with one independent variable, time. Suppose the dependent variables q are such that the action integral
$I = \int_{t_1}^{t_2} L(q, \dot{q}, t)\,dt$
is invariant under brief infinitesimal variations in the dependent variables. In other words, they satisfy the Euler–Lagrange equations
$\frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}}\right) = \frac{\partial L}{\partial q}.$
And suppose that the integral is invariant under a continuous symmetry. Mathematically such a symmetry is represented as a flow, φ, which acts on the variables as follows
where ε is a real variable indicating the amount of flow, and T is a real constant (which could be zero) indicating how much the flow shifts time.
The action integral flows to
which may be regarded as a function of ε. Calculating the derivative at ε = 0 and using Leibniz's rule, we get
Notice that the Euler–Lagrange equations imply
Substituting this into the previous equation, one gets
Again using the Euler–Lagrange equations we get
Substituting this into the previous equation, one gets
From which one can see that
is a constant of the motion, i.e., it is a conserved quantity. Since φ[q, 0] = q, the conserved quantity simplifies to
To avoid excessive complication of the formulas, this derivation assumed that the flow does not change as time passes. The same result can be obtained in the more general case.
Field-theoretic derivation
Noether's theorem may also be derived for tensor fields φA where the index A ranges over the various components of the various tensor fields. These field quantities are functions defined over a four-dimensional space whose points are labeled by coordinates xμ where the index μ ranges over time (μ = 0) and three spatial dimensions (μ = 1, 2, 3). These four coordinates are the independent variables; and the values of the fields at each event are the dependent variables. Under an infinitesimal transformation, the variation in the coordinates is written
whereas the transformation of the field variables is expressed as
By this definition, the field variations δφA result from two factors: intrinsic changes in the field themselves and changes in coordinates, since the transformed field αA depends on the transformed coordinates ξμ. To isolate the intrinsic changes, the field variation at a single point xμ may be defined
If the coordinates are changed, the boundary of the region of space–time over which the Lagrangian is being integrated also changes; the original boundary and its transformed version are denoted as Ω and Ω’, respectively.
Noether's theorem begins with the assumption that a specific transformation of the coordinates and field variables does not change the action, which is defined as the integral of the Lagrangian density over the given region of spacetime. Expressed mathematically, this assumption may be written as
where the comma subscript indicates a partial derivative with respect to the coordinate(s) that follows the comma, e.g.
$\varphi^A{}_{,\sigma} = \frac{\partial\varphi^A}{\partial x^\sigma}.$
Since ξ is a dummy variable of integration, and since the change in the boundary Ω is infinitesimal by assumption, the two integrals may be combined using the four-dimensional version of the divergence theorem into the following form
The difference in Lagrangians can be written to first-order in the infinitesimal variations as
However, because the variations are defined at the same point as described above, the variation and the derivative can be done in reverse order; they commute
Using the Euler–Lagrange field equations
the difference in Lagrangians can be written neatly as
Thus, the change in the action can be written as
Since this holds for any region Ω, the integrand must be zero
For any combination of the various symmetry transformations, the perturbation can be written
where $\mathcal{L}_X\varphi^A$ is the Lie derivative of φA in the Xμ direction. When φA is a scalar or when $X^\mu{}_{,\nu} = 0$, the Lie derivative reduces to a directional derivative,
$\mathcal{L}_X\varphi^A = X^\mu\,\varphi^A{}_{,\mu}.$
These equations imply that the field variation taken at one point equals
Differentiating the above divergence with respect to ε at ε = 0 and changing the sign yields the conservation law
where the conserved current equals
Manifold/fiber bundle derivation
Suppose we have an n-dimensional oriented Riemannian manifold M and a target manifold T. Let $\mathcal{C}$ be the configuration space of smooth functions from M to T. (More generally, we can have smooth sections of a fiber bundle over M.)
Examples of this M in physics include:
• In classical mechanics, in the Hamiltonian formulation, M is the one-dimensional manifold R, representing time and the target space is the cotangent bundle of space of generalized positions.
• In field theory, M is the spacetime manifold and the target space is the set of values the fields can take at any given point. For example, if there are m real-valued scalar fields, , then the target manifold is Rm. If the field is a real vector field, then the target manifold is isomorphic to R3.
Now suppose there is a functional
$\mathcal{S} : \mathcal{C} \rightarrow \mathbf{R},$
called the action. (Note that it takes values into R, rather than C; this is for physical reasons, and doesn't really matter for this proof.)
To get to the usual version of Noether's theorem, we need additional restrictions on the action. We assume $\mathcal{S}[\varphi]$ is the integral over M of a function
$\mathcal{L}(\varphi, \partial\varphi, x),$
called the Lagrangian density, depending on φ, its derivative and the position. In other words, for φ in $\mathcal{C}$,
$\mathcal{S}[\varphi] = \int_M \mathcal{L}\big(\varphi(x), \partial\varphi(x), x\big)\, d^n x.$
Suppose we are given boundary conditions, i.e., a specification of the value of φ at the boundary if M is compact, or some limit on φ as x approaches ∞. Then the subspace of $\mathcal{C}$ consisting of functions φ such that all functional derivatives of $\mathcal{S}$ at φ are zero, that is,
$\frac{\delta\mathcal{S}}{\delta\varphi(x)} \approx 0,$
and such that φ satisfies the given boundary conditions, is the subspace of on-shell solutions. (See principle of stationary action.)
Now, suppose we have an infinitesimal transformation on $\mathcal{C}$, generated by a functional derivation, Q, such that
$Q\left[\int_N \mathcal{L}\,d^n x\right] \approx \int_{\partial N} f^\mu\big(\varphi(x), \partial\varphi(x), \ldots\big)\, ds_\mu$
for all compact submanifolds N; or in other words,
$Q[\mathcal{L}(x)] \approx \partial_\mu f^\mu(x)$
for all x, where we set
$\mathcal{L}(x) = \mathcal{L}\big(\varphi(x), \partial\varphi(x), x\big).$
If this holds on shell and off shell, we say Q generates an off-shell symmetry. If this only holds on shell, we say Q generates an on-shell symmetry. Then, we say Q is a generator of a one-parameter symmetry Lie group.
Now, for any N, because of the Euler–Lagrange theorem, on shell (and only on-shell), we have
Since this is true for any N, we have
But this is the continuity equation for the current $J^\mu$ defined by[12]
$J^\mu = \frac{\partial\mathcal{L}}{\partial(\partial_\mu\varphi)}\,Q[\varphi] - f^\mu,$
which is called the Noether current associated with the symmetry. The continuity equation tells us that if we integrate this current over a space-like slice, we get a conserved quantity called the Noether charge (provided, of course, if M is noncompact, the currents fall off sufficiently fast at infinity).
Noether's theorem is an on shell theorem: it relies on use of the equations of motion—the classical path. It reflects the relation between the boundary conditions and the variational principle. Assuming no boundary terms in the action, Noether's theorem implies that
$\int_{\partial N} J^\mu\, ds_\mu \approx 0.$
The quantum analogs of Noether's theorem involving expectation values, e.g. ⟨∫d4x ∂·J⟩ = 0, probing off shell quantities as well are the Ward–Takahashi identities.
Generalization to Lie algebras
Suppose we have two symmetry derivations Q1 and Q2. Then, [Q1, Q2] is also a symmetry derivation. Let's see this explicitly. Let's say
$Q_1[\mathcal{L}] \approx \partial_\mu f_1^\mu \qquad \text{and} \qquad Q_2[\mathcal{L}] \approx \partial_\mu f_2^\mu.$
Then,
$[Q_1, Q_2][\mathcal{L}] = Q_1[Q_2[\mathcal{L}]] - Q_2[Q_1[\mathcal{L}]] \approx \partial_\mu f_{12}^\mu,$
where f12μ = Q1[f2μ] − Q2[f1μ]. So,
$j_{12}^\mu = \left(\frac{\partial\mathcal{L}}{\partial(\partial_\mu\varphi)}\right)[Q_1, Q_2][\varphi] - f_{12}^\mu.$
This shows we can extend Noether's theorem to larger Lie algebras in a natural way.
Generalization of the proof
This applies to any local symmetry derivation Q satisfying QS ≈ 0, and also to more general local functional differentiable actions, including ones where the Lagrangian depends on higher derivatives of the fields. Let ε be any arbitrary smooth function of the spacetime (or time) manifold such that the closure of its support is disjoint from the boundary. ε is a test function. Then, because of the variational principle (which does not apply to the boundary, by the way), the derivation distribution q generated by q[ε][Φ(x)] = ε(x)Q[Φ(x)] satisfies q[ε][S] ≈ 0 for every ε, or more compactly, q(x)[S] ≈ 0 for all x not on the boundary (but remember that q(x) is a shorthand for a derivation distribution, not a derivation parametrized by x in general). This is the generalization of Noether's theorem.
To see how the generalization is related to the version given above, assume that the action is the spacetime integral of a Lagrangian that only depends on φ and its first derivatives. Also, assume
for all ε.
More generally, if the Lagrangian depends on higher derivatives, then
Example 1: Conservation of energy
Consider the specific case of a Newtonian particle of mass m, with coordinate x, moving under the influence of a potential V and coordinatized by time t. The action, S, is:
$\mathcal{S}[x] = \int \left[\tfrac{1}{2} m\,\dot{x}(t)^2 - V(x(t))\right] dt.$
The first term in the brackets is the kinetic energy of the particle, whilst the second is its potential energy. Consider the generator of time translations Q = d/dt. In other words, $Q[x](t) = \dot{x}(t)$. Note that x has an explicit dependence on time, whilst V does not; consequently:
$Q[\mathcal{L}] = \frac{d}{dt}\left[\tfrac{1}{2} m\,\dot{x}^2 - V(x)\right],$
so we can set
$f = \tfrac{1}{2} m\,\dot{x}^2 - V(x).$
Then,
$j = \frac{\partial\mathcal{L}}{\partial\dot{x}}\,Q[x] - f = m\,\dot{x}^2 - \left[\tfrac{1}{2} m\,\dot{x}^2 - V(x)\right] = \tfrac{1}{2} m\,\dot{x}^2 + V(x).$
The right hand side is the energy, and Noether's theorem states that $dj/dt = 0$ (i.e. the principle of conservation of energy is a consequence of invariance under time translations).
More generally, if the Lagrangian does not depend explicitly on time, the quantity
$\sum_i \frac{\partial\mathcal{L}}{\partial\dot{q}_i}\,\dot{q}_i - \mathcal{L}$
(called the Hamiltonian) is conserved.
Example 2: Conservation of center of momentum
Still considering 1-dimensional time, let
$\mathcal{L} = \tfrac{1}{2}\sum_\alpha m_\alpha\,\dot{\vec{x}}_\alpha^{\,2} - \sum_{\alpha<\beta} V_{\alpha\beta}\left(\vec{x}_\beta - \vec{x}_\alpha\right),$
i.e. N Newtonian particles where the potential only depends pairwise upon the relative displacement.
For $\vec{Q}$, let's consider the generator of Galilean transformations (i.e. a change in the frame of reference), under which every particle is shifted by the same amount, growing linearly in time: $\vec{x}_\alpha \rightarrow \vec{x}_\alpha + \varepsilon\,t$.
Note that the potential terms are unaffected (they depend only on relative displacements), so
$Q[\mathcal{L}] = \sum_\alpha m_\alpha\,\dot{\vec{x}}_\alpha.$
This has the form of $\frac{d}{dt}\sum_\alpha m_\alpha\,\vec{x}_\alpha$, so we can set
$\vec{f} = \sum_\alpha m_\alpha\,\vec{x}_\alpha.$
Then,
$\vec{j} = \sum_\alpha \left(\frac{\partial\mathcal{L}}{\partial\dot{\vec{x}}_\alpha}\right)\,t - \vec{f} = \vec{P}\,t - M\,\vec{x}_{CM},$
where $\vec{P}$ is the total momentum, M is the total mass and $\vec{x}_{CM}$ is the center of mass. Noether's theorem states that $\vec{P}\,t - M\,\vec{x}_{CM}$ is conserved:
$\frac{d}{dt}\left(\vec{P}\,t - M\,\vec{x}_{CM}\right) = 0.$
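A small numerical illustration of this conservation law, using a hypothetical two-particle system with a pairwise spring potential (so the Lagrangian depends only on the relative displacement, as required above); the quantity $\vec{P}t - M\vec{x}_{CM}$ should stay constant along the integrated trajectory:

```python
import numpy as np

# Two particles coupled by a spring: the potential depends only on x0 - x1,
# so the Lagrangian admits a Galilean boost symmetry.
m = np.array([1.0, 3.0])
x = np.array([0.0, 1.0])
v = np.array([0.5, -0.2])
k, dt = 2.0, 1e-4

def accel(x):
    f = -k * (x[0] - x[1])               # spring force on particle 0
    return np.array([f / m[0], -f / m[1]])

M, P = m.sum(), (m * v).sum()            # total mass, total momentum (conserved)
for step in range(200_001):
    if step % 50_000 == 0:
        t = step * dt
        G = P * t - (m * x).sum()        # P*t - M*x_cm
        print(f"t = {t:5.1f}: P*t - M*x_cm = {G:+.6f}")
    a = accel(x)
    x = x + v * dt + 0.5 * a * dt**2     # velocity Verlet step
    v = v + 0.5 * (a + accel(x)) * dt
```

The printed value is constant to machine precision: because the internal forces cancel pairwise, the total momentum is exact under Verlet, and the center of mass drifts exactly linearly.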
Example 3: Conformal transformation
Both examples 1 and 2 are over a 1-dimensional manifold (time). An example involving spacetime is a conformal transformation of a massless real scalar field with a quartic potential in (3 + 1)-Minkowski spacetime.
For Q, consider the generator of a spacetime rescaling. In other words,
$Q[\varphi] = x^\mu\,\partial_\mu\varphi + \varphi.$
The second term on the right hand side is due to the "conformal weight" of $\varphi$. Note that
This has the form of
(where we have performed a change of dummy indices) so set
Noether's theorem states that $\partial_\mu j^\mu = 0$ (as one may explicitly check by substituting the Euler–Lagrange equations into the left hand side).
Note that if one tries to find the Ward–Takahashi analog of this equation, one runs into a problem because of anomalies.
Application of Noether's theorem allows physicists to gain powerful insights into any general theory in physics, by just analyzing the various transformations that would make the form of the laws involved invariant. For example:
In quantum field theory, the analog to Noether's theorem, the Ward–Takahashi identity, yields further conservation laws, such as the conservation of electric charge from the invariance with respect to a change in the phase factor of the complex field of the charged particle and the associated gauge of the electric potential and vector potential.
The Noether charge is also used in calculating the entropy of stationary black holes.[13]
See also
1. ^ See also Noether's second theorem.
2. ^ Noether E (1918). "Invariante Variationsprobleme". Nachr. D. König. Gesellsch. D. Wiss. Zu Göttingen, Math-phys. Klasse. 1918: 235–257.
3. ^ Cosserat E., Cosserat F. (1909). Théorie des corps déformables. Paris: Hermann.
4. ^ Thompson, W.J. (1994). Angular Momentum: an illustrated guide to rotational symmetries for physical systems. 1. Wiley. p. 5. ISBN 0-471-55264-X.
5. ^ The term "Noether charge" occurs in Seligman, Group theory and its applications in physics, 1980: Latin American School of Physics, Mexico City, American Institute of Physics, 1981. It enters wider use during the 1980s, e.g. by G. Takeda in: Errol Gotsman, Gerald Tauber (eds.) From SU(3) to Gravity: Festschrift in Honor of Yuval Ne'eman, 1985, p. 196.
6. ^ Nina Byers (1998) "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws." in Proceedings of a Symposium on the Heritage of Emmy Noether, held on 2–4 December 1996, at the Bar-Ilan University, Israel, Appendix B.
7. ^ Lanczos 1970, pp. 401–3
8. ^ Lanczos 1970, pp. 403–4
9. ^ a b Goldstein 1980, pp. 592–3
10. ^ Lanczos 1970, pp. 404–5
11. ^ Goldstein 1980, pp. 593–4
12. ^ Michael E. Peskin; Daniel V. Schroeder (1995). An Introduction to Quantum Field Theory. Basic Books. p. 18. ISBN 0-201-50397-2.
13. ^ Vivek Iyer; Wald (1995). "A comparison of Noether charge and Euclidean methods for Computing the Entropy of Stationary Black Holes". Physical Review D. 52 (8): 4430–9. arXiv:gr-qc/9503052 . Bibcode:1995PhRvD..52.4430I. doi:10.1103/PhysRevD.52.4430.
a6a03d3474ae6e3d | Time is treated differently in special relativity and quantum mechanics. What is the exact difference, and why does relativistic quantum mechanics (the Dirac equation etc.) work?
Er...time is treated differently in relativistic mechanics and non-relativistic quantum mechanics, but that is the same as saying that time is treated differently in relativistic and non-relativistic classical mechanics. – dmckee Jul 15 '11 at 2:04
Quantum mechanics doesn't per se imply relativity. – Siyuan Ren Jul 15 '11 at 3:18
The Schrödinger equation of non-relativistic QM is second order in the time derivative and is not Lorentz invariant. On the other hand, the Dirac equation is first order in the time derivative and is invariant under Lorentz transformations. So I think this is the main difference between non-relativistic and relativistic QM. In the latter, the time is treated in (almost) the same way as spatial coordinates. Also the spin is a relativistic effect because it emerges naturally only in relativistic QM. – Andyk Jul 15 '11 at 14:33
Dear @ANKU: Your above comment the Schrodinger equation of non-relativistic QM is second order in the time derivative was probably written in a bit of a hurry. :-) More importantly, is it possible to formulate the main question using precise terms? – Qmechanic Jul 17 '11 at 15:51
Oops, it's second order in space and first order in time. But the point is this: we make it first order in both space and time derivatives so that it becomes Lorentz invariant. Right? – Andyk Jul 18 '11 at 2:12
3 Answers
(accepted answer)
Quantum mechanics can be reconciled with special relativity to make quantum field theory, but there are some awkward things going on in that marriage. SR treats time symmetrically with position, but in quantum mechanics, position is an operator and time isn't. Baez at UCR has a nice discussion of that here: http://math.ucr.edu/home/baez/uncertainty.html
Well, QFT reconciles this by disposing of the position as an operator. – Marek Jul 19 '11 at 20:18
And Dirac's approach (and Feynman's) makes time an "operator" (or equivalently an integration variable in the path integral). This answer is no more satisfying than saying "In classical mechanics, position is a function, and time is a parameter". That's true, but only if you choose to parametrize by time and not proper time. The same is true in quantum mechanics. – Ron Maimon Aug 13 '11 at 20:09
Time is always time. It is special. Another thing is its involvement in transformations of measured data from one reference system to another. This involvement does not change its meaning. In a given reference frame the time is unique, while the space coordinates are multiple, according to the number of particles to be observed.
Concerning the Dirac equation, it took some effort to make it work after its invention. It works because it was made to work, if you like. Besides, it depends on what exactly you mean by "relativistic QM". QED, for example, is rather difficult to make work. Its sensible results only appear at page 500 or so, when the infrared catastrophe is resolved.
I am sending my article to you in PDF format as an attachment. Please can you analyze it? And please visit my web site 'www.timeflow.org'. Thanks.
Combining General Relativity Theory with Quantum Theory

'The lifetime of a mass or an energy in space is its Mc^2 energy' Ref.(3). Due to this characteristic feature of a substance, conversions of photons of the wave-particle (energy-mass) or electrons continue consistently. Hence, the binary conversion behavior of a photon implies the binary conversion behavior of a big-mass space object. In other words, the behavior of a photon is a miniature version of the behavior of a big-mass space object.

Because of General Relativity Theory, 'one hour in the Sun remains behind with respect to one hour in the Earth. One hour in the Earth remains behind with respect to one hour in the Moon. One hour in the Moon remains behind with respect to one hour in an Alpha Ray. One hour in the Alpha Ray remains behind with respect to one hour in a Beta Ray. One hour in the Beta Ray remains behind with respect to one hour in a Gamma Ray.' They all show the same physical behaviour.

Let us observe the behaviors of two photons, say one like a big mass and the other like a small mass. Since the photon with small mass has a short lifetime, it will transform faster from mass into energy and vice versa. The big-mass photon has a longer lifetime; hence, the speed of transformation from mass to energy or from energy to mass is slower. The smaller the mass of a photon is, the bigger the kinetic energy is. The kinetic energy of a photon is given by e = hf.

'In order to calculate the lifetime of a mass or an energy in space, we can assume time flow to be time/energy; in any case, no matter what value we assign to time flow, that will not change the present result: the lifetime of a mass or an energy in space is its Mc^2 energy. When this is calculated, the lifetime of 1 kg mass in space is 2,851,927,903.26... years, or 9x10^16 s' Ref.(3).

All photons' and all free sub-atomic particles' lifetimes are their periods, or 1/f. In other words, the periods are lifetimes for photons and for free sub-atomic particles, and a period is equal to its Mc^2 particle energy x 1 s/joule or erg. If the period is high, the lifetime is high and Mc^2 is high, or vice versa, like astronomical objects. This is a universal law. The mass of a low-frequency photon has a big value. For example, substituting an Alpha Ray with 1.67x10^9 Hz frequency into the formula yields e = hf = (6.62x10^-34) x (1.67x10^9); time = (energy) x (time flow); t = 1/(1.67x10^9) = Mc^2 x 1 s/joule, so M = 6.64x10^-27 kg. On the other hand, for a high-frequency photon the mass has a small value. As an example, substituting a Beta Ray with 1.22x10^13 Hz frequency into the formula gives e = hf = (6.62x10^-34) x (1.22x10^13); t = 1/(1.22x10^13) = Mc^2 x 1 s/joule, so M = 9.109x10^-31 kg. These two examples show us that equating the period of a photon to its Mc^2 energy in a unit time flow provides the actual values of the mass. These two examples are also valid for X-Rays, Gamma-Rays and Light-Rays. When mass decreases, frequency increases, and the transformation from mass to energy becomes more uncertain.

We understand that Quantum Mechanics is not, in fact, different from Classical Mechanics and Relativistic Mechanics. The characteristics of the particles in Quantum Theory are the same as the character of the mass in the General Relativity Theory. They are subject to the same physical processes. So, Einstein's expression "God does not throw dice!" is still valid.

References (R1): [1] Salih Kircalar, 'Utilization of Time: Time Flow', Galilean Electrodynamics 13, SI 1, 2 (2002). [2] Salih Kircalar, 'Time Effects Caused by Mass or Energy', Galilean Electrodynamics 15, SI 1, 8 (2004). [3] Salih Kircalar, 'Mass or Energy & Quantum Mechanics', Galilean Electrodynamics 18, J/F, 2 (2007). Salih Kircalar, Güzel Otomotiv, Kizilelma Cad. No:99/B, Findikzade-Istanbul, TURKEY. e-mail: kircalars@hotmail.com
This answer(v1) seems to be just rambling text with no relation to the question(v1). – Qmechanic Sep 23 '12 at 19:51
|
21ff9f04b804c8d7 | Low-energy electron diffraction
Low-energy electron diffraction (LEED) is a technique for the determination of the surface structure of single-crystalline materials by bombardment with a collimated beam of low energy electrons (20–200 eV)[1] and observation of diffracted electrons as spots on a fluorescent screen.
Figure 1: LEED pattern of a Si(100) reconstructed surface. The underlying lattice is a square lattice while the surface reconstruction has a 2x1 periodicity. As discussed in the text, the pattern shows that reconstruction exists in symmetrically equivalent domains which are oriented along different crystallographic axes. The diffraction spots are generated by acceleration of elastically scattered electrons onto a hemispherical fluorescent screen. Also seen is the electron gun which generates the primary electron beam. It covers up parts of the screen.
LEED may be used in one of two ways:
1. Qualitatively, where the diffraction pattern is recorded and analysis of the spot positions gives information on the symmetry of the surface structure. In the presence of an adsorbate the qualitative analysis may reveal information about the size and rotational alignment of the adsorbate unit cell with respect to the substrate unit cell.
2. Quantitatively, where the intensities of diffracted beams are recorded as a function of incident electron beam energy to generate the so-called I-V curves. By comparison with theoretical curves, these may provide accurate information on atomic positions on the surface at hand.
Historical perspective[2][edit]
Davisson and Germer's discovery of electron diffraction[edit]
The theoretical possibility of the occurrence of electron diffraction first emerged in 1924 when Louis de Broglie introduced wave mechanics and proposed the wavelike nature of all particles. In his Nobel laureated work de Broglie postulated that the wavelength of a particle with linear momentum p is given by h/p, where h is Planck's constant. The de Broglie hypothesis was confirmed experimentally at Bell Labs in 1927 when Clinton Davisson and Lester Germer fired low-energy electrons at a crystalline nickel target and observed that the angular dependence of the intensity of backscattered electrons showed diffraction patterns. These observations were consistent with the diffraction theory for X-rays developed by Bragg and Laue earlier. Before the acceptance of the de Broglie hypothesis diffraction was believed to be an exclusive property of waves.
Davisson and Germer published notes of their electron diffraction experiment result in Nature and in Physical Review in 1927. One month after Davisson and Germer's work appeared, Thompson and Reid published their electron diffraction work with higher kinetic energy (thousand times higher than the energy used by Davisson and Germer) in the same journal. Those experiments revealed the wave property of electrons and opened up an era of electron diffraction study.
Development of LEED as a tool in surface science[edit]
Though discovered in 1927, Low Energy Electron Diffraction did not become a popular tool for surface analysis until the early 1960s. The main reasons were that monitoring directions and intensities of diffracted beams was a difficult experimental process due to inadequate vacuum techniques and slow detection methods such as a Faraday cup. Also, since LEED is a surface sensitive method, it required well-ordered surface structures. Techniques for the reconstruction of clean metal surfaces first became available much later. In the early 1960s LEED experienced a renaissance as ultra high vacuum became widely available and the post acceleration detection method was introduced. Using this technique diffracted electrons were accelerated to high energies to produce clear and visible diffraction patterns on a fluorescent screen.
It soon became clear that the kinematic (single scattering) theory, which had been successfully used to explain X-ray diffraction experiments, was inadequate for the quantitative interpretation of experimental data obtained from LEED. At this stage a detailed determination of surface structures, including adsorption sites, bond angles and bond lengths was not possible. A dynamical electron diffraction theory which took into account the possibility of multiple scattering was established in the late 1960s. With this theory it later became possible to reproduce experimental data with high precision.
Experimental Setup[edit]
In order to keep the studied sample clean and free from unwanted adsorbates, LEED experiments are performed in an ultra-high-vacuum environment (10^{-9} mbar).
Figure 2 Diagram of a LEED optics apparatus.
The most important elements in an LEED experiment are[2]
1. A sample holder with the prepared sample
2. An electron gun
3. A display system, usually a hemispherical fluorescent screen on which the diffraction pattern can be observed directly
4. A sputtering gun for cleaning the surface
5. An Auger-Electron Spectroscopy system in order to determine the purity of the surface.
A simplified sketch of an LEED setup is shown in figure 2.[3]
Sample preparation[edit]
The sample is usually prepared outside the vacuum chamber by cutting a slice of around 1 mm in thickness and 1 cm in diameter along the desired crystallographic axis. The correct alignment of the crystal can be achieved with the help of x-ray methods and should be within 1° of the desired angle.[4] After being mounted in the UHV chamber the sample is chemically cleaned and flattened. Unwanted surface contaminants are removed by ion sputtering or by chemical processes such as oxidation and reduction cycles. The surface is flattened by annealing at high temperatures. Once a clean and well-defined surface is prepared, monolayers can be adsorbed on the surface by exposing it to a gas consisting of the desired adsorbate atoms or molecules.
Often the annealing process will let bulk impurities diffuse to the surface and therefore give rise to a re-contamination after each cleaning cycle. The problem is that impurities which adsorb without changing the basic symmetry of the surface, cannot easily be identified in the diffraction pattern. Therefore in many LEED experiments Auger Spectroscopy is used to accurately determine the purity of the sample.
Electron gun[edit]
In the electron gun, monochromatic electrons are emitted by a cathode filament which is at a negative potential, typically 10-600 V, with respect to the sample. The electrons are accelerated and focused into a beam, typically about 0.1 to 0.5 mm wide, by a series of electrodes serving as electron lenses. Some of the electrons incident on the sample surface are backscattered elastically, and diffraction can be detected if sufficient order exists on the surface. This typically requires a region of single crystal surface as wide as the electron beam, although sometimes polycrystalline surfaces such as highly oriented pyrolytic graphite (HOPG) are sufficient.
Detector system[edit]
An LEED detector usually contains three or four hemispherical concentric grids and a phosphor screen or other position-sensitive detector. The grids are used for screening out the inelastically scattered electrons. Most new LEED systems use a reverse view scheme, which has a minimized electron gun, and the pattern is viewed from behind through a transmission screen and a viewport. Recently, a new digitized position sensitive detector called a delay-line detector with better dynamic range and resolution has been developed.
The LEED optics contain a retarding field analyzer to block inelastically scattered electrons. Because the retarding field is only well defined for a spherical geometry around the sampled point, and the geometry of the sample and its surroundings is not spherical, the space above the sample must be kept field-free. Therefore the first grid screens the space above the sample from the retarding field. The next grid is at a potential that blocks low-energy electrons; it is called the suppressor or the gate. To make the retarding field homogeneous and mechanically more stable this grid often consists of two grids. The fourth grid is only necessary when the LEED optics are used like a tetrode and the current at the screen is measured, when it serves as a screen between the gate and the anode.
Using the detector for Auger electron spectroscopy[edit]
To improve the measured signal in Auger electron spectroscopy, the gate voltage is scanned in a linear ramp. An RC circuit serves to derive the second derivative, which is then amplified and digitized. To reduce the noise, multiple passes are summed up. The first derivative is very large due to the residual capacitive coupling between gate and the anode and may degrade the performance of the circuit. By applying a negative ramp to the screen this can be compensated. It is also possible to add a small sine to the gate. A high Q RLC circuit is tuned to the second harmonic to detect the second derivative.
Data acquisition[edit]
Image 1: LEED pattern of a clean Platinum-Rhodium (100) (Miller-index) single crystal. Taken in high vacuum using an electron gun with an energy of 85 eV.
Image 2: LEED pattern of CO on Platinum-Rhodium (100) (Miller-index) surface of a single crystal. Taken in high vacuum using an electron gun with an energy of 94 eV.
A modern data acquisition system usually contains a CCD/CMOS camera pointed to the screen for diffraction pattern visualization and a computer for data recording and further analysis.
The images shown are examples of LEED diffraction patterns. The difference between images 1 and 2 is remarkable: image 1 is of a clean (100) platinum/rhodium single crystal, and image 2 is of the same crystal with CO adsorbed on the surface. The original surface order of the clean crystal is clearly visible in image 1, which shows a c(1x1) structure; the extra spots in image 2 are caused by the CO on the surface and are an example of a c(2x2) structure. The diffraction spots are generated by acceleration of elastically scattered electrons onto a hemispherical fluorescent screen, a retarding field analyzer. In the middle one can see the bright spot of the electron gun which generates the primary electron beam.
Theory of LEED[edit]
Surface Sensitivity[edit]
The basic reason for the high surface sensitivity of LEED is the fact that for low-energy electrons the interaction between the solid and electrons is especially strong. Upon penetrating the crystal, primary electrons will lose kinetic energy due to inelastic scattering processes such as plasmon- and phonon excitations as well as electron-electron interactions. In cases where the detailed nature of the inelastic processes is unimportant they are commonly treated by assuming an exponential decay of the primary electron beam intensity, I0, in the direction of propagation:
I(d) = I_0\, e^{-d/\Lambda(E)}
Here d is the penetration depth and \Lambda(E) denotes the inelastic mean free path, defined as the distance an electron can travel before its intensity has decreased by the factor 1/e. While the inelastic scattering processes, and consequently the mean free path, depend on the energy, the mean free path is relatively independent of the material. It turns out to be minimal (5-10 Å) in the energy range of low-energy electrons (20-200 eV).[1] This effective attenuation means that only a few atomic layers are sampled by the electron beam and, as a consequence, the contribution of deeper atoms to the diffraction progressively decreases.
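To get a feel for the numbers, the minimal sketch below estimates the relative elastic signal originating from below the first few atomic layers; the mean free path and interlayer spacing are assumed round values for illustration, not measured ones.

```python
import numpy as np

mfp = 5.0      # assumed inelastic mean free path Lambda(E), in angstroms
d_layer = 2.0  # assumed interlayer spacing, in angstroms

for n in range(1, 6):
    d = n * d_layer
    # the electron travels in and back out again, hence the factor of 2
    frac = np.exp(-2.0 * d / mfp)
    print(f"relative elastic signal from below layer {n}: {frac:.3f}")
```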
Kinematic theory: single scattering[edit]
Figure 3: Ewald's sphere construction for the case of diffraction from a 2D-lattice. The intersections between Ewald's sphere and reciprocal lattice rods define the allowed diffracted beams.
Kinematic diffraction is defined as the situation where electrons impinging on a well-ordered crystal surface are elastically scattered only once by that surface. In the theory the electron beam is represented by a plane wave with a wavelength given in accordance with the de Broglie hypothesis:
\lambda = \frac{h}{\sqrt{2mE}}, \qquad \lambda[\textrm{nm}]\approx\sqrt{\frac{1.5}{E[\textrm{eV}]}}
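The rule of thumb on the right is easy to check against the exact expression; a minimal sketch with rounded physical constants:

```python
import numpy as np

h = 6.626e-34       # Planck's constant, J*s
m_e = 9.109e-31     # electron mass, kg
eV = 1.602e-19      # joules per electronvolt

for E_eV in [20, 50, 100, 200]:
    lam_exact = h / np.sqrt(2 * m_e * E_eV * eV) * 1e9   # wavelength in nm
    lam_approx = np.sqrt(1.5 / E_eV)                     # rule-of-thumb value in nm
    print(f"E = {E_eV:4d} eV: lambda = {lam_exact:.4f} nm (approx {lam_approx:.4f} nm)")
```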
The interaction between the scatterers present in the surface and the incident electrons is most conveniently described in reciprocal space. In three dimensions the primitive reciprocal lattice vectors are related to the real space lattice {a, b, c} in the following way:[5]
\textbf{a}^* = \frac{2\pi\,\textbf{b}\times\textbf{c}}{\textbf{a}\cdot(\textbf{b}\times\textbf{c})}, \qquad \textbf{b}^* = \frac{2\pi\,\textbf{c}\times\textbf{a}}{\textbf{b}\cdot(\textbf{c}\times\textbf{a})}, \qquad \textbf{c}^* = \frac{2\pi\,\textbf{a}\times\textbf{b}}{\textbf{c}\cdot(\textbf{a}\times\textbf{b})}
For an incident electron with wave vector \textbf{k}_0=2\pi/\lambda_0 and scattered wave vector \textbf{k}=2\pi/\lambda the condition for constructive interference and hence diffraction of scattered electron waves is given by the Laue condition
Figure 4: Ewald's sphere construction for the case of normal incidence of the primary electron beam. The diffracted beams are indexed according to the values of h and k.
\textbf{k}-\textbf{k}_0 = \textbf{G}_\textrm{hkl}, \qquad (1)
where (h,k,l) is a set of integers and
\textbf{G}_\textrm{hkl} = h\textbf{a}^*+k\textbf{b}^*+l\textbf{c}^*
is a vector of the reciprocal lattice. The magnitudes of the wave vectors are unchanged, i.e. |\textbf{k}_0|=|\textbf{k}|, since only elastic scattering is considered. Since the mean free path of low energy electrons in a crystal is only a few angstroms, only the first few atomic layers contribute to the diffraction. This means that there are no diffraction conditions in the direction perpendicular to the sample surface. As a consequence the reciprocal lattice of a surface is a 2D lattice with rods extending perpendicular from each lattice point. The rods can be pictured as regions where the reciprocal lattice points are infinitely dense. Therefore in the case of diffraction from a surface equation (1) reduces to the 2D form:[2]
\textbf{k}^{||}-\textbf{k}_0^{||} = \textbf{G}_\textrm{hk}=h\textbf{a}^*+k\textbf{b}^*, \qquad (2)
where \textbf{a}^* and \textbf{b}^* are the primitive translation vectors of the 2D reciprocal lattice of the surface and \textbf{k}^{||},\textbf{k}_0^{||} denote the component of respectively the reflected and incident wave vector parallel to the sample surface. \textbf{a}^* and \textbf{b}^* are related to the real space surface lattice in the following way:
\textbf{a}^* = \frac{2\pi\,\textbf{b}\times\hat{\textbf{n}}}{|\textbf{a}\times\textbf{b}|}, \qquad \textbf{b}^* = \frac{2\pi\,\hat{\textbf{n}}\times\textbf{a}}{|\textbf{a}\times\textbf{b}|}
The Laue condition equation (2) can readily be visualized using the Ewald's sphere construction. Figure 4 shows a simple illustration of this principle: The wave vector \textbf{k}_0 of the incident electron beam is drawn such that it terminates at a reciprocal lattice point. The Ewald's sphere is then the sphere with radius |\textbf{k}_0| and origin at the center of the incident wave vector.
By construction every wave vector centered at the origin and terminating at an intersection between a rod and the sphere will then satisfy the Laue condition and thus represent an allowed diffracted beam.
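A minimal numerical sketch of this construction for normal incidence: a beam (h, k) is allowed whenever the corresponding reciprocal lattice rod intersects the sphere, i.e. |G_hk| <= |k_0|. The beam energy matches image 1 above, but the surface lattice constant is an assumed value chosen only for illustration.

```python
import numpy as np

E = 85.0                        # beam energy in eV (as in image 1 above)
lam = np.sqrt(1.5 / E) * 10.0   # wavelength in angstroms, from the rule of thumb
k0 = 2 * np.pi / lam            # magnitude of the incident wave vector, 1/A

a = 3.92                        # assumed square surface lattice constant, angstroms
g = 2 * np.pi / a               # spacing of the reciprocal lattice rods

# a beam (h, k) emerges if its parallel momentum transfer fits inside the sphere
beams = [(h, k) for h in range(-6, 7) for k in range(-6, 7)
         if np.hypot(h * g, k * g) <= k0]
print(f"{len(beams)} allowed (h, k) beams at {E:.0f} eV")
```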
Interpretation of LEED patterns[edit]
Figure 5: Real space- and reciprocal lattices for the case of a) a (100) face of a simple cubic lattice and b) a (2x1) commensurate superstructure. The white spots in the LEED pattern are the extra spots associated with the adsorbate structure.
Figure 4 shows the Ewald's sphere for the case of normal incidence of the primary electron beam, as would be the case in an actual LEED setup. It is apparent that the pattern observed on the fluorescent screen is a direct picture of the reciprocal lattice of the surface. The size of the Ewald's sphere, and hence the number of diffraction spots on the screen, is controlled by the incident electron energy. From the knowledge of the reciprocal lattice, models for the real space lattice can be constructed and the surface can be characterized at least qualitatively in terms of the surface periodicity and the point group. Figure 5.a shows a model of an unreconstructed (100) face of a simple cubic crystal and the expected LEED pattern. The spots are indexed according to the values of h and k.
We now consider the case of an overlaying superstructure on a substrate surface. If the LEED pattern of the underlying (1x1) surface is known, spots due to the superstructure can be identified as extra spots or super spots. Figure 5.b shows the simple example of a (2x1) superstructure on a square lattice.
For a commensurate superstructure the symmetry and the rotational alignment with respect to the adsorbent surface can be determined from the LEED pattern. This is most easily shown by using a matrix notation,[1] where the primitive translation vectors of the superlattice {as, bs} are linked to the primitive translation vectors of the underlying (1x1) lattice {a, b} in the following way:
\textbf{a}_s = G_{11}\textbf{a} + G_{12}\textbf{b}, \qquad \textbf{b}_s = G_{21}\textbf{a} + G_{22}\textbf{b}.
The matrix for the superstructure then is
G = \begin{pmatrix} G_{11} & G_{12} \\ G_{21} & G_{22} \end{pmatrix}.
Similarly, the primitive translation vectors of the lattice describing the extra spots {as*,bs*} are linked to the primitive translation vectors of the reciprocal lattice {a*,b*}
\textbf{a}_s^* = G_{11}^*\textbf{a}^* + G_{12}^*\textbf{b}^*, \qquad \textbf{b}_s^* = G_{21}^*\textbf{a}^* + G_{22}^*\textbf{b}^*.
G* is related to G in the following way:
G^* = (G^{-1})^T = \frac{1}{\det G}\begin{pmatrix} G_{22} & -G_{21} \\ -G_{12} & G_{11} \end{pmatrix}.
An essential problem when considering LEED patterns is the existence of symmetrically equivalent domains. Domains may lead to diffraction patterns which have higher symmetry than the actual surface at hand. The reason is that usually the cross sectional area of the primary electron beam (~1 mm²) is large compared to the average domain size on the surface and hence the LEED pattern might be a superposition of diffraction beams from domains oriented along different axes of the substrate lattice.
However, since the average domain size generally is larger than the coherence length of the probing electrons, interference between electrons scattered from different domains can be neglected. Therefore the total LEED pattern emerges as the incoherent sum of the diffraction patterns associated with the individual domains.
Figure 6 shows the superposition of the diffraction patterns for the two orthogonal domains (2x1) and (1x2) on a square lattice, i.e. for the case where one structure is just rotated by 90° with respect to the other. The (2x1) structure and the respective LEED pattern are shown in figure 5.b. It is apparent that the local symmetry of the surface structure is twofold while the LEED pattern exhibits a fourfold symmetry.
Figure 1 shows a real diffraction pattern of the same situation for the case of a Si(100) surface. However, here the (2x1) structure is formed due to surface reconstruction.
Figure 6: Superposition of the LEED patterns associated with the two orthogonal domains (2x1) and (1x2). The LEED pattern has a fourfold rotational symmetry.
Dynamical theory: multiple scattering[edit]
The inspection of the LEED pattern gives a qualitative picture of the surface periodicity, i.e. the size of the surface unit cell and, to a certain degree, the surface symmetries. However, it will give no information about the atomic arrangement within a surface unit cell or the sites of adsorbed atoms. For instance, if the whole superstructure in figure 5.b is shifted such that the atoms adsorb in bridge sites instead of on-top sites, the LEED pattern will be the same.
A more quantitative analysis of LEED experimental data can be achieved by analysis of so-called I-V curves, which are measurements of the intensity versus incident electron energy. The I-V curves can be recorded by using a camera connected to computer controlled data handling or by direct measurement with a movable Faraday cup. The experimental curves are then compared to computer calculations based on the assumption of a particular model system. The model is changed in an iterative process until a satisfactory agreement between experimental and theoretical curves is achieved. A quantitative measure for this agreement is the so-called reliability- or R-factor. A commonly used reliability factor is the one proposed by Pendry.[6] It is expressed in terms of the logarithmic derivative of the intensity:
L(E) = I'/I.
The R-factor is then given by:
R = \sum_g \int (Y_{g,\textrm{th}}-Y_{g,\textrm{expt}})^2\,dE \bigg/ \sum_g \int (Y_{g,\textrm{th}}^2+Y_{g,\textrm{expt}}^2)\,dE,
where Y(E)=L^{-1}/(L^{-2}+V^2_{oi}) and V_{oi} is the imaginary part of the electron self-energy. Generally, R_p \leq 0.2 is considered good agreement, R_p \simeq 0.3 mediocre, and R_p \simeq 0.5 a bad agreement. Figure 7 shows examples of the comparison between experimental I-V spectra and theoretical calculations.
Figure 7 Examples of the comparison between experimental data and a theoretical calculation (An AlNiCo Quasicrystal surface) Thanks to R. Diehl and N. Ferralis for providing the data.
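A minimal sketch of the R-factor defined above, for two I-V curves on a common energy grid. The helper name pendry_r and the synthetic test curves are illustrative only, and V_{oi} is set to a typical assumed value of a few eV rather than a universal constant.

```python
import numpy as np

def pendry_r(E, I_expt, I_th, V0i=4.0):
    """Pendry R-factor between two I-V curves on the same energy grid (eV).

    V0i is the imaginary part of the self-energy; ~4-5 eV is a typical
    assumed value, not a universal constant.
    """
    def Y(I):
        L = np.gradient(I, E) / I          # logarithmic derivative L = I'/I
        return L / (1 + (L * V0i) ** 2)    # Y = L^-1/(L^-2 + V0i^2), rewritten
    Y_e, Y_t = Y(I_expt), Y(I_th)
    return np.trapz((Y_t - Y_e) ** 2, E) / np.trapz(Y_t ** 2 + Y_e ** 2, E)

# Toy usage with synthetic curves; real data would come from experiment.
E = np.linspace(50, 300, 500)
I1 = 1 + 0.5 * np.sin(E / 20)
I2 = 1 + 0.5 * np.sin(E / 20 + 0.2)        # slightly shifted "theory" curve
print(f"R_p = {pendry_r(E, I1, I2):.3f}")
```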
Dynamical LEED calculations[edit]
The term dynamical stems from the studies of X-ray diffraction and describes the situation where the response of the crystal to an incident wave is included self-consistently and multiple scattering can occur. The aim of any dynamical LEED theory is to calculate the intensities of diffraction of an electron beam impinging on a surface as accurately as possible.
A common method to achieve this is the self-consistent multiple scattering approach.[7] One essential point in this approach is the assumption that the scattering properties of the surface, i.e. of the individual atoms, are known in detail. The main task then reduces to the determination of the effective wave field incident on the individual scatters present in the surface, where the effective field is the sum of the primary field and the field emitted from all the other atoms. This must be done in a self-consistent way, since the emitted field of an atom depends on the incident effective field upon it. Once the effective field incident on each atom is determined, the total field emitted from all atoms can be found and its asymptotic value far from the crystal then gives the desired intensities.
A common approach in LEED calculations is to describe the scattering potential of the crystal by a "muffin tin" model, where the crystal potential can be imagined being divided up by non-overlapping spheres centered at each atom such that the potential has a spherically symmetric form inside the spheres and is constant everywhere else. The choice of this potential reduces the problem to scattering from spherical potentials, which can be dealt with effectively. The task is then to solve the Schrödinger equation for an incident electron wave in that "muffin tin" potential.
Related Techniques[edit]
Tensor LEED[edit]
In LEED the exact atomic configuration of a surface is determined by a trial and error process where measured I-V curves are compared to computer-calculated spectra under the assumption of a model structure. From an initial reference structure a set of trial structures is created by varying the model parameters. The parameters are changed until an optimal agreement between theory and experiment is achieved. However, for each trial structure a full LEED calculation with multiple scattering corrections must be conducted. For systems with a large parameter space the need for computational time might become significant. This is the case for complex surface structures or when considering large molecules as adsorbates.
Tensor LEED[8][9] is an attempt to reduce the computational effort needed by avoiding full LEED calculations for each trial structure. The scheme is as follows: One first defines a reference surface structure for which the I-V spectrum is calculated. Next a trial structure is created by displacing some of the atoms. If the displacements are small the trial structure can be considered as a small perturbation of the reference structure and first-order perturbation theory can be used to determine the I-V curves of a large set of trial structures.
Spot Profile Analysis Low-Energy Electron Diffraction[edit]
A real surface is not perfectly periodic but has many imperfections in the form of dislocations, atomic steps, terraces and the presence of unwanted adsorbed atoms. This departure from a perfect surface leads to a broadening of the diffraction spots and adds to the background intensity in the LEED pattern.
SPA-LEED[10] is a technique where the intensity of diffraction beams is measured in order to determine the diffraction spot profiles. The spots are sensitive to the irregularities in the surface structure and their examination therefore permits more-detailed conclusions about some surface characteristics. Using SPA-LEED may for instance permit a quantitative determination of the surface roughness, terrace sizes or surface steps.[10]
See also[edit]
References[edit]
1. ^ a b c K. Oura, V.G. Lifshifts, A.A. Saranin, A. V. Zotov, M. Katayama (2003). Surface Science. Springer-Verlag, Berlin Heidelberg New York. pp. 1–45.
2. ^ a b c M.A. Van Hove, W.H. Weinberg, C. M. Chan (1986). Low-Energy Electron Diffraction. Springer-Verlag, Berlin Heidelberg New York. pp. 1–27, 46–89, 92–124, 145–172. doi:10.1002/maco.19870380711. ISBN 3-540-16262-3.
3. ^ Zangwill, A., "Physics at Surfaces", Cambridge University Press (1988), p.33
4. ^ Pendry (1974). Low-Energy Electron Diffraction. Academic Press Inc. (London) LTD. pp. 1–75.
5. ^ C. Kittel (1996). "2". Introduction to Solid State Physics. John Wiley, US.
6. ^ J.B. Pendry (1980). "Reliability Factors for LEED Calculations". J. Phys. C 13: 937. Bibcode:1980JPhC...13..937P. doi:10.1088/0022-3719/13/5/024.
7. ^ E.G. McRae (1967). "Self-Consistent Multiple-Scattering Approach to the Interpretation of Low-Energy Electron Diffraction". Surface Science 8 (1-2): 14–34. Bibcode:1967SurSc...8...14M. doi:10.1016/0039-6028(67)90071-4.
8. ^ P.J. Rous, J.B. Pendry (1989). "Tensor LEED I: A technique for high speed surface structure determination by low energy electron diffraction.". Comp. Phys. Comm. 54 (1): 137–156. Bibcode:1989CoPhC..54..137R. doi:10.1016/0010-4655(89)90039-8.
9. ^ P.J. Rous J.B. Pendry (1989). "The theory of Tensor LEED.". Surf. Sci. 219 (3): 355–372. Bibcode:1989SurSc.219..355R. doi:10.1016/0039-6028(89)90513-X.
10. ^ a b M. Henzler (1982). "Studies of Surface Imperfections". Appl. Surf. Sci. 11/12: 450. Bibcode:1982ApSS...11..450H. doi:10.1016/0378-5963(82)90092-7.
• P. Goodman (General Editor), Fifty Years of Electron Diffraction, D. Reidel Publishing, 1981
• D. Human et al., Low energy electron diffraction using an electronic delay-line detector, Rev. Sci. Inst. 77 023302 (2006)
External links[edit] |
3167fb1d614e9871 | Stationary state
Main articles: quantum state and wavefunction
In quantum mechanics, a stationary state is an eigenvector of the Hamiltonian, implying the probability density associated with the wavefunction is independent of time.[1] This corresponds to a quantum state with a single definite energy (instead of a quantum superposition of different energies). It is also called energy eigenvector, energy eigenstate, energy eigenfunction, or energy eigenket. It is very similar to the concept of atomic orbital and molecular orbital in chemistry, with some slight differences explained below.
A harmonic oscillator in classical mechanics (A-B) and quantum mechanics (C-H). In (A-B), a ball, attached to a spring, oscillates back and forth. (C-H) are six solutions to the Schrödinger Equation for this situation. The horizontal axis is position, the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. (C,D,E,F), but not (G,H), are stationary states, or standing waves. The standing-wave oscillation frequency, times Planck's constant, is the energy of the state.
A stationary state is called stationary because the system remains in the same state as time elapses, in every observable way. For a single-particle Hamiltonian, this means that the particle has a constant probability distribution for its position, its velocity, its spin, etc.[2] (This is true assuming the particle's environment is also static, i.e. the Hamiltonian is unchanging in time.) The wavefunction itself is not stationary: It continually changes its overall complex phase factor, so as to form a standing wave. The oscillation frequency of the standing wave, times Planck's constant, is the energy of the state according to the de Broglie relation.
Stationary states are quantum states that are solutions to the time-independent Schrödinger Equation:
\hat H |\Psi\rangle=E_{\Psi} |\Psi\rangle,
• |\Psi\rangle is a quantum state, which is a stationary state if it satisfies this equation;
• \hat H is the Hamiltonian operator;
• E_{\Psi} is a real number, and corresponds to the energy eigenvalue of the state |\Psi\rangle.
This is an eigenvalue equation: \hat H is a linear operator on a vector space, |\Psi\rangle is an eigenvector of \hat H, and E_{\Psi} is its eigenvalue.
If a stationary state |\Psi\rangle is plugged into the time-dependent Schrödinger Equation, the result is:[3]
i\hbar\frac{\partial}{\partial t} |\Psi\rangle = E_{\Psi}|\Psi\rangle
Assuming that \hat H is time-independent (unchanging in time), this equation holds for any time t. Therefore this is a differential equation describing how |\Psi\rangle varies in time. Its solution is:
|\Psi(t)\rangle = e^{-iE_{\Psi}t/\hbar}|\Psi(0)\rangle
Therefore a stationary state is a standing wave that oscillates with an overall complex phase factor, and its oscillation angular frequency is equal to its energy divided by \hbar.
Stationary state properties[edit]
Three wavefunction solutions to the time-dependent Schrödinger equation for a harmonic oscillator. Left: The real part (blue) and imaginary part (red) of the wavefunction. Right: The probability of finding the particle at a certain position. The top two rows are two stationary states, and the bottom is the superposition state \psi_N \equiv (\psi_0+\psi_1)/\sqrt{2}, which is not a stationary state. The right column illustrates why stationary states are called "stationary".
As shown above, a stationary state is not mathematically constant: |\Psi(t)\rangle = e^{-iE_{\Psi}t/\hbar}|\Psi(0)\rangle continually changes its complex phase.
However, all observable properties of the state are in fact constant. For example, if |\Psi(t)\rangle represents a simple one-dimensional single-particle wavefunction \Psi(x,t), the probability that the particle is at location x is:
|\Psi(x,t)|^2 = \left| e^{-iE_{\Psi}t/\hbar}\Psi(x,0)\right|^2 = \left| e^{-iE_{\Psi}t/\hbar}\right|^2 \left| \Psi(x,0)\right|^2 = \left|\Psi(x,0)\right|^2
which is independent of the time t.
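A minimal numerical sketch of this invariance, in assumed units with \hbar = 1: the global phase factor drops out of the probability density at every time.

```python
import numpy as np

hbar, E_psi = 1.0, 2.0                  # assumed units with hbar = 1
x = np.linspace(-5, 5, 1001)
psi_0 = np.exp(-x**2 / 2)               # an arbitrary initial wavefunction

for t in [0.0, 1.0, 10.0]:
    psi_t = np.exp(-1j * E_psi * t / hbar) * psi_0
    # the global phase has modulus 1, so |psi(x,t)|^2 never changes
    print(t, np.allclose(np.abs(psi_t)**2, np.abs(psi_0)**2))
```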
The Heisenberg picture is an alternative mathematical formulation of quantum mechanics where stationary states are truly mathematically constant in time.
As mentioned above, these equations assume that the Hamiltonian is time-independent. This means simply that stationary states are only stationary when the rest of the system is fixed and stationary as well. For example, a 1s electron in a hydrogen atom is in a stationary state, but if the hydrogen atom reacts with another atom, then the electron will of course be disturbed.
Spontaneous decay[edit]
Spontaneous decay complicates the question of stationary states. For example, according to simple (nonrelativistic) quantum mechanics, the hydrogen atom has many stationary states: 1s, 2s, 2p, and so on, are all stationary states. But in reality, only the ground state 1s is truly "stationary": An electron in a higher energy level will spontaneously emit one or more photons to decay into the ground state.[4] This seems to contradict the idea that stationary states should have unchanging properties.
The explanation is that the Hamiltonian used in nonrelativistic quantum mechanics is only an approximation to the Hamiltonian from quantum field theory. The higher-energy electron states (2s, 2p, 3s, etc.) are stationary states according to the approximate Hamiltonian, but not stationary according to the true Hamiltonian, because of vacuum fluctuations. On the other hand, the 1s state is truly a stationary state, according to both the approximate and the true Hamiltonian.
Comparison to "orbital" in chemistry[edit]
An orbital is a stationary state (or approximation thereof) of a one-electron atom or molecule; more specifically, an atomic orbital for an electron in an atom, or a molecular orbital for an electron in a molecule.[5]
For a molecule that contains only a single electron (e.g. atomic hydrogen or H2+), an orbital is exactly the same as a total stationary state of the molecule. However, for a many-electron molecule, an orbital is completely different from a total stationary state, which is a many-particle state requiring a more complicated description (such as a Slater determinant). In particular, in a many-electron molecule, an orbital is not the total stationary state of the molecule, but rather the stationary state of a single electron within the molecule. The concept of an orbital is only meaningful under the approximation that the electron-electron repulsion terms in the Hamiltonian are ignored as a simplifying assumption, so that the total eigenvector of a many-electron molecule can be decomposed into separate contributions from individual electron stationary states (orbitals), each obtained under this one-electron approximation. (Luckily, chemists and physicists can often, but not always, use this "single-electron approximation".) In this sense, in a many-electron system, an orbital can be considered as the stationary state of an individual electron in the system.
In chemistry, calculations of molecular orbitals typically also assume the Born–Oppenheimer approximation.
See also[edit]
2. ^ Cohen-Tannoudji, Claude, Bernard Diu, and Franck Laloë. Quantum Mechanics: Volume One. Hermann, 1977. p. 32.
Further reading[edit] |
70b7acc9f43c4b2a | quantum mechanics
quantum mechanics: see quantum theory.
In quantum mechanics, the Hamiltonian H is the observable corresponding to the total energy of the system. As with all observables, the spectrum of the Hamiltonian is the set of possible outcomes when one measures the total energy of a system. Like any other self-adjoint operator, the spectrum of the Hamiltonian can be decomposed, via its spectral measures, into pure point, absolutely continuous, and singular parts. The pure point spectrum can be associated to eigenvectors, which in turn are the bound states of the system. The absolutely continuous spectrum corresponds to the free states. The singular spectrum, interestingly enough, comprises physically impossible outcomes. For example, consider the finite potential well, which admits bound states with discrete negative energies and free states with continuous positive energies.
Schrödinger equation
The Hamiltonian generates the time evolution of quantum states. If |\psi(t)\rangle is the state of the system at time t, then
H|\psi(t)\rangle = i\hbar\frac{\partial}{\partial t}|\psi(t)\rangle,
where \hbar is the reduced Planck constant. This equation is known as the Schrödinger equation. (It takes the same form as the Hamilton-Jacobi equation, which is one of the reasons H is also called the Hamiltonian.) Given the state at some initial time (t = 0), we can integrate it to obtain the state at any subsequent time. In particular, if H is independent of time, then
|\psi(t)\rangle = \exp\left(-\frac{iHt}{\hbar}\right)|\psi(0)\rangle.
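For a finite-dimensional Hamiltonian this solution can be evaluated directly as a matrix exponential; a minimal sketch with an assumed toy two-level H, in natural units:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0                                    # natural units (assumed)
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])                    # a toy 2-level Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)

t = 0.7
U = expm(-1j * H * t / hbar)                  # propagator exp(-iHt/hbar)
psi_t = U @ psi0
print(np.vdot(psi_t, psi_t).real)             # norm is preserved: U is unitary
```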
Note: In introductory physics literature, the following is often taken as an assumption:
The eigenkets (eigenvectors) of H, denoted |a\rangle (using Dirac bra-ket notation), provide an orthonormal basis for the Hilbert space. The spectrum of allowed energy levels of the system is given by the set of eigenvalues, denoted {E_a}, solving the equation:
H|a\rangle = E_a|a\rangle.
Since H is a Hermitian operator, the energy is always a real number.
From a mathematically rigorous point of view, care must be taken with the above assumption. Operators on infinite-dimensional Hilbert spaces need not have eigenvalues (the set of eigenvalues does not necessarily coincide with the spectrum of an operator). However, all routine quantum mechanical calculations can be done using the physical formulation.
Similarly, the exponential operator on the right hand side of the Schrödinger equation is sometimes defined by the power series. One might notice that taking polynomials of unbounded and not everywhere defined operators may not make mathematical sense, much less power series. Rigorously, to take functions of unbounded operators, a functional calculus is required. In the case of the exponential function, the continuous, or just the holomorphic functional calculus suffices. We note again, however, that for common calculations the physicist's formulation is quite sufficient.
By the *-homomorphism property of the functional calculus, the operator
U = \exp\left(-\frac{iHt}{\hbar}\right)
is a unitary operator. It is the time evolution operator, or propagator, of a closed quantum system. If the Hamiltonian is time-independent, {U(t)} form a one-parameter unitary group (more than a semigroup); this gives rise to the physical principle of detailed balance.
Energy eigenket degeneracy, symmetry, and conservation laws
In many systems, two or more energy eigenstates have the same energy. A simple example of this is a free particle, whose energy eigenstates have wavefunctions that are propagating plane waves. The energy of each of these plane waves is inversely proportional to the square of its wavelength. A wave propagating in the x direction is a different state from one propagating in the y direction, but if they have the same wavelength, then their energies will be the same. When this happens, the states are said to be degenerate.
It turns out that degeneracy occurs whenever a nontrivial unitary operator U commutes with the Hamiltonian. To see this, suppose that |a\rangle is an energy eigenket. Then U|a\rangle is an energy eigenket with the same eigenvalue, since
UH|a\rangle = U E_a|a\rangle = E_a (U|a\rangle) = H\,(U|a\rangle).
Since U is nontrivial, at least one pair of |a\rangle and U|a\rangle must represent distinct states. Therefore, H has at least one pair of degenerate energy eigenkets. In the case of the free particle, the unitary operator which produces the symmetry is the rotation operator, which rotates the wavefunctions by some angle while otherwise preserving their shape.
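A small numerical sketch of this argument, with an assumed toy H whose eigenvalue 1 is twofold degenerate and a U that rotates within that eigenspace:

```python
import numpy as np

# H has a twofold degenerate eigenvalue 1; U rotates within that subspace
H = np.diag([1.0, 1.0, 2.0])
th = 0.3
U = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])

print(np.allclose(U @ H, H @ U))      # U commutes with H
a = np.array([1.0, 0.0, 0.0])         # eigenket with E_a = 1
Ua = U @ a                            # a distinct state...
print(np.allclose(H @ Ua, 1.0 * Ua))  # ...with the same eigenvalue
```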
The existence of a symmetry operator implies the existence of a conserved observable. Let G be the Hermitian generator of U:
U = I - i\epsilon G + O(\epsilon^2)
It is straightforward to show that if U commutes with H, then so does G:
[H, G] = 0
\frac{\partial}{\partial t}\langle\psi(t)|G|\psi(t)\rangle = \frac{1}{i\hbar}\langle\psi(t)|[G,H]|\psi(t)\rangle = 0
In obtaining this result, we have used the Schrödinger equation, as well as its dual,
\langle\psi(t)|H = -i\hbar\frac{\partial}{\partial t}\langle\psi(t)|.
Thus, the expected value of the observable G is conserved for any state of the system. In the case of the free particle, the conserved quantity is the angular momentum.
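A minimal sketch checking this conservation law numerically, reusing the toy Hamiltonian from the previous sketch together with an assumed Hermitian G satisfying [H, G] = 0:

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.diag([1.0, 1.0, 2.0])                  # same toy Hamiltonian as above
G = np.array([[0.0, -1j, 0.0],
              [1j,  0.0, 0.0],
              [0.0, 0.0, 0.0]])               # Hermitian generator with [H, G] = 0

psi0 = np.array([0.6, 0.8j, 0.0])
for t in [0.0, 1.0, 5.0]:
    psi = expm(-1j * H * t / hbar) @ psi0
    print(t, np.vdot(psi, G @ psi).real)      # <G> stays constant in time
```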
Hamilton's equations
Hamilton's equations in classical Hamiltonian mechanics have a direct analogy in quantum mechanics. Suppose we have a set of basis states \{|n\rangle\}, which need not necessarily be eigenstates of the energy. For simplicity, we assume that they are discrete, and that they are orthonormal, i.e.,
\langle n'|n\rangle = \delta_{nn'}.
Note that these basis states are assumed to be independent of time. We will assume that the Hamiltonian is also independent of time.
The instantaneous state of the system at time t, |\psi(t)\rangle, can be expanded in terms of these basis states:
|\psi(t)\rangle = \sum_{n} a_n(t)\,|n\rangle, \qquad a_n(t) = \langle n|\psi(t)\rangle.
The coefficients a_n(t) are complex variables. We can treat them as coordinates which specify the state of the system, like the position and momentum coordinates which specify a classical system. Like classical coordinates, they are generally not constant in time, and their time dependence gives rise to the time dependence of the system as a whole.
The expectation value of the Hamiltonian of this state, which is also the mean energy, is
\langle H(t)\rangle \stackrel{\mathrm{def}}{=} \langle\psi(t)|H|\psi(t)\rangle = \sum_{nn'} a_{n'}^* a_n \langle n'|H|n\rangle,
where the last step was obtained by expanding |\psi(t)\rangle in terms of the basis states.
Each of the a_n(t)'s actually corresponds to two independent degrees of freedom, since the variable has a real part and an imaginary part. We now perform the following trick: instead of using the real and imaginary parts as the independent variables, we use a_n(t) and its complex conjugate a_n^*(t). With this choice of independent variables, we can calculate the partial derivative
\frac{\partial\langle H\rangle}{\partial a_{n'}^{*}} = \sum_{n} a_n \langle n'|H|n\rangle = \langle n'|H|\psi\rangle
By applying Schrödinger's equation and using the orthonormality of the basis states, this further reduces to
= i\hbar\,\frac{\partial a_{n'}}{\partial t}
Similarly, one can show that
\frac{\partial\langle H\rangle}{\partial a_n} = -i\hbar\,\frac{\partial a_{n}^{*}}{\partial t}
If we define "conjugate momentum" variables \pi_n by
\pi_{n}(t) = i\hbar\,a_n^*(t)
then the above equations become
\frac{\partial\langle H\rangle}{\partial \pi_{n}} = \frac{\partial a_{n}}{\partial t}, \qquad \frac{\partial\langle H\rangle}{\partial a_n} = -\frac{\partial \pi_{n}}{\partial t}
which is precisely the form of Hamilton's equations, with the a_n's as the generalized coordinates, the \pi_n's as the conjugate momenta, and \langle H\rangle taking the place of the classical Hamiltonian.
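A minimal numerical check of this correspondence, with an assumed two-state basis and toy Hamiltonian: the Hamilton-equation form \partial\langle H\rangle/\partial\pi_n = (Ha)_n/(i\hbar) should reproduce da_n/dt computed from the Schrödinger evolution.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.3],
              [0.3, 2.0]])                    # assumed toy Hamiltonian
a0 = np.array([1.0, 1.0j]) / np.sqrt(2)      # coefficients a_n(0) in some basis

t, dt = 0.5, 1e-6
a = expm(-1j * H * t / hbar) @ a0
a_dt = expm(-1j * H * (t + dt) / hbar) @ a0
dadt_numeric = (a_dt - a) / dt               # finite-difference da_n/dt

# d<H>/d(pi_n) = (H a)_n / (i*hbar) should equal da_n/dt
dadt_hamilton = (H @ a) / (1j * hbar)
print(np.allclose(dadt_numeric, dadt_hamilton, atol=1e-4))
```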
See also
|
8d5f296e35ce4cb2 | AskDefine | Define eigenvectors
User Contributed Dictionary
1. Plural of eigenvector
Extensive Definition
In mathematics, a vector may be thought of as an arrow. It has a length, called its magnitude, and it points in some particular direction. A linear transformation may be considered to operate on a vector to change it, usually changing both its magnitude and its direction. An eigenvector of a given linear transformation is a non-zero vector which is multiplied by a constant, called the eigenvalue, as a result of that transformation. The direction of the eigenvector is either unchanged by that transformation (for positive eigenvalues) or reversed (for negative eigenvalues).
For example, an eigenvalue of +2 means that the eigenvector is doubled in length and points in the same direction. An eigenvalue of +1 means that the eigenvector is unchanged, while an eigenvalue of −1 means that the eigenvector is reversed in direction. An eigenspace of a given transformation is the span of the eigenvectors of that transformation with the same eigenvalue, together with the zero vector (which has no direction). An eigenspace is an example of a subspace of a vector space.
In linear algebra, every linear transformation between finite-dimensional vector spaces can be given by a matrix, which is a rectangular array of numbers arranged in rows and columns. Standard methods for finding eigenvalues, eigenvectors, and eigenspaces of a given matrix are discussed below.
These concepts play a major role in several branches of both pure and applied mathematics — appearing prominently in linear algebra, functional analysis, and to a lesser extent in nonlinear mathematics.
Many kinds of mathematical objects can be treated as vectors: functions, harmonic modes, quantum states, and frequencies, for example. In these cases, the concept of direction loses its ordinary meaning, and is given an abstract definition. Even so, if this abstract direction is unchanged by a given linear transformation, the prefix "eigen" is used, as in eigenfunction, eigenmode, eigenstate, and eigenfrequency.
Euler had also studied the rotational motion of a rigid body and discovered the importance of the principal axes. As Lagrange realized, the principal axes are the eigenvectors of the inertia matrix. In the early 19th century, Cauchy saw how their work could be used to classify the quadric surfaces, and generalized it to arbitrary dimensions. Cauchy also coined the term racine caractéristique (characteristic root) for what is now called eigenvalue; his term survives in characteristic equation.
Fourier used the work of Laplace and Lagrange to solve the heat equation by separation of variables in his famous 1822 book Théorie analytique de la chaleur. Sturm developed Fourier's ideas further and he brought them to the attention of Cauchy, who combined them with his own ideas and arrived at the fact that symmetric matrices have real eigenvalues. Schwarz studied the first eigenvalue of Laplace's equation on general domains towards the end of the 19th century, while Poincaré studied Poisson's equation a few years later.
At the start of the 20th century, Hilbert studied the eigenvalues of integral operators by viewing the operators as infinite matrices. He was the first to use the German word eigen to denote eigenvalues and eigenvectors in 1904, though he may have been following a related usage by Helmholtz. "Eigen" can be translated as "own", "peculiar to", "characteristic", or "individual" — emphasizing how important eigenvalues are to defining the unique nature of a specific transformation. For some time, the standard term in English was "proper value", but the more distinctive term "eigenvalue" is standard today.
Definitions: the eigenvalue equation
see also Eigenplane
Linear transformations of a vector space, such as rotation, reflection, stretching, compression, shear or any combination of these, may be visualized by the effect they produce on vectors. In other words, they are vector functions. More formally, in a vector space L a vector function A is defined if for each vector x of L there corresponds a unique vector y = A(x) of L. For the sake of brevity, the parentheses around the vector on which the transformation is acting are often omitted. A vector function A is linear if it has the following two properties: A(x + y) = A(x) + A(y) and A(αx) = αA(x), where x and y are any two vectors of the vector space L and α is any scalar. Such a function is variously called a linear transformation, linear operator, or linear endomorphism on the space L. The key equation in this definition is the eigenvalue equation, Ax = λx. Most vectors x will not satisfy such an equation. A typical vector x changes direction when acted on by A, so that Ax is not a multiple of x. This means that only certain special vectors x are eigenvectors, and only certain special numbers λ are eigenvalues. Of course, if A is a multiple of the identity matrix, then no vector changes direction, and all non-zero vectors are eigenvectors.
The requirement that the eigenvector be non-zero is imposed because the equation A0 = λ0 holds for every A and every λ. Since the equation is always trivially true, it is not an interesting case. In contrast, an eigenvalue can be zero in a nontrivial way. Each eigenvector is associated with a specific eigenvalue. One eigenvalue can be associated with several or even with an infinite number of eigenvectors.
Geometrically (Fig. 2), the eigenvalue equation means that under the transformation A eigenvectors experience only changes in magnitude and sign: the direction of Ax is the same as that of x. The eigenvalue λ is simply the amount of "stretch" or "shrink" to which a vector is subjected when transformed by A. If λ = 1, the vector remains unchanged (unaffected by the transformation). A transformation I under which a vector x remains unchanged, Ix = x, is defined as the identity transformation. If λ = −1, the vector flips to the opposite direction (rotates by 180°); this is defined as reflection.
If x is an eigenvector of the linear transformation A with eigenvalue λ, then any scalar multiple αx is also an eigenvector of A with the same eigenvalue. Similarly, if more than one eigenvector shares the same eigenvalue λ, any linear combination of these eigenvectors will itself be an eigenvector with eigenvalue λ. Together with the zero vector, the eigenvectors of A with the same eigenvalue form a linear subspace of the vector space called an eigenspace.
The eigenvectors corresponding to different eigenvalues are linearly independent, meaning, in particular, that in an n-dimensional space the linear transformation A cannot have more than n eigenvectors with different eigenvalues.
If a basis is defined in the vector space, all vectors can be expressed in terms of components. For finite dimensional vector spaces with dimension n, linear transformations can be represented with n × n square matrices. Conversely, every such square matrix corresponds to a linear transformation for a given basis. Thus, in the two-dimensional vector space R2 fitted with the standard basis, the eigenvector equation for a linear transformation A can be written in the following matrix representation:
\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix},
where the juxtaposition of matrices means matrix multiplication.
Characteristic equation
When the transformation is represented by a square matrix, the eigenvalue equation can be expressed as
A\mathbf{x} - \lambda I\,\mathbf{x} = \mathbf{0}.
It is known from linear algebra that this equation has a non-zero solution for \mathbf{x} if, and only if, the determinant vanishes:
\det(A - \lambda I) = 0.
This equation is defined as the characteristic equation (less often, secular equation) of A, and the left-hand side is defined as the characteristic polynomial. When expanded, this gives a polynomial equation for \lambda. The eigenvector x or its components are not present in the characteristic equation.
The matrix
\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}
defines a linear transformation of the real plane. The eigenvalues of this transformation are given by the characteristic equation
\det\begin{pmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{pmatrix} = (2-\lambda)^2 - 1 = 0.
The roots of this equation (i.e. the values of \lambda for which the equation holds) are \lambda=1 and \lambda=3. Having found the eigenvalues, it is possible to find the eigenvectors. Considering first the eigenvalue \lambda=3, we have
\begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = 3 \begin{pmatrix} x \\ y \end{pmatrix}.
Both rows of this matrix equation reduce to the single linear equation x=y. To find an eigenvector, we are free to choose any value for x, so picking x=1 and setting y=x, we find the eigenvector to be \begin{pmatrix} 1 \\ 1 \end{pmatrix}.
We can check this is an eigenvector by verifying that \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \end{pmatrix} = \begin{pmatrix} 3 \\ 3 \end{pmatrix}. For the eigenvalue \lambda=1, a similar process leads to the equation x=-y, and hence the eigenvector is given by \begin{pmatrix} 1 \\ -1 \end{pmatrix}.
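The worked example above is easy to confirm numerically; a minimal sketch using numpy:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, v = np.linalg.eig(A)
print(w)   # eigenvalues: 3 and 1 (order may vary)
print(v)   # columns are eigenvectors, proportional to (1,1) and (1,-1)
```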
The complexity of the problem of finding roots/eigenvalues of the characteristic polynomial increases rapidly with increasing degree of the polynomial (the dimension of the vector space). There are exact solutions for dimensions below 5, but for higher dimensions there are no exact solutions in general and one has to resort to numerical methods to find them approximately. For large symmetric sparse matrices, the Lanczos algorithm is used to compute eigenvalues and eigenvectors, as in the sketch below.
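SciPy exposes a Lanczos-type solver for this sparse symmetric case; a minimal sketch on an assumed test matrix (the 1D discrete Laplacian):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# An assumed large sparse symmetric matrix: the 1D discrete Laplacian
n = 10_000
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# eigsh uses a Lanczos-type iteration to find a few extreme eigenpairs
vals, vecs = eigsh(lap, k=4, which="LA")   # the four largest eigenvalues
print(vals)                                # all close to (but below) 4
```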
Existence and Multiplicity of Eigenvalues
For transformations on real vector spaces, the coefficients of the characteristic polynomial are all real. However, the roots are not necessarily real; they may well be complex numbers, or a mixture of real and complex numbers. For example, a matrix representing a planar rotation of 45 degrees will not leave any non-zero vector pointing in the same direction. Over a complex vector space, the fundamental theorem of algebra guarantees that the characteristic polynomial has at least one root, and thus the linear transformation has at least one eigenvalue.
As well as distinct roots, the characteristic equation may also have repeated roots. However, having repeated roots does not imply there are multiple distinct (i.e. linearly independent) eigenvectors with that eigenvalue. The algebraic multiplicity of an eigenvalue is defined as the multiplicity of the corresponding root of the characteristic polynomial. The geometric multiplicity of an eigenvalue is defined as the dimension of the associated eigenspace, i.e. number of linearly independent eigenvectors with that eigenvalue.
Over a complex space, the sum of the algebraic multiplicities will equal the dimension of the vector space, but the sum of the geometric multiplicities may be smaller. In a sense, then, it is possible that there may not be sufficient eigenvectors to span the entire space. This is intimately related to the question of whether a given matrix may be diagonalized by a suitable choice of coordinates.
Example: Shear
Shear in the plane is a transformation in which all points along a given line remain fixed while other points are shifted parallel to that line by a distance proportional to their perpendicular distance from the line. Shearing a plane figure does not change its area. Shear can be horizontal (along the X axis) or vertical (along the Y axis). In horizontal shear (see figure), a point P of the plane moves parallel to the X axis to the place P' so that its coordinate y does not change while the x coordinate increments to become x' = x + k y, where k is called the shear factor.
The matrix of a horizontal shear transformation is \begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}. The characteristic equation is \lambda^2 - 2\lambda + 1 = (1 - \lambda)^2 = 0, which has a single, repeated root \lambda = 1. Therefore, the eigenvalue \lambda = 1 has algebraic multiplicity 2. The eigenvector(s) are found as solutions of
\begin{pmatrix} 1-1 & k \\ 0 & 1-1 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 & k \\ 0 & 0 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} = \mathbf{0}, \quad \text{i.e. } ky = 0.
The last equation is equivalent to y = 0, which is a straight line along the x axis. This line represents the one-dimensional eigenspace. In the case of shear, the algebraic multiplicity of the eigenvalue (2) is greater than its geometric multiplicity (1, the dimension of the eigenspace). The eigenvector is a vector along the x axis. The case of vertical shear with transformation matrix \begin{pmatrix} 1 & 0 \\ k & 1 \end{pmatrix} is dealt with in a similar way; the eigenvector in vertical shear is along the y axis. Repeatedly applying the shear transformation turns the direction of any vector in the plane closer and closer to the direction of the eigenvector.
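The gap between the two multiplicities can also be seen numerically. In this sketch the shear factor k = 2 is an assumed illustrative value; the geometric multiplicity is computed as the dimension of the null space of A − I:

```python
# Horizontal shear: the eigenvalue 1 has algebraic multiplicity 2
# but geometric multiplicity only 1.
import numpy as np

k = 2.0
A = np.array([[1.0, k],
              [0.0, 1.0]])

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                      # [1. 1.]: a repeated root

# Geometric multiplicity = 2 - rank(A - I), the dimension of the eigenspace.
geometric_multiplicity = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(geometric_multiplicity)           # 1: only the x axis is fixed
```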
Example: Uniform scaling and reflection
As a one-dimensional vector space, consider a rubber string tied to an unmoving support at one end, such as that on a child's sling. Pulling the string away from the point of attachment stretches it and elongates it by some scaling factor λ, which is a real number. Each vector on the string is stretched equally, with the same scaling factor λ, and although elongated it preserves its original direction. For a two-dimensional vector space, consider a rubber sheet stretched equally in all directions, such as a small area of the surface of an inflating balloon (Fig. 3). All vectors originating at the fixed point on the balloon surface (the origin) are stretched equally with the same scaling factor λ. This transformation in two dimensions is described by the 2×2 square matrix:
A \mathbf{x} = \begin{pmatrix} \lambda & 0 \\ 0 & \lambda \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} \lambda \cdot x + 0 \cdot y \\ 0 \cdot x + \lambda \cdot y \end{pmatrix} = \lambda \begin{pmatrix} x \\ y \end{pmatrix} = \lambda \mathbf{x}.
Expressed in words, the transformation is equivalent to multiplying the length of any vector by λ while preserving its original direction. Since the vector taken was arbitrary, every non-zero vector in the vector space is an eigenvector. Whether the transformation is stretching (elongation, extension, inflation) or shrinking (compression, deflation) depends on the scaling factor: if λ > 1, it is stretching; if λ < 1, it is shrinking. Negative values of λ correspond to a reversal of direction, followed by a stretch or a shrink, depending on the absolute value of λ.
Example: Unequal scaling
For a slightly more complicated example, consider a sheet that is stretched unequally in two perpendicular directions along the coordinate axes, or, similarly, stretched in one direction and shrunk in the other direction. In this case, there are two different scaling factors: k1 for the scaling in direction x, and k2 for the scaling in direction y. The transformation matrix is \begin{pmatrix} k_1 & 0 \\ 0 & k_2 \end{pmatrix}, and the characteristic equation is (k_1-\lambda)(k_2-\lambda) = 0. The eigenvalues, obtained as roots of this equation, are λ1 = k1 and λ2 = k2, which means, as expected, that the two eigenvalues are the scaling factors in the two directions. Plugging λ = k1 back into the eigenvalue equation gives one of the eigenvectors:
\begin{pmatrix} 0 & 0 \\ 0 & k_2 - k_1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}, \quad \text{or, more simply, } y=0.
Thus, the eigenspace is the x-axis. Similarly, substituting \lambda=k_2 shows that the corresponding eigenspace is the y-axis. In this case, both eigenvalues have algebraic and geometric multiplicities equal to 1. If a given eigenvalue is greater than 1, the vectors are stretched in the direction of the corresponding eigenvector; if less than 1, they are shrunken in that direction. Negative eigenvalues correspond to reflections followed by a stretch or shrink. In general, matrices that are diagonalizable over the real numbers represent scalings and reflections: the eigenvalues represent the scaling factors (and appear as the diagonal terms), and the eigenvectors are the directions of the scalings.
The figure shows the case where k_1>1 and 1>k_2>0. The rubber sheet is stretched along the x axis and simultaneously shrunk along the y axis. After this stretching/shrinking transformation is applied repeatedly, almost any vector on the surface of the rubber sheet will be oriented closer and closer to the direction of the x axis (the direction of stretching). The exceptions are vectors along the y-axis, which gradually shrink away to nothing.
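This "repeated stretching" observation is exactly the idea behind power iteration. The sketch below (the values k1 = 2, k2 = 0.5 and the starting vector are illustrative assumptions) shows the direction of an arbitrary vector converging to the dominant eigenvector:

```python
# Repeatedly applying diag(k1, k2) with k1 > 1 > k2 > 0 turns almost
# any starting vector toward the x axis (the dominant eigenvector).
import numpy as np

A = np.diag([2.0, 0.5])
v = np.array([1.0, 1.0])

for _ in range(20):
    v = A @ v
    v = v / np.linalg.norm(v)   # renormalize to track only the direction

print(v)   # approaches (1, 0), the eigenvector of the dominant eigenvalue
```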
Example: Rotation
Main article: Rotation matrix
A rotation in a plane is a transformation that describes motion of a vector, plane, coordinates, etc., around a fixed point. Clearly, for rotations other than through 0° and 180°, every vector in the real plane will have its direction changed, and thus there cannot be any eigenvectors. But this is not necessarily the case if we consider the same matrix over a complex vector space.
A counterclockwise rotation in the horizontal plane about the origin at an angle φ is represented by the matrix
\mathbf{R} = \begin{pmatrix} \cos \varphi & -\sin \varphi \\ \sin \varphi & \cos \varphi \end{pmatrix}.
The characteristic equation of R is \lambda^2 - 2\lambda \cos \varphi + 1 = 0. This quadratic equation has discriminant D = 4(\cos^2 \varphi - 1) = -4 \sin^2 \varphi, which is a negative number whenever φ is not equal to a multiple of 180°. A rotation of 0°, 360°, … is just the identity transformation (a uniform scaling by +1), while a rotation of 180°, 540°, … is a reflection (uniform scaling by -1). Otherwise, as expected, there are no real eigenvalues or eigenvectors for rotation in the plane.
Rotation matrices on complex vector spaces
The characteristic equation has two complex roots λ1 and λ2. If we choose to think of the rotation matrix as a linear operator on the complex two-dimensional space, we can consider these complex eigenvalues. The roots are complex conjugates of each other: \lambda_{1,2} = \cos \varphi \pm i \sin \varphi = e^{\pm i\varphi}, each with an algebraic multiplicity equal to 1, where i is the imaginary unit.
The first eigenvector is found by substituting the first eigenvalue, λ1, back in the eigenvalue equation:
\begin{pmatrix} \cos \varphi - \lambda_1 & -\sin \varphi \\ \sin \varphi & \cos \varphi - \lambda_1 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} -i \sin \varphi & -\sin \varphi \\ \sin \varphi & -i \sin \varphi \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.
The last equation is equivalent to the single equation x = iy, and again we are free to set x=1 to give the eigenvector (1, -i)^T.
Similarly, substituting in the second eigenvalue gives the single equation x = -iy, and so the eigenvector is given by (1, i)^T.
Although not diagonalizable over the reals, the rotation matrix is diagonalizable over the complex numbers, and again the eigenvalues appear on the diagonal. Thus rotation matrices acting on complex spaces can be thought of as scaling matrices, with complex scaling factors.
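NumPy returns exactly these complex eigenvalues when asked to diagonalize a real rotation matrix; the angle φ = 45° below is an assumed example:

```python
# A real rotation matrix has complex eigenvalues e^{+i phi}, e^{-i phi}.
import numpy as np

phi = np.pi / 4
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

eigenvalues, eigenvectors = np.linalg.eig(R)
print(eigenvalues)            # cos(phi) +/- i sin(phi)
print(np.abs(eigenvalues))    # both have modulus 1, as expected for a rotation
assert np.allclose(sorted(eigenvalues, key=np.angle),
                   [np.exp(-1j * phi), np.exp(1j * phi)])
```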
Infinite-dimensional spaces and Spectral Theory
Main article: Spectral theorem
If the vector space is an infinite-dimensional Banach space, the notion of eigenvalues can be generalized to the concept of spectrum. The spectrum is the set of scalars λ for which (T − λ)^{-1} is not defined; that is, such that T − λ has no bounded inverse.
Clearly if λ is an eigenvalue of T, λ is in the spectrum of T. In general, the converse is not true. There are operators on Hilbert or Banach spaces which have no eigenvectors at all. This can be seen in the following example. The bilateral shift on the Hilbert space ℓ²(Z) (that is, the space of all sequences of scalars …, a_{-1}, a_0, a_1, a_2, … such that
\cdots + |a_{-1}|^2 + |a_0|^2 + |a_1|^2 + |a_2|^2 + \cdots
converges) has no eigenvalue but does have spectral values.
In infinite-dimensional spaces, the spectrum of a bounded operator is always nonempty. This is also true for an unbounded self-adjoint operator. Via its spectral measures, the spectrum of any self-adjoint operator, bounded or otherwise, can be decomposed into absolutely continuous, pure point, and singular parts. (See Decomposition of spectrum.)
The hydrogen atom is an example where both types of spectra appear. The eigenfunctions of the hydrogen atom Hamiltonian are called eigenstates and are grouped into two categories. The bound states of the hydrogen atom correspond to the discrete part of the spectrum (they have a discrete set of eigenvalues, which can be computed by the Rydberg formula) while the ionization processes are described by the continuous part (the energy of the collision/ionization is not quantized).
A common example of such maps on infinite-dimensional spaces is the action of differential operators on function spaces. As an example, on the space of infinitely differentiable functions, differentiation defines a linear operator, since
\displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt},
where f(t) and g(t) are differentiable functions, and a and b are constants.
The eigenvalue equation for a linear differential operator is then a set of one or more differential equations. The eigenvectors are commonly called eigenfunctions. The simplest case is the eigenvalue equation for differentiation of a real-valued function of a single real variable. In this case, the eigenvalue equation becomes the linear differential equation
\displaystyle\frac{d}{dx} f(x) = \lambda f(x).
Here λ is the eigenvalue associated with the function, f(x). This eigenvalue equation has a solution for all values of λ. If λ is zero, the solution is
f(x) = A,
where A is any constant; if λ is non-zero, the solution is the exponential function
f(x) = Ae^{\lambda x}.
If we expand our horizons to complex-valued functions, the value of λ can be any complex number. The spectrum of d/dx is therefore the whole complex plane. This is an example of a continuous spectrum.
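The eigenfunction property is easy to confirm symbolically. The following sketch uses SymPy (an illustrative tool choice) to check that the exponential solves the eigenvalue equation for d/dx:

```python
# Symbolic check that A*exp(lambda*x) is an eigenfunction of d/dx
# with eigenvalue lambda.
import sympy as sp

x, lam, A = sp.symbols('x lamda A')   # 'lamda' is SymPy's spelling for lambda
f = A * sp.exp(lam * x)

assert sp.simplify(sp.diff(f, x) - lam * f) == 0
```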
Example: waves on a string
The displacement, h(x,t), of a stressed rope fixed at both ends, like the vibrating strings of a string instrument, satisfies the wave equation
\frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},
which is a linear partial differential equation, where c is the constant wave speed. The standard method of solving such an equation is separation of variables. If we assume that h can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:
X'' = -\frac{\omega^2}{c^2}X \quad\text{and}\quad T'' = -\omega^2 T.
Each of these is an eigenvalue equation (the unfamiliar form of the eigenvalue is chosen merely for convenience). For any values of the eigenvalues, the eigenfunctions are given by
X = \sin\left(\frac{\omega x}{c} + \phi\right) \quad\text{and}\quad T = \sin(\omega t + \psi).
If we impose boundary conditions, for example that the ends of the string are fixed, with X(x)=0 at x=0 and x=L, we can constrain the eigenvalues. For those boundary conditions, we find
\sin(\phi) = 0, and so the phase angle \phi=0
\sin\left(\frac{\omega L}{c}\right) = 0,
and so the constant \omega is constrained to take one of the values \omega_n = \frac{n\pi c}{L}, where n is any integer. Thus the clamped string supports a family of standing waves of the form
h(x,t) = \sin(n\pi x/L)\sin(\omega_n t).
From the point of view of our musical instrument, \omega_n is the frequency of the nth harmonic.
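The same eigenvalue structure appears when the differential operator is discretized. In this sketch (grid size, L and c are illustrative assumptions) the second-difference matrix plays the role of -d²/dx² and its smallest eigenvalues approximate the squared string frequencies (nπc/L)²:

```python
# Discretized clamped string: eigenvalues of the tridiagonal
# second-difference matrix approximate (n*pi*c/L)^2.
import numpy as np

L, c, N = 1.0, 1.0, 200          # string length, wave speed, interior points
dx = L / (N + 1)

# Matrix for -d^2/dx^2 with X = 0 at both ends (Dirichlet conditions).
D2 = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / dx**2

omega_squared = np.sort(np.linalg.eigvalsh(D2)) * c**2
for n in range(1, 4):
    exact = (n * np.pi * c / L) ** 2
    print(n, omega_squared[n - 1], exact)   # numeric vs exact, close for small n
```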
The spectral theorem for matrices can be stated as follows. Let A be a square n × n matrix. Let q1 ... qk be an eigenvector basis, i.e. an indexed set of k linearly independent eigenvectors, where k is the dimension of the space spanned by the eigenvectors of A. If k = n, then A can be written as A = Q \Lambda Q^{-1},
where Q is the square n × n matrix whose i-th column is the basis eigenvector qi of A and Λ is the diagonal matrix whose diagonal elements are the corresponding eigenvalues, i.e. Λii = λi.
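This decomposition can be verified numerically; the 3×3 symmetric matrix below is an arbitrary illustrative example:

```python
# Check the eigendecomposition A = Q * Lambda * Q^{-1}.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

eigenvalues, Q = np.linalg.eig(A)    # columns of Q are the eigenvectors
Lambda = np.diag(eigenvalues)

assert np.allclose(A, Q @ Lambda @ np.linalg.inv(Q))
```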
Schrödinger equation
H\psi_E = E\psi_E,
where \psi_E is an eigenstate of H. H is a self-adjoint operator, the infinite-dimensional analog of Hermitian matrices (see Observable). As in the matrix case, in the equation above H\psi_E is understood to be the vector obtained by application of the transformation H to \psi_E.
Molecular orbitals
Geology and glaciology (orientation tensor)
In geology, especially in the study of glacial till, eigenvectors and eigenvalues are used as a method by which a mass of information about a clast fabric's constituents' orientation and dip can be summarized in 3-D space by six numbers. In the field, a geologist may collect such data for hundreds or thousands of clasts in a soil sample, which can only be compared graphically, such as in a Tri-Plot (Sneed and Folk) diagram or as a stereonet on a Wulff net. The output for the orientation tensor is in the three orthogonal (perpendicular) axes of space. Eigenvectors output from programs such as Stereo32 are in the order E1 ≥ E2 ≥ E3, with E1 being the primary orientation of clast orientation/dip, E2 the secondary, and E3 the tertiary, in terms of strength. The clast orientation is defined as the eigenvector, on a compass rose of 360°. Dip is measured as the eigenvalue, the modulus of the tensor: this is valued from 0° (no dip) to 90° (vertical). The relative values of E1, E2, and E3 are dictated by the nature of the sediment's fabric. If E1 = E2 = E3, the fabric is said to be isotropic. If E1 = E2 > E3, the fabric is planar. If E1 > E2 > E3, the fabric is linear. See 'A Practical Guide to the Study of Glacial Sediments' by Benn & Evans, 2004.
Factor analysis
In factor analysis, the eigenvectors of a covariance matrix or correlation matrix correspond to factors, and eigenvalues to the variance explained by these factors. Factor analysis is a statistical technique used in the social sciences and in marketing, product management, operations research, and other applied sciences that deal with large quantities of data. The objective is to explain most of the covariability among a number of observable random variables in terms of a smaller number of unobservable latent variables called factors. The observable random variables are modeled as linear combinations of the factors, plus unique variance terms. Eigenvalues are used in the analysis performed by Q-methodology software; factors with eigenvalues greater than 1.00 are considered significant, explaining an important amount of the variability in the data, while eigenvalues less than 1.00 are considered too weak, not explaining a significant portion of the data variability.
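The "eigenvalues greater than 1" screening rule can be illustrated on synthetic data; everything below except the threshold 1.00 (which follows the text) is an illustrative assumption:

```python
# Kaiser-style screening: count eigenvalues of the correlation matrix
# that exceed 1. Synthetic data: three variables share one factor,
# two are pure noise.
import numpy as np

rng = np.random.default_rng(0)
factor = rng.normal(size=(500, 1))
data = np.hstack([factor + 0.5 * rng.normal(size=(500, 3)),
                  rng.normal(size=(500, 2))])

corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigenvalues)
print("retained factors:", np.sum(eigenvalues > 1.0))
```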
In image processing, processed images of faces can be seen as vectors whose components are the brightnesses of each pixel. The dimension of this vector space is the number of pixels. The eigenvectors of the covariance matrix associated with a large set of normalized pictures of faces are called eigenfaces; this is an example of principal components analysis. They are very useful for expressing any face image as a linear combination of some of them. In the facial recognition branch of biometrics, eigenfaces provide a means of applying data compression to faces for identification purposes. Research on eigen vision systems for determining hand gestures has also been conducted.
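A schematic of the eigenface computation just described: flattened, mean-centered images are stacked as rows and the eigenvectors of the pixel covariance matrix are extracted. Random arrays stand in for real images here; the image size and the number of retained components are illustrative assumptions:

```python
# Eigenfaces as the top eigenvectors of the pixel covariance matrix
# (a principal components analysis).
import numpy as np

rng = np.random.default_rng(1)
n_images, n_pixels = 100, 32 * 32
faces = rng.random((n_images, n_pixels))          # placeholder "images"

mean_face = faces.mean(axis=0)
centered = faces - mean_face

cov = centered.T @ centered / (n_images - 1)      # pixel covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# The eigenfaces are the eigenvectors with the largest eigenvalues.
eigenfaces = eigenvectors[:, np.argsort(eigenvalues)[::-1][:10]]
print(eigenfaces.shape)                           # (1024, 10)
```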
A similar concept, eigenvoices, has also been developed; it represents the general direction of variability in human pronunciations of a particular utterance, such as a word in a language. Based on a linear combination of such eigenvoices, a new voice pronunciation of the word can be constructed. These concepts have been found useful in automatic speech recognition systems for speaker adaptation.
Tensor of inertia
In mechanics, the eigenvectors of the inertia tensor define the principal axes of a rigid body. The tensor of inertia is a key quantity required in order to determine the rotation of a rigid body around its center of mass.
Stress tensor
Eigenvalues of a graph
In spectral graph theory, an eigenvalue of a graph is defined as an eigenvalue of the graph's adjacency matrix A, or (increasingly) of the graph's Laplacian matrix, which is either T − A or I − T^{-1/2}AT^{-1/2}, where T is a diagonal matrix holding the degree of each vertex, and in T^{-1/2}, 0 is substituted for 0^{-1/2}. The kth principal eigenvector of a graph is defined as either the eigenvector corresponding to the kth largest eigenvalue of A, or the eigenvector corresponding to the kth smallest eigenvalue of the Laplacian. The first principal eigenvector of the graph is also referred to merely as the principal eigenvector.
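These definitions can be made concrete on a small graph; the 4-cycle below is an assumed example, with A and the Laplacian T − A built exactly as described above:

```python
# Adjacency and Laplacian spectra of the 4-cycle 0-1-2-3-0.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

T = np.diag(A.sum(axis=1))          # degree matrix
L = T - A                           # combinatorial Laplacian

print(np.sort(np.linalg.eigvalsh(A)))   # adjacency spectrum: [-2, 0, 0, 2]
print(np.sort(np.linalg.eigvalsh(L)))   # Laplacian spectrum:  [0, 2, 2, 4]
```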
External links
• MIT Video Lecture on Eigenvalues and Eigenvectors at Google Video, from MIT OpenCourseWare
• ARPACK is a collection of FORTRAN subroutines for solving large scale (sparse) eigenproblems.
• IRBLEIGS, has MATLAB code with similar capabilities to ARPACK. (See this paper for a comparison between IRBLEIGS and ARPACK.)
• LAPACK is a collection of FORTRAN subroutines for solving dense linear algebra problems
• ALGLIB includes a partial port of the LAPACK to C++, C#, Delphi, etc.
Uncertainty principle
In quantum mechanics, the uncertainty principle, also known as Heisenberg's uncertainty principle, is any of a variety of mathematical inequalities[1] asserting a fundamental limit to the precision with which certain pairs of physical properties of a particle, known as complementary variables, such as position x and momentum p, can be known.
First introduced in 1927 by the German physicist Werner Heisenberg, it states that the more precisely the position of some particle is determined, the less precisely its momentum can be known, and vice versa.[2] The formal inequality relating the standard deviation of position σx and the standard deviation of momentum σp was derived by Earle Hesse Kennard[3] later that year and by Hermann Weyl[4] in 1928:
\sigma_x \sigma_p \geq \frac{\hbar}{2}
(ħ is the reduced Planck constant, h / 2π).
Historically, the uncertainty principle has been confused[5][6] with a somewhat similar effect in physics, called the observer effect, which notes that measurements of certain systems cannot be made without affecting the systems. Heisenberg offered such an observer effect at the quantum level (see below) as a physical "explanation" of quantum uncertainty.[7] It has since become clear, however, that the uncertainty principle is inherent in the properties of all wave-like systems,[8] and that it arises in quantum mechanics simply due to the matter wave nature of all quantum objects. Thus, the uncertainty principle actually states a fundamental property of quantum systems, and is not a statement about the observational success of current technology.[9] It must be emphasized that measurement does not mean only a process in which a physicist-observer takes part, but rather any interaction between classical and quantum objects regardless of any observer.[10] (N.B. on precision: If δx and δp are the precisions of position and momentum obtained in an individual measurement and σx, σp their standard deviations in an ensemble of individual measurements on similarly prepared systems, then "There are, in principle, no restrictions on the precisions of individual measurements δx and δp, but the standard deviations will always satisfy σx σp ≥ ħ/2".[11])
Since the uncertainty principle is such a basic result in quantum mechanics, typical experiments in quantum mechanics routinely observe aspects of it. Certain experiments, however, may deliberately test a particular form of the uncertainty principle as part of their main research program. These include, for example, tests of number–phase uncertainty relations in superconducting[12] or quantum optics[13] systems. Applications dependent on the uncertainty principle for their operation include extremely low noise technology such as that required in gravitational-wave interferometers.[14]
The uncertainty principle is not readily apparent on the macroscopic[15] scales of everyday experience. So it is helpful to demonstrate how it applies to more easily understood physical situations. Two alternative frameworks for quantum physics offer different explanations for the uncertainty principle. The wave mechanics picture of the uncertainty principle is more visually intuitive, but the more abstract matrix mechanics picture formulates it in a way that generalizes more easily.
Wave mechanics interpretation
(Ref [10])
Main article: Wave packet
Main article: Schrödinger equation
According to the de Broglie hypothesis, every object in the universe is a wave, a situation which gives rise to this phenomenon. The position of the particle is described by a wave function \psi(x). The time-independent wave function of a single-moded plane wave of wavenumber k0 or momentum p0 is
\psi(x) \propto e^{ik_0 x} = e^{ip_0 x/\hbar}.
The Born rule states that this should be interpreted as a probability density amplitude function in the sense that the probability of finding the particle between a and b is
P[a \leq X \leq b] = \int_a^b |\psi(x)|^2 \, dx.
In the case of the single-moded plane wave, |\psi(x)|^2 is a uniform distribution. In other words, the particle position is extremely uncertain in the sense that it could be essentially anywhere along the wave packet. However, consider a wave function that is a sum of many waves; we may write this as
\psi(x) \propto \sum_n A_n e^{ip_n x/\hbar},
with A_n representing the relative amplitude of mode p_n; in the continuous limit, \varphi(p) represents the amplitude of these modes and is called the wave function in momentum space. In mathematical terms, we say that \varphi(p) is the Fourier transform of \psi(x) and that x and p are conjugate variables. Adding together all of these plane waves comes at a cost, namely the momentum has become less precise, having become a mixture of waves of many different momenta.
One way to quantify the precision of the position and momentum is the standard deviation σ. Since |\psi(x)|^2 is a probability density function for position, we calculate its standard deviation.
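The calculation just described can be carried out numerically. The following sketch (grid parameters and packet width are illustrative assumptions; units with ħ = 1) computes σx and σp for a Gaussian wave function and checks the Kennard bound, which a Gaussian saturates:

```python
# sigma_x and sigma_p for a Gaussian wave packet on a grid,
# with the momentum-space density obtained via FFT (p = hbar * k).
import numpy as np

hbar = 1.0
N = 2048
x = np.linspace(-20.0, 20.0, N)
dx = x[1] - x[0]

sigma = 1.3                                        # assumed packet width
psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize on the grid

prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
sigma_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=dx)
dp = 2 * np.pi * hbar / (N * dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p = prob_p / (np.sum(prob_p) * dp)            # renormalize on the p grid

mean_p = np.sum(p * prob_p) * dp                   # ~0 for a real Gaussian
sigma_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(sigma_x * sigma_p, ">=", hbar / 2)           # ~hbar/2: the bound is saturated
```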
Matrix mechanics interpretation
(Ref [10])
Main article: Matrix mechanics
The physical meaning of the non-commutativity can be understood by considering the effect of the commutator on position and momentum eigenstates. Let |\psi\rangle be a right eigenstate of position with a constant eigenvalue x0. By definition, this means that \hat{x}|\psi\rangle = x_0|\psi\rangle. Applying the commutator to |\psi\rangle yields
[\hat{x}, \hat{p}]\,|\psi\rangle = (\hat{x}\hat{p} - \hat{p}\hat{x})\,|\psi\rangle = (\hat{x} - x_0 \hat{I})\,\hat{p}\,|\psi\rangle,
where Î is the identity operator.
Suppose, for the sake of proof by contradiction, that |\psi\rangle is also a right eigenstate of momentum, with constant eigenvalue p0. If this were true, then one could write
(\hat{x} - x_0 \hat{I})\,\hat{p}\,|\psi\rangle = (\hat{x} - x_0 \hat{I})\,p_0\,|\psi\rangle = 0.
On the other hand, the above canonical commutation relation requires that
[\hat{x}, \hat{p}]\,|\psi\rangle = i\hbar\,|\psi\rangle \neq 0,
a contradiction; hence no state can be simultaneously a position and a momentum eigenstate.
Robertson–Schrödinger uncertainty relations
The most common general form of the uncertainty principle is the Robertson uncertainty relation.[17]
For an arbitrary Hermitian operator \hat{A} we can associate a standard deviation
\sigma_A = \sqrt{\langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2},
where the brackets indicate an expectation value. For a pair of operators \hat{A} and \hat{B}, we may define their commutator as
[\hat{A}, \hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A}.
In this notation, the Robertson uncertainty relation is given by
\sigma_A \sigma_B \geq \frac{1}{2}\left|\langle [\hat{A}, \hat{B}] \rangle\right|.
The Robertson uncertainty relation immediately follows from a slightly stronger inequality, the Schrödinger uncertainty relation,[18]
\sigma_A^2 \sigma_B^2 \geq \left|\frac{1}{2}\langle \{\hat{A}, \hat{B}\} \rangle - \langle \hat{A} \rangle \langle \hat{B} \rangle\right|^2 + \left|\frac{1}{2i}\langle [\hat{A}, \hat{B}] \rangle\right|^2,
where we have introduced the anticommutator,
\{\hat{A}, \hat{B}\} = \hat{A}\hat{B} + \hat{B}\hat{A}.
• For position and linear momentum, the canonical commutation relation [\hat{x}, \hat{p}] = i\hbar implies the Kennard inequality from above:
\sigma_x \sigma_p \geq \frac{\hbar}{2},
• For two orthogonal components of the total angular momentum operator:
\sigma_{J_i} \sigma_{J_j} \geq \frac{\hbar}{2} \left|\langle J_k \rangle\right|,
where i, j, k are distinct and Ji denotes angular momentum along the xi axis. This relation implies that unless all three components vanish together, only a single component of a system's angular momentum can be defined with arbitrary precision, normally the component parallel to an external (magnetic or electric) field. Moreover, for \hat{A} = J_x, \hat{B} = J_y, a choice of angular momentum multiplets, ψ = |j, m 〉, bounds the Casimir invariant (angular momentum squared, J_x^2 + J_y^2 + J_z^2) from below and thus yields useful constraints such as j (j + 1) ≥ m (m + 1), and hence j ≥ m, among others.
• In non-relativistic mechanics, time is privileged as an independent variable. Nevertheless, in 1945, L. I. Mandelshtam and I. E. Tamm derived a non-relativistic time–energy uncertainty relation, as follows.[26][27] For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator \hat{B}, the following formula holds:
\sigma_E \, \frac{\sigma_B}{\left| \frac{d\langle \hat{B} \rangle}{dt} \right|} \geq \frac{\hbar}{2},
where σE is the standard deviation of the energy.
(Refs [10][19])
Quantum harmonic oscillator stationary states
For the nth stationary state of the quantum harmonic oscillator, the variances may be computed directly,
\sigma_x^2 = \frac{\hbar}{2m\omega}(2n+1), \qquad \sigma_p^2 = \frac{\hbar m \omega}{2}(2n+1).
The product of these standard deviations is then
\sigma_x \sigma_p = \hbar\left(n + \frac{1}{2}\right).
In particular, the above Kennard bound[3] is saturated for the ground state n=0, for which the probability density is just the normal distribution.
Quantum harmonic oscillator with Gaussian initial condition
where we have used the notation \mathcal{N}(\mu, \sigma^2) to denote a normal distribution of mean μ and variance σ2. Copying the variances above and applying trigonometric identities, we can write the product of the standard deviations as
From the relations
we can conclude
Coherent states
Main article: Coherent state
A coherent state is a right eigenstate of the annihilation operator,
\hat{a}\,|\alpha\rangle = \alpha\,|\alpha\rangle,
which may be represented in terms of Fock states as
|\alpha\rangle = e^{-\frac{|\alpha|^2}{2}} \sum_{n=0}^{\infty} \frac{\alpha^n}{\sqrt{n!}}\,|n\rangle.
Therefore, every coherent state saturates the Kennard bound,
\sigma_x \sigma_p = \frac{\hbar}{2},
with position and momentum each contributing an amount in a "balanced" way. Moreover, every squeezed coherent state also saturates the Kennard bound although the individual contributions of position and momentum need not be balanced in general.
Particle in a box
Main article: Particle in a box
Consider a particle in a one-dimensional box of length L. The eigenfunctions in position space are
\psi_n(x) = \sqrt{\frac{2}{L}} \sin(k_n x),
with the corresponding momentum-space eigenfunctions obtained by Fourier transform,
where k_n = \frac{n\pi}{L} and we have used the de Broglie relation p = \hbar k. The variances of x and p can be calculated explicitly:
\sigma_x^2 = \frac{L^2}{12}\left(1 - \frac{6}{n^2\pi^2}\right), \qquad \sigma_p^2 = \left(\frac{\hbar n \pi}{L}\right)^2.
The product of the standard deviations is therefore
\sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{n^2\pi^2}{3} - 2}.
For all n, the quantity \sqrt{\frac{n^2\pi^2}{3} - 2} is greater than 1, so the uncertainty principle is never violated. For numerical concreteness, the smallest value occurs when n = 1, in which case
\sigma_x \sigma_p = \frac{\hbar}{2} \sqrt{\frac{\pi^2}{3} - 2} \approx 0.568\,\hbar.
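These closed-form values can be cross-checked numerically; in this sketch hbar = L = 1 are assumed units, σx is computed by quadrature of the box eigenfunctions, and σp follows from ⟨p⟩ = 0 and ⟨p²⟩ = (ħnπ/L)²:

```python
# Particle-in-a-box uncertainty product versus the closed form
# (hbar/2) * sqrt(n^2 pi^2 / 3 - 2); the minimum is ~0.568 hbar at n = 1.
import numpy as np

hbar, L = 1.0, 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

for n in (1, 2, 3):
    psi = np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
    prob = psi**2
    mean_x = np.sum(x * prob) * dx
    sigma_x = np.sqrt(np.sum(x**2 * prob) * dx - mean_x**2)
    sigma_p = hbar * n * np.pi / L     # <p> = 0, <p^2> = (hbar n pi / L)^2
    exact = 0.5 * hbar * np.sqrt(n**2 * np.pi**2 / 3 - 2)
    print(n, sigma_x * sigma_p, exact)
```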
Constant momentum
Main article: Wave packet
where we have introduced a reference scale x_0 = \sqrt{\hbar/m\omega_0}, with \omega_0 > 0 describing the width of the distribution (cf. nondimensionalization). If the state is allowed to evolve in free space, then the time-dependent momentum and position space wave functions are
such that the uncertainty product can only increase with time as
\sigma_x(t)\,\sigma_p(t) = \frac{\hbar}{2} \sqrt{1 + \omega_0^2 t^2}.
Additional uncertainty relations
Mixed states
The Robertson–Schrödinger uncertainty relation may be generalized in a straightforward way to describe mixed states.[31]
Phase space
In the phase space formulation of quantum mechanics, the Robertson–Schrödinger relation follows from a positivity condition on a real star-square function. Given a Wigner function with star product ★ and a function f, the following is generally true:[32]
Choosing , we arrive at
or, explicitly, after algebraic manipulation,
Systematic and statistical errors
The inequalities above focus on the statistical imprecision of observables as quantified by the standard deviation σ. Heisenberg's original version, however, was dealing with the systematic error, a disturbance of the quantum system produced by the measuring apparatus, i.e., an observer effect.
If we let \varepsilon_A represent the error (i.e., inaccuracy) of a measurement of an observable A and \eta_B the disturbance produced on a subsequent measurement of the conjugate variable B by the former measurement of A, then the inequality proposed by Ozawa[6], encompassing both systematic and statistical errors, holds:
\varepsilon_A \eta_B + \varepsilon_A \sigma_B + \sigma_A \eta_B \geq \frac{1}{2}\left|\langle [\hat{A}, \hat{B}] \rangle\right|.
The Heisenberg uncertainty principle, as originally described in the 1927 formulation, mentions only the first term of the Ozawa inequality, regarding the systematic error. Using the notation above to describe the error/disturbance effect of sequential measurements (first A, then B), it could be written as
\varepsilon_A \eta_B \geq \frac{1}{2}\left|\langle [\hat{A}, \hat{B}] \rangle\right|.
The formal derivation of the Heisenberg relation is possible but far from intuitive. It was not proposed by Heisenberg, but formulated in a mathematically consistent way only in recent years.[33][34] Also, it must be stressed that the Heisenberg formulation does not take into account the intrinsic statistical errors \sigma_A and \sigma_B. There is increasing experimental evidence[8][35][36][37] that the total quantum uncertainty cannot be described by the Heisenberg term alone, but requires the presence of all three terms of the Ozawa inequality.
Using the same formalism,[1] it is also possible to introduce the other kind of physical situation, often confused with the previous one, namely the case of simultaneous measurements (A and B at the same time):
The two simultaneous measurements on A and B are necessarily[38] unsharp or weak.
It is also possible to derive an uncertainty relation that, like Ozawa's, combines both the statistical and systematic error components, but keeps a form very close to the Heisenberg original inequality. By adding the Robertson[1] and Ozawa relations we obtain
\varepsilon_A \eta_B + \varepsilon_A \sigma_B + \sigma_A \eta_B + \sigma_A \sigma_B \geq \left|\langle [\hat{A}, \hat{B}] \rangle\right|.
The four terms can be written as:
(\varepsilon_A + \sigma_A)(\eta_B + \sigma_B) \geq \left|\langle [\hat{A}, \hat{B}] \rangle\right|.
Defining
\bar{\varepsilon}_A \equiv \varepsilon_A + \sigma_A
as the inaccuracy in the measured values of the variable A and
\bar{\eta}_B \equiv \eta_B + \sigma_B
as the resulting fluctuation in the conjugate variable B, Fujikawa[39] established an uncertainty relation similar to the Heisenberg original one, but valid both for systematic and statistical errors:
\bar{\varepsilon}_A \, \bar{\eta}_B \geq \left|\langle [\hat{A}, \hat{B}] \rangle\right|.
Quantum entropic uncertainty principle
For many distributions, the standard deviation is not a particularly natural way of quantifying the structure. For example, uncertainty relations in which one of the observables is an angle have little physical meaning for fluctuations larger than one period.[24][40][41][42] Other examples include highly bimodal distributions, or unimodal distributions with divergent variance.
A solution that overcomes these issues is an uncertainty based on entropic uncertainty instead of the product of variances. While formulating the many-worlds interpretation of quantum mechanics in 1957, Hugh Everett III conjectured a stronger extension of the uncertainty principle based on entropic certainty.[43] This conjecture, also studied by Hirschman[44] and proven in 1975 by Beckner[45] and by Iwo Bialynicki-Birula and Jerzy Mycielski[46] is that, for two normalized, dimensionless Fourier transform pairs f(a) and g(b), where
g(b) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(a)\, e^{-iab}\, da \quad\text{and}\quad f(a) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} g(b)\, e^{iab}\, db,
the Shannon information entropies
H_a = -\int_{-\infty}^{\infty} |f(a)|^2 \log |f(a)|^2 \, da \quad\text{and}\quad H_b = -\int_{-\infty}^{\infty} |g(b)|^2 \log |g(b)|^2 \, db
are subject to the following constraint,
H_a + H_b \geq \log\frac{e}{2},
where the logarithms may be in any base.
The probability distribution functions associated with the position wave function ψ(x) and the momentum wave function φ(p) have dimensions of inverse length and momentum respectively, but the entropies may be rendered dimensionless by
H_x = -\int |\psi(x)|^2 \log\left(x_0\,|\psi(x)|^2\right) dx, \qquad H_p = -\int |\varphi(p)|^2 \log\left(p_0\,|\varphi(p)|^2\right) dp,
where x0 and p0 are some arbitrarily chosen length and momentum respectively, which render the arguments of the logarithms dimensionless. Note that the entropies will be functions of these chosen parameters. Due to the Fourier transform relation between the position wave function ψ(x) and the momentum wave function φ(p), the above constraint can be written for the corresponding entropies as
H_x + H_p \geq \log\left(\frac{e\,h}{2\,x_0\,p_0}\right),
where h is Planck's constant.
Depending on one's choice of the x0 p0 product, the expression may be written in many ways. If x0 p0 is chosen to be h, then
H_x + H_p \geq \log\frac{e}{2}.
If, instead, x0 p0 is chosen to be ħ, then
H_x + H_p \geq \log(e\pi).
If x0 and p0 are chosen to be unity in whatever system of units is being used, then
H_x + H_p \geq \log\left(\frac{e\,h}{2}\right),
where h is interpreted as a dimensionless number equal to the value of Planck's constant in the chosen system of units.
The quantum entropic uncertainty principle is more restrictive than the Heisenberg uncertainty principle. From the inverse logarithmic Sobolev inequalities[47]
H_x \leq \frac{1}{2} \log\left(\frac{2e\pi\sigma_x^2}{x_0^2}\right), \qquad H_p \leq \frac{1}{2} \log\left(\frac{2e\pi\sigma_p^2}{p_0^2}\right)
(equivalently, from the fact that normal distributions maximize the entropy among all distributions with a given variance), it follows that the entropic bound implies
\sigma_x \sigma_p \geq \frac{\hbar}{2}.
In other words, the Heisenberg uncertainty principle is a consequence of the quantum entropic uncertainty principle, but not vice versa. A few remarks on these inequalities. First, the choice of base e is a matter of popular convention in physics. The logarithm can alternatively be in any base, provided that it be consistent on both sides of the inequality. Second, recall the Shannon entropy has been used, not the quantum von Neumann entropy. Finally, the normal distribution saturates the inequality, and it is the only distribution with this property, because it is the maximum entropy probability distribution among those with fixed variance (cf. here for proof).
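The saturation by the normal distribution can be checked numerically. In this sketch (grid and width are illustrative assumptions; ħ = 1, and x0 p0 = h is the convention used above), the two dimensionless entropies of a Gaussian sum to approximately log(e/2):

```python
# Entropic uncertainty for a Gaussian: H_x + H_p ~ log(e/2)
# with the convention x0 * p0 = h and hbar = 1.
import numpy as np

hbar, N = 1.0, 4096
x = np.linspace(-30.0, 30.0, N)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 4.0)                    # Gaussian with sigma_x = 1
psi = psi / np.sqrt(np.sum(psi**2) * dx)

prob_x = psi**2
dp = 2 * np.pi * hbar / (N * dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p = prob_p / (np.sum(prob_p) * dp)

h = 2 * np.pi * hbar
x0, p0 = 1.0, h                              # so that x0 * p0 = h
tiny = 1e-300                                # guard against log(0) in the tails
H_x = -np.sum(prob_x * np.log(x0 * prob_x + tiny)) * dx
H_p = -np.sum(prob_p * np.log(p0 * prob_p + tiny)) * dp

print(H_x + H_p, ">=", np.log(np.e / 2.0))   # approximately equal for a Gaussian
```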
Harmonic analysis
Further mathematical uncertainty inequalities, including the above entropic uncertainty, hold between a function f and its Fourier transform ƒ̂:[48][49][50]
Signal processing
Benedicks's theorem
Amrein-Berthier[51] and Benedicks's theorem[52] intuitively says that the set of points where f is non-zero and the set of points where ƒ̂ is nonzero cannot both be small.
One expects that the factor CeC|S||Σ| may be replaced by CeC(|S||Σ|)1/d, which is only known if either S or Σ is convex.
Hardy's uncertainty principle
The mathematician G. H. Hardy formulated the following uncertainty principle:[55] it is not possible for f and ƒ̂ to both be "very rapidly decreasing." Specifically, if f in L2(R) is such that
(N an integer),
where P is a polynomial of degree (N − d)/2 and A is a real d×d positive definite matrix.
This result was stated in Beurling's complete works without proof and proved in Hörmander[56] (the case d = 1) and Bonami, Demange, and Jaming[57] for the general case. Note that Hörmander–Beurling's version implies the case ab > 1 in Hardy's Theorem while the version by Bonami–Demange–Jaming covers the full strength of Hardy's Theorem. A different proof of Beurling's theorem based on Liouville's theorem appeared in ref.[58]
A full description of the case ab < 1 as well as the following extension to Schwartz class distributions appears in ref.[59]
Theorem. If a tempered distribution is such that
Werner Heisenberg and Niels Bohr
In his celebrated 1927 paper, "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik" ("On the Perceptual Content of Quantum Theoretical Kinematics and Mechanics"), Heisenberg established this expression as the minimum amount of unavoidable momentum disturbance caused by any position measurement,[2] but he did not give a precise definition for the uncertainties Δx and Δp. Instead, he gave some plausible estimates in each case separately. In his Chicago lecture[63] he refined his principle:
Kennard[3] in 1927 first proved the modern inequality:
\sigma_x \sigma_p \geq \frac{\hbar}{2},
where ħ = h/2π, and σx, σp are the standard deviations of position and momentum.
Terminology and translation
Heisenberg's microscope
Problem 2 – If a large aperture is used for the microscope, the electron's location can be well resolved (see Rayleigh criterion); but by the principle of conservation of momentum, the transverse momentum of the incoming photon affects the electron's beamline momentum, and hence the new momentum of the electron resolves poorly. If a small aperture is used, the accuracy of both resolutions is the other way around.
Critical reactions
Albert Einstein believed that randomness is a reflection of our ignorance of some fundamental property of reality, while Niels Bohr believed that the probability distributions are fundamental and irreducible, and depend on which measurements we choose to perform. Einstein and Bohr debated the uncertainty principle for many years. Some experiments within the first decade of the twenty-first century have cast doubt on observer effect aspects of the uncertainty principle.[66][67]
Einstein's slit
A similar analysis with particles diffracting through multiple slits is given by Richard Feynman.[68]
Einstein's box
EPR paradox for entangled particles
Popper's criticism
Main article: Popper's experiment
Karl Popper approached the problem of indeterminacy as a logician and metaphysical realist.[75] He disagreed with the application of the uncertainty relations to individual particles rather than to ensembles of identically prepared particles, referring to them as "statistical scatter relations".[75][76] In this statistical interpretation, a particular measurement may be made to arbitrary precision without invalidating the quantum theory. This directly contrasts with the Copenhagen interpretation of quantum mechanics, which is non-deterministic but lacks local hidden variables.
In 1934, Popper published Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations) in Naturwissenschaften,[77] and in the same year Logik der Forschung (translated and updated by the author as The Logic of Scientific Discovery in 1959), outlining his arguments for the statistical interpretation. In 1982, he further developed his theory in Quantum theory and the schism in Physics, writing:
Many-worlds uncertainty
Free will
Some scientists including Arthur Compton[80] and Martin Heisenberg[81] have suggested that the uncertainty principle, or at least the general probabilistic nature of quantum mechanics, could be evidence for the two-stage model of free will. One critique, however, is that apart from the basic role of quantum mechanics as a foundation for chemistry, nontrivial biological mechanisms requiring quantum mechanics are unlikely, due to the rapid decoherence time of quantum systems at room temperature.[82] The standard view, however, is that this decoherence is overcome by both screening and decoherence-free subspaces found in biological cells.[83]
References
1. ^ a b c Sen, D. (2014). "The uncertainty relations in quantum mechanics" (PDF). Current Science. 107 (2): 203–218.
2. ^ a b c Heisenberg, W. (1927), "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik", Zeitschrift für Physik (in German), 43 (3–4): 172–198, Bibcode:1927ZPhy...43..172H, doi:10.1007/BF01397280. . Annotated pre-publication proof sheet of Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, March 21, 1927.
3. ^ a b c Kennard, E. H. (1927), "Zur Quantenmechanik einfacher Bewegungstypen", Zeitschrift für Physik (in German), 44 (4–5): 326, Bibcode:1927ZPhy...44..326K, doi:10.1007/BF01391200.
6. ^ a b Ozawa, Masanao (2003), "Universally valid reformulation of the Heisenberg uncertainty principle on noise and disturbance in measurement", Physical Review A, 67 (4): 42105, arXiv:quant-ph/0207121, Bibcode:2003PhRvA..67d2105O, doi:10.1103/PhysRevA.67.042105
7. ^ Werner Heisenberg, The Physical Principles of the Quantum Theory, p. 20
8. ^ a b Rozema, L. A.; Darabi, A.; Mahler, D. H.; Hayat, A.; Soudagar, Y.; Steinberg, A. M. (2012). "Violation of Heisenberg's Measurement-Disturbance Relationship by Weak Measurements". Physical Review Letters. 109 (10). arXiv:1208.0034v2. Bibcode:2012PhRvL.109j0404R. doi:10.1103/PhysRevLett.109.100404.
10. ^ a b c d L.D. Landau, E.M. Lifshitz (1977). Quantum Mechanics: Non-Relativistic Theory. Vol. 3 (3rd ed.). Pergamon Press. ISBN 978-0-08-020940-1. Online copy.
11. ^ Section 3.2 of Ballentine, Leslie E. (1970), "The Statistical Interpretation of Quantum Mechanics", Reviews of Modern Physics, 42 (4): 358–381, doi:10.1103/RevModPhys.42.358 . This fact is experimentally well-known for example in quantum optics (see e.g. chap. 2 and Fig. 2.1 Leonhardt, Ulf (1997), Measuring the Quantum State of Light, Cambridge: Cambridge University Press, ISBN 0 521 49730 2
12. ^ Elion, W. J.; M. Matters, U. Geigenmüller & J. E. Mooij; Geigenmüller, U.; Mooij, J. E. (1994), "Direct demonstration of Heisenberg's uncertainty principle in a superconductor", Nature, 371 (6498): 594–595, Bibcode:1994Natur.371..594E, doi:10.1038/371594a0
13. ^ Smithey, D. T.; M. Beck, J. Cooper, M. G. Raymer; Cooper, J.; Raymer, M. G. (1993), "Measurement of number–phase uncertainty relations of optical fields", Phys. Rev. A, 48 (4): 3159–3167, Bibcode:1993PhRvA..48.3159S, doi:10.1103/PhysRevA.48.3159, PMID 9909968
14. ^ Caves, Carlton (1981), "Quantum-mechanical noise in an interferometer", Phys. Rev. D, 23 (8): 1693–1708, Bibcode:1981PhRvD..23.1693C, doi:10.1103/PhysRevD.23.1693
16. ^ Claude Cohen-Tannoudji; Bernard Diu; Franck Laloë (1996), Quantum mechanics, Wiley-Interscience: Wiley, pp. 231–233, ISBN 978-0-471-56952-7
17. ^ a b Robertson, H. P. (1929), "The Uncertainty Principle", Phys. Rev., 34: 163–64, Bibcode:1929PhRv...34..163R, doi:10.1103/PhysRev.34.163
18. ^ a b Schrödinger, E. (1930), "Zum Heisenbergschen Unschärfeprinzip", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, 14: 296–303
19. ^ a b Griffiths, David (2005), Quantum Mechanics, New Jersey: Pearson
21. ^ Davidson, E. R. (1965), "On Derivations of the Uncertainty Principle", J. Chem. Phys., 42 (4): 1461, Bibcode:1965JChPh..42.1461D, doi:10.1063/1.1696139
23. ^ Jackiw, Roman (1968), "Minimum Uncertainty Product, Number‐Phase Uncertainty Product, and Coherent States", J. Math. Phys., 9 (3): 339, Bibcode:1968JMP.....9..339J, doi:10.1063/1.1664585
24. ^ a b Carruthers, P.; Nieto, M. M. (1968), "Phase and Angle Variables in Quantum Mechanics", Rev. Mod. Phys., 40 (2): 411, Bibcode:1968RvMP...40..411C, doi:10.1103/RevModPhys.40.411
27. ^ Hilgevoord, Jan (1996). "The uncertainty principle for energy and time." American Journal of Physics 64.12, 1451-1456, [1]; Hilgevoord, Jan (1998). "The uncertainty principle for energy and time. II." American Journal of Physics 66.5 396-402. [2]
28. ^ The broad linewidth of fast decaying states makes it difficult to accurately measure the energy of the state, and researchers have even used detuned microwave cavities to slow down the decay rate, to get sharper peaks. Gabrielse, Gerald; H. Dehmelt (1985), "Observation of Inhibited Spontaneous Emission", Physical Review Letters, 55 (1): 67–70, Bibcode:1985PhRvL..55...67G, doi:10.1103/PhysRevLett.55.67, PMID 10031682
29. ^ Likharev, K.K.; A.B. Zorin (1985), "Theory of Bloch-Wave Oscillations in Small Josephson Junctions", J. Low Temp. Phys., 59 (3/4): 347–382, Bibcode:1985JLTP...59..347L, doi:10.1007/BF00683782
32. ^ Curtright, T.; Zachos, C. (2001). "Negative Probability and Uncertainty Relations". Modern Physics Letters A. 16 (37): 2381–2385. arXiv:hep-th/0105226. Bibcode:2001MPLA...16.2381C. doi:10.1142/S021773230100576X.
33. ^ Busch, P.; Lahti, P.; Werner, R. F. (2013). "Proof of Heisenberg's Error-Disturbance Relation". Physical Review Letters. 111 (16). arXiv:1306.1565. Bibcode:2013PhRvL.111p0405B. doi:10.1103/PhysRevLett.111.160405.
34. ^ Busch, P.; Lahti, P.; Werner, R. F. (2014). "Heisenberg uncertainty for qubit measurements". Physical Review A. 89. arXiv:1311.0837. Bibcode:2014PhRvA..89a2129B. doi:10.1103/PhysRevA.89.012129.
35. ^ Erhart, J.; Sponar, S.; Sulyok, G.; Badurek, G.; Ozawa, M.; Hasegawa, Y. (2012). "Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements". Nature Physics. 8 (3): 185–189. arXiv:1201.1833. Bibcode:2012NatPh...8..185E. doi:10.1038/nphys2194.
36. ^ Baek, S.-Y.; Kaneda, F.; Ozawa, M.; Edamatsu, K. (2013). "Experimental violation and reformulation of the Heisenberg's error-disturbance uncertainty relation". Scientific Reports. 3: 2221. Bibcode:2013NatSR...3E2221B. doi:10.1038/srep02221.
37. ^ Ringbauer, M.; Biggerstaff, D.N.; Broome, M.A.; Fedrizzi, A.; Branciard, C.; White, A.G. (2014). "Experimental Joint Quantum Measurements with Minimum Uncertainty". Physical Review Letters. 112: 020401. arXiv:1308.5688. Bibcode:2014PhRvL.112b0401R. doi:10.1103/PhysRevLett.112.020401.
38. ^ Björk, G.; Söderholm, J.; Trifonov, A.; Tsegaye, T.; Karlsson, A. (1999). "Complementarity and the uncertainty relations". Physical Review. A60: 1878. arXiv:quant-ph/9904069. Bibcode:1999PhRvA..60.1874B. doi:10.1103/PhysRevA.60.1874.
39. ^ Fujikawa, Kazuo (2012). "Universally valid Heisenberg uncertainty relation". Physical Review A. 85 (6). arXiv:1205.1360. Bibcode:2012PhRvA..85f2117F. doi:10.1103/PhysRevA.85.062117.
40. ^ Judge, D. (1964), "On the uncertainty relation for angle variables", Il Nuovo Cimento, 31 (2): 332–340, doi:10.1007/BF02733639
41. ^ Bouten, M.; Maene, N.; Van Leuven, P. (1965), "On an uncertainty relation for angle variables", Il Nuovo Cimento, 37 (3): 1119–1125, doi:10.1007/BF02773197
42. ^ Louisell, W. H. (1963), "Amplitude and phase uncertainty relations", Physics Letters, 7 (1): 60–61, Bibcode:1963PhL.....7...60L, doi:10.1016/0031-9163(63)90442-6
44. ^ Hirschman, I. I., Jr. (1957), "A note on entropy", American Journal of Mathematics, 79 (1): 152–156, doi:10.2307/2372390, JSTOR 2372390.
45. ^ Beckner, W. (1975), "Inequalities in Fourier analysis", Annals of Mathematics, 102 (6): 159–182, doi:10.2307/1970980, JSTOR 1970980.
46. ^ Bialynicki-Birula, I.; Mycielski, J. (1975), "Uncertainty Relations for Information Entropy in Wave Mechanics", Communications in Mathematical Physics, 44 (2): 129, Bibcode:1975CMaPh..44..129B, doi:10.1007/BF01608825
47. ^ Chafaï, D. (2003), Gaussian maximum of entropy and reversed log-Sobolev inequality, arXiv:math/0102227, doi:10.1007/978-3-540-36107-7_5, ISBN 978-3-540-00072-3
49. ^ Folland, Gerald; Sitaram, Alladi (May 1997), "The Uncertainty Principle: A Mathematical Survey", Journal of Fourier Analysis and Applications, 3 (3): 207–238, doi:10.1007/BF02649110, MR 98f:42006
51. ^ Amrein, W.O.; Berthier, A.M. (1977), "On support properties of Lp-functions and their Fourier transforms", Journal of Functional Analysis, 24 (3): 258–267, doi:10.1016/0022-1236(77)90056-8.
52. ^ Benedicks, M. (1985), "On Fourier transforms of functions supported on sets of finite Lebesgue measure", J. Math. Anal. Appl., 106 (1): 180–183, doi:10.1016/0022-247X(85)90140-4
53. ^ Nazarov, F. (1994), "Local estimates for exponential polynomials and their applications to inequalities of the uncertainty principle type,", St. Petersburg Math. J., 5: 663–717
54. ^ Jaming, Ph. (2007), "Nazarov's uncertainty principles in higher dimension", J. Approx. Theory, 149 (1): 30–41, doi:10.1016/j.jat.2007.04.005
55. ^ Hardy, G.H. (1933), "A theorem concerning Fourier transforms", Journal of the London Mathematical Society, 8 (3): 227–231, doi:10.1112/jlms/s1-8.3.227
56. ^ Hörmander, L. (1991), "A uniqueness theorem of Beurling for Fourier transform pairs", Ark. Mat., 29: 231–240, Bibcode:1991ArM....29..237H, doi:10.1007/BF02384339
57. ^ Bonami, A.; Demange, B.; Jaming, Ph. (2003), "Hermite functions and uncertainty principles for the Fourier and the windowed Fourier transforms", Rev. Mat. Iberoamericana, 19: 23–55, arXiv:math/0102111, Bibcode:2001math......2111B, doi:10.4171/RMI/337
58. ^ Hedenmalm, H. (2012), "Heisenberg's uncertainty principle in the sense of Beurling", J. Anal. Math., 118 (2): 691–702, doi:10.1007/s11854-012-0048-9
59. ^ Demange, Bruno (2009), Uncertainty Principles Associated to Non-degenerate Quadratic Forms, Société Mathématique de France, ISBN 978-2-85629-297-6
60. ^ American Physical Society online exhibit on the Uncertainty Principle
61. ^ Bohr, Niels; Noll, Waldemar (1958), "Atomic Physics and Human Knowledge", American Journal of Physics, New York: Wiley, 26 (8): 38, Bibcode:1958AmJPh..26..596B, doi:10.1119/1.1934707
63. ^ a b c Heisenberg, W. (1930), Physikalische Prinzipien der Quantentheorie (in German), Leipzig: Hirzel English translation The Physical Principles of Quantum Theory. Chicago: University of Chicago Press, 1930.
64. ^ Cassidy, David; Saperstein, Alvin M. (2009), "Beyond Uncertainty: Heisenberg, Quantum Physics, and the Bomb", Physics Today, New York: Bellevue Literary Press, 63: 185, Bibcode:2010PhT....63a..49C, doi:10.1063/1.3293416
67. ^ "U of T scientists cast doubt on the uncertainty principle". utoronto.ca. Retrieved 22 June 2016.
68. ^ Feynman lectures on Physics, vol 3, 2–2
74. ^ Gerardus 't Hooft has at times advocated this point of view.
77. ^ Popper, Karl; Carl Friedrich von Weizsäcker (1934), "Zur Kritik der Ungenauigkeitsrelationen (Critique of the Uncertainty Relations)", Naturwissenschaften, 22 (48): 807–808, Bibcode:1934NW.....22..807P, doi:10.1007/BF01496543.
80. ^ Compton, A. H. (1931). "The Uncertainty Principle and Free Will". Science. 74 (1911): 172. Bibcode:1931Sci....74..172C. doi:10.1126/science.74.1911.172. PMID 17808216.
81. ^ Heisenberg, M. (2009). "Is free will an illusion?". Nature. 459 (7244): 164. Bibcode:2009Natur.459..164H. doi:10.1038/459164a.
82. ^ Davies, P. C. W. (2004). "Does quantum mechanics play a non-trivial role in life?". Biosystems. 78 (1–3): 69–79. doi:10.1016/j.biosystems.2004.07.001. PMID 15555759.
IMA Annual Program Year Workshop
Mathematical and Algorithmic Challenges in Electronic Structure Theory
September 29 - October 3, 2008
Eric Cances, CERMICS
Anna Krylov, University of Southern California
Juan Meza, Lawrence Berkeley Laboratory
John Perdew, Tulane University
Electronic structure calculations are the very core of quantum chemistry and play an increasingly important role in nano-technologies, molecular biology and materials science.
This workshop will focus on two topics:
• the mathematical challenges in developing accurate, efficient, and robust algorithms for electronic structure calculations of large systems;
• the latest methodological developments and the remaining open problems in Density Functional Theory.
Algorithms for electronic structure calculations:
Density functional theory (DFT) is the most widely used ab initio method in material simulations. DFT can be used to calculate the electronic structure, the charge density, the total energy and the atomic forces of a material system, and with the advance of new algorithms and supercomputers, DFT can now be used to study thousand-atom systems. But there are many problems that either require much larger systems (more than 100,000 atoms), or many total energy calculation steps (molecular dynamics or atomic relaxations). Some possible applications include the study of nanostructures and the design of novel materials.
Unfortunately, conventional DFT algorithms scale as O(N³), where N is the size of the system (e.g., the number of atoms), putting many problems beyond the reach of even planned petascale computers. Therefore understanding the electronic structures of larger systems will require new mathematical advances and algorithms. Some areas that will be addressed in this workshop include linear-scaling methods that reduce the order of complexity of DFT algorithms, large-scale nonlinear eigenvalue problems, and optimization techniques for solving the Schrödinger equation. In addition, we will discuss the implementation and parallelization of these methods for large supercomputer systems.
In contrast to DFT, wavefunction theory provides us with a series of increasingly refined, systematic approximations to the exact solution of the electronic Schrödinger equation. Wave-function-based electronic structure methods, which are implemented in a variety of packaged programs, can now be routinely employed to predict structures, spectra, properties and reactivity of molecules, sometimes with accuracy rivaling that of experiment. However, due to the steep computational scaling and the mathematical and algorithmic complexity, the following challenges remain:
• properties calculation for correlated wave functions;
• extending efficient and predictive methods and algorithms for open-shell and electronically excited species;
• reducing the computational cost and scaling.
The workshop will discuss the mathematical and algorithmic aspects of the above in the context of coupled-cluster (including equation-of-motion) and multi-reference methods.
Methodological developments in the Density Functional Theory:
The density functional theory (DFT) of Hohenberg, Kohn and Sham is a way to find the ground-state density n(r) and energy E of a many-electron system (atom, molecule, condensed material) by solving a constrained minimization problem whose first-order optimality conditions (the Kohn-Sham equations) can be written as a nonlinear eigenvalue problem. It resembles the Hartree-Fock theory, but is formally exact because it includes the effects of electron correlation as well as exchange in the density functional for the exchange-correlation energy Exc[n] and in its functional derivative, the exchange-correlation potential vxc([n],r). Time-dependent properties and excited states are also accessible through a time-dependent version of DFT. Density functional theory is much more computationally efficient than correlated-wavefunction theory, especially for large systems, but has the disadvantage that in practice Exc[n] and vxc([n],r) must be approximated (usually through a nonsystematic "educated guess"), leading in many cases to moderate but useful accuracy. Used almost exclusively in condensed matter physics since the 1970s, DFT became popular in quantum chemistry in the 1990s due to the development of more accurate approximations.
Besides the algorithmic challenges discussed above, the principal challenges facing DFT are (a) better understanding of the exact theory itself and derivation of further exact properties of Exc[n] and vxc([n],r), and (b) improved approximations that satisfy known exact constraints and sometimes are also fitted to known data. For example, it has been argued that the approximations should (i) be one- and many-electron self-interaction-free, (ii) recover full exact exchange under uniform density scaling to the high-density limit, and (iii) include nonlocal correlation effects, including static correlation and the van der Waals interaction between nonoverlapping densities. For implicit density functionals that are explicit orbital functionals, vxc([n],r) can be constructed by the optimized effective potential method. For time-dependent DFT, a self-interaction-free vxc with memory is needed. These and related problems may be explored in this workshop, with emphasis on their mathematical aspects.
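As a schematic illustration of the "nonlinear eigenvalue problem" character of the Kohn-Sham equations mentioned above, the following sketch runs a damped self-consistent-field (SCF) loop for a one-electron toy model in one dimension. The grid, the harmonic external potential, and the local mean-field term g·n(x) standing in for the Hartree and exchange-correlation potentials are all illustrative assumptions; this is not a real DFT implementation:

```python
# Toy SCF loop: the Hamiltonian depends on the density n, which in turn
# comes from the lowest eigenvector of the Hamiltonian.
import numpy as np

N, Lbox, g = 400, 10.0, 1.0
x = np.linspace(-Lbox / 2, Lbox / 2, N)
dx = x[1] - x[0]

# Kinetic energy: second-difference approximation of -1/2 d^2/dx^2.
T = (-0.5 / dx**2) * (np.diag(np.ones(N - 1), 1)
                      + np.diag(np.ones(N - 1), -1)
                      - 2.0 * np.eye(N))
v_ext = 0.5 * x**2

n = np.zeros(N)                        # initial density guess
for it in range(100):
    H = T + np.diag(v_ext + g * n)     # Hamiltonian depends on the density
    eps, orbitals = np.linalg.eigh(H)
    phi = orbitals[:, 0] / np.sqrt(dx)           # lowest orbital, normalized
    n_new = np.abs(phi)**2
    if np.max(np.abs(n_new - n)) < 1e-8:         # self-consistency reached
        break
    n = 0.5 * n + 0.5 * n_new          # damped mixing for stability

print("iterations:", it, "lowest eigenvalue:", eps[0])
```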
Can anybody give a list of the most fundamental assumptions of quantum mechanics in plain English?
what do you mean with assumptions ? – Stefano Borini Nov 10 '10 at 0:39
Just like we have a dimensionless point particle, the constant linear flow of time, and an immovable space fabric having straight lines in the classical Newtonian mechanics ? – mumtaz Nov 10 '10 at 0:45
Sadly, quantum mechanics is not written in plain English. – Mark Eichenlaub Nov 10 '10 at 1:57
as a quantum chemist would say, yes, it exists, but its projection in the english space has lower dimension, thus has a larger error due to the variational theorem. – Stefano Borini Nov 10 '10 at 8:39
Note that quantum mechanics is often taught in an axiomatic way, but it does not come from axioms: it comes from observations of how the world really works. – dmckee Apr 27 '12 at 17:16
6 Answers
Uncertain Principles: 7 essential elements of QM
Sorry, but it can't be made plainer than that in English. The link was provided by @user26143, the copy by @DeerHunter.
Posted on: January 20, 2010 11:13 AM, by Chad Orzel (who happens to be a user here, on Physics.SE).
1) Particles are waves, and vice versa. Quantum physics tells us that every object in the universe has both particle-like and wave-like properties. It's not that everything is really waves, and just sometimes looks like particles, or that everything is made of particles that sometimes fool us into thinking they're waves. Every object in the universe is a new kind of object-- call it a "quantum particle" that has some characteristics of both particles and waves, but isn't really either.
Quantum particles behave like particles, in that they are discrete and (in principle) countable. Matter and energy come in discrete chunks, and whether you're trying to locate an atom or detect a photon of light, you will find it in one place, and one place only.
Quantum particles also behave like waves, in that they show effects like diffraction and interference. If you send a beam of electrons or a beam of photons through a narrow slit, they will spread out on the far side. If you send the beam at two closely spaced slits, they will produce a pattern of alternating bright and dark spots on the far side of the slits, as if they were water waves passing through both slits at once and interfering on the other side. This is true even though each individual particle is detected at a single location, as a particle.
2) Quantum states are discrete. The "quantum" in quantum physics refers to the fact that everything in quantum physics comes in discrete amounts. A beam of light can only contain integer numbers of photons-- 1, 2, 3, 137, but never 1.5 or 22.7. An electron in an atom can only have certain discrete energy values-- -13.6 electron volts, or -3.4 electron volts in hydrogen, but never -7.5 electron volts. No matter what you do, you will only ever detect a quantum system in one of these special allowed states.
3) Probability is all we ever know. When physicists use quantum mechanics to predict the results of an experiment, the only thing they can predict is the probability of detecting each of the possible outcomes. Given an experiment in which an electron will end up in one of two places, we can say that there is a 17% probability of finding it at point A and an 83% probability of finding it at point B, but we can never say for sure that a single given electron will definitely end up at A or definitely end up at B. No matter how careful we are to prepare each electron in exactly the same way, we can never say definitively what the outcome of the experiment will be. Each new electron is a completely new experiment, and the final outcome is random.
4) Measurement determines reality. Until the moment that the exact state of a quantum particle is measured, that state is indeterminate, and in fact can be thought of as spread out over all the possible outcomes. After a measurement is made, the state of the particle is absolutely determined, and all subsequent measurements on that particle will produce exactly the same outcome.
This seems impossible to believe-- it's the problem that inspired Erwin Schrödinger's (in)famous thought experiment regarding a cat that is both alive and dead-- but it is worth reiterating that this is absolutely confirmed by experiment. The double-slit experiment mentioned above can be thought of as confirmation of this indeterminacy-- until it is finally measured at a single position on the far side of the slits, an electron exists in a superposition of both possible paths. The interference pattern observed when many electrons are recorded one after another is a direct consequence of the superposition of multiple states.
The Quantum Zeno Effect is another example of the effects of quantum measurement: making repeated measurements of a quantum system can prevent it from changing its state. Between measurements, the system exists in a superposition of two possible states, with the probability of one increasing and the other decreasing. Each measurement puts the system back into a single definite state, and the evolution has to start over.
The effects of measurement can be interpreted in a number of different ways-- as the physical "collapse" of a wavefunction, as the splitting of the universe into many parallel worlds, etc.-- but the end result is the same in all of them. A quantum particle can and will occupy multiple states right up until the instant that it is measured; after the measurement it is in one and only one state.
5) Quantum correlations are non-local. One of the strangest and most important consequences of quantum mechanics is the idea of "entanglement." When two quantum particles interact in the right way, their states will depend on one another, no matter how far apart they are. You can hold one particle in Princeton and send the other to Paris, and measure them simultaneously, and the outcome of the measurement in Princeton will absolutely and unequivocally determine the outcome of the measurement in Paris, and vice versa.
The correlation between these states cannot possibly be described by any local theory, in which the particles have definite states. These states are indeterminate until the instant that one is measured, at which time the states of both are absolutely determined, no matter how far apart they are. This has been experimentally confirmed dozens of times over the last thirty years or so, with light and even atoms, and every new experiment has absolutely agreed with the quantum prediction.
It must be noted that this does not provide a means of sending signals faster than light-- a measurement in Paris will determine the state of a particle in Princeton, but the outcome of each measurement is completely random. There is no way to manipulate the Parisian particle to produce a specific result in Princeton. The correlation between measurements will only be apparent after the fact, when the two sets of results are compared, and that process has to take place at speeds slower than that of light.
6) Everything not forbidden is mandatory. A quantum particle moving from point A to point B will take absolutely every possible path from A to B, at the same time. This includes paths that involve highly improbable events like electron-positron pairs appearing out of nowhere, and disappearing again. The full theory of quantum electrodynamics (QED) involves contributions from every possible process, even the ridiculously unlikely ones.
It's worth emphasizing that this is not some speculative mumbo-jumbo with no real applicability. A QED prediction of the interaction between an electron and a magnetic field correctly describes the interaction to 14 decimal places. As weird as the idea seems, it is one of the best-tested theories in the history of science.
7) Quantum physics is not magic. [...] As strange as quantum physics is [...] it does not suspend all the rules of common sense. The bedrock principles of physics are still intact: energy is still conserved, entropy still increases, nothing can move faster than the speed of light. You cannot exploit quantum effects to build a perpetual motion machine, or to create telepathy or clairvoyance.
Quantum mechanics has lots of features that defy our classical intuition-- indeterminate states, probabilistic measurements, non-local effects-- but it is still subject to the most important rule of all: If something sounds too good to be true, it probably is. Anybody trying to peddle a perpetual motion machine or a mystic cure using quantum buzzwords is deluded at best, or a scam artist at worst.
Especially well told in point #5 is that the nonlocal effect is seen only when the measurements are brought together in one place. That detail is too often overlooked by writers trying to convey quantum mechanics to non-physicists. – DarenW Aug 1 '11 at 7:25
I cannot agree with Orzel, mainly because he is adding interpretation to the principles, which muddies the waters too much to be an acceptable answer. – joseph f. johnson Jan 14 '12 at 16:59
this is not a very clear list of underlying assumptions really.. this is a (popularized) list of interpretations and results of the current QM teachings. – BjornW Oct 21 '13 at 9:34
In a rather concise manner, Shankar describes four postulates of nonrelativistic quantum mechanics.
I. The state of the particle is represented by a vector $|\Psi(t)\rangle$ in a Hilbert space.
II. The independent variables $x$ and $p$ of classical mechanics are represented by Hermitian operators $X$ and $P$ with the following matrix elements in the eigenbasis of $X$
$$\langle x|X|x'\rangle = x \delta(x-x')$$
$$\langle x|P|x' \rangle = -i\hbar \delta'(x-x')$$
The operator corresponding to a dependent variable $\omega(x,p)$ is given by the Hermitian operator
$$\Omega(X,P)=\omega(x\rightarrow X,p \rightarrow P)$$
III. If the particle is in a state $|\Psi\rangle$, measurement of the variable (corresponding to) $\Omega$ will yield one of the eigenvalues $\omega$ with probability $P(\omega)\propto |\langle \omega|\Psi \rangle|^{2}$. The state of the system will change from $|\Psi \rangle$ to $|\omega \rangle$ as a result of the measurement.
IV. The state vector $|\Psi(t) \rangle$ obeys the Schrödinger equation
$$i\hbar \frac{d}{dt}|\Psi(t)\rangle=H|\Psi(t)\rangle$$
where $H(X,P)=\mathscr{H}(x\rightarrow X, p\rightarrow P)$ is the quantum Hamiltonian operator and $\mathscr{H}$ is the Hamiltonian for the corresponding classical problem.
After that, Shankar discusses the postulates and the differences between quantum mechanics and classical mechanics, hopefully in plain english. You probably want to take a look at that book: Principles of Quantum Mechanics
There are, of course, other sets of equivalent postulates.
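As a crude numerical illustration of postulates II-IV, one can represent $X$, $P$, and a Hamiltonian as matrices on a finite grid and check the Born rule and unitary Schrödinger evolution directly. Everything below (the grid, the harmonic Hamiltonian, $\hbar = 1$) is an arbitrary choice for the demo, not anything taken from Shankar.

```python
# Finite-grid stand-ins for X, P, and H, plus checks of postulates III-IV.
import numpy as np
from scipy.linalg import expm

N, box = 200, 20.0
x = np.linspace(-box / 2, box / 2, N)
h = x[1] - x[0]

X = np.diag(x)                                         # position operator (diagonal)
P = (-1j / (2 * h)) * (np.diag(np.ones(N - 1), 1)
                       - np.diag(np.ones(N - 1), -1))  # Hermitian central-difference momentum
H = P @ P / 2 + 0.5 * X @ X                            # harmonic-oscillator Hamiltonian

# Postulate III: outcome probabilities are |<omega|psi>|^2 and sum to 1.
E, V = np.linalg.eigh(H)
psi = (V[:, 0] + V[:, 1]) / np.sqrt(2)                 # superposition of two eigenstates
probs = np.abs(V.conj().T @ psi) ** 2
print("sum of Born-rule probabilities:", probs.sum())  # ~1.0

# Postulate IV: psi(t) = exp(-iHt) psi(0) is unitary, so the norm is preserved.
psi_t = expm(-1j * H * 0.5) @ psi
print("norm after evolution:", np.vdot(psi_t, psi_t).real)  # ~1.0
```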
+1, I was thinking of the same thing, I just couldn't remember the list of assumptions in detail. – David Z Nov 10 '10 at 2:26
Ah, the first mention of Shankar I've seen! – Mark C Nov 10 '10 at 6:04
Thanks for the conciseness and reference to Shankar's book. I will definitely have a go at it. – mumtaz Nov 10 '10 at 8:27
This is hardly in plain english ;) – Stefano Borini Nov 10 '10 at 8:28
@Stefano ...yeah it's not, that's why I up-voted it but could not mark it as the answer ;) – mumtaz Nov 10 '10 at 8:43
Here are some ways of looking at it for an absolute beginner. It is only a start. It is not the way you will finally look at it, but there is nothing actually incorrect here.
First, plain English just won't do, you are going to need some math. Do you know anything about complex numbers? If not, learn a little more before you begin.
Second, in quantum mechanics, we don't work with probabilities, you know, numbers between 0 and 1 that represent the likelihood of something happening. Instead we work with the square root of probabilities, called "probability amplitudes", which are complex numbers. When we are done, we convert these back to probabilities. This is a fundamental change; in a way the complex number amplitudes are just as important as the probabilities. The logic is therefore different from what you are used to.
Third, in quantum mechanics, we don't imagine that we know all about objects in between measurements; rather, we know that we don't. Therefore we only focus on what we can actually measure, and on the possible results of a particular measurement. This is because some of the assumed, imagined properties of objects in between measurements contradict one another, but we can get a theory where we can always work with the measurements themselves in a consistent way. Any measurement (meter reading, etc.) or property in between measurements has only a probability amplitude of yielding a given result, and the property only co-arises with the measurement itself.
Fourth, if we have several mutually exclusive possibilities, the probability amplitude of either one or the other happening is just the sum of the probability amplitudes of each happening. Note that this does not mean that the probabilities themselves add, the way we are used to classically. This is new and different.
If we add up the probabilities (not the amplitudes this time but their "squares") of all possible mutually exclusive measurement outcomes, obtained by "squaring" the probability amplitude of each mutually exclusive possibility, we always get 1 as usual. This is called unitarity.
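To make the last two paragraphs concrete, here is a tiny numerical example with two made-up complex amplitudes: adding the amplitudes and then "squaring" produces an interference term that simply adding probabilities would miss, while a normalized state's outcome probabilities still sum to 1 (unitarity).

```python
# Amplitudes vs. probabilities; the two amplitudes are invented for illustration.
import numpy as np

a1 = 0.6 * np.exp(1j * 0.0)          # amplitude for possibility 1
a2 = 0.8 * np.exp(1j * np.pi / 3)    # amplitude for possibility 2 (note the phase)

prob_sum = abs(a1)**2 + abs(a2)**2   # adding probabilities: 1.0
amp_sum = abs(a1 + a2)**2            # adding amplitudes first: ~1.48
print(prob_sum, amp_sum)             # the difference is the interference term

state = np.array([a1, a2]) / np.sqrt(prob_sum)   # normalized two-outcome state
print(sum(abs(c)**2 for c in state))             # 1.0, as unitarity requires
```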
This is not the whole picture, but it should help somewhat. As a reference, try looking at three videos of some lectures that Feynman once gave at Cornell. That should help more.
thanks for the link to vids – mumtaz Nov 10 '10 at 8:52
The probability amplitudes are not quite « square roots » of the probabilities, they differ from the square root by a phase factor which is important. Perhaps a parenthetical remark should be put into the third paragraph to this effect. – joseph f. johnson Jan 14 '12 at 17:07
@josephf.johnson: One can declare them to be signed square roots if one implements "operator i" and doubles the size of the Hilbert space. This is a clearer point of view for those who do not like complex numbers sneaking into physics without a physical argument. – Ron Maimon Apr 27 '12 at 18:22
The suggestion you sketch here will indeed work, but doesn't change the issue I brought up, of the phase factors. Relative phase factors are physical, whether you use the formalism of complex numbers or use an alternative real space with double the dimension (this procedure is called base change). – joseph f. johnson Apr 28 '12 at 15:36
I don't really understand exactly what you mean by assumptions, but I guess you are asking about the physical nature and behavior of the quantum world. I am going to be very inaccurate in the following, but I need some space to get things understood. What you are going to read is a harsh simplification.
The first assumption is the fact that particles at the quantum level don't have a position, like real-life objects do; they don't even have a speed. Well, they sort of have one, but in reality they are "fuzzy". Fuzzy in the sense that you cannot really assign a position to them, or a speed.
To be fair, you perfectly know a real-world scenario where you have to give up the concepts of speed and position for an entity. Take a guitar, and pluck a string. Where is the "vibration"? Can you point at one specific point in the string where the "vibration" is, or say how fast this vibration is traveling? I guess you would say that the vibration is "on the string" or "in the space between one end of the string and the other". However, there are parts of the string that are vibrating more, and parts that are not vibrating at all, so you could rightfully say that a larger part of the "vibration" is in the center of the string, a smaller part is on the lateral parts close to the ends, and no vibration is at the very ends. In some sense, you will experience a higher "presence" of the vibration in the center.
With electrons, it's exactly the same. Think "electron" as "vibration" in the previous example. Soon, you will realize that you cannot really claim anything about the position of an electron, because it has a wave nature. You are forced to see the electron not as a charged ball rolling here and there, but as a diffuse entity totally similar to the concept of "vibration". As a result, you cannot say anything about the position of an electron, but you can claim that there are zones in space where the electron has a "higher presence" and zones that have a "smaller presence". These zones are probabilistic in nature, and this probability is directly obtained from a mathematical description called the wavefunction.
The wavefunction mainly depends on the external potential, meaning that the presence of charges, such as protons and other electrons, will affect the perceived potential and will have an effect on the probability distribution in space of an electron.
A second assumption is the mapping between physical quantities (such as energy, or momentum) and quantum operators. Take for example the momentum of an everyday object: it is given by its mass times its speed. In the quantum world, you don't really have the concept of position, hence you don't have the concept of speed, hence you have to reinterpret the whole thing in terms of probability (remember the string?), which means that what you have for a quantum object is its wavefunction, and you have to extract the information about the momentum from this wavefunction. How do you do it?
Well, there's a dogma in quantum mechanics: if you apply a magic "operator" to the wavefunction, it gives you the quantity you want. There are (simple) rules to generate these operators, and you can apply them to a wavefunction to query the momentum, or the total energy, the kinetic energy, and so on.
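As a sketch of that last point: applying the standard momentum operator $-i\hbar\, d/dx$ to a plane wave returns the momentum $\hbar k$ times the same wavefunction, which is exactly the "apply an operator, read off the quantity" recipe. The symbols below are generic and the snippet is purely illustrative.

```python
# The momentum operator applied symbolically to a plane wave.
import sympy as sp

x, k, hbar = sp.symbols('x k hbar', real=True)
psi = sp.exp(sp.I * k * x)               # plane-wave wavefunction

p_psi = -sp.I * hbar * sp.diff(psi, x)   # momentum operator acting on psi
print(sp.simplify(p_psi / psi))          # -> hbar*k, the momentum eigenvalue
```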
so the first thing we notice down there is a clear shift from a deterministic to a probabilistic notion of existence? – mumtaz Nov 10 '10 at 0:57
I would not say "probabilistic existence". Saying "the electron has, say, 40% probability of being here" does not mean that it spends 40% of its time here and 60% somewhere else. I think the plucked string is the most appropriate real-world example. It's not like all the vibration is in the middle of the string 40% of the time and somewhere else 60% of the time. The vibration is just 40% there and 60% on the rest of the string. – Stefano Borini Nov 10 '10 at 1:05
so we dont have real dimensionless point particles then. These are extended in space , right ? ...but wait are we talking about the same space that we have an intuitive notion of from our empirical experience or is the space down there also funky ? – mumtaz Nov 10 '10 at 1:21
@mumtaz: The electron is technically handled as a dimensionless point, but it's irrelevant. What you work on is the coordinate of an electron, but what you have is a coordinate of space where the electron's wavefunction has a given value. When you say wavefunction $\psi(x)$, your $x$ is a dimensionless point, but it's not really like you have an infinitesimal particle stuck at point $x$. – Stefano Borini Nov 10 '10 at 8:36
Dirac seems to have thought that the most fundamental assumption of Quantum Mechanics is the Principle of Superposition. I read somewhere, perhaps in an obituary, that he began his lecture course on QM every year by taking a piece of chalk, placing it on the table, and saying (I paraphrase):
One possible state of the piece of chalk is that it is here. Another possible state is that it is over there (pointing to another table). Now according to Quantum Mechanics, there is another possible state of the chalk in which it is partly here, and partly there.
In his book he explains more about what this « partly » means: it means that the properties of a piece of chalk that is partly in state 1 (here) and partly in state 2 (there), are in between the properties of state 1 and state 2.
EDIT: I find I have compiled the basic explanations given by Dirac in the first edition of his book. Unfortunately, it is relativistic, but I am not going to re-type it all.
« we must consider the photons as being controlled by waves, in some way which cannot be understood from the point of view of ordinary mechanics. This intimate connexion between waves and particles is of very great generality in the new quantum mechanics. ...
« The waves and particles should be regarded as two abstractions which are useful for describing the same physical reality. One must not picture this reality as containing both the waves and particles together and try to construct a mechanism, acting according to classical laws, which shall correctly describe their connexion and account for the motion of the particles.
« Corresponding to the case of the photon, which we say is in a given state of polarization when it has been passed through suitable polarizing apparatus, we say that any atomic system is in a given state when it has been prepared in a given way, which may be repeated arbitrarily at will. The method of preparation may then be taken as the specification of the state. The state of a system in the general case then includes any information that may be known about its position in space from the way in which it was prepared, as well as any information about its internal condition.
« We must now imagine the states of any system to be related in such a way that whenever the system is definitely in one state, we can equally well consider it as being partly in each of two or more other states. The original state must be regarded as the result of a kind of superposition of the two or more new states, in a way that cannot be conceived on classical ideas.
« When a state is formed by the superposition of two other states, it will have properties that are in a certain way intermediate between those of the two original states and that approach more or less closely to those of either of them according to the greater or less `weight' attached to this state in the superposition process.
« We must regard the state of a system as referring to its condition throughout an indefinite period of time and not to its condition at a particular time, ... A system, when once prepared in a given state, remains in that state so long as it remains undisturbed.....It is sometimes purely a matter of convenience whether we are to regard a system as being disturbed by a certain outside influence, so that its state gets changed, or whether we are to regard the outside influence as forming a part of and coming in the definition of the system, so that with the inclusion of the effects of this influence it is still merely running through its course in one particular state. There are, however, two cases when we are in general obliged to consider the disturbance as causing a change in state of the system, namely, when the disturbance is an observation and when it consists in preparing the system so as to be in a given state.
« With the new space-time meaning of a state we need a corresponding space-time meaning of an observation. [Note: Dirac has not actually mentioned anything about `observation' up to this point...]
In plain English: Quantum physics is the realization that probabilities are Pythagorean quantities that evolve linearly.
That's it. All the rest is detail.
See slides 5-7 in Scott Aaronson's presentation: here.
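A toy rendering of that one-liner, with an arbitrary two-outcome state and rotation angle: probabilities are the squared components of a unit vector, and linear evolution that preserves the squared length (here a real rotation, the simplest unitary) keeps them summing to 1.

```python
# Probabilities as squared lengths, preserved under linear (unitary) evolution.
import numpy as np

v = np.array([0.6, 0.8])                 # amplitudes; probabilities 0.36 and 0.64
theta = 0.3                              # arbitrary evolution "angle"
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
w = U @ v                                # linear evolution of the state
print(np.sum(v**2), np.sum(w**2))        # both 1.0: the Pythagorean constraint
```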
|
1fcae5b6535ce6ba | Shielding of Electrons
in Atoms from H (Z=1) to Lw (Z=103)
When we study the binding energies of electrons in the atoms, we note that each electron is different. The nuclear charge around which it is in orbit, the nature of its orbit and the orbits of all the electrons, create a unique environment which is a sum of many electrostatic interactions, which change continually from instant to instant as the electrons perform their little dance around and through the nucleus. The average effect of all of these forces conspires to create a stable environment so that physical properties such as energy and angular momentum are conserved in the atom. The actual motions are far too complex to be adequately described (if we could describe them at all) and also be useful in helping us to understand the atom. On the other hand, these conserved properties are the best means at our disposal for aiding our understanding.
The chemical and physical nature of the atom is inherently bound up in the energy and angular momentum of its constituent electrons. It is this which makes H2O so different from H2S. The concept of shielding was introduced early on as a means to explain the changes in binding energy and it has become a useful pedagogical tool for teaching and explaining the periodic table.
When several electrons swirl around a positively charged nucleus, they will not only experience the attractive Coulomb potential of that nucleus but also the repulsive Coulomb potential of each other. If we consider a single electron in a particular orbit about a nucleus with a positive charge of Z, we can exactly solve the problem at both the non-relativistic (Schrödinger equation) and the relativistic (Dirac equation) level. But as soon as an additional electron appears in another orbit around the same nucleus, the repulsive force of this new electron will reduce the net attractive force of the nucleus upon the first electron. The extent of this reduction may be large or small depending upon the relative position of the two orbitals. For instance, if the first electron only travels in regions very close to the nucleus while the second is only very far from it, the decrease in the attractive potential may be quite small. However, if the opposite positions are maintained, then the decrease in charge may be almost exactly equal to the charge of the shielding electron. When many electrons are present, they all contribute in their own, unique way to the shielding of each other from the attractive force of the nucleus. Each will be less strongly bound to the nucleus because of this shielding. The extent to which this attractive force is diminished is a measure of the change in the physical (i.e. spectroscopy) and chemical (i.e. reactivity) properties of the atom.
All the motions which contribute to the shielding of a given electron are far too complicated to measure (and cannot even be measured, because of the Heisenberg Uncertainty principle), but their average effect is manifested directly in the change of binding energy. Based on this, we have taken all of the known electron binding energies and have compared them with the exact Dirac (relativistic) solution for an electron in the same quantum state for a single-electron atom of the same charge. By using an iterative solution, we have calculated the shielding that must be present for each electron to give rise to the observed binding energy. The following links go to various tables and graphs which record these shielding constants. With this information, you can readily explain the changes in chemistry that are observed as you go down a group or across a row in the periodic table.
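To make the inversion step concrete, here is a deliberately simplified sketch that inverts the non-relativistic hydrogenic formula E_n = -13.6 eV x Z_eff^2 / n^2 for the shielding constant s = Z - Z_eff, instead of iterating against the exact Dirac solution as the tables here do. The binding energy used is a round illustrative value, not one taken from these tables.

```python
# Non-relativistic stand-in for the page's procedure: infer the shielding
# constant s = Z - Z_eff from a measured binding energy via the hydrogenic
# formula. The example input (a neon 1s binding energy near 870 eV) is
# illustrative only.
RYDBERG_EV = 13.6057

def shielding_constant(binding_energy_ev, Z, n):
    """Shielding constant s that reproduces the observed binding energy."""
    z_eff = n * (binding_energy_ev / RYDBERG_EV) ** 0.5
    return Z - z_eff

print(shielding_constant(870.2, Z=10, n=1))  # ~2.0 with these numbers
```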
Atomic Shielding Constants
Hydrogen and the Alkali Metals (Group 1: H to Fr)
The Alkaline Earth Metals (Group 2: Be to Ra)
Transition Elements (Group 3: Sc, Y, La, Ac)
Transition Elements (Group 4: Ti, Zr, Hf)
Transition Elements (Group 5: V, Nb, Ta)
Transition Elements (Group 6: Cr, Mo, W)
Transition Elements (Group 7: Mn, Tc, Re)
Transition Elements (Group 8: Fe, Ru, Os)
Transition Elements (Group 9: Co, Rh, Ir)
Transition Elements (Group 10: Ni, Pd, Pt)
Transition Elements (Group 11: Cu, Ag, Au)
Transition Elements (Group 12: Zn, Cd, Hg)
Boron Family (Group 13: B to Tl)
Carbon Family (Group 14: C to Pb)
Nitrogen Family (Group 15: N to Bi)
Oxygen Family, the Chalcogenides (Group 16: O to Po)
The Halogens (Group 17: F to At)
The Noble or Inert Gases (Group 18: He to Rn)
The Lanthanides (Ce to Lu)
The Actinides (Th to Lw)
More Graphs of Electron Shielding Constants
All Electrons; All n=1 Electrons; All n=2 Electrons; All n=3 Electrons
All n=4 Electrons; All n=5 Electrons; All n=6 Electrons; All n=7 Electrons
All s Electrons; All p Electrons; All d Electrons; All f Electrons
Author: Dan Thomas Email: <>
Last Updated: Sun, Feb 9, 1997 |
ee2f22c8d25d027b |
For a time dependent wavefunction, are the instantaneous probability densities meaningful? (The question applies for instances or more generally short lengths of time that are not multiples of the period.)
What experiment could demonstrate the existence of a time dependent probability density?
Can an isolated system be described by a time dependent wavefunction? How would this not violate conservation of energy?
I see the meaning of the time averaged probability density. Is the time dependence just a statistical construct?
Hello Praxeo, welcome to Physics.SE. Please try to ask questions only related to the topic (of yours) or consider a revision to focus on your question. The problem (for now) is, you're asking a lot of questions... – Waffle's Crazy Peanut Nov 15 '12 at 16:38
2 Answers
1) Why do you believe that instantaneous probability densities are not meaningful?
2) Essentially any non-stationary state for which you need to compute time-dependent wavefunctions: e.g. chemical reaction dynamics, particle scattering, etc.
3) Yes, the time-dependent Schrödinger equation applies to isolated systems.
4) By definition energy is conserved in an isolated system. Moreover, the Schrödinger equation conserves energy because the generator of time translations is the Hamiltonian and this commutes with itself $[H,H]=0$, i.e. energy is conserved. For isolated systems, the Hamiltonian is time-independent (explicitly) and the time-dependent wavefunction $\Psi$ has the well-known form $\Psi = \Phi e^{-iEt/\hbar}$, with $E$ the energy of the isolated system.
5) I do not understand the question.
In (4), one needs a further condition that the Hamiltonian is itself time-independent, $\frac{\partial H}{\partial t} = 0$. – Stan Liou Nov 16 '12 at 9:34
As well, one has to distinguish the energy certainty from the energy conservation. – Vladimir Kalitvianski Nov 16 '12 at 15:17
@StanLiou: Conservation, by definition, implies zero production $d_iH/dt=0$. If the Hamiltonian has explicit time dependence then the equation of motion contains a 'flow' term $d_eH/dt$ but the production term continues being zero. – juanrga Nov 16 '12 at 18:28
@VladimirKalitvianski: Not sure what do you mean, but the conservation law $[H,H]=0$ is independent of the kind of quantum state. – juanrga Nov 16 '12 at 18:32
It's certainly correct and tautologous to say that energy is conserved in an isolated system. But if your last sentence were correct, all quantum systems whatever would conserve energy, because $[H,H] = 0$ is an exact identity and $H$ is always the generator of time translation. Hence, I expected the point of $[H,H] = 0$ to be a reference to $\frac{dA}{dt} = \frac{\partial A}{\partial t} + \frac{1}{i\hbar}[A,H]$ in the Heisenberg picture or analogous expectations in the Schrödinger picture. In the Lagrangian formalism, energy through Noether's theorem also needs no explicit time dependence. – Stan Liou Nov 16 '12 at 19:13
Yes, $|\psi(t)|^2$ is an instantaneous probability density.
Passage of a wave packet can be experimentally observed.
An isolated system can be in a superposition of different energy eigenfunctions. It does not violate the energy conservation law because initially the system is not in an eigenstate - it has some energy uncertainty at $t=0$. This uncertainty evolves as any other uncertainty.
EDIT: Let us make a superposition of two states: $$\psi(t)=c_1\psi_1(x)e^{-iE_1 t/\hbar}+c_2\psi_2(x)e^{-iE_2 t/\hbar}.$$ It means that in an experiment we can find the system in state 1 with probability $|c_1|^2$ and in state 2 with probability $|c_2|^2$. The system is free, which is why the coefficients $c_1$ and $c_2$ are constant in time (the occupation numbers do not depend on time).
Measuring the system energy will sometimes give $E_1$ and sometimes $E_2$, with the same probabilities. So initially and later on the system does not have a certain energy. The state $H\psi$ depends on time as $$H\psi=c_1 E_1 \psi_1(x)e^{-iE_1 t/\hbar}+c_2 E_2 \psi_2(x)e^{-iE_2 t/\hbar}.$$ It is not an eigenstate of the Hamiltonian, so the time derivative $\partial\psi/\partial t$ is not proportional to $\psi$.
The Hamiltonian expectation value, however, does not depend on time: $$\langle\psi|H|\psi\rangle = |c_1|^2 E_1 + |c_2|^2 E_2 = const.$$ In other words, it is the energy expectation value that is conserved, not the energy. The latter is undefined, uncertain in this free state.
You invoke the "energy conservation law" $dH/dt=0$ which is an operator relationship. If the system has a certain energy $E_n$ in the initial state, this value remains the system energy in later moments, so your "conservation law" may be cast in a form $dE(t)/dt=0$ that means $E=const=E(0)=E_n$.
But if the system does not have a certain energy at the initial state $\psi(0)$, then there is no $E(0)$ to conserve and your operator relationship turns into conservation of the expectation value.
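A quick numeric check of this answer's claim, with arbitrary energies, coefficients, and a two-point toy basis (taking $\hbar = 1$): the probability density $|\psi(t)|^2$ oscillates in time, while $\langle\psi|H|\psi\rangle$ stays fixed at $|c_1|^2 E_1 + |c_2|^2 E_2$.

```python
# Two-state superposition: oscillating density, constant <H>.
import numpy as np

E1, E2 = 1.0, 3.0                                  # arbitrary energy levels
c1, c2 = np.sqrt(0.7), np.sqrt(0.3)                # occupation amplitudes
phi1 = np.array([1.0, 1.0]) / np.sqrt(2)           # two overlapping,
phi2 = np.array([1.0, -1.0]) / np.sqrt(2)          #   orthonormal basis states
H = E1 * np.outer(phi1, phi1) + E2 * np.outer(phi2, phi2)

for t in (0.0, 0.5, 1.0):
    psi = c1 * phi1 * np.exp(-1j * E1 * t) + c2 * phi2 * np.exp(-1j * E2 * t)
    print(t, np.round(np.abs(psi)**2, 3),          # density changes with t
          np.vdot(psi, H @ psi).real)              # <H> = 0.7*E1 + 0.3*E2, constant
```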
|
3292104f368c9db8 |
Kirsten Stadermann is one such teacher. Although she originally intended to make a career of researching laser physics, she “accidentally” started teaching in Holland when a local school lost a teacher unexpectedly.
“It was such a great experience,” she recalls of her first few months on the job. “At the school I was smiling all day long… and I thought, well, that’s what I want to do.”
Stadermann started off scouring the literature for state and national standards that mention one or more topics generally regarded as modern or quantum physics*. Although she faced difficulties in finding readily accessible documents, which ultimately limited her study to mostly European countries, she analyzed the curricula of 15 countries that mention quantum physics—five more than had been previously studied. In addition, some of the countries (Germany, for example) set their educational standards on a state-by-state basis, so in total, she reviewed 23 different curriculum documents.
She noticed immediately that almost all the countries approach quantum physics as an elective or advanced option for students already studying physics—about 5%-20% of the overall population of 17- to 19-year-olds. She was surprised to find, however, that Einsteinian physics is a central component of the standard physics curriculum in Australia and the German state of Bavaria for students as young as 14 or 15. Despite some teachers’ concerns that younger students would be hopelessly befuddled by such complex topics, research shows that they are actually quite capable of grasping the key concepts. In fact, the mind-bending aspect of physics served to increase student interest in the subject—particularly among girls.
This meshes with Stadermann’s own experience in the classroom. As she explains, high schoolers don’t have the foundation in math that is required for quantum calculations, so teachers are forced to discuss the concepts in qualitative terms. This frequently leads to broader speculations on the philosophical implications as students grapple with such seemingly impossible ideas as wave-particle duality or the Heisenberg uncertainty principle.
Stadermann recalls, “[At first] I was a little bit afraid that in the end they couldn’t pass the exams… but the funny part is that these classes—where we had these discussions—did much better than the other classes.”
Artist's rendering of quantum entanglement
One of the biggest challenges faced by physics teachers lies in the fact that quantum physics is anything but straightforward. Indeed, it's the very fact that there is no one "right" interpretation that excites many students about the subject. "It's very important to show them…that everybody understands that nobody understands it," she says. Not only does this assuage the students' anxieties when faced with difficult concepts, it illustrates for the students the very nature of science itself.
Scientific understanding is, after all, anything but uniform and static. For centuries, individuals have grappled with confusing observations, argued adamantly amongst themselves, and worked together to synthesize their many insights into a set of overarching theories. And as students discuss the merits and implications of various models, isn’t that what they experience on a smaller scale? Stadermann hopes that by engaging more fully with the nature of science in the classroom, students will be less susceptible to the distrust that all too often follows scientists. In particular, she mentions the fact that many people are wary of climate scientists because of the disagreements that occasionally erupt between them. “[My students] can understand that that’s normal, not a bad thing,” she says optimistically.
Naturally, there are reasons that quantum physics hasn’t been adopted as standard curriculum by droves of countries. For one, every hour spent on quantum physics is one hour that isn’t spent on another topic. For another, quantum physics doesn’t lend itself to the inexpensive and simple lab experiments high school teachers usually rely on for concrete experience. Finally, how can you test a topic where there are no right answers?
Stadermann readily agrees that it is difficult to choose a topic to abandon in favor of quantum physics. In Holland, the school board caused a controversy when it elected to cut optics, a subject critical to the understanding of everything from glasses to telescopes. That doesn’t necessarily mean that the students understand less of the world, though. In fact, they can develop a greater appreciation of modern electronics and explore cutting-edge technologies like quantum computers—arguably even more important to a young person preparing for a 21st-century career.
While it is also true that quantum physics doesn’t lend itself easily to lab experiments, students aren’t automatically devoid of the laboratory experience. Stadermann and countless other teachers have found the PhET simulations put out by the University of Colorado to be invaluable. These free applications can run on nearly any computer—no expensive lab equipment needed—and provide the students with a way to tweak and test to their hearts’ content. Stadermann cautions only that the students must be encouraged to form their own explanations and discuss amongst themselves for the simulations to be truly useful.
“Talking about interpretations from Bohr and Einstein doesn’t really help if the students are not allowed to think themselves,” she warns.
A PhET simulation designed to teach students about the photoelectric effect. Credit: PhET
The question of testing quantum physics in secondary school is more difficult. Stadermann has used both multiple-choice questions and in-depth oral examinations to study the effectiveness of her teaching, and they tell drastically different stories. In the multiple-choice tests, her students performed poorly overall, averaging around 7 correct answers out of 20. When she actually spoke with them, though, it soon became clear that they had a broad understanding ranging over a variety of interpretations—and what’s more, they enjoyed delving into deep philosophical questions. Even more importantly in her mind, they showed a good comprehension of the nature of science. “The question is, what are we actually testing with the multiple choice tests?” she asks.
Teachers of many other subjects, like literature, already rely on open-ended or essay-based exam questions for this very reason. “But physics teachers always want a yes and no answer so the whole culture of physics is different,” Stadermann says. In this sense, physics teachers are used to comparing their classes’ completed exams against an answer key, and the habit dies hard.
Even so, Stadermann believes there is value in teaching quantum physics even if it is not always satisfactorily tested in final exams—and that’s the benefit it brings to the students. They learn to cope with competing perspectives and enjoy the philosophical and technological implications of quantum physics. Perhaps most telling is the fact that she typically has five students per year who go on to study physics, when most classes produce just one. It’s not the study of projectile motion that captures her students’ interest—it’s the parallel universes, quantum tunneling, and semiconductors that they simply can’t get enough of.
The math can come later.
–Eleanor Hook
*The complete list of topics considered by this study are as follows:
Blackbody radiation
Bohr atomic model
Discrete energy levels (line spectra)
Interactions between light and matter
Wave-particle duality/complementarity
Matter waves, quantitative (de Broglie)
Technical applications
Heisenberg’s uncertainty principle
Probabilistic/statistical predictions
Philosophical consequences/interpretations
One dimensional model/potential well
Atomic orbital model
Exclusion principle/periodic table
Schrödinger equation
Calculations of detection probability
1. Physics is one of the most common science subjects taken in Singapore, with most students taking either pure physics or combined sciences (physics/chemistry). Yet, for those who are new to the subject, such as secondary three students, learning about mechanics, light waves, heat radiation, electromagnetism and the structure of atoms can be a little daunting. Quantum physics is one of the most interesting topics to learn more about.
|
b049781c23170d5d | Are there any truly dumb questions in physics?
I trade thoughts and correspond occasionally with other interested amateurs and some highly qualified professionals on an internet forum on theoretical physics. While most of the topics raised there are serious, intelligent and thoughtful questions involving complex interpretations of concepts, math, and experiments, sometimes there are queries posed where your initial reaction is "not this again!" or, "didn't we cover that subject the last time?" But if you take a few minutes to think about it, you realize that what might seem like a dumb question often turns out to be a smart question, because it leads you to think about an old issue in a new way, or to re-think old assumptions that you may have left unchallenged for too long. A couple of recent questions on our forum were like this.
My first response to one of these questions, “Does a photon have mass?” was dismay, and I posted a (too) quick response. In many ways this question is a nonsensical one. A photon has energy, else how could it displace another “particle” from a plate of another material per the photoelectric effect? And if you accept E=mc2, then energy = mass in that conceptual universe of thought, and of course a photon has mass. But if the question is about the classical notion of “rest mass”, then we’re in another quandary, because a photon is never at rest.
So, one thinks that this mass/energy duality might be akin to that other mysterious duality of modern physics, the “wave/particle duality”, except that in the case of mass and energy, at least both qualities can be measured.
In this thinker's conceptual model, the answer to this question lies outside the quantum physics model of the universe. If what we designate as a photon carries energy but no "rest mass" then we must abandon the notion of it as a "particle" as it is considered in the QT universe. Instead we should see it as something more like a coherent wrinkle or distortion in the background fabric of the cosmos of which many of us are convinced that our universe is a part. If one sees the so-called "empty" cosmos as made up of an extremely high frequency electromagnetic field, our "photon" can be seen as simply a small but significant distortion of that field. Its apparent velocity, c, then, is not seen as the passage of a particle "through" a medium but as a wave-like distortion of the medium itself, and its apparent velocity is a constant, constrained by the fine grain, the ultra high frequency of the medium, just as the apparent velocity of the passage of an ocean wave does not represent any forward movement of the medium itself, only the passage of a distortion. Think of cracking a whip.
A second "dumb" question is a little more complex. It goes like this: "Please explain to me the quantum theory called 'superposition.'" Well, the answer is also complex. Superposition is often assumed to mean the presence of two entities such as electrons occupying the same space at the same time. This is a false assumption, and we can all agree that two substantive things, two particles, say, cannot occupy the same space at the same time. (Note here for further reference, however, that two (or more) wave conglomerations can, in fact, occupy the same space at the same time.)
The accepted definition of superposition is more of a mathematical construct and according to Wikipedia, it goes like this:
Quantum superposition is a fundamental principle of quantum mechanics that holds that a physical system—such as an electron—exists partly in all its particular, theoretically possible states (or, configuration of its properties) simultaneously; but, when measured or observed, it gives a result corresponding to only one of the possible configurations (as described in interpretation of quantum mechanics). Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions to a particular equation will also be a solution of it. ”
In simpler terms, what this means is this: that you cannot know or predict at any time, what state any given electron might be in. It’s an easy out for a theory that purports to explain how everything works in nature, but is strangely unsatisfying as an explanation of some observed behavior. Physicists seem to accept it though, however much it sounds like religious dogma about the eternal mysteries.
A third question has come up more recently, about yet another mystery, something called "quantum entanglement." Quantum dogma has it, and this has supposedly actually been observed, that two quantum entities can become "entangled," so that if they are then separated by any distance, even as much as at both sides of the universe, if the state of one of them changes, say from a left spin to a right spin, or from a positive to a negative value, the other entangled entity is automatically, instantaneously, changed as well. Now, Einstein challenged this idea, both as presuming instantaneous action at a distance, without any known force being involved, as well as violating the principle that no action in the universe can exceed the speed of light.
Here is a way out of having to deal with both of these quantum conundrums, these paradoxes and contradictions that are somehow easily supported in the language of mathematics but not in the domain of observable reality. It requires only a simple conceptual adjustment: that we accept the notion that what physicists since perhaps the time of Democritus have assumed to have the nature of a "particle" is in fact simply a very small, coherent, organized, higher concentration of energy in the field of the cosmos. It is a distortion of the field, and because of its concentration of energy, it generates a companion field in its local region. Back to entanglement, for instance: if we postulate the cosmos as a tightly bound energy field, one might infer behavior something like what happens when you pull at a corner of a bed sheet to eliminate a wrinkle, only to have the same wrinkle miraculously appear in the opposite corner.
If taken seriously, it can be seen that this model enormously simplifies our conceptual vision from the microscopic world of physics out to the macro-macro model of the cosmologist. There is a place here to explain mystical phenomena from the “double slit experiment” out to the mysterious substance called “dark matter” which can then be seen as large, broad scale distortions in the cosmic field surrounding truly high energy concentrations such as stars, galaxies, and clusters. And “dark energy,” that other mysterious unseen substance can be seen as simply the substance of the field itself.
This doesn't throw out all of the work of the last century. Much of the math will still apply as long as the mathematicians are willing to give up their claim that "the math is the reality." What the rest of us have to give up is something very tiny, and which no one has ever seen anyway: that hypothetical little billiard ball that has hypnotized scientists and philosophers for a couple of thousand years.
The collection of assertions that make up what is collectively known as quantum theory has, for almost a hundred years, been considered the principal body of knowledge that underlies modern physics. Unfortunately, it remains after all that time a body full of contradictions, paradoxes, and uncertainties. It has frustrated all attempts at reconciliation with the dogma at the other end of its chain, the theory of general relativity, itself contradicted by the insubstantiality, the actual reality, of its two principal elements, space and time. QT must be seen, not as a body of knowledge, but as a body of supposition, that would have been abandoned early had it not been called on to fill an intellectual vacuum, and had it not had such a corps of vociferous supporters speaking a language most could not understand, that of the highest and most impenetrable mathematics.
The challenge we are trying to meet here is to replace those troubled, damaged, incomplete, and ontologically challenged models with one that is more complete, more consistent, that explains conceptually more observable phenomena.
It is difficult, we know, to mentally conceive a cosmos that is a 3-dimensional field of vibrating energy, with bundles of vibrations, perhaps the things we call photons, electrons, and the like, that are simply coherent distortions of the field itself, not foreign bodies moving through it; or the concept that everything that exists in the universe is made of those distortions, from the tiniest entities out to and including the stars—but it should not be more difficult than swallowing the thorny paradoxes and contradictions of quantum theory.
So, I’ll end with one more dumb question. “Is the Sun around which we annually circumnavigate a real tangible object , or is it perhaps just a truly bright spot out there in the sky?” As you might guess from my thoughts above, I’m leaning strongly toward the second notion.
About Charles Scurlock
Charles is a recently retired architect/planner and generalist problem-solver with a lifelong interest in science, physics, and cosmology, and the workings of the human mind. He has started this blog in the interest of sharing his ideas with others of like-(or not so like) minds.
1 Response to Are there any truly dumb questions in physics?
1. pallsopp42 says:
Happy Christmas Eve, Chuck.
Nice posting!! You’re right. Those “dumb questions” aren’t dumb at all. I agree that thinking about them over time can refine ideas and create new insights into what answers might be.
I agree with you about the sun being a bright spot – a big one for sure!!
All the very best
|
44cc31ba4d2132b6 |
Discrete & Continuous Dynamical Systems - A
July 2014, Volume 34, Issue 7
A note on the Chern-Simons-Dirac equations in the Coulomb gauge
Nikolaos Bournaveas, Timothy Candy and Shuji Machihara
2014, 34(7): 2693-2701 doi: 10.3934/dcds.2014.34.2693
We prove that the Chern-Simons-Dirac equations in the Coulomb gauge are locally well-posed for initial data in $H^s$ with $s>\frac{1}{4}$. To study nonlinear wave or Dirac equations at this regularity generally requires the presence of null structure. The novel point here is that we make no use of the null structure of the system. Instead we exploit the additional elliptic structure in the Coulomb gauge together with the bilinear Strichartz estimates of Klainerman-Tataru.
Bubbling solutions for the Chern-Simons gauged $O(3)$ sigma model in $\mathbb{R}^2$
Kwangseok Choe, Jongmin Han and Chang-Shou Lin
2014, 34(7): 2703-2728 doi: 10.3934/dcds.2014.34.2703
In this paper, we construct multivortex solutions of the elliptic governing equation for the self-dual Chern-Simons gauged $O(3)$ sigma model in $\mathbb{R}^2$ when the Chern-Simons coupling parameter is sufficiently small and the locations of the singular points satisfy suitable conditions. Our solutions show concentration phenomena at some of the singular points as the coupling parameter tends to zero, and they are locally asymptotically radial near each blow-up point.
Enveloping semigroups of systems of order d
Sebastián Donoso
2014, 34(7): 2729-2740 doi: 10.3934/dcds.2014.34.2729
In this paper we study the Ellis semigroup of a $d$-step nilsystem and the inverse limit of such systems. By using the machinery of cubes developed by Host, Kra and Maass, we prove that such a system has a $d$-step topologically nilpotent enveloping semigroup. In the case $d=2$, we prove that these notions are equivalent, extending a previous result by Glasner.
Rank as a function of measure
Tomasz Downarowicz, Yonatan Gutman and Dawid Huczek
2014, 34(7): 2741-2750 doi: 10.3934/dcds.2014.34.2741
We establish certain topological properties of rank understood as a function on the set of invariant measures on a topological dynamical system. To be exact, we show that rank is of Young class LU (i.e., it is the limit of an increasing sequence of upper semicontinuous functions).
Computability of the Julia set. Nonrecurrent critical orbits
Artem Dudko
2014, 34(7): 2751-2778 doi: 10.3934/dcds.2014.34.2751
We prove that the Julia set of a rational function $f$ is computable in polynomial time, assuming that the postcritical set of $f$ does not contain any critical points or parabolic periodic orbits.
On the characterization of $p$-harmonic functions on the Heisenberg group by mean value properties
Fausto Ferrari, Qing Liu and Juan Manfredi
2014, 34(7): 2779-2793 doi: 10.3934/dcds.2014.34.2779
We characterize $p$-harmonic functions in the Heisenberg group in terms of an asymptotic mean value property, where $1 < p < \infty$, following the scheme described in [16] for the Euclidean case. The new tool that allows us to consider the subelliptic case is a geometric lemma, Lemma 3.2 below, that relates the directions of the points of maxima and minima of a function on a small subelliptic ball with the unit horizontal gradient of that function.
Schrödinger limit of weakly dissipative stochastic Klein--Gordon--Schrödinger equations and large deviations
Boling Guo, Yan Lv and Wei Wang
2014, 34(7): 2795-2818 doi: 10.3934/dcds.2014.34.2795
This paper derives a Schrödinger approximation for weakly dissipative stochastic Klein--Gordon--Schrödinger equations with a singular perturbation and scaled small noises on a bounded domain. Detailed uniform estimates are given to pass to the limit as the perturbation and noise disappear. Approximations in two different spaces are considered. Furthermore, a large deviation principle for the solutions is derived by a weak convergence approach.
Pointwise hyperbolicity implies uniform hyperbolicity
Boris Hasselblatt, Yakov Pesin and Jörg Schmeling
2014, 34(7): 2819-2827 doi: 10.3934/dcds.2014.34.2819
We provide a general mechanism for obtaining uniform information from pointwise data. For instance, a diffeomorphism of a compact Riemannian manifold with pointwise expanding and contracting continuous invariant cone families is an Anosov diffeomorphism, i.e., the entire manifold is uniformly hyperbolic.
Quantization coefficients for ergodic measures on infinite symbolic space
Mrinal Kanti Roychowdhury
2014, 34(7): 2829-2846 doi: 10.3934/dcds.2014.34.2829 +[Abstract](1781) +[PDF](414.8KB)
In this paper we consider an ergodic measure with bounded distortion on a symbolic space generated by an infinite alphabet, and show that for each $r\in (0, +\infty)$ there exists a unique $k_r \in (0, +\infty)$ such that both the $k_r$-dimensional lower and upper quantization coefficients for its image measure $m$, with support lying on the limit set generated by an infinite conformal iterated function system satisfying the strong open set condition, are finite and positive. In addition, we show that $k_r$ can be expressed by a simple formula involving the temperature function of the system. The result extends and generalizes a similar result of Roychowdhury established for a finite conformal iterated function system [Bull. Polish Acad. Sci. Math. 57 (2009)].
Curves of equiharmonic solutions, and problems at resonance
Philip Korman
2014, 34(7): 2847-2860 doi: 10.3934/dcds.2014.34.2847 +[Abstract](1783) +[PDF](395.7KB)
We consider the semilinear Dirichlet problem \[ \Delta u+kg(u)=\mu_1 \varphi_1+\cdots +\mu_n \varphi_n+e(x) \;\; \text{for } x \in \Omega, \;\; u=0 \;\; \text{on } \partial \Omega, \] where $\varphi_k$ is the $k$-th eigenfunction of the Laplacian on $\Omega$ and $e(x) \perp \varphi_k$, $k=1, \ldots,n$. Write the solution in the form $u(x)= \sum_{i=1}^n \xi_i \varphi_i+U(x)$, with $U \perp \varphi_k$, $k=1, \ldots,n$. Starting with $k=0$, when the problem is linear, we continue the solution in $k$ by keeping $\xi =(\xi_1, \ldots,\xi_n)$ fixed, but allowing for $\mu =(\mu_1, \ldots,\mu_n)$ to vary. Studying the map $\xi \rightarrow \mu$ provides us with the existence and multiplicity results for the above problem. We apply our results to problems at resonance, at both the principal and higher eigenvalues. Our approach is suitable for numerical calculations, which we implement, illustrating our results.
Rotation-strain decomposition for the incompressible viscoelasticity in two dimensions
Zhen Lei
2014, 34(7): 2861-2871 doi: 10.3934/dcds.2014.34.2861 +[Abstract](1940) +[PDF](364.4KB)
In this paper, we revisit the 2D rotation-strain model which was derived in [14] for the motion of incompressible viscoelastic materials, and prove its global well-posedness without making use of the equation for the rotation angle. The proof relies on a new identity satisfied by the strain matrix. The smallness assumptions are imposed only on the $H^2$ norm of the initial velocity field and the initial strain matrix, which implies that the deformation tensor is allowed to be far away from the equilibrium in the maximum norm.
Long-time behavior for a class of degenerate parabolic equations
Hongtao Li, Shan Ma and Chengkui Zhong
2014, 34(7): 2873-2892 doi: 10.3934/dcds.2014.34.2873 +[Abstract](2425) +[PDF](455.5KB)
The long-time behavior of a class of degenerate parabolic equations in a bounded domain will be considered in the sense that the nonnegative diffusion coefficient $a(x)$ is allowed to vanish on a nonempty closed subset with zero measure. For this purpose, some appropriate weighted Sobolev spaces are introduced and the corresponding embedding theorem is established. Then, we show the global existence and uniqueness of weak solutions. Finally, we distinguish two cases (subcritical and supercritical) to prove the existence of compact attractors for the semigroup associated with this class of equations.
Generalized exact boundary synchronization for a coupled system of wave equations
Tatsien Li, Bopeng Rao and Yimin Wei
2014, 34(7): 2893-2905 doi: 10.3934/dcds.2014.34.2893 +[Abstract](2034) +[PDF](359.5KB)
By means of the Moore-Penrose generalized inverse, a general framework is presented to treat the generalized exact boundary synchronization for a coupled system of wave equations.
Development of traveling waves in an interacting two-species chemotaxis model
Tai-Chia Lin and Zhi-An Wang
2014, 34(7): 2907-2927 doi: 10.3934/dcds.2014.34.2907 +[Abstract](2364) +[PDF](510.1KB)
By constructing sub- and supersolutions, we establish the existence of traveling wave solutions to a two-species chemotaxis model, which describes two interacting species chemotactically reacting to a chemical signal that is degraded by the two species. We identify the full parameter regime in which the traveling wave solutions exist, derive the asymptotic decay rates of traveling wave solutions at far field and show that the traveling wave solutions are convergent as the chemical diffusion coefficient goes to zero.
Discrete admissibility and exponential trichotomy of dynamical systems
Adina Luminiţa Sasu and Bogdan Sasu
2014, 34(7): 2929-2962 doi: 10.3934/dcds.2014.34.2929 +[Abstract](2370) +[PDF](497.0KB)
The aim of this paper is to present a new and complex study on the asymptotic behavior of dynamical systems, providing necessary and sufficient conditions for the existence of the exponential trichotomy. We associate to a nonautonomous discrete dynamical system an input-output system and we define a new admissibility concept called $(l^\infty(\mathbb{Z}, X), l^1(\mathbb{Z}, X))$-admissibility. First, we prove that the admissibility is a sufficient condition for the existence of the trichotomy projections, for their uniform boundedness, for their compatibility with the coefficients of the initial dynamical system and for certain reversibility properties. Assuming that the associated input-output operators satisfy a natural boundedness condition, we deduce that the admissibility is a necessary and sufficient condition for the existence of the uniform exponential trichotomy. Next, based on admissibility arguments, we obtain, for the first time in the literature, that all the trichotomic properties of a nonautonomous system can be completely recovered from the trichotomic behavior of the associated discrete dynamical system. Finally, we apply the main results in order to obtain a new characterization for uniform exponential trichotomy of evolution families in terms of discrete admissibility.
Hyperbolicity and types of shadowing for $C^1$ generic vector fields
Raquel Ribeiro
2014, 34(7): 2963-2982 doi: 10.3934/dcds.2014.34.2963 +[Abstract](2126) +[PDF](466.8KB)
We study various types of shadowing properties and their implication for $C^1$ generic vector fields. We show that, generically, any of the following three hypotheses implies that an isolated set is topologically transitive and hyperbolic: (i) the set is chain transitive and satisfies the (classical) shadowing property, (ii) the set satisfies the limit shadowing property, or (iii) the set satisfies the (asymptotic) shadowing property with the additional hypothesis that stable and unstable manifolds of any pair of critical orbits intersect each other. In our proof we essentially rely on the property of chain transitivity and, in particular, show that it is implied by the limit shadowing property. We also apply our results to divergence-free vector fields.
The Aubry-Mather theorem for driven generalized elastic chains
Siniša Slijepčević
2014, 34(7): 2983-3011 doi: 10.3934/dcds.2014.34.2983 +[Abstract](2445) +[PDF](801.6KB)
We consider uniformly (DC) or periodically (AC) driven generalized infinite elastic chains (a generalized Frenkel-Kontorova model) with gradient dynamics. We first show that the union of supports of all space-time invariant measures, denoted by $\mathcal{A}$, projects injectively to a dynamical system on a 2-dimensional cylinder. We also prove existence of space-time ergodic measures supported on a set of rotationally ordered configurations with an arbitrary (rational or irrational) rotation number. This shows that the Aubry-Mather structure of ground states persists if an arbitrary AC or DC force is applied. The set $\mathcal{A}$ attracts almost surely (in probability) configurations with bounded spacing. In the DC case, $\mathcal{A}$ consists entirely of equilibria and uniformly sliding solutions. The key tool is a new weak Lyapunov function on the space of translationally invariant probability measures on the state space, which counts intersections.
On the higher-order b-family equation and Euler equations on the circle
Min Zhu
2014, 34(7): 3013-3024 doi: 10.3934/dcds.2014.34.3013 +[Abstract](2060) +[PDF](388.2KB)
Considered herein is a geometric investigation of the higher-order b-family equation describing exponential curves of the manifold of smooth orientation-preserving diffeomorphisms of the unit circle in the plane. It is shown that the higher-order $b$-family equation can only be realized as an Euler equation on the Lie group Diff$(\mathbb{S}^1)$ of all smooth and orientation-preserving diffeomorphisms of the circle if the parameter $b=2$, which corresponds to the higher-order Camassa-Holm equation with the metric $H^k$, $k\ge 1$.
The usual approach to the chemical bond is to "solve the Schrödinger equation", and this is done by attempting to follow the dynamics of the electrons. As we all know, that is impossible; the equation as usually presented requires you to know the potential field in which every particle moves, and since each electron is in motion, the problem becomes insoluble. Even classical gravity has no analytical solution for the three-body problem. We all know the answer – there are various assumptions and approximations made, and as Pople noted in his Nobel lecture, validation of very similar molecules allows you to assign values to the various difficult terms and you can get quite accurate answers for similar molecules.
However, you can only be sure of that if there are suitable examples from which to validate. So, quite accurate answers are obtained, but the question remains, is the output of any value in increasing the understanding of what is going on for chemists? In other words, can they say why A behaves differently to a seemingly similar B?
There is a second issue. Because of validation and the requirement to obtain results equivalent to those observed, can we be sure the results are obtained the right way? As an example, in 2006 some American chemists decided to test some programs that were considered tolerably advanced and available to general chemists on some quite basic compounds. The results were quite disappointing, even to the extent of showing that benzene was non-planar. (Moran, D. and five others. 2006. J. Amer. Chem. Soc. 128: 9342-9343.)
There is a third issue, and this seems to have passed without comment amongst chemists. In the state vector formalism of quantum mechanics, it is often stated that you cannot factorise the overall wave function. That is the basis of the Schrödinger cat paradox. The whole cat is in the superposition of states that differ on whether or not the nucleus has decayed. If you can factorise the state, the paradox disappears. You may still have to open the box to see what has happened to the cat, but the cat, being a macroscopic being, has behaved classically and was either dead or alive before you opened it. This, of course, is an interpretive issue. The possible classical states are "cat alive" (that has amplitude A) and "cat dead" (which has amplitude B). According to the state vector formalism, the actual state has amplitude (A + B), hence thinking that the cat is in a superposition of states. The interesting thing about this is it is impossible to prove this wrong, because any attempt to observe the state collapses it to either A or B, and the "or" is the exclusive form. Is that science or another example of the mysticism that we accuse the ancients of believing, and we laugh at them for it? Why won't the future laugh at us? In my opinion, the argument that this procedure aids calculation is also misleading; classically you would calculate the probability that the nucleus had decayed, and the probability the rest of the device worked, and you could lay bets on whether the cat was alive or dead.
Accordingly, I am happy with factorizing the wave function. Indeed, every time you talk about a p orbital interacting with . . . you have factorized the atomic state, and in my opinion chemistry would be incomprehensible unless we do this sort of thing. However, I believe we can go further. Let us take the hydrogen atom, and accept that a given state has action nh associated with it. We can factorise that (Schiller, R. 1962. Phys Rev 125: 1100–1108) such that
nh = [(n_r + ½) + (l + ½)]h (1)
Here, while the quantum numbers count the action, they also count the number of radial and angular nodes respectively: n_r is the number of radial nodes and l the number of angular nodes, and n = n_r + l + 1 recovers the usual principal quantum number. What is interesting is the half quanta; why are they there? In my opinion, they have separate functions from the other quanta. For example, consider the ground state of hydrogen. We can rewrite (1) as
h = [( ½ ) + ( ½)]h (2)
What does (2) actually say? First there are no nodes. The second is the state actually complies with the Uncertainty Principle. Suppose instead, we put the RHS of (2) simply equal to 1. If we assign that to angular motion solely, we have the Bohr theory, and we know that is wrong. If we assign it to radial motion solely, we have the motion of the electron as lying on a line through the nucleus, which is actually a classical possibility. While that turns up in most text books, again I consider that to be wrong because it has zero angular uncertainty. You know the angular momentum (zero) and you know (or could know if you determined it) the orientation of the line. (The same reasoning shows why Bohr was wrong, although of course at the time he had no idea of the Uncertainty Principle.)
There is another good point about (2): it asserts the period involves two "cycles". That is a requirement for a wave, which must have a crest and a trough. If you have no nodes separating them, you need two cycles. Now, I wonder how many people reading this (if any??) can see what happens next?
Which gets me to a final question, at least for this post: how many chemists are actually happy with what theory offers them? Comments would be appreciated.
Posted by Ian Miller on Oct 22, 2017 9:45 PM BST
Posted by Ian Miller on Aug 28, 2017 12:19 AM BST
Posted by Ian Miller on Jul 3, 2017 3:23 AM BST
One issue that has puzzled me is what role, if any, does theory play in modern chemistry, other than having a number of people writing papers. Of course some people are carrying out computations, but does any of their work influence other chemists in any way? Are they busy talking to themselves? The reason why this has struck me is that the latest "Chemistry World" has an article "Do hydrogen bonds have covalent character?" Immediately below is the explanation, "Scientists wrangle over disagreement between charge transfer measurements." My immediate reaction was, what exactly is meant by "covalent character" and "charge transfer"? I know what I think a covalent bond is, which is a bond formed by two electrons from two atoms pairing to form a wave function component with half the periodic time of the waves on the original free atom. I also accept the dative covalent bond, such as that in the BH3NH3 molecule, where two electrons come from the same atom, and where the resultant bond has a strength and length as if the two electrons originated from separate atoms. That is clearly not what is meant for the hydrogen bond, but the saviour is that word "character". What does that imply?
What puzzles me here is that on reading the article, there are no charge transfer measurements. What we have, instead, are various calculations based on models, and the argument is whether the model involves transfer of electrons. However, as far as I can make out, there is no observational evidence at all. In the BH3NH3 molecule, obviously the two electrons for the bond start from the nitrogen atom, but the resultant dipole moment does not indicate a whole electron is transferred, although we could say it is, and then sent back to form the bond. However, in that molecule we have a dipole moment of over 6 Debye units. What is the change of dipole moment in forming the hydrogen bond? If we want to argue for charge transfer, we should at least know that.
From my point of view, the hydrogen bond is essentially very weak, and is at least an order of magnitude less strong than similar covalent bonds. This would suggest that if there were charge transfer, it is relatively minor. Why would such a small effect not be simply due to polarization? With the molecule BH3NH3 it is generally accepted that the lone pair on the ammonia enters the orbital structure of the boron system, with both being tetrahedral in structure, more or less. The dipole moment is about 6 Debye units, which does not correspond to one electron fully transferring to the boron system. There is clear charge transfer and the bond is effectively covalent.
Now, if we then look at ammonia, do we expect the lone pair on the nitrogen to transfer itself to the hydrogen atom of another ammonia molecule to form this hydrogen bond? If it corresponded to the boron example, then we would expect a change of at least several Debye units but as far as I know, there is no such change of dipole moment that is not explicable in terms of it being a condensed system. The article states there are experimental data to support charge transfer, but what is it?
Back to my original problem with computational chemistry: what role, if any, does theory play in modern chemistry? In this article we see a statement such as that the NBO method falls foul of "basis set superposition error". What exactly does that mean, and how many chemists appreciate exactly what it means? We have a disagreement where one side is accused of focusing on energies, while the other focuses on charge density shifts. At least energies are measurable. What bothers me is that such arguments, in which different people use the same terminology differently, are a bit like arguing about how many angels can dance on the head of a pin. What we need from theory is a reasonably clear statement of what it means, and a clear statement of what assumptions are made, and what part validation plays in the computations.
Posted by Ian Miller on Apr 24, 2017 12:54 AM BST
An interesting thing happened for planetary science recently: two papers (Nature, vol 541: Dauphas, pp 521–524; Fischer-Gödde and Kleine, pp 525–527) showed that much of how we think planets accreted is wrong. The papers showed that the Earth/Moon system has isotope distributions across a number of elements exactly the same as that found in enstatite chondrites, and that distribution applied over most of the accretion. The timing was based on the premise that different elements would be extracted into the core at different rates, and some not at all. Further, the isotope distributions of these elements are known to vary according to distance to the star, thus Earth is different from Mars, which in turn is clearly different from the asteroid belt. Exactly why they have this radial variation is an interesting question in itself, but for the moment, it is an established fact. If we assume this variation in isotope distribution follows a continuous function, then the variations we know about have sufficient magnitude that we can say that Earth accreted from material confined to a narrow zone.
Enstatite chondrites are highly reduced, their iron content tends to be as the metal or as a sulphide rather than as an oxide, and they may even contain small amounts of silicon as a silicide. They are also extremely dry, and it is assumed that they were formed at a very hot part of the accretion disk because they contain less forsterite and additionally you need very high temperatures to form silicides.
In my mind, the significance of these papers is two-fold. The first is, the standard explanation that Earth's water and biogenetic material came from carbonaceous chondrites must be wrong. The ruthenium isotope analysis falsifies the theory that so much water arrived from such chondrites. If they did, the ruthenium on our surface would be different. The second is the standard theory of planetary formation, in which dust accreted to planetesimals, these collided to form embryos, which in turn formed oligarchs or protoplanets (Mars sized objects) and these collided to form planets must be wrong. The reason is that if they did collide like that, they would do a lot of bouncing around and everything would get well-mixed. Standard computer simulations argue that Earth would have formed from a distribution of matter from further out than Mars to inside Mercury's orbit. The fact that the isotope ratios are so equivalent to enstatite chondrites shows the material that formed Earth came from a relatively narrow zone that at some stage had been very strongly heated. That, of course, is why Earth has such a large iron core, and Mars does not. At Mars, much of the iron remained as the oxide.
In my mind, this work shows that such oligarchic growth is wrong and that the alternative, monarchic growth, which has been largely abandoned, is in fact correct. But that raises the question, why are the planets where they are, and why are there such large gaps? My answer is simple: the initial accretion was chemically based, and certain temperature zones favoured specific reactions. It was only in these zones that accretion occurred at a sufficient rate to form large bodies. That, in turn, is why the various planets have different compositions, and why Earth has so much water and is the biggest rocky planet: it was in a zone that was favourable to the formation of a cement, and water from the disk gases set it. If anyone is interested, my ebook "Planetary Formation and Biogenesis" explains this in more detail, and a review of over 600 references explains why. As far as I am aware, the theory outlined there is the only one that requires the results of those papers. So, every now and again, something good happens! It feels good to know you could actually be correct where others are not.
So, will these two papers cause a change of thinking? In my opinion, it may not change anything because scientists not directly involved probably do not care, and scientists deeply involved are not going to change their beliefs. Why do I think that? Well, there was a more convincing paper back in 2002 (Drake and Righter, Nature 416: 39-44) that came to exactly the same conclusions. Instead of ruthenium isotopes, it used osmium isotopes, but you see the point. I doubt these two papers will be the straw that broke the camel's back, but I could be wrong. However, experience in this field shows that scientists prefer to ignore evidence that falsifies their cherished beliefs rather than change their minds. As a further example, neither of these papers cited the Drake and Righter paper. They did not want to admit they were confirming a previous conclusion, which is perhaps indicative they really do not wish to change people's minds, let alone acknowledge previous work that is directly relevant.
Posted by Ian Miller on Feb 5, 2017 9:41 PM GMT
Quantum harmonic oscillator
Some trajectories of a harmonic oscillator according to Newton's laws of classical mechanics (A-B), and according to the Schrödinger equation of quantum mechanics (C-H). In A-B, the particle (represented as a ball attached to a spring) oscillates back and forth. In C-H, some solutions to the Schrödinger Equation are shown, where the horizontal axis is position, and the vertical axis is the real part (blue) or imaginary part (red) of the wavefunction. C,D,E,F, but not G,H, are energy eigenstates. H is a coherent state—a quantum state that approximates the classical trajectory.
The quantum harmonic oscillator is the quantum-mechanical analog of the classical harmonic oscillator. Because an arbitrary potential can usually be approximated as a harmonic potential at the vicinity of a stable equilibrium point, it is one of the most important model systems in quantum mechanics. Furthermore, it is one of the few quantum-mechanical systems for which an exact, analytical solution is known.[1][2][3]
• One-dimensional harmonic oscillator
• Hamiltonian and energy eigenstates
• Ladder operator method
• Natural length and energy scales
• Phase space solutions
• N-dimensional harmonic oscillator
• Example: 3D isotropic harmonic oscillator
• Harmonic oscillators lattice: phonons
• Applications
• See also
• References
• External links
One-dimensional harmonic oscillator
Hamiltonian and energy eigenstates
Wavefunction representations for the first eight bound eigenstates, n = 0 to 7. The horizontal axis shows the position x. Note: The graphs are not normalized, and the signs of some of the functions differ from those given in the text.
Corresponding probability densities.
The Hamiltonian of the particle is:
\hat H = \frac{\hat p^2}{2m} + \frac{1}{2} m \omega^2 \hat x^2 ,
and its normalized energy eigenfunctions are
\psi_n(x) = \frac{1}{\sqrt{2^n\, n!}} \cdot \left(\frac{m\omega}{\pi \hbar}\right)^{1/4} \cdot e^{ - \frac{m\omega x^2}{2 \hbar}} \cdot H_n\left(\sqrt{\frac{m\omega}{\hbar}} x \right), \qquad n = 0,1,2,\ldots.
The functions Hn are the Hermite polynomials.
The corresponding energy levels are
E_n = \hbar \omega \left( n + \frac{1}{2} \right).
This energy spectrum is noteworthy for three reasons. First, the energies are quantized, meaning that only discrete energy values (integer-plus-half multiples of ħω) are possible; this is a general feature of quantum-mechanical systems when a particle is confined. Second, these discrete energy levels are equally spaced, unlike in the Bohr model of the atom, or the particle in a box. Third, the lowest achievable energy (the energy of the n = 0 state, called the ground state) is not equal to the minimum of the potential well, but ħω/2 above it; this is called zero-point energy. Because of the zero-point energy, the position and momentum of the oscillator in the ground state are not fixed (as they would be in a classical oscillator), but have a small range of variance, in accordance with the Heisenberg uncertainty principle. This zero-point energy further has important implications in quantum field theory and quantum gravity.
Note that the ground state probability density is concentrated at the origin. This means the particle spends most of its time at the bottom of the potential well, as we would expect for a state with little energy. As the energy increases, the probability density becomes concentrated at the classical "turning points", where the state's energy coincides with the potential energy. This is consistent with the classical harmonic oscillator, in which the particle spends most of its time (and is therefore most likely to be found) at the turning points, where it is the slowest. The correspondence principle is thus satisfied. Moreover, special nondispersive wave packets, with minimum uncertainty, called coherent states in fact oscillate very much like classical objects, as illustrated in the figure; they are not eigenstates of the Hamiltonian.
Ladder operator method
Probability densities |ψn(x)|2 for the bound eigenstates, beginning with the ground state (n = 0) at the bottom and increasing in energy toward the top. The horizontal axis shows the position x, and brighter colors represent higher probability densities.
The spectral method solution, though straightforward, is rather tedious. The "ladder operator" method, developed by Paul Dirac, allows us to extract the energy eigenvalues without directly solving the differential equation. Furthermore, it is readily generalizable to more complicated problems, notably in quantum field theory. Following this approach, we define the operator a and its adjoint a†,
\begin{align} a &=\sqrt{m\omega \over 2\hbar} \left(\hat x + {i \over m \omega} \hat p \right) \\ a^{\dagger} &=\sqrt{m \omega \over 2\hbar} \left(\hat x - {i \over m \omega} \hat p \right) \end{align}
This leads to the useful representation of \hat x and \hat p,
\hat x = \sqrt{\frac{\hbar}{2 m \omega}} \left( a^{\dagger} + a \right), \qquad \hat p = i \sqrt{\frac{\hbar m \omega}{2}} \left( a^{\dagger} - a \right).
The operator a is not Hermitian, since it and its adjoint a† are not equal. Yet the energy eigenstates |n⟩, when operated on by these ladder operators, give
a^\dagger|n\rangle = \sqrt{n+1}\,|n+1\rangle
a|n\rangle = \sqrt{n}\,|n-1\rangle .
It is then evident that a†, in essence, appends a single quantum of energy to the oscillator, while a removes a quantum. For this reason, they are sometimes referred to as "creation" and "annihilation" operators.
From the relations above, we can also define a number operator N, which has the following property:
N = a^{\dagger}a
N\left| n \right\rangle =n\left| n \right\rangle .
The following commutators can be easily obtained by substituting the canonical commutation relation [\hat x, \hat p] = i\hbar,
[a, a^{\dagger}] = 1, \qquad [H, a] = -\hbar\omega\, a, \qquad [H, a^{\dagger}] = \hbar\omega\, a^{\dagger}.
And the Hamiltonian can be expressed as
\hat H = \hbar\omega \left( N + \frac{1}{2} \right),
so the eigenstate of N is also the eigenstate of energy.
The commutation property yields
\begin{align} Na^{\dagger}|n\rangle&=\left(a^{\dagger}N+[N,a^{\dagger}]\right)|n\rangle\\&=\left(a^{\dagger}N+a^{\dagger}\right)|n\rangle\\&=(n+1)a^{\dagger}|n\rangle, \end{align}
and similarly,
Na|n\rangle = (n-1)\,a|n\rangle .
This means that a acts on |n⟩ to produce, up to a multiplicative constant, |n–1⟩, and a† acts on |n⟩ to produce |n+1⟩. For this reason, a is called a "lowering operator", and a† a "raising operator". The two operators together are called ladder operators. In quantum field theory, a and a† are alternatively called "annihilation" and "creation" operators because they destroy and create particles, which correspond to our quanta of energy.
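The ladder algebra above is easy to check numerically. The following is a minimal sketch (an illustration, not part of the original article), assuming units ħ = ω = 1 and a number basis truncated at nmax; the truncation only corrupts the commutator in its last row and column.

import numpy as np

# Minimal sketch (assumed units: hbar = omega = 1): truncated matrices
# for the ladder operators in the number basis {|0>, ..., |nmax-1>}.
nmax = 20
a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # a|n>  = sqrt(n)   |n-1>
adag = a.T                                      # a†|n> = sqrt(n+1) |n+1>
N = adag @ a                                    # number operator N = a† a
H = N + 0.5 * np.eye(nmax)                      # H = N + 1/2 in these units

# [a, a†] = 1 holds away from the truncation edge:
comm = a @ adag - adag @ a
assert np.allclose(comm[:-1, :-1], np.eye(nmax)[:-1, :-1])

# Spectrum n + 1/2, and a lowers a state by one quantum:
print(np.diag(H)[:4])                 # [0.5 1.5 2.5 3.5]
state = np.zeros(nmax); state[2] = 1.0
print((a @ state)[1])                 # sqrt(2), since a|2> = sqrt(2)|1>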
Given any energy eigenstate, we can act on it with the lowering operator, a, to produce another eigenstate with ħω less energy. By repeated application of the lowering operator, it seems that we can produce energy eigenstates down to E = −∞. However, since
n=\langle n|N|n\rangle=\langle n|a^{\dagger}a|n\rangle=\left(a|n\rangle\right)^{\dagger}a|n\rangle\geqslant 0,
the smallest eigen-number is 0, and
a \left| 0 \right\rangle = 0 .
In this case, subsequent applications of the lowering operator will just produce zero kets, instead of additional energy eigenstates. Furthermore, we have shown above that
H \left|0\right\rangle = \frac{\hbar\omega}{2} \left|0\right\rangle
Finally, by acting on |0⟩ with the raising operator and multiplying by suitable normalization factors, we can produce an infinite set of energy eigenstates
\left\{\left| 0 \right \rangle, \left| 1 \right \rangle, \left| 2 \right \rangle, ... , \left| n \right \rangle, ...\right\},
such that
H \left|n\right\rangle = \hbar\omega \left(n +\frac{1}{2} \right) \left|n\right\rangle ,
which matches the energy spectrum given in the preceding section.
Arbitrary eigenstates can be expressed in terms of |0⟩,
|n\rangle=\frac{\left(a^{\dagger}\right)^{n}}{\sqrt{n!}}|0\rangle .
\begin{align} \langle n|aa^{\dagger}|n\rangle&=\langle n|\left([a,a^{\dagger}]+a^{\dagger}a\right)|n\rangle=\langle n|\left(N+1\right)|n\rangle=n+1\\\Rightarrow a^{\dagger}|n\rangle&=\sqrt{n+1}|n+1\rangle\\\Rightarrow|n\rangle&=\frac{a^{\dagger}}{\sqrt{n}}|n-1\rangle=\frac{\left(a^{\dagger}\right)^{2}}{\sqrt{n(n-1)}}|n-2\rangle=\cdots=\frac{\left(a^{\dagger}\right)^{n}}{\sqrt{n!}}|0\rangle. \end{align}
The ground state |0⟩ in the position representation is determined by a |0⟩ = 0,
\begin{align} &\left\langle x\left|a \right| 0 \right\rangle = 0~~~~~~~~~~\Longrightarrow\\ &\left(x + \frac{\hbar}{m\omega}\frac{d}{dx}\right)\left\langle x|0\right\rangle = 0~~~~~~\Longrightarrow\\ &\left\langle x|0\right\rangle = \left(\frac{m\omega}{\pi\hbar}\right)^{\frac{1}{4}}\exp\left(-\frac{m\omega}{2\hbar}x^{2}\right)=\psi_0 ~, \end{align}
and hence
\langle x| a^{\dagger} |0\rangle =\psi_1 ~,
and so on, as in the previous section.
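As a cross-check on the closed forms above, the eigenfunctions can also be evaluated directly. This sketch (again an illustration, not from the article) uses the physicists' Hermite polynomials from scipy in the units m = ω = ħ = 1 and verifies orthonormality by quadrature.

import numpy as np
from math import factorial, pi
from scipy.special import eval_hermite  # physicists' Hermite H_n

# Sketch (assumed units m = omega = hbar = 1): psi_n from the closed form.
def psi(n, x):
    norm = pi**-0.25 / np.sqrt(2.0**n * factorial(n))
    return norm * np.exp(-x**2 / 2) * eval_hermite(n, x)

x = np.linspace(-10, 10, 4001)
for n in range(4):
    print(n,
          round(np.trapz(psi(n, x)**2, x), 6),               # ~1: normalized
          round(np.trapz(psi(n, x) * psi(n + 1, x), x), 6))  # ~0: orthogonal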
Natural length and energy scales
The quantum harmonic oscillator possesses natural scales for length and energy, which can be used to simplify the problem. These can be found by nondimensionalization.
The result is that, if we measure energy in units of ħω and distance in units of √(ħ/(mω)), then the Hamiltonian simplifies to
H = -\tfrac{1}{2} {d^2 \over dx^2 } +\tfrac{1}{2} x^2 ,
while the energy eigenfunctions and eigenvalues simplify to
\psi_n(x)\equiv \left\langle x | n \right\rangle = {1 \over \sqrt{2^n n!}}~ \pi^{-1/4} ~\hbox{exp} (-x^2 / 2) H_n(x),
E_n = n + \tfrac{1}{2},
where Hn(x) are the Hermite polynomials.
To avoid confusion, we will not adopt these "natural units" in this article. However, they frequently come in handy when performing calculations, by bypassing clutter.
For example, the fundamental solution (propagator) of H−i∂t, the time-dependent Schrödinger operator for this oscillator, simply boils down to the Mehler kernel,[4][5]
\langle x| \exp (-itH) |y \rangle \equiv K(x,y;t)= \frac{1}{\sqrt{2\pi i \sin t}} \exp \left(\frac{i}{2\sin t}\left ((x^2+y^2)\cos t - 2xy\right )\right )~,
where K(x,y;0) =δ(x−y). The most general solution for a given initial configuration ψ(x,0) then is simply
\psi(x,t)=\int dy~ K(x,y;t) \psi(y,0) ~.
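The integral above can be carried out numerically. A rough sketch (not part of the article, natural units m = ω = ħ = 1, and keeping t away from multiples of π where sin t vanishes and the kernel is singular): propagating a displaced ground state, whose position expectation should follow the classical trajectory x0 cos t.

import numpy as np

# Rough sketch (natural units): propagate with the Mehler kernel by quadrature.
x = np.linspace(-8, 8, 801)
dx = x[1] - x[0]
psi0 = np.pi**-0.25 * np.exp(-(x - 1.0)**2 / 2)   # coherent state at x0 = 1

def propagate(psi, t):
    X, Y = np.meshgrid(x, x, indexing="ij")
    K = np.exp(1j / (2 * np.sin(t)) * ((X**2 + Y**2) * np.cos(t) - 2 * X * Y))
    K = K / np.sqrt(2j * np.pi * np.sin(t))
    return (K @ psi) * dx

for t in (0.5, 1.0, 2.0):
    p = propagate(psi0, t)
    print(t, round(np.trapz(x * np.abs(p)**2, x), 3), round(np.cos(t), 3))
# <x>(t) tracks x0*cos(t), as expected for a coherent state.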
Phase space solutions
In the phase space formulation of quantum mechanics, solutions to the quantum harmonic oscillator in several different representations of the quasiprobability distribution can be written in closed form. The most widely used of these is for the Wigner quasiprobability distribution, which has the solution
F_n(u) = \frac{(-1)^n}{\pi \hbar} L_n\left(4\frac{u}{\hbar \omega}\right) e^{-2u/\hbar \omega} ~,
where u=\frac{1}{2} m \omega^2 x^2 + \frac{p^2}{2m}
and Ln are the Laguerre polynomials.
This example illustrates how the Hermite and Laguerre polynomials are linked through the Wigner map.
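The closed form is simple enough to plot directly. A small sketch (an illustration, not from the article), assuming ħ = ω = m = 1, which also exhibits the characteristic negativity of the Wigner function for excited states:

import numpy as np
from scipy.special import eval_laguerre

# Sketch (assumed units hbar = omega = m = 1): Wigner function of state n,
# with u the classical oscillator energy at the phase-space point.
def wigner_n(n, X, P):
    u = 0.5 * (X**2 + P**2)
    return ((-1)**n / np.pi) * eval_laguerre(n, 4 * u) * np.exp(-2 * u)

xs = np.linspace(-4, 4, 201)
X, P = np.meshgrid(xs, xs, indexing="ij")
W1 = wigner_n(1, X, P)
print(W1.min() < 0)   # True: excited states go negative near the origin
print(round(np.trapz(np.trapz(W1, xs, axis=1), xs), 4))  # ~1.0: normalized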
N-dimensional harmonic oscillator
The one-dimensional harmonic oscillator is readily generalizable to N dimensions, where N = 1, 2, 3, ... . In one dimension, the position of the particle was specified by a single coordinate, x. In N dimensions, this is replaced by N position coordinates, which we label x1, ..., xN. Corresponding to each position coordinate is a momentum; we label these p1, ..., pN. The canonical commutation relations between these operators are
\begin{matrix} \left[x_i , p_j \right] &=& i\hbar\delta_{i,j} \\ \left[x_i , x_j \right] &=& 0 \\ \left[p_i , p_j \right] &=& 0 \end{matrix}
The Hamiltonian for this system is
H = \sum_{i=1}^N \left( {p_i^2 \over 2m} + {1\over 2} m \omega^2 x_i^2 \right).
As the form of this Hamiltonian makes clear, the N-dimensional harmonic oscillator is exactly analogous to N independent one-dimensional harmonic oscillators with the same mass and spring constant. In this case, the quantities x1, ..., xN would refer to the positions of each of the N particles. This is a convenient property of the r^2 potential, which allows the potential energy to be separated into terms depending on one coordinate each.
This observation makes the solution straightforward. For a particular set of quantum numbers {n} the energy eigenfunctions for the N-dimensional oscillator are expressed in terms of the 1-dimensional eigenfunctions as:
\langle \mathbf{x}|\psi_{\{n\}}\rangle =\prod_{i=1}^N\langle x_i|\psi_{n_i}\rangle
In the ladder operator method, we define N sets of ladder operators,
\begin{matrix} a_i &=& \sqrt{m\omega \over 2\hbar} \left(x_i + {i \over m \omega} p_i \right) \\ a^{\dagger}_i &=& \sqrt{m \omega \over 2\hbar} \left( x_i - {i \over m \omega} p_i \right) \end{matrix}.
By a procedure analogous to the one-dimensional case, we can then show that each of the ai and ai operators lower and raise the energy by ℏω respectively. The Hamiltonian is
H = \hbar \omega \, \sum_{i=1}^N \left(a_i^\dagger \,a_i + \frac{1}{2}\right).
This Hamiltonian is invariant under the dynamic symmetry group U(N) (the unitary group in N dimensions), defined by
U\, a_i^\dagger \,U^\dagger = \sum_{j=1}^N a_j^\dagger\,U_{ji}\quad\hbox{for all}\quad U \in U(N),
where U_{ji} is an element in the defining matrix representation of U(N).
The energy levels of the system are
E = \hbar \omega \left[(n_1 + \cdots + n_N) + {N\over 2}\right].
n_i = 0, 1, 2, \dots \quad (\hbox{the energy level in dimension } i).
As in the one-dimensional case, the energy is quantized. The ground state energy is N times the one-dimensional energy, as we would expect using the analogy to N independent one-dimensional oscillators. There is one further difference: in the one-dimensional case, each energy level corresponds to a unique quantum state. In N-dimensions, except for the ground state, the energy levels are degenerate, meaning there are several states with the same energy.
The degeneracy can be calculated relatively easily. As an example, consider the 3-dimensional case: Define n = n1 + n2 + n3. All states with the same n will have the same energy. For a given n, we choose a particular n1. Then n2 + n3 = n − n1. There are n − n1 + 1 possible pairs {n2, n3}. n2 can take on the values 0 to n − n1, and for each n2 the value of n3 is fixed. The degree of degeneracy therefore is:
g_n = \sum_{n_1=0}^n (n - n_1 + 1) = \frac{(n+1)(n+2)}{2}
Formula for general N and n [gn being the dimension of the symmetric irreducible nth power representation of the unitary group U(N)]:
g_n = \binom{N+n-1}{n}
The special case N = 3, given above, follows directly from this general equation. This is, however, only true for distinguishable particles, or one particle in N dimensions (as dimensions are distinguishable). For the case of N bosons in a one-dimensional harmonic trap, the degeneracy scales as the number of ways to partition an integer n using integers less than or equal to N.
g_n = p(N, n)
This arises due to the constraint of putting N quanta into a state ket where \sum_{k=0}^\infty k n_k = n and \sum_{k=0}^\infty n_k = N , which are the same constraints as in integer partition.
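Both counts are short computations. The following sketch (an illustration, not from the article) evaluates the binomial formula for distinguishable dimensions and the bounded-partition count for bosons, and checks the 3D case against (n+1)(n+2)/2.

from math import comb

# Sketch: degeneracy of level n for N distinguishable dimensions, and the
# bosonic count as partitions of n into parts of size at most N.
def g_distinguishable(N, n):
    return comb(N + n - 1, n)

def p_bounded(N, n):
    # number of partitions of n with parts <= N, by dynamic programming
    ways = [1] + [0] * n
    for part in range(1, N + 1):
        for total in range(part, n + 1):
            ways[total] += ways[total - part]
    return ways[n]

for n in range(5):                                  # 3D check
    assert g_distinguishable(3, n) == (n + 1) * (n + 2) // 2
print([g_distinguishable(3, n) for n in range(5)])  # [1, 3, 6, 10, 15]
print([p_bounded(3, n) for n in range(5)])          # [1, 1, 2, 3, 4]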
Example: 3D isotropic harmonic oscillator
The Schrödinger equation of a spherically-symmetric three-dimensional harmonic oscillator can be solved explicitly by separation of variables; see this article for the present case. This procedure is analogous to the separation performed in the hydrogen-like atom problem, but with the spherically symmetric potential
V(r) = {1\over 2} \mu \omega^2 r^2,
where μ is the mass of the particle. (Because m will be used below for the magnetic quantum number, mass is indicated by μ, instead of m, as earlier in this article.)
The solution reads
\psi_{klm}(r,\theta,\phi) = N_{kl} r^{l}e^{-\nu r^2}{L_k}^{(l+{1\over 2})}(2\nu r^2) Y_{lm}(\theta,\phi)
N_{kl}=\sqrt{\sqrt{\frac{2\nu ^{3}}{\pi }}\frac{2^{k+2l+3}\;k!\;\nu ^{l}}{ (2k+2l+1)!!}}~~ is a normalization constant; \nu \equiv {\mu \omega \over 2 \hbar}~;
{L_k}^{(l+{1\over 2})}(2\nu r^2) are generalized Laguerre polynomials; the order k of the polynomial is a non-negative integer;
Y_{lm}(\theta,\phi)\, is a spherical harmonic function;
ħ is the reduced Planck constant: \hbar\equiv\frac{h}{2\pi}~.
The energy eigenvalue is
E=\hbar \omega \left(2k+l+\frac{3}{2}\right) ~.
The energy is usually described by the single quantum number
n\equiv 2k+l ~.
Because k is a non-negative integer, for every even n we have ℓ = 0,2,...,n−2,n and for every odd n we have ℓ =1,3,...,n−2,n . The magnetic quantum number m is an integer satisfying -ℓ ≤ m ≤ℓ, so for every n and ℓ there are 2ℓ+1 different quantum states, labeled by m . Thus, the degeneracy at level n is
\sum_{l=\ldots,n-2,n} (2l+1) = {(n+1)(n+2)\over 2} ~,
where the sum starts from 0 or 1, according to whether n is even or odd. This result is in accordance with the dimension formula above, and amounts to the dimensionality of a symmetric representation of SU(3), the relevant degeneracy group.
Harmonic oscillators lattice: phonons
We can extend the notion of a harmonic oscillator to a one-dimensional lattice of many particles. Consider a one-dimensional quantum mechanical harmonic chain of N identical atoms. This is the simplest quantum mechanical model of a lattice, and we will see how phonons arise from it. The formalism that we will develop for this model is readily generalizable to two and three dimensions.
As in the previous section, we denote the positions of the masses by x1,x2,..., as measured from their equilibrium positions (i.e. xi = 0 if the particle i is at its equilibrium position.) In two or more dimensions, the xi are vector quantities. The Hamiltonian for this system is
\mathbf{H} = \sum_{i=1}^N \frac{p_i^2}{2m} + \frac{1}{2} m \omega^2 \sum_{\{ij\}\,(nn)} (x_i - x_j)^2 ,
where m is the (assumed uniform) mass of each atom, and xi and pi are the position and momentum operators for the i th atom and the sum is made over the nearest neighbors (nn). However, it is customary to rewrite the Hamiltonian in terms of the normal modes of the wavevector rather than in terms of the particle coordinates so that one can work in the more convenient Fourier space.
We introduce, then, a set of N "normal coordinates" Qk, defined as the discrete Fourier transforms of the xs, and N "conjugate momenta" Πk, defined as the Fourier transforms of the ps,
Q_k = {1\over\sqrt{N}} \sum_{l} e^{ikal} x_l
\Pi_{k} = {1\over\sqrt{N}} \sum_{l} e^{-ikal} p_l ~.
The quantity kn will turn out to be the wave number of the phonon, i.e. 2π divided by the wavelength. It takes on quantized values, because the number of atoms is finite.
This preserves the desired commutation relations in either real space or wave vector space
\begin{align} \left[x_l , p_m \right]&=i\hbar\delta_{l,m} \\ \left[ Q_k , \Pi_{k'} \right] &={1\over N} \sum_{l,m} e^{ikal} e^{-ik'am} [x_l , p_m ] \\ &= {i \hbar\over N} \sum_{m} e^{iam\left(k-k'\right)} = i\hbar\delta_{k,k'} \\ \left[ Q_k , Q_{k'} \right] &= \left[ \Pi_k , \Pi_{k'} \right] = 0 ~. \end{align}
From the general result
\begin{align} \sum_{l}x_l x_{l+m}&={1\over N}\sum_{kk'}Q_k Q_{k'}\sum_{l} e^{ial\left(k+k'\right)}e^{iamk'}= \sum_{k}Q_k Q_{-k}e^{iamk} \\ \sum_{l}{p_l}^2 &= \sum_{k}\Pi_k \Pi_{-k} ~, \end{align}
it is easy to show, through elementary trigonometry, that the potential energy term is
{1\over 2} m \omega^2 \sum_{j} (x_j - x_{j+1})^2 = {1\over 2} m \omega^2 \sum_{k} Q_k Q_{-k} (2 - e^{ika} - e^{-ika}) = {1\over 2} m \sum_{k} {\omega_k}^2 Q_k Q_{-k} ~,
where \omega_k = \sqrt{2 \omega^2 (1 - \cos(ka))} .
The Hamiltonian may be written in wave vector space as
\mathbf{H} = {1\over {2m}}\sum_k \left( { \Pi_k\Pi_{-k} } + m^2 \omega_k^2 Q_k Q_{-k} \right) ~.
Note that the couplings between the position variables have been transformed away; if the Qs and Πs were Hermitian (which they are not), the transformed Hamiltonian would describe N uncoupled harmonic oscillators.
The allowed wave numbers are quantized:
k = k_n = {2n\pi \over Na} \quad \hbox{for}\ n = 0, \pm1, \pm2, \ldots, \pm {N \over 2}.
The upper bound to n comes from the minimum wavelength, which is twice the lattice spacing a, as discussed above.
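The dispersion relation derived above can be confirmed by brute force: diagonalize the coupling matrix of the periodic chain and compare its eigenvalues with 2ω²(1 − cos ka). A minimal sketch (an illustration, not from the article), assuming units m = ω = a = 1:

import numpy as np

# Sketch (assumed units m = omega = a = 1): eigenvalues of the periodic
# chain's coupling matrix versus the dispersion omega_k^2 = 2(1 - cos k).
Nat = 64
D = 2 * np.eye(Nat) - np.eye(Nat, k=1) - np.eye(Nat, k=-1)
D[0, -1] = D[-1, 0] = -1                    # periodic boundary conditions

numeric = np.sort(np.linalg.eigvalsh(D))    # the omega_k^2 values
k = 2 * np.pi * np.arange(Nat) / Nat        # k_n = 2*pi*n/(N*a)
analytic = np.sort(2 * (1 - np.cos(k)))
print(np.allclose(numeric, analytic))       # True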
The harmonic oscillator eigenvalues or energy levels for the mode ωk are
E_n = \left({1\over2}+n\right)\hbar\omega_k \qquad n = 0, 1, 2, 3, \ldots
If we ignore the zero-point energy then the levels are evenly spaced at
ħω, 2ħω, 3ħω, ...
So an exact amount of energy ħω must be supplied to the harmonic oscillator lattice to push it to the next energy level. In analogy to the photon case, where the electromagnetic field is quantized, the quantum of vibrational energy is called a phonon.
All quantum systems show wave-like and particle-like properties. The particle-like properties of the phonon are best understood using the methods of second quantization and operator techniques described later.[6]
Applications
• The vibrations of a diatomic molecule are an example of a two-body version of the quantum harmonic oscillator. In this case, the angular frequency is given by
\omega = \sqrt{\frac{k}{\mu}}
where μ = m1m2/(m1+m2) is the reduced mass and is determined by the masses m1, m2 of the two atoms.[7] (A numerical sketch follows this list.)
• The Hooke's atom is a simple model of the helium atom using the quantum harmonic oscillator
• Modelling phonons, as discussed above
• A charge, q, with mass, m, in a uniform magnetic field, B, is an example of a one-dimensional quantum harmonic oscillator: the Landau quantization.
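Returning to the diatomic-molecule item above, here is the promised numerical sketch. The force constant below is an assumed, roughly CO-like value (not taken from this article); the point is only the two-body reduction μ = m1m2/(m1+m2) followed by ω = √(k/μ).

import numpy as np

# Sketch with assumed, roughly CO-like numbers (k is illustrative only).
u = 1.660539e-27           # atomic mass unit, kg
hbar = 1.054572e-34        # J*s
c_cm = 2.99792458e10       # speed of light, cm/s

m1, m2 = 12.000 * u, 15.995 * u
k = 1857.0                 # N/m, assumed harmonic force constant
mu = m1 * m2 / (m1 + m2)   # reduced mass
omega = np.sqrt(k / mu)    # angular frequency, rad/s

print(f"omega = {omega:.3e} rad/s")
print(f"fundamental ~ {omega / (2 * np.pi * c_cm):.0f} cm^-1")     # ~2140
print(f"zero-point energy ~ {0.5 * hbar * omega / 1.602e-19:.3f} eV")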
References
1. ^
2. ^
3. ^
4. ^ Pauli, W. (2000), Wave Mechanics: Volume 5 of Pauli Lectures on Physics (Dover Books on Physics). ISBN 978-0486414621 ; Section 44.
5. ^ Condon, E. U. (1937). "Immersion of the Fourier transform in a continuous group of functional transformations", Proc. Nat. Acad. Sci. USA 23, 158–164. online
6. ^ Mahan, G. D. (1981). Many-Particle Physics. New York: Springer.
7. ^ "Quantum Harmonic Oscillator". Hyperphysics. Retrieved 24 September 2009.
External links
• Quantum Harmonic Oscillator
• Calculation using a noncommutative free monoid: (mathematical version) / (abbreviated version)
• Rationale for choosing the ladder operators
• Live 3D intensity plots of quantum harmonic oscillator
• Driven and damped quantum harmonic oscillator (lecture notes of course "quantum optics in electric circuits")
Cluster decay
Cluster decay, also named heavy particle radioactivity or heavy ion radioactivity, is a rare type of nuclear decay in which an atomic nucleus emits a small "cluster" of neutrons and protons, more than in an alpha particle, but less than a typical binary fission fragment. Ternary fission into three fragments also produces products in the cluster size. The loss of protons from the parent nucleus changes it to the nucleus of a different element, the daughter, with a mass number Ad = AAe and atomic number Zd = ZZe, where Ae = Ne + Ze.[1] For example:
223Ra → 14C + 209Pb
This type of rare decay mode was observed in radioisotopes that decay predominantly by alpha emission, and it occurs only in a small percentage of the decays for all such isotopes.[2]
The branching ratio with respect to alpha decay, B = Ta/Tc, is rather small (see the table below), where Ta and Tc are the half-lives of the parent nucleus relative to alpha decay and cluster radioactivity, respectively.
Cluster decay, like alpha decay, is a quantum tunneling process: in order to be emitted, the cluster must penetrate a potential barrier. This is a different process from the more random nuclear disintegration that precedes light fragment emission in ternary fission. Ternary fission may be a result of a nuclear reaction, but can also be a type of spontaneous radioactive decay in certain nuclides, demonstrating that input energy is not necessarily needed for fission; mechanistically, however, fission remains a fundamentally different process.
Theoretically, any nucleus with Z > 40 for which the released energy (Q value) is a positive quantity can be a cluster emitter. In practice, observations are severely restricted by the limitations imposed by currently available experimental techniques, which require a sufficiently short half-life, Tc < 10^32 s, and a sufficiently large branching ratio, B > 10^−17.
In the absence of any energy loss for fragment deformation and excitation, as in cold fission phenomena or in alpha decay, the total kinetic energy is equal to the Q-value and is divided between the particles in inverse proportion with their masses, as required by conservation of linear momentum; the kinetic energy of the emitted cluster is Ek = Q Ad / A, where Ad is the mass number of the daughter, Ad = A − Ae.
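For a concrete feel of this split, the sketch below (an illustration added here, not part of the article) applies Ek = Q·Ad/A to the 223Ra → 14C + 209Pb example, taking Q = 31.829 MeV from the table further down; the light cluster carries almost all of the released energy.

# Sketch: kinetic-energy split implied by momentum conservation,
# Ek = Q * Ad / A, for 223Ra -> 14C + 209Pb (Q from the table below).
A, Ae = 223, 14
Ad = A - Ae
Q = 31.829                                   # MeV, released energy
print(f"E(14C)   = {Q * Ad / A:.2f} MeV")    # ~29.83: the light cluster
print(f"E(209Pb) = {Q * Ae / A:.2f} MeV")    # ~2.00: the heavy daughter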
Cluster decay exists in an intermediate position between alpha decay (in which a nucleus spits out a 4He nucleus), and spontaneous fission, in which a heavy nucleus splits into two (or more) large fragments and an assorted number of neutrons. Spontaneous fission ends up with a probabilistic distribution of daughter products, which sets it apart from cluster decay. In cluster decay for a given radioisotope, the emitted particle is a light nucleus and the decay method always emits this same particle. For heavier emitted clusters, there is otherwise practically no qualitative difference between cluster decay and spontaneous cold fission.
The first information about the atomic nucleus was obtained at the beginning of the 20th century by studying radioactivity. For a long period of time only three kinds of nuclear decay modes (alpha, beta, and gamma) were known. They illustrate three of the fundamental interactions in nature: strong, weak, and electromagnetic. Spontaneous fission became better studied soon after its discovery in 1940 by Konstantin Petrzhak and Georgy Flyorov because of both the military and the peaceful applications of induced fission, which had been discovered circa 1939 by Otto Hahn, Lise Meitner, and Fritz Strassmann.
There are many other kinds of radioactivity, e.g. cluster decay, proton decay, various beta-delayed decay modes (p, 2p, 3p, n, 2n, 3n, 4n, d, t, alpha, f), fission isomers, particle accompanied (ternary) fission, etc. The height of the potential barrier, mainly of Coulomb nature, for emission of the charged particles is much higher than the observed kinetic energy of the emitted particles. The spontaneous decay can only be explained by quantum tunneling, in a similar way to the first application of quantum mechanics to nuclei, given by G. Gamow for alpha decay.
Usually the theory explains an already experimentally observed phenomenon. Cluster decay is one of the rare examples of phenomena predicted before experimental discovery. Theoretical predictions were made in 1980,[4] four years before experimental discovery.[5]
Four theoretical approaches were used: fragmentation theory by solving a Schrödinger equation with mass asymmetry as a variable to obtain the mass distributions of fragments; penetrability calculations similar to those used in traditional theory of alpha decay, and superasymmetric fission models, numerical (NuSAF) and analytical (ASAF). Superasymmetric fission models are based on the macroscopic-microscopic approach[6] using the asymmetrical two-center shell model[7][8] level energies as input data for the shell and pairing corrections. Either the liquid drop model[9] or the Yukawa-plus-exponential model[10] extended to different charge-to-mass ratios[11] have been used to calculate the macroscopic deformation energy.
Penetrability theory predicted eight decay modes: 14C, 24Ne, 28Mg, 32,34Si, 46Ar, and 48,50Ca from the following parent nuclei: 222,224Ra, 230,232Th, 236,238U, 244,246Pu, 248,250Cm, 250,252Cf, 252,254Fm, and 252,254No.
The first experimental report was published in 1984, when physicists at Oxford University discovered that 223Ra emits one 14C nucleus among every billion (10^9) decays by alpha emission.
The quantum tunneling may be calculated either by extending fission theory to a larger mass asymmetry or by extending alpha decay theory to a heavier emitted particle.[12]
Both fission-like and alpha-like approaches are able to express the decay constant λ = ln 2 / Tc as a product of three model-dependent quantities,
where ν is the frequency of assaults on the barrier per second, S is the preformation probability of the cluster at the nuclear surface, and Ps is the penetrability of the external barrier. In alpha-like theories S is an overlap integral of the wave function of the three partners (parent, daughter, and emitted cluster). In a fission theory the preformation probability is the penetrability of the internal part of the barrier from the initial turning point Ri to the touching point Rt.[13] Very frequently it is calculated by using the Wentzel-Kramers-Brillouin (WKB) approximation.
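To make the three-factor structure λ = ν·S·Ps concrete, here is a toy calculation; the magnitudes of ν, S, and Ps below are purely illustrative assumptions (in real models they come out of the barrier calculation), not values from the literature.

import math

# Toy sketch: lambda = nu * S * Ps, with assumed illustrative magnitudes.
nu = 1e21      # assaults on the barrier per second (assumed)
S = 1e-8       # cluster preformation probability (assumed)
Ps = 1e-22     # penetrability of the external barrier (assumed)

lam = nu * S * Ps                  # decay constant, 1/s
T_c = math.log(2) / lam            # partial half-life, s
print(f"T_c ~ {T_c:.2e} s (log10 T = {math.log10(T_c):.1f})")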
A very large number, of the order 10^5, of parent-emitted cluster combinations were considered in a systematic search for new decay modes. The large amount of computations could be performed in a reasonable time by using the ASAF model developed by Dorin N. Poenaru, Walter Greiner, et al. The model was the first to be used to predict measurable quantities in cluster decay. More than 150 cluster decay modes have been predicted before any other kind of half-lives calculations have been reported. Comprehensive tables of half-lives, branching ratios, and kinetic energies have been published, e.g. [14], [15]. Potential barrier shapes similar to that considered within the ASAF model have been calculated by using the macroscopic-microscopic method.[16]
Previously[17] it was shown that even alpha decay may be considered a particular case of cold fission. The ASAF model may be used to describe in a unified manner cold alpha decay, cluster decay, and cold fission (see figure 6.7, p. 287 of the Ref. [2]).
One can obtain with good approximation one universal curve (UNIV) for any kind of cluster decay mode with a mass number Ae, including alpha decay
In a logarithmic scale the equation log T = f(log Ps) represents a single straight line which can be conveniently used to estimate the half-life. A single universal curve for alpha decay and cluster decay modes results by expressing log T + log S = f(log Ps).[18] The experimental data on cluster decay in three groups of even-even, even-odd, and odd-even parent nuclei are reproduced with comparable accuracy by both types of universal curves, fission-like UNIV and UDL[19] derived using alpha-like R-matrix theory.
In order to find the released energy
Q = (M − Md − Me) c^2 ,
one can use the compilation of measured masses[20] M, Md, and Me of the parent, daughter, and emitted nuclei; c is the velocity of light. The mass excess is transformed into energy according to Einstein's formula E = mc^2.
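As a worked example, the Q-value of the 223Ra decay above can be assembled from mass excesses, since Q = (M − Md − Me)c² reduces to a subtraction of mass excesses expressed in MeV. The numbers below are typical mass-table values assumed here for illustration (not data from this article); they reproduce the table's Q = 31.829 MeV.

# Sketch: Q = (M - Md - Me) c^2 via mass excesses (values assumed from
# standard mass tables, not from this article).
dM_Ra223 = 17.235     # 223Ra mass excess, MeV (assumed)
dM_Pb209 = -17.614    # 209Pb mass excess, MeV (assumed)
dM_C14 = 3.020        # 14C  mass excess, MeV (assumed)

Q = dM_Ra223 - dM_Pb209 - dM_C14
print(f"Q(223Ra -> 209Pb + 14C) = {Q:.3f} MeV")   # 31.829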
The main experimental difficulty in observing cluster decay comes from the need to identify a few rare events against a background of alpha particles. The quantities experimentally determined are the partial half life, Tc, and the kinetic energy of the emitted cluster Ek. There is also a need to identify the emitted particle.
Detection of radiations is based on their interactions with matter, leading mainly to ionizations. Using a semiconductor telescope and conventional electronics to identify the 14C ions, Rose and Jones's experiment ran for about six months in order to collect 11 useful events.
With modern magnetic spectrometers (SOLENO and Enge-split pole), at Orsay and Argonne National Laboratory (see ch. 7 in Ref. [2] pp. 188–204), a very strong source could be used, so that results were obtained in a run of a few hours.
Solid state nuclear track detectors (SSNTD) insensitive to alpha particles and magnetic spectrometers in which alpha particles are deflected by a strong magnetic field have been used to overcome this difficulty. SSNTD are cheap and handy but they need chemical etching and microscope scanning.
A key role in experiments on cluster decay modes performed in Berkeley, Orsay, Dubna, and Milano was played by P. Buford Price, Eid Hourany, Michel Hussonnois, Svetlana Tretyakova, A. A. Ogloblin, Roberto Bonetti, and their coworkers.
The main region of 20 emitters experimentally observed until 2010 is above Z=86: 221Fr, 221-224,226Ra, 223,225Ac, 228,230Th, 231Pa, 230,232-236U, 236,238Pu, and 242Cm. Only upper limits could be detected in the following cases: 12C decay of 114Ba, 15N decay of 223Ac, 18O decay of 226Th, 24,26Ne decays of 232Th and of 236U, 28Mg decays of 232,233,235U, 30Mg decay of 237Np, and 34Si decay of 240Pu and of 241Am.
Some of the cluster emitters are members of the three natural radioactive families. Others should be produced by nuclear reactions. Up to now no odd-odd emitter has been observed.
From many decay modes with half-lives and branching ratios relative to alpha decay predicted with the analytical superasymmetric fission (ASAF) model, the following 11 have been experimentally confirmed: 14C, 20O, 23F, 22,24-26Ne, 28,30Mg, and 32,34Si. The experimental data are in good agreement with predicted values. A strong shell effect can be seen: as a rule the shortest value of the half-life is obtained when the daughter nucleus has a magic number of neutrons (Nd = 126) and/or protons (Zd = 82).
The known cluster emissions as of 2010 are as follows:[21][22][23]
Isotope | Emitted particle | Branching ratio | log T(s) | Q (MeV)
114Ba | 12C | < 3.4×10^−5 | > 4.10 | 18.985
221Fr | 14C | 8.14×10^−13 | 14.52 | 31.290
221Ra | 14C | 1.15×10^−12 | 13.39 | 32.394
222Ra | 14C | 3.7×10^−10 | 11.01 | 33.049
223Ra | 14C | 8.9×10^−10 | 15.04 | 31.829
224Ra | 14C | 4.3×10^−11 | 15.86 | 30.535
223Ac | 14C | 3.2×10^−11 | 12.96 | 33.064
225Ac | 14C | 4.5×10^−12 | 17.28 | 30.476
226Ra | 14C | 3.2×10^−11 | 21.19 | 28.196
228Th | 20O | 1.13×10^−13 | 20.72 | 44.723
230Th | 24Ne | 5.6×10^−13 | 24.61 | 57.758
231Pa | 23F | 9.97×10^−15 | 26.02 | 51.844
231Pa | 24Ne | 1.34×10^−11 | 22.88 | 60.408
232U | 24Ne | 9.16×10^−12 | 20.40 | 62.309
232U | 28Mg | < 1.18×10^−13 | > 22.26 | 74.318
233U | 24Ne | 7.2×10^−13 | 24.84 | 60.484
233U | 25Ne | — | — | 60.776
233U | 28Mg | < 1.3×10^−15 | > 27.59 | 74.224
234U | 28Mg | 1.38×10^−13 | 25.14 | 74.108
234U | 24Ne | 9.9×10^−14 | 25.88 | 58.825
234U | 26Ne | — | — | 59.465
235U | 24Ne | 8.06×10^−12 | 27.42 | 57.361
235U | 25Ne | — | — | 57.756
235U | 28Mg | < 1.8×10^−12 | > 28.09 | 72.162
235U | 29Mg | — | — | 72.535
236U | 24Ne | < 9.2×10^−12 | > 25.90 | 55.944
236U | 26Ne | — | — | 56.753
236U | 28Mg | 2×10^−13 | 27.58 | 70.560
236U | 30Mg | — | — | 72.299
236Pu | 28Mg | 2.7×10^−14 | 21.52 | 79.668
237Np | 30Mg | < 1.8×10^−14 | > 27.57 | 74.814
238Pu | 32Si | 1.38×10^−16 | 25.27 | 91.188
238Pu | 28Mg | 5.62×10^−17 | 25.70 | 75.910
238Pu | 30Mg | — | — | 76.822
240Pu | 34Si | < 6×10^−15 | > 25.52 | 91.026
241Am | 34Si | < 7.4×10^−16 | > 25.26 | 93.923
242Cm | 34Si | 1×10^−16 | 23.15 | 96.508
Fine structure
The fine structure in the 14C radioactivity of 223Ra was discussed for the first time by M. Greiner and W. Scheid in 1986.[24] The superconducting spectrometer SOLENO at IPN Orsay has been used since 1984 to identify the 14C clusters emitted from 222-224,226Ra nuclei. Moreover, it was used to discover[25][26] the fine structure by observing transitions to excited states of the daughter. A transition to an excited state of 14C predicted in Ref. [24] has not yet been observed.
Surprisingly, the experimentalists observed a transition to the first excited state of the daughter that is stronger than the transition to the ground state. The transition is favoured if the uncoupled nucleon is left in the same state in both the parent and daughter nuclei; otherwise, the difference in nuclear structure leads to a large hindrance.
The interpretation[27] was confirmed: the dominant spherical component of the deformed parent wave function has an i11/2 character.
1. ^ Dorin N Poenaru, Walter Greiner (2011). Cluster Radioactivity, Ch. 1 of Clusters in Nuclei I. Lecture Notes in Physics 818. Springer, Berlin. pp. 1–56. ISBN 978-3-642-13898-0.
2. ^ Poenaru, D. N.; Greiner W. (1996). Nuclear Decay Modes. Institute of Physics Publishing, Bristol. pp. 1–577. ISBN 978-0-7503-0338-5.
3. ^ Encyclopædia Britannica Online. 2011.
4. ^ Sandulescu, A.; Poenaru, D. N. & Greiner, W. (1980). "New type of decay of heavy nuclei intermediate between fission and alpha-decay". Sov. J. Part. Nucl. 11: 528–541.
5. ^ Rose, H. J.; Jones, G. A. (1984-01-19). "A new kind of natural radioactivity". Nature. 307 (5948): 245–247. Bibcode:1984Natur.307..245R. doi:10.1038/307245a0.
6. ^ Strutinski, V. M. (1967). "Shell effects in nuclear masses and deformation energies". Nucl. Phys. A. 95 (2): 420–442. Bibcode:1967NuPhA..95..420S. doi:10.1016/0375-9474(67)90510-6.
7. ^ Maruhn, J. A.; Greiner, W. (1972). "The asymmetric two-center shell model". Z. Phys. 251 (5): 431–457. Bibcode:1972ZPhy..251..431M. doi:10.1007/BF01391737.
8. ^ Gherghescu, R. A. (2003). "Deformed two center shell model". Phys. Rev. C. 67 (1): 014309. arXiv:nucl-th/0210064. Bibcode:2003PhRvC..67a4309G. doi:10.1103/PhysRevC.67.014309.
9. ^ Myers, W. D.; Swiatecki, W. J. (1966). "Nuclear masses and deformations". Nucl. Phys. A. 81: 1–60. doi:10.1016/0029-5582(66)90639-0.
10. ^ Krappe, H. J.; Nix, J. R. & Sierk, A. J. (1979). "Unified nuclear potential for heavy-ion elastic scattering, fusion, fission, and ground-state masses and deformations". Phys. Rev. C. 20 (3): 992–1013. Bibcode:1979PhRvC..20..992K. doi:10.1103/PhysRevC.20.992.
11. ^ Poenaru, D. N.; Ivascu, M. & Mazilu, D. (1980). "Folded Yukawa-plus-exponential model PES for nuclei with different charge densities". Computer Phys. Communic. 19 (2): 205–214. Bibcode:1980CoPhC..19..205P. doi:10.1016/0010-4655(80)90051-X.
12. ^ Blendowske, R.; Fliessbach, T. & Walliser, H. (1996). In Nuclear Decay Modes. Institute of Physics Publishing, Bristol. pp. 337–349. ISBN 978-0-7503-0338-5.
13. ^ Poenaru, D. N.; Greiner W. (1991). "Cluster Preformation as Barrier Penetrability". Physica Scripta. 44 (5): 427–429. Bibcode:1991PhyS...44..427P. doi:10.1088/0031-8949/44/5/004.
14. ^ Poenaru, D. N.; Ivascu, M.; Sandulescu, A. & Greiner, W. (1984). "Spontaneous emission of heavy clusters". J. Phys. G: Nucl. Phys. 10 (8): L183–L189. Bibcode:1984JPhG...10L.183P. doi:10.1088/0305-4616/10/8/004.
15. ^ Poenaru, D. N.; Schnabel, D.; Greiner, W.; Mazilu, D. & Gherghescu, R. (1991). "Nuclear Lifetimes for Cluster Radioactivities". Atomic Data and Nuclear Data Tables. 48 (2): 231–327. Bibcode:1991ADNDT..48..231P. doi:10.1016/0092-640X(91)90008-R.
16. ^ Poenaru, D. N.; Gherghescu, R.A. & Greiner, W. (2006). "Potential energy surfaces for cluster emitting nuclei". Phys. Rev. C. 73 (1): 014608. arXiv:nucl-th/0509073. Bibcode:2006PhRvC..73a4608P. doi:10.1103/PhysRevC.73.014608.
17. ^ Poenaru, D. N.; Ivascu, M. & Sandulescu, A. (1979). "Alpha-decay as a fission-like process". J. Phys. G: Nucl. Phys. 5 (10): L169–L173. Bibcode:1979JPhG....5L.169P. doi:10.1088/0305-4616/5/10/005.
18. ^ Poenaru, D. N.; Gherghescu, R.A. & Greiner, W. (2011). "Single universal curve for cluster radioactivities and alpha decay". Phys. Rev. C. 83 (1): 014601. Bibcode:2011PhRvC..83a4601P. doi:10.1103/PhysRevC.83.014601.
19. ^ Qi, C.; Xu, F. R.; Liotta, R. J. & Wyss, R (2009). "Universal Decay Law in Charged-Particle Emission and Exotic Cluster Radioactivity". Phys. Rev. Lett. 103 (7): 072501. arXiv:0909.4492. Bibcode:2009PhRvL.103g2501Q. doi:10.1103/PhysRevLett.103.072501. PMID 19792636.
20. ^ Audi, G.; Wapstra, A. H. & Thibault, C. (2003). "The AME2003 atomic mass evaluation". Nucl. Phys. A. 729 (1): 337–676. Bibcode:2003NuPhA.729..337A. doi:10.1016/j.nuclphysa.2003.11.003.
21. ^ Baum, E. M.; et al. (2002). Nuclides and Isotopes: Chart of the nuclides 16th ed. Knolls Atomic Power Laboratory (Lockheed Martin).
22. ^ Bonetti, R.; Guglielmetti, A. (2007). "Cluster radioactivity: an overview after twenty years" (PDF). Romanian Reports in Physics. 59: 301–310.
23. ^ Guglielmetti, A.; et al. (2008). "Carbon radioactivity of 223Ac and a search for nitrogen emission". Journal of Physics: Conference Series. 111 (1): 012050. Bibcode:2008JPhCS.111a2050G. doi:10.1088/1742-6596/111/1/012050.
24. ^ a b Greiner, M.; Scheid, W. (1986). "Radioactive decay into excited states via heavy ion emission". J. Phys. G: Nucl. Phys. 12 (10): L229–L234. Bibcode:1986JPhG...12L.229G. doi:10.1088/0305-4616/12/10/003.
25. ^ Brillard, L.; Elayi, A. G.; Hourani, E.; Hussonnois, M.; Le Du, J. F.; Rosier, L. H. & Stab, L. (1989). "Mise en évidence d'une structure fine dans la radioactivité 14C". C. R. Acad. Sci. Paris. 309: 1105–1110.
26. ^ Hourany, E.; et al. (1995). "223Ra Nuclear Spectroscopy in 14C Radioactivity". Phys. Rev. C. 52 (1): 267–270. Bibcode:1995PhRvC..52..267H. doi:10.1103/physrevc.52.267.
27. ^ Sheline, R. K.; Ragnarsson, I. (1991). "Interpretation of the fine structure in the 14C radioactive decay of 223Ra". Phys. Rev. C. 43 (3): 1476–1479. Bibcode:1991PhRvC..43.1476S. doi:10.1103/PhysRevC.43.1476.
|
b0d056a61cbc6c3e | Sean Carroll makes the case for the Many-worlds interpretation of quantum mechanics
Sean Carroll has posted a passionate defense of the Many-worlds interpretation of quantum mechanics.
I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here’s an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don’t think I’ve ever explained as persuasively as possible why I think it’s the right approach. So that’s what I’m going to try to do here. Although to be honest right off the bat, I’m actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I’ll get to in another post.
Carroll’s description is well done, and I recommend reading the full post. My only concern is his characterization of the Many-worlds interpretation as inevitable. As I’ve written here before, I personally view the Many-worlds interpretation as a candidate for reality, but I remain unconvinced that we’ve reached the point where we can move it from candidate to settled.
The conclusion, therefore, is that multiple worlds automatically occur in quantum mechanics. They are an inevitable part of the formalism. The only remaining question is: what are you going to do about it? There are three popular strategies on the market: anger, denial, and acceptance.
Or we can simply admit that we don’t yet have unique evidence for any of the interpretations. No anger or denial necessary, yet.
There are other silly objections to EQM, of course. The most popular is probably the complaint that it’s not falsifiable. That truly makes no sense. It’s trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes.
I think a spreading superposition that we can’t observe is an assumption. True, if we falsify the Schrödinger equation or superposition, we’ve falsified the Many-worlds interpretation, but haven’t we also falsified every other interpretation? To consider Many-worlds falsifiable, don’t we need to be able to, at least in principle, uniquely falsify it?
Occam’s razor often surfaces in these discussions, but it seems like a tough call between these interpretations. Is mathematical parsimony the same thing as ontological parsimony? I’m not sure of the answer, but dismissing the concern as silly seems unjustified.
A part of me wonders if there’s any real harm in people concluding that the Many-worlds interpretation is true, since its truth or falsity seems to have no bearing on the rest of the world. But this comes back to what should count as settled science, and I can’t see how a concept that is not testable in any foreseeable manner should count. And, as other physicists have said, there is a danger of accepting this interpretation and prematurely ceasing to look for what might be the real explanation, one that might turn out to open new doors.
7 thoughts on “Sean Carroll makes the case for the Many-worlds interpretation of quantum mechanics”
Agree 100%. But, I would like to go one step further.
Interpretation by definition does not alter the validity of the subject topic; that is, the question of whether an interpretation is right or wrong is not supremely important. But there are good, bad, better, and best interpretations. Only when an interpretation becomes the ‘gateway’ to new physics does it become a better or the best interpretation.
Currently, there are many different interpretations of this quantum-measurement issue. As this post is about Sean Carroll’s view, I will talk about only two interpretations: the Copenhagen interpretation (CI) and the Many worlds interpretation (MWI).
First, what are the essences of these two interpretations?
One, Copenhagen interpretation (CI): this is in fact a ‘two-worlds’ interpretation. That is, there are two different worlds: the classical world and the quantum world. When these two worlds collide, the quantum world ‘collapses’. CI admits that these two worlds are not unified.
Two, Many worlds interpretation (MWI): this is in fact a ‘single-world’ interpretation. In addition to the quantum particle being in a quantum world, the apparatus which does the measurement is also sitting inside this quantum world (also in a superposition state). That is, both parts of the measurement sit in the same world, and this is a view in which the classical and the quantum are unified. But without giving an actual ‘unification’ mechanism, this is a ‘faked’ unification, without beef. The fact is that this MWI is not a ‘gateway’ to new physics. Although Sean is advocating the Multiverse, the MWI has nothing to do with the Multiverse and cannot provide any help for it.
The MWI is a great attempt to provide a classical/quantum unification image, but no cigar. Hilary Putnam (a prominent philosopher) discussed this quantum-measurement issue at his blog. One view is a true ‘single-world’ interpretation. That is, the electron (quantum particle) is a solid marble similar to the apparatus, while both of them sit in a ‘single-quantum-world’ (see, ).
Is this new interpretation any better than the two above? This can be decided by a ‘beauty-contest’. Is this new interpretation a ‘gateway’ to new physics? Can it provide answers for any known open questions in physics today?
1. Thanks Tienzen! I fear most of that discussion at Putnam’s blog was hopelessly above my head, but I didn’t realize he had started a blog. Just subscribed to it.
I agree that some interpretations are better than others, however I tend to think the obviously bad ones have fallen by the wayside over time. The ones we still talk about are the ones that are hard to dismiss.
I do agree with Putnam that settling on one interpretation is premature. Until we have evidence unique to one interpretation, physicists should continue thinking about this and looking for ways to test these interpretations. Until then, which interpretation you prefer is a philosophical conclusion.
2. I observe that he writes down equations that talk about “multiple states” and then after a few paragraphs starts using the phrase “multiple worlds” and pretends not to notice what he’s done.
3. SelfAwarePatterns: “… and I can’t see how a concept that is not testable in any foreseeable manner should count.”
I must disagree with you on this one.
In recent years, many people have tried to get around the falsifiability issue or simply abandon it for a variety of reasons, such as the failure of M-string theory, the failure of SUSY, and the failure of multiverses. I strongly disagree with their cases because they use the wrong arguments.
Falsifiability ‘was’ a great ‘tool’ for physics for over 400 years, but it is wrong in ‘principle’ as the ‘true truth’ cannot be falsified by ‘definition’. In order to overcome this ‘definition’ issue, anti-realism arose from two directions (pathways).
School one: from the notion that the ‘final truth’ is ever elusive to the conclusion that the ‘final truth’ is a non-reality. This school is based on the falsifiability ‘principle’: if a thing is not testable, it is not a reality. The fallacy of this argument is taking falsifiability as a ‘principle’ while ‘falsifiability’ is itself the issue being inquired into; that is, proving the issue with the issue in question. This school calls itself the ‘model builders’. They are not searching for ‘truth’; all they do is build models. (see ).
School two, with a BIV (brain in a vat) argument, this is a much more powerful argument. BIV attempts to crash the long established solid foundation of “cogito, ergo sum”. This BIV attempts to create a ‘mystery space (MS)’. And in this MS, no objective knowledge can be attained. Thus, if no objective knowledge is attainable, then a reality (if any) is meaningless.
This BIV argument can be voided with two steps.
1. To show that everything in a (any) MS is totally knowable if there is ‘one’ external point outside of that MS.
2. To show that there is no totally ‘isolated’ MS (without an ‘external’ point) in ‘this’ universe.
School one (the model-only school) can be voided by showing some ‘solid’ good physics which is not the result of the falsifiability-tool. Faced with this kind of solid ‘examples’, Hilary Putnam (a prominent philosopher of science himself) again quoted Quine’s saying {much good science is untestable}, (see, ).
While I disagree with some wrong reasons for abandoning the falsifiability-tool, I must say that falsifiability is wrong in ‘principle’ as a check for the true-truth. While I agree that the falsifiability-tool has done a superb job over the past 400 years of physics, it is no longer useful now that physics has gained solid knowledge to such a height. We can now do a lot of not-testable good physics. I am saying this not as a principle nor as a philosophy; many solid examples are on the table for everyone to see now.
The wind direction has changed. The tide has changed.
1. Tienzen, I actually reject both of the schools you describe. For falsifiability, I agree that it’s far too strong a statement to say that if it’s not falsifiable, it’s not reality. But I think it’s entirely reasonable to say, if it’s not falsifiable, we can’t know whether or not it is reality.
Put aside the falsifiability language for a moment and think of it this way. What about the observable world would be different between the theory being true and it being false? If the answer is nothing, then if it is false and we think it true, how would we ever discover we were wrong?
We can’t build any technology on top of such a theory. (Success or failure of such a technology would provide falsifiability.) If we considered the theory settled science, we could attempt to add other theories on top of it, but unless those theories made a difference in the observable world, we’d simply be building a house of cards that never leads us to useful knowledge. It is essentially a dead end.
I’m not saying that every theory should be required to state how it could be falsified in order to be explored. Speculation is too crucial a component of scientific discovery to hobble it with that. But I am saying that a theory should make some observable difference in the world before we consider it settled science.
One last point: falsifiability is a different criterion from verification (which is what Putnam was discussing). The first only requires that it be possible, in principle, to demonstrate that a theory is wrong. The second requires positive empirical evidence for the theory. Karl Popper rejected verification as far too stringent because it excluded too much valid science. (Technically, given the problem of induction, no scientific theory really meets verification.) Popper proposed falsifiability instead.
1. SelfAwarePatterns: “Tienzen, I actually reject both of the schools you describe. …”
This is the most important first step. The ‘model building’ school was the most powerful one for the past 50 years. Fortunately, it has lost some of its previous glory now.
“… how would we ever discover we were wrong? …”
This is truly the most important question that every physicist should ask. It is so big an issue that it can never be wholly discussed in a short comment. But there are some prerequisite issues which must be addressed first. The following three can be the beginning of a short list.
One, totally understand the subjective/objective issue (SOI), as this is very important for knowing the difference between the ‘physics of nature’ and the ‘physics of humans’. Yet, this SOI itself can be analyzed ‘philosophically’.
Two, do we now have some (or any) ‘established’ knowledge (such as the Standard Model, although not complete)? Are they able to become anchors as the check points for any new theory?
Question: when a framework (I prefer not to use the word ‘theory’) makes contact with ‘all’ known anchors (whatever they are, per our consensus), why does it not ‘make a difference in the observable world’, when no other framework is able to make such contact?
Three, the natural universe is a given, and by now we should have some knowledge about it (the anchors). If we can ‘design’ a universe and derive a set of laws for this ‘designed-universe’, can we make a beauty-contest between the two (natural universe vs designed-universe)? Can we get some knowledge from this kind of beauty-contest?
These are just a few quick ideas about ‘how would we ever discover we were wrong?’ If a framework cannot meet the simple requisites above, it is obviously ‘wrong’. For example, M-string theory has failed to meet its mission of ‘string-unification’, and thus it cannot be a right framework for describing ‘nature’ (although it can be something great about some mathematical structures).
This is a big issue and will not be resolved in a comment. I will put it aside for now but will definitely revisit it.
1. Thanks Tienzen. It is indeed a big issue. Whole books have been written on it. I think falsifiability remains a useful criterion for settled science, but like so much in this arena, judging whether or not a notion meets it isn’t always easy.
|
94d321eca605a801 | @article{9005, abstract = {Studies on the experimental realization of two-dimensional anyons in terms of quasiparticles have been restricted, so far, to only anyons on the plane. It is known, however, that the geometry and topology of space can have significant effects on quantum statistics for particles moving on it. Here, we have undertaken the first step toward realizing the emerging fractional statistics for particles restricted to move on the sphere instead of on the plane. We show that such a model arises naturally in the context of quantum impurity problems. In particular, we demonstrate a setup in which the lowest-energy spectrum of two linear bosonic or fermionic molecules immersed in a quantum many-particle environment can coincide with the anyonic spectrum on the sphere. This paves the way toward the experimental realization of anyons on the sphere using molecular impurities. Furthermore, since a change in the alignment of the molecules corresponds to the exchange of the particles on the sphere, such a realization reveals a novel type of exclusion principle for molecular impurities, which could also be of use as a powerful technique to measure the statistics parameter. Finally, our approach opens up a simple numerical route to investigate the spectra of many anyons on the sphere. Accordingly, we present the spectrum of two anyons on the sphere in the presence of a Dirac monopole field.}, author = {Brooks, Morris and Lemeshko, Mikhail and Lundholm, D. and Yakaboylu, Enderalp}, issn = {10797114}, journal = {Physical Review Letters}, number = {1}, publisher = {American Physical Society}, title = {{Molecular impurities as a realization of anyons on the two-sphere}}, doi = {10.1103/PhysRevLett.126.015301}, volume = {126}, year = {2021}, } @article{7968, abstract = {Organic materials are known to feature long spin-diffusion times, originating in a generally small spin–orbit coupling observed in these systems. From that perspective, chiral molecules acting as efficient spin selectors pose a puzzle that attracted a lot of attention in recent years. Here, we revisit the physical origins of chiral-induced spin selectivity (CISS) and propose a simple analytic minimal model to describe it. The model treats a chiral molecule as an anisotropic wire with molecular dipole moments aligned arbitrarily with respect to the wire’s axes and is therefore quite general. Importantly, it shows that the helical structure of the molecule is not necessary to observe CISS and other chiral nonhelical molecules can also be considered as potential candidates for the CISS effect. We also show that the suggested simple model captures the main characteristics of CISS observed in the experiment, without the need for additional constraints employed in the previous studies. The results pave the way for understanding other related physical phenomena where the CISS effect plays an essential role.}, author = {Ghazaryan, Areg and Paltiel, Yossi and Lemeshko, Mikhail}, issn = {1932-7447}, journal = {The Journal of Physical Chemistry C}, number = {21}, pages = {11716--11721}, publisher = {American Chemical Society}, title = {{Analytic model of chiral-induced spin selectivity}}, doi = {10.1021/acs.jpcc.0c02584}, volume = {124}, year = {2020}, } @article{8170, abstract = {Alignment of OCS, CS2, and I2 molecules embedded in helium nanodroplets is measured as a function of time following rotational excitation by a nonresonant, comparatively weak ps laser pulse. 
The distinct peaks in the power spectra, obtained by Fourier analysis, are used to determine the rotational, B, and centrifugal distortion, D, constants. For OCS, B and D match the values known from IR spectroscopy. For CS2 and I2, they are the first experimental results reported. The alignment dynamics calculated from the gas-phase rotational Schrödinger equation, using the experimental in-droplet B and D values, agree in detail with the measurement for all three molecules. The rotational spectroscopy technique for molecules in helium droplets introduced here should apply to a range of molecules and complexes.}, author = {Chatterley, Adam S. and Christiansen, Lars and Schouder, Constant A. and Jørgensen, Anders V. and Shepperson, Benjamin and Cherepanov, Igor and Bighin, Giacomo and Zillich, Robert E. and Lemeshko, Mikhail and Stapelfeldt, Henrik}, issn = {10797114}, journal = {Physical Review Letters}, number = {1}, publisher = {American Physical Society}, title = {{Rotational coherence spectroscopy of molecules in Helium nanodroplets: Reconciling the time and the frequency domains}}, doi = {10.1103/PhysRevLett.125.013001}, volume = {125}, year = {2020}, } @article{8587, abstract = {Inspired by the possibility to experimentally manipulate and enhance chemical reactivity in helium nanodroplets, we investigate the effective interaction and the resulting correlations between two diatomic molecules immersed in a bath of bosons. By analogy with the bipolaron, we introduce the biangulon quasiparticle describing two rotating molecules that align with respect to each other due to the effective attractive interaction mediated by the excitations of the bath. We study this system in different parameter regimes and apply several theoretical approaches to describe its properties. Using a Born–Oppenheimer approximation, we investigate the dependence of the effective intermolecular interaction on the rotational state of the two molecules. In the strong-coupling regime, a product-state ansatz shows that the molecules tend to have a strong alignment in the ground state. To investigate the system in the weak-coupling regime, we apply a one-phonon excitation variational ansatz, which allows us to access the energy spectrum. In comparison to the angulon quasiparticle, the biangulon shows shifted angulon instabilities and an additional spectral instability, where resonant angular momentum transfer between the molecules and the bath takes place. These features are proposed as an experimentally observable signature for the formation of the biangulon quasiparticle. 
Finally, by using products of single angulon and bare impurity wave functions as basis states, we introduce a diagonalization scheme that allows us to describe the transition from two separated angulons to a biangulon as a function of the distance between the two molecules.}, author = {Li, Xiang and Yakaboylu, Enderalp and Bighin, Giacomo and Schmidt, Richard and Lemeshko, Mikhail and Deuchert, Andreas}, issn = {0021-9606}, journal = {The Journal of Chemical Physics}, keywords = {Physical and Theoretical Chemistry, General Physics and Astronomy}, number = {16}, publisher = {AIP Publishing}, title = {{Intermolecular forces and correlations mediated by a phonon bath}}, doi = {10.1063/1.5144759}, volume = {152}, year = {2020}, } @article{8588, abstract = {Dipolar (or spatially indirect) excitons (IXs) in semiconductor double quantum well (DQW) subjected to an electric field are neutral species with a dipole moment oriented perpendicular to the DQW plane. Here, we theoretically study interactions between IXs in stacked DQW bilayers, where the dipolar coupling can be either attractive or repulsive depending on the relative positions of the particles. By using microscopic band structure calculations to determine the electronic states forming the excitons, we show that the attractive dipolar interaction between stacked IXs deforms their electronic wave function, thereby increasing the inter-DQW interaction energy and making the IX even more electrically polarizable. Many-particle interaction effects are addressed by considering the coupling between a single IX in one of the DQWs to a cloud of IXs in the other DQW, which is modeled either as a closed-packed lattice or as a continuum IX fluid. We find that the lattice model yields IX interlayer binding energies decreasing with increasing lattice density. This behavior is due to the dominating role of the intra-DQW dipolar repulsion, which prevents more than one exciton from entering the attractive region of the inter-DQW coupling. Finally, both models shows that the single IX distorts the distribution of IXs in the adjacent DQW, thus inducing the formation of an IX dipolar polaron (dipolaron). While the interlayer binding energy reduces with IX density for lattice dipolarons, the continuous polaron model predicts a nonmonotonous dependence on density in semiquantitative agreement with a recent experimental study [cf. Hubert et al., Phys. Rev. X 9, 021026 (2019)].}, author = {Hubert, C. and Cohen, K. and Ghazaryan, Areg and Lemeshko, Mikhail and Rapaport, R. and Santos, P. V.}, issn = {2469-9950}, journal = {Physical Review B}, number = {4}, publisher = {American Physical Society}, title = {{Attractive interactions, molecular complexes, and polarons in coupled dipolar exciton fluids}}, doi = {10.1103/physrevb.102.045307}, volume = {102}, year = {2020}, } @article{8652, abstract = {Nature creates electrons with two values of the spin projection quantum number. In certain applications, it is important to filter electrons with one spin projection from the rest. Such filtering is not trivial, since spin-dependent interactions are often weak, and cannot lead to any substantial effect. Here we propose an efficient spin filter based upon scattering from a two-dimensional crystal, which is made of aligned point magnets. The polarization of the outgoing electron flux is controlled by the crystal, and reaches maximum at specific values of the parameters. In our scheme, polarization increase is accompanied by higher reflectivity of the crystal. 
High transmission is feasible in scattering from a quantum cavity made of two crystals. Our findings can be used for studies of low-energy spin-dependent scattering from two-dimensional ordered structures made of magnetic atoms or aligned chiral molecules.}, author = {Ghazaryan, Areg and Lemeshko, Mikhail and Volosniev, Artem}, issn = {2399-3650}, journal = {Communications Physics}, publisher = {Springer Nature}, title = {{Filtering spins by scattering from a lattice of point magnets}}, doi = {10.1038/s42005-020-00445-8}, volume = {3}, year = {2020}, } @article{8769, abstract = {One of the hallmarks of quantum statistics, tightly entwined with the concept of topological phases of matter, is the prediction of anyons. Although anyons are predicted to be realized in certain fractional quantum Hall systems, they have not yet been unambiguously detected in experiment. Here we introduce a simple quantum impurity model, where bosonic or fermionic impurities turn into anyons as a consequence of their interaction with the surrounding many-particle bath. A cloud of phonons dresses each impurity in such a way that it effectively attaches fluxes or vortices to it and thereby converts it into an Abelian anyon. The corresponding quantum impurity model, first, provides a different approach to the numerical solution of the many-anyon problem, along with a concrete perspective of anyons as emergent quasiparticles built from composite bosons or fermions. More importantly, the model paves the way toward realizing anyons using impurities in crystal lattices as well as ultracold gases. In particular, we consider two heavy electrons interacting with a two-dimensional lattice crystal in a magnetic field, and show that when the impurity-bath system is rotated at the cyclotron frequency, impurities behave as anyons as a consequence of the angular momentum exchange between the impurities and the bath. A possible experimental realization is proposed by identifying the statistics parameter in terms of the mean-square distance of the impurities and the magnetization of the impurity-bath system, both of which are accessible to experiment. Another proposed application is impurities immersed in a two-dimensional weakly interacting Bose gas.}, author = {Yakaboylu, Enderalp and Ghazaryan, Areg and Lundholm, D. and Rougerie, N. and Lemeshko, Mikhail and Seiringer, Robert}, issn = {2469-9950}, journal = {Physical Review B}, number = {14}, publisher = {American Physical Society}, title = {{Quantum impurity model for anyons}}, doi = {10.1103/physrevb.102.144109}, volume = {102}, year = {2020}, } @article{7933, abstract = {We study a mobile quantum impurity, possessing internal rotational degrees of freedom, confined to a ring in the presence of a many-particle bosonic bath. By considering the recently introduced rotating polaron problem, we define the Hamiltonian and examine the energy spectrum. The weak-coupling regime is studied by means of a variational ansatz in the truncated Fock space. The corresponding spectrum indicates that there emerges a coupling between the internal and orbital angular momenta of the impurity as a consequence of the phonon exchange. We interpret the coupling as a phonon-mediated spin-orbit coupling and quantify it by using a correlation function between the internal and the orbital angular momentum operators. 
The strong-coupling regime is investigated within the Pekar approach, and it is shown that the correlation function of the ground state shows a kink at a critical coupling, that is explained by a sharp transition from the noninteracting state to the states that exhibit strong interaction with the surroundings. The results might find applications in such fields as spintronics or topological insulators where spin-orbit coupling is of crucial importance.}, author = {Maslov, Mikhail and Lemeshko, Mikhail and Yakaboylu, Enderalp}, issn = {24699969}, journal = {Physical Review B}, number = {18}, publisher = {American Physical Society}, title = {{Synthetic spin-orbit coupling mediated by a bosonic environment}}, doi = {10.1103/PhysRevB.101.184104}, volume = {101}, year = {2020}, } @article{7396, abstract = {The angular momentum of molecules, or, equivalently, their rotation in three-dimensional space, is ideally suited for quantum control. Molecular angular momentum is naturally quantized, time evolution is governed by a well-known Hamiltonian with only a few accurately known parameters, and transitions between rotational levels can be driven by external fields from various parts of the electromagnetic spectrum. Control over the rotational motion can be exerted in one-, two-, and many-body scenarios, thereby allowing one to probe Anderson localization, target stereoselectivity of bimolecular reactions, or encode quantum information to name just a few examples. The corresponding approaches to quantum control are pursued within separate, and typically disjoint, subfields of physics, including ultrafast science, cold collisions, ultracold gases, quantum information science, and condensed-matter physics. It is the purpose of this review to present the various control phenomena, which all rely on the same underlying physics, within a unified framework. To this end, recall the Hamiltonian for free rotations, assuming the rigid rotor approximation to be valid, and summarize the different ways for a rotor to interact with external electromagnetic fields. These interactions can be exploited for control—from achieving alignment, orientation, or laser cooling in a one-body framework, steering bimolecular collisions, or realizing a quantum computer or quantum simulator in the many-body setting.}, author = {Koch, Christiane P. and Lemeshko, Mikhail and Sugny, Dominique}, issn = {0034-6861}, journal = {Reviews of Modern Physics}, number = {3}, publisher = {APS}, title = {{Quantum control of molecular rotation}}, doi = {10.1103/revmodphys.91.035005}, volume = {91}, year = {2019}, } @article{5886, abstract = {Problems involving quantum impurities, in which one or a few particles are interacting with a macroscopic environment, represent a pervasive paradigm, spanning across atomic, molecular, and condensed-matter physics. In this paper we introduce new variational approaches to quantum impurities and apply them to the Fröhlich polaron–a quasiparticle formed out of an electron (or other point-like impurity) in a polar medium, and to the angulon–a quasiparticle formed out of a rotating molecule in a bosonic bath. 
We benchmark these approaches against established theories, evaluating their accuracy as a function of the impurity-bath coupling.}, author = {Li, Xiang and Bighin, Giacomo and Yakaboylu, Enderalp and Lemeshko, Mikhail}, issn = {00268976}, journal = {Molecular Physics}, publisher = {Taylor and Francis}, title = {{Variational approaches to quantum impurities: from the Fröhlich polaron to the angulon}}, doi = {10.1080/00268976.2019.1567852}, year = {2019}, } @article{6092, abstract = {In 1915, Einstein and de Haas and Barnett demonstrated that changing the magnetization of a magnetic material results in mechanical rotation and vice versa. At the microscopic level, this effect governs the transfer between electron spin and orbital angular momentum, and lattice degrees of freedom, understanding which is key for molecular magnets, nano-magneto-mechanics, spintronics, and ultrafast magnetism. Until now, the timescales of electron-to-lattice angular momentum transfer remain unclear, since modeling this process on a microscopic level requires the addition of an infinite amount of quantum angular momenta. We show that this problem can be solved by reformulating it in terms of the recently discovered angulon quasiparticles, which results in a rotationally invariant quantum many-body theory. In particular, we demonstrate that nonperturbative effects take place even if the electron-phonon coupling is weak and give rise to angular momentum transfer on femtosecond timescales.}, author = {Mentink, Johann H and Katsnelson, Mikhail and Lemeshko, Mikhail}, journal = {Physical Review B}, number = {6}, publisher = {APS}, title = {{Quantum many-body dynamics of the Einstein-de Haas effect}}, doi = {10.1103/PhysRevB.99.064428}, volume = {99}, year = {2019}, } @article{6786, abstract = {Dipolar coupling plays a fundamental role in the interaction between electrically or magnetically polarized species such as magnetic atoms and dipolar molecules in a gas or dipolar excitons in the solid state. Unlike Coulomb or contactlike interactions found in many atomic, molecular, and condensed-matter systems, this interaction is long-ranged and highly anisotropic, as it changes from repulsive to attractive depending on the relative positions and orientation of the dipoles. Because of this unique property, many exotic, symmetry-breaking collective states have been recently predicted for cold dipolar gases, but only a few have been experimentally detected and only in dilute atomic dipolar Bose-Einstein condensates. Here, we report on the first observation of attractive dipolar coupling between excitonic dipoles using a new design of stacked semiconductor bilayers. We show that the presence of a dipolar exciton fluid in one bilayer modifies the spatial distribution and increases the binding energy of excitonic dipoles in a vertically remote layer. The binding energy changes are explained using a many-body polaron model describing the deformation of the exciton cloud due to its interaction with a remote dipolar exciton. The surprising nonmonotonic dependence on the cloud density indicates the important role of dipolar correlations, which is unique to dense, strongly interacting dipolar solid-state systems. 
Our concept provides a route for the realization of dipolar lattices with strong anisotropic interactions in semiconductor systems, which open the way for the observation of theoretically predicted new and exotic collective phases, as well as for engineering and sensing their collective excitations.}, author = {Hubert, Colin and Baruchi, Yifat and Mazuz-Harpaz, Yotam and Cohen, Kobi and Biermann, Klaus and Lemeshko, Mikhail and West, Ken and Pfeiffer, Loren and Rapaport, Ronen and Santos, Paulo}, issn = {2160-3308}, journal = {Physical Review X}, number = {2}, publisher = {APS}, title = {{Attractive dipolar coupling between stacked exciton fluids}}, doi = {10.1103/PhysRevX.9.021026}, volume = {9}, year = {2019}, } @article{195, abstract = {We demonstrate that identical impurities immersed in a two-dimensional many-particle bath can be viewed as flux-tube-charged-particle composites described by fractional statistics. In particular, we find that the bath manifests itself as an external magnetic flux tube with respect to the impurities, and hence the time-reversal symmetry is broken for the effective Hamiltonian describing the impurities. The emerging flux tube acts as a statistical gauge field after a certain critical coupling. This critical coupling corresponds to the intersection point between the quasiparticle state and the phonon wing, where the angular momentum is transferred from the impurity to the bath. This amounts to a novel configuration with emerging anyons. The proposed setup paves the way to realizing anyons using electrons interacting with superfluid helium or lattice phonons, as well as using atomic impurities in ultracold gases.}, author = {Yakaboylu, Enderalp and Lemeshko, Mikhail}, journal = {Physical Review B - Condensed Matter and Materials Physics}, number = {4}, publisher = {American Physical Society}, title = {{Anyonic statistics of quantum impurities in two dimensions}}, doi = {10.1103/PhysRevB.98.045402}, volume = {98}, year = {2018}, } @article{5794, abstract = {We present an approach to interacting quantum many-body systems based on the notion of quantum groups, also known as q-deformed Lie algebras. In particular, we show that, if the symmetry of a free quantum particle corresponds to a Lie group G, in the presence of a many-body environment this particle can be described by a deformed group, Gq. Crucially, the single deformation parameter, q, contains all the information about the many-particle interactions in the system. We exemplify our approach by considering a quantum rotor interacting with a bath of bosons, and demonstrate that extracting the value of q from closed-form solutions in the perturbative regime allows one to predict the behavior of the system for arbitrary values of the impurity-bath coupling strength, in good agreement with nonperturbative calculations. Furthermore, the value of the deformation parameter allows one to predict at which coupling strengths rotor-bath interactions result in a formation of a stable quasiparticle. 
The approach based on quantum groups does not only allow for a drastic simplification of impurity problems, but also provides valuable insights into hidden symmetries of interacting many-particle systems.}, author = {Yakaboylu, Enderalp and Shkolnikov, Mikhail and Lemeshko, Mikhail}, issn = {00319007}, journal = {Physical Review Letters}, number = {25}, publisher = {American Physical Society}, title = {{Quantum groups as hidden symmetries of quantum impurities}}, doi = {10.1103/PhysRevLett.121.255302}, volume = {121}, year = {2018}, } @article{5983, abstract = {We study a quantum impurity possessing both translational and internal rotational degrees of freedom interacting with a bosonic bath. Such a system corresponds to a “rotating polaron,” which can be used to model, e.g., a rotating molecule immersed in an ultracold Bose gas or superfluid helium. We derive the Hamiltonian of the rotating polaron and study its spectrum in the weak- and strong-coupling regimes using a combination of variational, diagrammatic, and mean-field approaches. We reveal how the coupling between linear and angular momenta affects stable quasiparticle states, and demonstrate that internal rotation leads to an enhanced self-localization in the translational degrees of freedom.}, author = {Yakaboylu, Enderalp and Midya, Bikashkali and Deuchert, Andreas and Leopold, Nikolai K and Lemeshko, Mikhail}, issn = {2469-9950}, journal = {Physical Review B}, number = {22}, publisher = {American Physical Society}, title = {{Theory of the rotating polaron: Spectrum and self-localization}}, doi = {10.1103/physrevb.98.224506}, volume = {98}, year = {2018}, } @article{6339, abstract = {We introduce a diagrammatic Monte Carlo approach to angular momentum properties of quantum many-particle systems possessing a macroscopic number of degrees of freedom. The treatment is based on a diagrammatic expansion that merges the usual Feynman diagrams with the angular momentum diagrams known from atomic and nuclear structure theory, thereby incorporating the non-Abelian algebra inherent to quantum rotations. Our approach is applicable at arbitrary coupling, is free of systematic errors and of finite-size effects, and naturally provides access to the impurity Green function. We exemplify the technique by obtaining an all-coupling solution of the angulon model; however, the method is quite general and can be applied to a broad variety of systems in which particles exchange quantum angular momentum with their many-body environment.}, author = {Bighin, Giacomo and Tscherbul, Timur and Lemeshko, Mikhail}, journal = {Physical Review Letters}, number = {16}, publisher = {APS}, title = {{Diagrammatic Monte Carlo approach to angular momentum in quantum many-particle systems}}, doi = {10.1103/physrevlett.121.165301}, volume = {121}, year = {2018}, } @article{415, abstract = {Recently it was shown that a molecule rotating in a quantum solvent can be described in terms of the “angulon” quasiparticle [M. Lemeshko, Phys. Rev. Lett. 118, 095301 (2017)]. Here we extend the angulon theory to the case of molecules possessing an additional spin-1/2 degree of freedom and study the behavior of the system in the presence of a static magnetic field. We show that exchange of angular momentum between the molecule and the solvent can be altered by the field, even though the solvent itself is non-magnetic. 
In particular, we demonstrate a possibility to control resonant emission of phonons with a given angular momentum using a magnetic field.}, author = {Rzadkowski, Wojciech and Lemeshko, Mikhail}, journal = {The Journal of Chemical Physics}, number = {10}, publisher = {AIP}, title = {{Effect of a magnetic field on molecule–solvent angular momentum transfer}}, doi = {10.1063/1.5017591}, volume = {148}, year = {2018}, } @article{417, abstract = {We introduce a Diagrammatic Monte Carlo (DiagMC) approach to complex molecular impurities with rotational degrees of freedom interacting with a many-particle environment. The treatment is based on the diagrammatic expansion that merges the usual Feynman diagrams with the angular momentum diagrams known from atomic and nuclear structure theory, thereby incorporating the non-Abelian algebra inherent to quantum rotations. Our approach works at arbitrary coupling, is free of systematic errors and of finite size effects, and naturally provides access to the impurity Green function. We exemplify the technique by obtaining an all-coupling solution of the angulon model, however, the method is quite general and can be applied to a broad variety of quantum impurities possessing angular momentum degrees of freedom. }, author = {Bighin, Giacomo and Tscherbul, Timur and Lemeshko, Mikhail}, journal = {Physical Review Letters}, number = {16}, publisher = {APS Physics}, title = {{Diagrammatic Monte Carlo approach to rotating molecular impurities}}, doi = {10.1103/PhysRevLett.121.165301}, volume = {121}, year = {2018}, } @inbook{604, abstract = {In several settings of physics and chemistry one has to deal with molecules interacting with some kind of an external environment, be it a gas, a solution, or a crystal surface. Understanding molecular processes in the presence of such a many-particle bath is inherently challenging, and usually requires large-scale numerical computations. Here, we present an alternative approach to the problem, based on the notion of the angulon quasiparticle. We show that molecules rotating inside superfluid helium nanodroplets and Bose–Einstein condensates form angulons, and therefore can be described by straightforward solutions of a simple microscopic Hamiltonian. Casting the problem in the language of angulons allows us not only to greatly simplify it, but also to gain insights into the origins of the observed phenomena and to make predictions for future experimental studies.}, author = {Lemeshko, Mikhail and Schmidt, Richard}, booktitle = {Cold Chemistry: Molecular Scattering and Reactivity Near Absolute Zero }, editor = {Dulieu, Oliver and Osterwalder, Andreas}, issn = {20413181}, pages = {444 -- 495}, publisher = {The Royal Society of Chemistry}, title = {{Molecular impurities interacting with a many-particle environment: From ultracold gases to helium nanodroplets}}, doi = {10.1039/9781782626800-00444}, volume = {11}, year = {2017}, } @article{1109, abstract = {Rotation of molecules embedded in He nanodroplets is explored by a combination of fs laser-induced alignment experiments and angulon quasiparticle theory. We demonstrate that at low fluence of the fs alignment pulse, the molecule and its solvation shell can be set into coherent collective rotation lasting long enough to form revivals. With increasing fluence, however, the revivals disappear -- instead, rotational dynamics as rapid as for an isolated molecule is observed during the first few picoseconds. 
Classical calculations trace this phenomenon to transient decoupling of the molecule from its He shell. Our results open novel opportunities for studying non-equilibrium solute-solvent dynamics and quantum thermalization. }, author = {Shepperson, Benjamin and Søndergaard, Anders and Christiansen, Lars and Kaczmarczyk, Jan and Zillich, Robert and Lemeshko, Mikhail and Stapelfeldt, Henrik}, journal = {Physical Review Letters}, number = {20}, publisher = {American Physical Society}, title = {{Laser-induced rotation of iodine molecules in helium nanodroplets: Revivals and breaking-free}}, doi = {10.1103/PhysRevLett.118.203203}, volume = {118}, year = {2017}, } |
5a2cedac1c355778 | @article{9212, abstract = {Plant fitness is largely dependent on the root, the underground organ, which, besides its anchoring function, supplies the plant body with water and all nutrients necessary for growth and development. To exploit the soil effectively, roots must constantly integrate environmental signals and react through adjustment of growth and development. Important components of the root management strategy involve a rapid modulation of the root growth kinetics and growth direction, as well as an increase of the root system radius through formation of lateral roots (LRs). At the molecular level, such a fascinating growth and developmental flexibility of root organ requires regulatory networks that guarantee stability of the developmental program but also allows integration of various environmental inputs. The plant hormone auxin is one of the principal endogenous regulators of root system architecture by controlling primary root growth and formation of LR. In this review, we discuss recent progress in understanding molecular networks where auxin is one of the main players shaping the root system and acting as mediator between endogenous cues and environmental factors.}, author = {Cavallari, Nicola and Artner, Christina and Benková, Eva}, issn = {1943-0264}, journal = {Cold Spring Harbor Perspectives in Biology}, publisher = {Cold Spring Harbor Laboratory Press}, title = {{Auxin-regulated lateral root organogenesis}}, doi = {10.1101/cshperspect.a039941}, year = {2021}, } @phdthesis{8934, abstract = {In this thesis, we consider several of the most classical and fundamental problems in static analysis and formal verification, including invariant generation, reachability analysis, termination analysis of probabilistic programs, data-flow analysis, quantitative analysis of Markov chains and Markov decision processes, and the problem of data packing in cache management. We use techniques from parameterized complexity theory, polyhedral geometry, and real algebraic geometry to significantly improve the state-of-the-art, in terms of both scalability and completeness guarantees, for the mentioned problems. In some cases, our results are the first theoretical improvements for the respective problems in two or three decades.}, author = {Goharshady, Amir Kafshdar}, issn = {2663-337X}, pages = {278}, publisher = {IST Austria}, title = {{Parameterized and algebro-geometric advances in static program analysis}}, doi = {10.15479/AT:ISTA:8934}, year = {2021}, } @article{7956, abstract = {When short-range attractions are combined with long-range repulsions in colloidal particle systems, complex microphases can emerge. Here, we study a system of isotropic particles, which can form lamellar structures or a disordered fluid phase when temperature is varied. We show that, at equilibrium, the lamellar structure crystallizes, while out of equilibrium, the system forms a variety of structures at different shear rates and temperatures above melting. The shear-induced ordering is analyzed by means of principal component analysis and artificial neural networks, which are applied to data of reduced dimensionality. Our results reveal the possibility of inducing ordering by shear, potentially providing a feasible route to the fabrication of ordered lamellar structures from isotropic particles.}, author = {Pȩkalski, J. and Rzadkowski, Wojciech and Panagiotopoulos, A. 
Z.}, issn = {10897690}, journal = {The Journal of Chemical Physics}, number = {20}, publisher = {AIP}, title = {{Shear-induced ordering in systems with competing interactions: A machine learning study}}, doi = {10.1063/5.0005194}, volume = {152}, year = {2020}, } @article{7957, abstract = {Neurodevelopmental disorders (NDDs) are a class of disorders affecting brain development and function and are characterized by wide genetic and clinical variability. In this review, we discuss the multiple factors that influence the clinical presentation of NDDs, with particular attention to gene vulnerability, mutational load, and the two-hit model. Despite the complex architecture of mutational events associated with NDDs, the various proteins involved appear to converge on common pathways, such as synaptic plasticity/function, chromatin remodelers and the mammalian target of rapamycin (mTOR) pathway. A thorough understanding of the mechanisms behind these pathways will hopefully lead to the identification of candidates that could be targeted for treatment approaches.}, author = {Parenti, Ilaria and Garcia Rabaneda, Luis E and Schön, Hanna and Novarino, Gaia}, issn = {1878108X}, journal = {Trends in Neurosciences}, number = {8}, pages = {608--621}, publisher = {Elsevier}, title = {{Neurodevelopmental disorders: From genetics to functional pathways}}, doi = {10.1016/j.tins.2020.05.004}, volume = {43}, year = {2020}, } @article{7960, abstract = {Let A={A1,…,An} be a family of sets in the plane. For 0≤i<n, denote by f_i the number of subsets of A of size i+1 with a nonempty intersection. Let b≥1 and k≥2b be integers. We prove that if each k-wise or (k+1)-wise intersection of sets from A has at most b path-connected components, which all are open, then f_{k+1}=0 implies f_k≤c·f_{k−1} for some positive constant c depending only on b and k. These results also extend to two-dimensional compact surfaces.}, author = {Kalai, Gil and Patakova, Zuzana}, issn = {14320444}, journal = {Discrete and Computational Geometry}, pages = {304--323}, publisher = {Springer Nature}, title = {{Intersection patterns of planar sets}}, doi = {10.1007/s00454-020-00205-z}, volume = {64}, year = {2020}, } @article{7962, abstract = {A string graph is the intersection graph of a family of continuous arcs in the plane. The intersection graph of a family of plane convex sets is a string graph, but not all string graphs can be obtained in this way. We prove the following structure theorem conjectured by Janson and Uzzell: The vertex set of almost all string graphs on n vertices can be partitioned into five cliques such that some pair of them is not connected by any edge (n→∞). We also show that every graph with the above property is an intersection graph of plane convex sets. As a corollary, we obtain that almost all string graphs on n vertices are intersection graphs of plane convex sets.}, author = {Pach, János and Reed, Bruce and Yuditsky, Yelena}, issn = {14320444}, journal = {Discrete and Computational Geometry}, number = {4}, pages = {888--917}, publisher = {Springer Nature}, title = {{Almost all string graphs are intersection graphs of plane convex sets}}, doi = {10.1007/s00454-020-00213-z}, volume = {63}, year = {2020}, } @inproceedings{7966, abstract = {For 1≤m≤n, we consider a natural m-out-of-n multi-instance scenario for a public-key encryption (PKE) scheme. An adversary, given n independent instances of PKE, wins if he breaks at least m out of the n instances. In this work, we are interested in the scaling factor of PKE schemes, SF, which measures how well the difficulty of breaking m out of the n instances scales in m.
That is, a scaling factor SF=ℓ indicates that breaking m out of n instances is at least ℓ times more difficult than breaking one single instance. A PKE scheme with small scaling factor hence provides an ideal target for mass surveillance. In fact, the Logjam attack (CCS 2015) implicitly exploited, among other things, an almost constant scaling factor of ElGamal over finite fields (with shared group parameters). For Hashed ElGamal over elliptic curves, we use the generic group model to argue that the scaling factor depends on the scheme's granularity. In low granularity, meaning each public key contains its independent group parameter, the scheme has optimal scaling factor SF=m; In medium and high granularity, meaning all public keys share the same group parameter, the scheme still has a reasonable scaling factor SF=√m. Our findings underline that instantiating ElGamal over elliptic curves should be preferred to finite fields in a multi-instance scenario. As our main technical contribution, we derive new generic-group lower bounds of Ω(√(mp)) on the difficulty of solving both the m-out-of-n Gap Discrete Logarithm and the m-out-of-n Gap Computational Diffie-Hellman problem over groups of prime order p, extending a recent result by Yun (EUROCRYPT 2015). We establish the lower bound by studying the hardness of a related computational problem which we call the search-by-hypersurface problem.}, author = {Auerbach, Benedikt and Giacon, Federico and Kiltz, Eike}, booktitle = {Advances in Cryptology – EUROCRYPT 2020}, isbn = {9783030457266}, issn = {0302-9743}, pages = {475--506}, publisher = {Springer Nature}, title = {{Everybody’s a target: Scalability in public-key encryption}}, doi = {10.1007/978-3-030-45727-3_16}, volume = {12107}, year = {2020}, } @article{7968, abstract = {Organic materials are known to feature long spin-diffusion times, originating in a generally small spin–orbit coupling observed in these systems. From that perspective, chiral molecules acting as efficient spin selectors pose a puzzle that attracted a lot of attention in recent years. Here, we revisit the physical origins of chiral-induced spin selectivity (CISS) and propose a simple analytic minimal model to describe it. The model treats a chiral molecule as an anisotropic wire with molecular dipole moments aligned arbitrarily with respect to the wire’s axes and is therefore quite general. Importantly, it shows that the helical structure of the molecule is not necessary to observe CISS and other chiral nonhelical molecules can also be considered as potential candidates for the CISS effect. We also show that the suggested simple model captures the main characteristics of CISS observed in the experiment, without the need for additional constraints employed in the previous studies. The results pave the way for understanding other related physical phenomena where the CISS effect plays an essential role.}, author = {Ghazaryan, Areg and Paltiel, Yossi and Lemeshko, Mikhail}, issn = {1932-7447}, journal = {The Journal of Physical Chemistry C}, number = {21}, pages = {11716--11721}, publisher = {American Chemical Society}, title = {{Analytic model of chiral-induced spin selectivity}}, doi = {10.1021/acs.jpcc.0c02584}, volume = {124}, year = {2020}, } @article{7971, abstract = {Multilayer graphene lattices allow for an additional tunability of the band structure by the strong perpendicular electric field. 
In particular, the emergence of the new multiple Dirac points in ABA stacked trilayer graphene subject to strong transverse electric fields was proposed theoretically and confirmed experimentally. These new Dirac points dubbed “gullies” emerge from the interplay between strong electric field and trigonal warping. In this work, we first characterize the properties of new emergent Dirac points and show that the electric field can be used to tune the distance between gullies in the momentum space. We demonstrate that the band structure has multiple Lifshitz transitions and higher-order singularity of “monkey saddle” type. Following the characterization of the band structure, we consider the spectrum of Landau levels and structure of their wave functions. In the limit of strong electric fields when gullies are well separated in momentum space, they give rise to triply degenerate Landau levels. In the second part of this work, we investigate how degeneracy between three gully Landau levels is lifted in the presence of interactions. Within the Hartree-Fock approximation we show that the symmetry breaking state interpolates between the fully gully polarized state that breaks C3 symmetry at high displacement field and the gully symmetric state when the electric field is decreased. The discontinuous transition between these two states is driven by enhanced intergully tunneling and exchange. We conclude by outlining specific experimental predictions for the existence of such a symmetry-breaking state.}, author = {Rao, Peng and Serbyn, Maksym}, issn = {2469-9950}, journal = {Physical Review B}, number = {24}, publisher = {American Physical Society}, title = {{Gully quantum Hall ferromagnetism in biased trilayer graphene}}, doi = {10.1103/physrevb.101.245411}, volume = {101}, year = {2020}, } @article{7985, abstract = {The goal of limiting global warming to 1.5 °C requires a drastic reduction in CO2 emissions across many sectors of the world economy. Batteries are vital to this endeavor, whether used in electric vehicles, to store renewable electricity, or in aviation. Present lithium-ion technologies are preparing the public for this inevitable change, but their maximum theoretical specific capacity presents a limitation. Their high cost is another concern for commercial viability. Metal–air batteries have the highest theoretical energy density of all possible secondary battery technologies and could yield step changes in energy storage, if their practical difficulties could be overcome. The scope of this review is to provide an objective, comprehensive, and authoritative assessment of the intensive work invested in nonaqueous rechargeable metal–air batteries over the past few years, which identified the key problems and guides directions to solve them. We focus primarily on the challenges and outlook for Li–O2 cells but include Na–O2, K–O2, and Mg–O2 cells for comparison. Our review highlights the interdisciplinary nature of this field that involves a combination of materials chemistry, electrochemistry, computation, microscopy, spectroscopy, and surface science. 
The mechanisms of O2 reduction and evolution are considered in the light of recent findings, along with developments in positive and negative electrodes, electrolytes, electrocatalysis on surfaces and in solution, and the degradative effect of singlet oxygen, which is typically formed in Li–O2 cells.}, author = {Kwak, WJ and Sharon, D and Xia, C and Kim, H and Johnson, LR and Bruce, PG and Nazar, LF and Sun, YK and Frimer, AA and Noked, M and Freunberger, Stefan Alexander and Aurbach, D}, issn = {0009-2665}, journal = {Chemical Reviews}, number = {14}, pages = {6626--6683}, publisher = {American Chemical Society}, title = {{Lithium-oxygen batteries and related systems: Potential, status, and future}}, doi = {10.1021/acs.chemrev.9b00609}, volume = {120}, year = {2020}, } @inproceedings{7989, abstract = {We prove general topological Radon-type theorems for sets in ℝ^d, smooth real manifolds or finite dimensional simplicial complexes. Combined with a recent result of Holmsen and Lee, it gives a fractional Helly theorem, and consequently the existence of weak ε-nets as well as a (p,q)-theorem. More precisely: Let X be either ℝ^d, a smooth real d-manifold, or a finite d-dimensional simplicial complex. Then if F is a finite, intersection-closed family of sets in X such that the ith reduced Betti number (with ℤ₂ coefficients) of any set in F is at most b for every non-negative integer i less than or equal to k, then the Radon number of F is bounded in terms of b and X. Here k is the smallest integer larger than or equal to d/2 - 1 if X = ℝ^d; k=d-1 if X is a smooth real d-manifold and not a surface, k=0 if X is a surface and k=d if X is a d-dimensional simplicial complex. Using the recent result of the author and Kalai, we manage to prove the following optimal bound on the fractional Helly number for families of open sets in a surface: Let F be a finite family of open sets in a surface S such that the intersection of any subfamily of F is either empty, or path-connected. Then the fractional Helly number of F is at most three. This also settles a conjecture of Holmsen, Kim, and Lee about the existence of a (p,q)-theorem for open subsets of a surface.}, author = {Patakova, Zuzana}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Bounding Radon number via Betti numbers}}, doi = {10.4230/LIPIcs.SoCG.2020.61}, volume = {164}, year = {2020}, } @inproceedings{7990, abstract = {Given a finite point set P in general position in the plane, a full triangulation is a maximal straight-line embedded plane graph on P. A partial triangulation on P is a full triangulation of some subset P' of P containing all extreme points in P. A bistellar flip on a partial triangulation either flips an edge, removes a non-extreme point of degree 3, or adds a point in P ⧵ P' as vertex of degree 3. The bistellar flip graph has all partial triangulations as vertices, and a pair of partial triangulations is adjacent if they can be obtained from one another by a bistellar flip. The goal of this paper is to investigate the structure of this graph, with emphasis on its connectivity. For sets P of n points in general position, we show that the bistellar flip graph is (n-3)-connected, thereby answering, for sets in general position, an open question raised in a book (by De Loera, Rambau, and Santos) and a survey (by Lee and Santos) on triangulations. 
This matches the situation for the subfamily of regular triangulations (i.e., partial triangulations obtained by lifting the points and projecting the lower convex hull), where (n-3)-connectivity has been known since the late 1980s through the secondary polytope (Gelfand, Kapranov, Zelevinsky) and Balinski’s Theorem. Our methods also yield the following results (see the full version [Wagner and Welzl, 2020]): (i) The bistellar flip graph can be covered by graphs of polytopes of dimension n-3 (products of secondary polytopes). (ii) A partial triangulation is regular, if it has distance n-3 in the Hasse diagram of the partial order of partial subdivisions from the trivial subdivision. (iii) All partial triangulations are regular iff the trivial subdivision has height n-3 in the partial order of partial subdivisions. (iv) There are arbitrarily large sets P with non-regular partial triangulations, while every proper subset has only regular triangulations, i.e., there are no small certificates for the existence of non-regular partial triangulations (answering a question by F. Santos in the unexpected direction).}, author = {Wagner, Uli and Welzl, Emo}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Connectivity of triangulation flip graphs in the plane (Part II: Bistellar flips)}}, doi = {10.4230/LIPIcs.SoCG.2020.67}, volume = {164}, year = {2020}, } @inproceedings{7991, abstract = {We define and study a discrete process that generalizes the convex-layer decomposition of a planar point set. Our process, which we call homotopic curve shortening (HCS), starts with a closed curve (which might self-intersect) in the presence of a set P⊂ ℝ² of point obstacles, and evolves in discrete steps, where each step consists of (1) taking shortcuts around the obstacles, and (2) reducing the curve to its shortest homotopic equivalent. We find experimentally that, if the initial curve is held fixed and P is chosen to be either a very fine regular grid or a uniformly random point set, then HCS behaves at the limit like the affine curve-shortening flow (ACSF). This connection between HCS and ACSF generalizes the link between "grid peeling" and the ACSF observed by Eppstein et al. (2017), which applied only to convex curves, and which was studied only for regular grids. We prove that HCS satisfies some properties analogous to those of ACSF: HCS is invariant under affine transformations, preserves convexity, and does not increase the total absolute curvature. Furthermore, the number of self-intersections of a curve, or intersections between two curves (appropriately defined), does not increase. Finally, if the initial curve is simple, then the number of inflection points (appropriately defined) does not increase.}, author = {Avvakumov, Sergey and Nivasch, Gabriel}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Homotopic curve shortening and the affine curve-shortening flow}}, doi = {10.4230/LIPIcs.SoCG.2020.12}, volume = {164}, year = {2020}, } @inproceedings{7992, abstract = {Let K be a convex body in ℝⁿ (i.e., a compact convex set with nonempty interior). 
Given a point p in the interior of K, a hyperplane h passing through p is called barycentric if p is the barycenter of K ∩ h. In 1961, Grünbaum raised the question whether, for every K, there exists an interior point p through which there are at least n+1 distinct barycentric hyperplanes. Two years later, this was seemingly resolved affirmatively by showing that this is the case if p=p₀ is the point of maximal depth in K. However, while working on a related question, we noticed that one of the auxiliary claims in the proof is incorrect. Here, we provide a counterexample; this re-opens Grünbaum’s question. It follows from known results that for n ≥ 2, there are always at least three distinct barycentric cuts through the point p₀ ∈ K of maximal depth. Using tools related to Morse theory we are able to improve this bound: four distinct barycentric cuts through p₀ are guaranteed if n ≥ 3.}, author = {Patakova, Zuzana and Tancer, Martin and Wagner, Uli}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Barycentric cuts through a convex body}}, doi = {10.4230/LIPIcs.SoCG.2020.62}, volume = {164}, year = {2020}, } @inproceedings{7994, abstract = {In the recent study of crossing numbers, drawings of graphs that can be extended to an arrangement of pseudolines (pseudolinear drawings) have played an important role as they are a natural combinatorial extension of rectilinear (or straight-line) drawings. A characterization of the pseudolinear drawings of K_n was found recently. We extend this characterization to all graphs, by describing the set of minimal forbidden subdrawings for pseudolinear drawings. Our characterization also leads to a polynomial-time algorithm to recognize pseudolinear drawings and construct the pseudolines when it is possible.}, author = {Arroyo Guevara, Alan M and Bensmail, Julien and Bruce Richter, R.}, booktitle = {36th International Symposium on Computational Geometry}, isbn = {9783959771436}, issn = {18688969}, location = {Zürich, Switzerland}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Extending drawings of graphs to arrangements of pseudolines}}, doi = {10.4230/LIPIcs.SoCG.2020.9}, volume = {164}, year = {2020}, } @article{7995, abstract = {When divergent populations are connected by gene flow, the establishment of complete reproductive isolation usually requires the joint action of multiple barrier effects. One example where multiple barrier effects are coupled consists of a single trait that is under divergent natural selection and also mediates assortative mating. Such multiple‐effect traits can strongly reduce gene flow. However, there are few cases where patterns of assortative mating have been described quantitatively and their impact on gene flow has been determined. Two ecotypes of the coastal marine snail, Littorina saxatilis, occur in North Atlantic rocky‐shore habitats dominated by either crab predation or wave action. There is evidence for divergent natural selection acting on size, and size‐assortative mating has previously been documented. Here, we analyze the mating pattern in L. saxatilis with respect to size in intensively sampled transects across boundaries between the habitats. We show that the mating pattern is mostly conserved between ecotypes and that it generates both assortment and directional sexual selection for small male size. 
Using simulations, we show that the mating pattern can contribute to reproductive isolation between ecotypes but the barrier to gene flow is likely strengthened more by sexual selection than by assortment.}, author = {Perini, Samuel and Rafajlović, Marina and Westram, Anja M and Johannesson, Kerstin and Butlin, Roger K.}, issn = {15585646}, journal = {Evolution}, number = {7}, pages = {1482--1497}, publisher = {Wiley}, title = {{Assortative mating, sexual selection, and their consequences for gene flow in Littorina}}, doi = {10.1111/evo.14027}, volume = {74}, year = {2020}, } @phdthesis{7996, abstract = {Quantum computation enables the execution of algorithms that have exponential complexity. This might open the path towards the synthesis of new materials or medical drugs, optimization of transport or financial strategies etc., intractable on even the fastest classical computers. A quantum computer consists of interconnected two level quantum systems, called qubits, that satisfy DiVincenzo’s criteria. Worldwide, there are ongoing efforts to find the qubit architecture which will unite quantum error correction compatible single and two qubit fidelities, long distance qubit to qubit coupling and scalability. Superconducting qubits have gone the furthest in this race, demonstrating an algorithm running on 53 coupled qubits, but still the fidelities are not even close to those required for realizing a single logical qubit. Semiconductor qubits offer extremely good characteristics, but they are currently investigated across different platforms. Uniting those good characteristics into a single platform might be a big step towards the quantum computer realization. Here we describe the implementation of a hole spin qubit hosted in a Ge hut wire double quantum dot. The high and tunable spin-orbit coupling together with a heavy hole state character is expected to allow fast spin manipulation and long coherence times. Furthermore large lever arms, for hut wire devices, should allow good coupling to superconducting resonators enabling efficient long distance spin to spin coupling and a sensitive gate reflectometry spin readout. The developed cryogenic setup (printed circuit board sample holders, filtering, high-frequency wiring) enabled us to perform low temperature spin dynamics experiments. Indeed, we measured the fastest single spin qubit Rabi frequencies reported so far, reaching 140 MHz, while the dephasing times of 130 ns oppose the long decoherence predictions. In order to further investigate this, a double quantum dot gate was connected directly to a lumped element resonator which enabled gate reflectometry readout. The vanishing inter-dot transition signal, for increasing external magnetic field, revealed the spin nature of the measured quantity.}, author = {Kukucka, Josip}, issn = {2663-337X}, pages = {178}, publisher = {IST Austria}, title = {{Implementation of a hole spin qubit in Ge hut wires and dispersive spin sensing}}, doi = {10.15479/AT:ISTA:7996}, year = {2020}, } @article{7999, abstract = {Linking epigenetic marks to clinical outcomes improves insight into molecular processes, disease prediction, and therapeutic target identification. Here, a statistical approach is presented to infer the epigenetic architecture of complex disease, determine the variation captured by epigenetic effects, and estimate phenotype-epigenetic probe associations jointly. 
Implicitly adjusting for probe correlations, data structure (cell-count or relatedness), and single-nucleotide polymorphism (SNP) marker effects, improves association estimates and in 9,448 individuals, 75.7% (95% CI 71.70–79.3) of body mass index (BMI) variation and 45.6% (95% CI 37.3–51.9) of cigarette consumption variation was captured by whole blood methylation array data. Pathway-linked probes of blood cholesterol, lipid transport and sterol metabolism for BMI, and xenobiotic stimuli response for smoking, showed >1.5 times larger associations with >95% posterior inclusion probability. Prediction accuracy improved by 28.7% for BMI and 10.2% for smoking over a LASSO model, with age-, and tissue-specificity, implying associations are a phenotypic consequence rather than causal. }, author = {Trejo Banos, D and McCartney, DL and Patxot, M and Anchieri, L and Battram, T and Christiansen, C and Costeira, R and Walker, RM and Morris, SW and Campbell, A and Zhang, Q and Porteous, DJ and McRae, AF and Wray, NR and Visscher, PM and Haley, CS and Evans, KL and Deary, IJ and McIntosh, AM and Hemani, G and Bell, JT and Marioni, RE and Robinson, Matthew Richard}, issn = {2041-1723}, journal = {Nature Communications}, publisher = {Springer Nature}, title = {{Bayesian reassessment of the epigenetic architecture of complex traits}}, doi = {10.1038/s41467-020-16520-1}, volume = {11}, year = {2020}, } @article{8001, abstract = {Post-tetanic potentiation (PTP) is an attractive candidate mechanism for hippocampus-dependent short-term memory. Although PTP has a uniquely large magnitude at hippocampal mossy fiber-CA3 pyramidal neuron synapses, it is unclear whether it can be induced by natural activity and whether its lifetime is sufficient to support short-term memory. We combined in vivo recordings from granule cells (GCs), in vitro paired recordings from mossy fiber terminals and postsynaptic CA3 neurons, and “flash and freeze” electron microscopy. PTP was induced at single synapses and showed a low induction threshold adapted to sparse GC activity in vivo. PTP was mainly generated by enlargement of the readily releasable pool of synaptic vesicles, allowing multiplicative interaction with other plasticity forms. PTP was associated with an increase in the docked vesicle pool, suggesting formation of structural “pool engrams.” Absence of presynaptic activity extended the lifetime of the potentiation, enabling prolonged information storage in the hippocampal network.}, author = {Vandael, David H and Borges Merjane, Carolina and Zhang, Xiaomin and Jonas, Peter M}, issn = {10974199}, journal = {Neuron}, number = {3}, pages = {509--521}, publisher = {Elsevier}, title = {{Short-term plasticity at hippocampal mossy fiber synapses is induced by natural activity patterns and associated with vesicle pool engram formation}}, doi = {10.1016/j.neuron.2020.05.013}, volume = {107}, year = {2020}, } @article{8002, abstract = {Wound healing in plant tissues, consisting of rigid cell wall-encapsulated cells, represents a considerable challenge and occurs through largely unknown mechanisms distinct from those in animals. Owing to their inability to migrate, plant cells rely on targeted cell division and expansion to regenerate wounds. Strict coordination of these wound-induced responses is essential to ensure efficient, spatially restricted wound healing. 
Single-cell tracking by live imaging allowed us to gain mechanistic insight into the wound perception and coordination of wound responses after laser-based wounding in Arabidopsis root. We revealed a crucial contribution of the collapse of damaged cells in wound perception and detected an auxin increase specific to cells immediately adjacent to the wound. This localized auxin increase balances wound-induced cell expansion and restorative division rates in a dose-dependent manner, leading to tumorous overproliferation when the canonical TIR1 auxin signaling is disrupted. Auxin and wound-induced turgor pressure changes together also spatially define the activation of key components of regeneration, such as the transcription regulator ERF115. Our observations suggest that the wound signaling involves the sensing of collapse of damaged cells and a local auxin signaling activation to coordinate the downstream transcriptional responses in the immediate wound vicinity.}, author = {Hörmayer, Lukas and Montesinos López, Juan C and Marhavá, Petra and Benková, Eva and Yoshida, Saiko and Friml, Jiří}, issn = {0027-8424}, journal = {Proceedings of the National Academy of Sciences}, number = {26}, publisher = {Proceedings of the National Academy of Sciences}, title = {{Wounding-induced changes in cellular pressure and localized auxin signalling spatially coordinate restorative divisions in roots}}, doi = {10.1073/pnas.2003346117}, volume = {117}, year = {2020}, } @article{8011, abstract = {Relaxation to a thermal state is the inevitable fate of nonequilibrium interacting quantum systems without special conservation laws. While thermalization in one-dimensional systems can often be suppressed by integrability mechanisms, in two spatial dimensions thermalization is expected to be far more effective due to the increased phase space. In this work we propose a general framework for escaping or delaying the emergence of the thermal state in two-dimensional arrays of Rydberg atoms via the mechanism of quantum scars, i.e., initial states that fail to thermalize. The suppression of thermalization is achieved in two complementary ways: by adding local perturbations or by adjusting the driving Rabi frequency according to the local connectivity of the lattice. We demonstrate that these mechanisms allow us to realize robust quantum scars in various two-dimensional lattices, including decorated lattices with nonconstant connectivity. In particular, we show that a small decrease of the Rabi frequency at the corners of the lattice is crucial for mitigating the strong boundary effects in two-dimensional systems. Our results identify synchronization as an important tool for future experiments on two-dimensional quantum scars.}, author = {Michailidis, Alexios and Turner, C. J. and Papić, Z. and Abanin, D. A. and Serbyn, Maksym}, issn = {2643-1564}, journal = {Physical Review Research}, number = {2}, publisher = {American Physical Society}, title = {{Stabilizing two-dimensional quantum scars by deformation and synchronization}}, doi = {10.1103/physrevresearch.2.022065}, volume = {2}, year = {2020}, } @inproceedings{8012, abstract = {Asynchronous programs are notoriously difficult to reason about because they spawn computation tasks which take effect asynchronously in a nondeterministic way. Devising inductive invariants for such programs requires understanding and stating complex relationships between an unbounded number of computation tasks in arbitrarily long executions. 
In this paper, we introduce inductive sequentialization, a new proof rule that sidesteps this complexity via a sequential reduction, a sequential program that captures every behavior of the original program up to reordering of coarse-grained commutative actions. A sequential reduction of a concurrent program is easy to reason about since it corresponds to a simple execution of the program in an idealized synchronous environment, where processes act in a fixed order and at the same speed. We have implemented and integrated our proof rule in the CIVL verifier, allowing us to provably derive fine-grained implementations of asynchronous programs. We have successfully applied our proof rule to a diverse set of message-passing protocols, including leader election protocols, two-phase commit, and Paxos.}, author = {Kragl, Bernhard and Enea, Constantin and Henzinger, Thomas A and Mutluergil, Suha Orhun and Qadeer, Shaz}, booktitle = {Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation}, isbn = {9781450376136}, location = {London, United Kingdom}, pages = {227--242}, publisher = {Association for Computing Machinery}, title = {{Inductive sequentialization of asynchronous programs}}, doi = {10.1145/3385412.3385980}, year = {2020}, } @phdthesis{8032, abstract = {Algorithms in computational 3-manifold topology typically take a triangulation as an input and return topological information about the underlying 3-manifold. However, extracting the desired information from a triangulation (e.g., evaluating an invariant) is often computationally very expensive. In recent years this complexity barrier has been successfully tackled in some cases by importing ideas from the theory of parameterized algorithms into the realm of 3-manifolds. Various computationally hard problems were shown to be efficiently solvable for input triangulations that are sufficiently “tree-like.” In this thesis we focus on the key combinatorial parameter in the above context: we consider the treewidth of a compact, orientable 3-manifold, i.e., the smallest treewidth of the dual graph of any triangulation thereof. By building on the work of Scharlemann–Thompson and Scharlemann–Schultens–Saito on generalized Heegaard splittings, and on the work of Jaco–Rubinstein on layered triangulations, we establish quantitative relations between the treewidth and classical topological invariants of a 3-manifold. In particular, among other results, we show that the treewidth of a closed, orientable, irreducible, non-Haken 3-manifold is always within a constant factor of its Heegaard genus.}, author = {Huszár, Kristóf}, isbn = {978-3-99078-006-0}, issn = {2663-337X}, pages = {xviii+120}, publisher = {IST Austria}, title = {{Combinatorial width parameters for 3-dimensional manifolds}}, doi = {10.15479/AT:ISTA:8032}, year = {2020}, } @article{8036, abstract = {When tiny soft ferromagnetic particles are placed along a liquid interface and exposed to a vertical magnetic field, the balance between capillary attraction and magnetic repulsion leads to self-organization into well-defined patterns. Here, we demonstrate experimentally that precessing magnetic fields induce metachronal waves on the periphery of these assemblies, similar to the ones observed in ciliates and some arthropods. The outermost layer of particles behaves like an array of cilia or legs whose sequential movement causes a net and controllable locomotion. 
This bioinspired many-particle swimming strategy is effective even at low Reynolds number, using only spatially uniform fields to generate the waves.}, author = {Collard, Ylona and Grosjean, Galien M and Vandewalle, Nicolas}, issn = {23993650}, journal = {Communications Physics}, publisher = {Springer Nature}, title = {{Magnetically powered metachronal waves induce locomotion in self-assemblies}}, doi = {10.1038/s42005-020-0380-9}, volume = {3}, year = {2020}, } @article{8037, abstract = {Genetic perturbations that affect bacterial resistance to antibiotics have been characterized genome-wide, but how do such perturbations interact with subsequent evolutionary adaptation to the drug? Here, we show that strong epistasis between resistance mutations and systematically identified genes can be exploited to control spontaneous resistance evolution. We evolved hundreds of Escherichia coli K-12 mutant populations in parallel, using a robotic platform that tightly controls population size and selection pressure. We find a global diminishing-returns epistasis pattern: strains that are initially more sensitive generally undergo larger resistance gains. However, some gene deletion strains deviate from this general trend and curtail the evolvability of resistance, including deletions of genes for membrane transport, LPS biosynthesis, and chaperones. Deletions of efflux pump genes force evolution on inferior mutational paths, not explored in the wild type, and some of these essentially block resistance evolution. This effect is due to strong negative epistasis with resistance mutations. The identified genes and cellular functions provide potential targets for development of adjuvants that may block spontaneous resistance evolution when combined with antibiotics.}, author = {Lukacisinova, Marta and Fernando, Booshini and Bollenbach, Mark Tobias}, issn = {20411723}, journal = {Nature Communications}, publisher = {Springer Nature}, title = {{Highly parallel lab evolution reveals that epistasis can curb the evolution of antibiotic resistance}}, doi = {10.1038/s41467-020-16932-z}, volume = {11}, year = {2020}, } @article{8038, abstract = {Microelectromechanical systems and integrated photonics provide the basis for many reliable and compact circuit elements in modern communication systems. Electro-opto-mechanical devices are currently one of the leading approaches to realize ultra-sensitive, low-loss transducers for an emerging quantum information technology. Here we present an on-chip microwave frequency converter based on a planar aluminum on silicon nitride platform that is compatible with slot-mode coupled photonic crystal cavities. We show efficient frequency conversion between two propagating microwave modes mediated by the radiation pressure interaction with a metalized dielectric nanobeam oscillator. We achieve bidirectional coherent conversion with a total device efficiency of up to ~60%, a dynamic range of 2 × 10^9 photons/s and an instantaneous bandwidth of up to 1.7 kHz. A high fidelity quantum state transfer would be possible if the drive dependent output noise of currently ~14 photons s^−1 Hz^−1 is further reduced. Such a silicon nitride based transducer is in situ reconfigurable and could be used for on-chip classical and quantum signal routing and filtering, both for microwave and hybrid microwave-optical applications.}, author = {Fink, Johannes M and Kalaee, M. and Norte, R. and Pitanti, A. 
and Painter, O.}, issn = {20589565}, journal = {Quantum Science and Technology}, number = {3}, publisher = {IOP Publishing}, title = {{Efficient microwave frequency conversion mediated by a photonics compatible silicon nitride nanobeam oscillator}}, doi = {10.1088/2058-9565/ab8dce}, volume = {5}, year = {2020}, } @article{8039, abstract = {In the present work, we report a solution-based strategy to produce crystallographically textured SnSe bulk nanomaterials and printed layers with optimized thermoelectric performance in the direction normal to the substrate. Our strategy is based on the formulation of a molecular precursor that can be continuously decomposed to produce a SnSe powder or printed into predefined patterns. The precursor formulation and decomposition conditions are optimized to produce pure phase 2D SnSe nanoplates. The printed layer and the bulk material obtained after hot press display a clear preferential orientation of the crystallographic domains, resulting in an ultralow thermal conductivity of 0.55 W m^−1 K^−1 in the direction normal to the substrate. Such textured nanomaterials present highly anisotropic properties with the best thermoelectric performance in plane, i.e., in the directions parallel to the substrate, which coincide with the crystallographic bc plane of SnSe. This is an unfortunate characteristic because thermoelectric devices are designed to create/harvest temperature gradients in the direction normal to the substrate. We further demonstrate that this limitation can be overcome with the introduction of small amounts of tellurium in the precursor. The presence of tellurium allows one to reduce the band gap and increase both the charge carrier concentration and the mobility, especially the cross plane, with a minimal decrease of the Seebeck coefficient. These effects translate into record out-of-plane ZT values at 800 K.}, author = {Zhang, Yu and Liu, Yu and Xing, Congcong and Zhang, Ting and Li, Mengyao and Pacios, Mercè and Yu, Xiaoting and Arbiol, Jordi and Llorca, Jordi and Cadavid, Doris and Ibáñez, Maria and Cabot, Andreu}, issn = {19448252}, journal = {ACS Applied Materials and Interfaces}, number = {24}, pages = {27104--27111}, publisher = {American Chemical Society}, title = {{Tin selenide molecular precursor for the solution processing of thermoelectric materials and devices}}, doi = {10.1021/acsami.0c04331}, volume = {12}, year = {2020}, } @article{8042, abstract = {We consider systems of N bosons in a box of volume one, interacting through a repulsive two-body potential of the form κN^{3β−1}V(N^β x). For all 0<β<1, and for sufficiently small coupling constant κ>0, we establish the validity of Bogolyubov theory, identifying the ground state energy and the low-lying excitation spectrum up to errors that vanish in the limit of large N.}, author = {Boccato, Chiara and Brennecke, Christian and Cenatiempo, Serena and Schlein, Benjamin}, issn = {14359855}, journal = {Journal of the European Mathematical Society}, number = {7}, pages = {2331--2403}, publisher = {European Mathematical Society}, title = {{The excitation spectrum of Bose gases interacting through singular potentials}}, doi = {10.4171/JEMS/966}, volume = {22}, year = {2020}, } @article{8043, abstract = {With decreasing Reynolds number, Re, turbulence in channel flow becomes spatio-temporally intermittent and self-organises into solitary stripes oblique to the mean flow direction. 
We report here the existence of localised nonlinear travelling wave solutions of the Navier–Stokes equations possessing this obliqueness property. Such solutions are identified numerically using edge tracking coupled with arclength continuation. All solutions emerge in saddle-node bifurcations at values of Re lower than the non-localised solutions. Relative periodic orbit solutions bifurcating from branches of travelling waves have also been computed. A complete parametric study is performed, including their stability, the investigation of their large-scale flow, and the robustness to changes of the numerical domain.}, author = {Paranjape, Chaitanya S and Duguet, Yohann and Hof, Björn}, issn = {14697645}, journal = {Journal of Fluid Mechanics}, publisher = {Cambridge University Press}, title = {{Oblique stripe solutions of channel flow}}, doi = {10.1017/jfm.2020.322}, volume = {897}, year = {2020}, } @article{8057, abstract = {Water-in-salt electrolytes based on highly concentrated bis(trifluoromethanesulfonyl)imide (TFSI) promise aqueous electrolytes with stabilities approaching 3 V. However, especially with an electrode approaching the cathodic (reductive) stability, cycling stability is insufficient. While stability critically relies on a solid electrolyte interphase (SEI), the mechanism behind the cathodic stability limit remains unclear. Here, we reveal two distinct reduction potentials for the chemical environments of ‘free’ and ‘bound’ water and that both contribute to SEI formation. Free water is reduced ~1 V above bound water in a hydrogen evolution reaction (HER) and is responsible for SEI formation via reactive intermediates of the HER; concurrent LiTFSI precipitation/dissolution establishes a dynamic interface. The free-water population emerges, therefore, as the handle to extend the cathodic limit of aqueous electrolytes and the battery cycling stability.}, author = {Bouchal, Roza and Li, Zhujie and Bongu, Chandra and Le Vot, Steven and Berthelot, Romain and Rotenberg, Benjamin and Favier, Frederic and Freunberger, Stefan Alexander and Salanne, Mathieu and Fontaine, Olivier}, issn = {0044-8249}, journal = {Angewandte Chemie}, number = {37}, pages = {16047--16051}, publisher = {Wiley}, title = {{Competitive salt precipitation/dissolution during free‐water reduction in water‐in‐salt electrolyte}}, doi = {10.1002/ange.202005378}, volume = {132}, year = {2020}, } @unpublished{8063, abstract = {We present a generative model of images that explicitly reasons over the set of objects they show. Our model learns a structured latent representation that separates objects from each other and from the background; unlike prior works, it explicitly represents the 2D position and depth of each object, as well as an embedding of its segmentation mask and appearance. The model can be trained from images alone in a purely unsupervised fashion without the need for object masks or depth information. Moreover, it always generates complete objects, even though a significant fraction of training images contain occlusions. 
Finally, we show that our model can infer decompositions of novel images into their constituent objects, including accurate prediction of depth ordering and segmentation of occluded parts.}, author = {Anciukevicius, Titas and Lampert, Christoph and Henderson, Paul M}, booktitle = {arXiv}, title = {{Object-centric image generation with factored depths, locations, and appearances}}, year = {2020}, } @misc{8067, abstract = {With the lithium-ion technology approaching its intrinsic limit with graphite-based anodes, lithium metal is recently receiving renewed interest from the battery community as a potential high capacity anode for next-generation rechargeable batteries. In this focus paper, we review the main advances in this field since the first attempts in the mid-1970s. Strategies for enabling reversible cycling and avoiding dendrite growth are thoroughly discussed, including specific applications in all-solid-state (polymeric and inorganic), Lithium-sulphur and Li-O2 (air) batteries. Particular attention is paid to reviewing recent developments in regard to prototype manufacturing and the current state-of-the-art of these battery technologies with respect to the 2030 targets of the EU Integrated Strategic Energy Technology Plan (SET-Plan) Action 7.}, author = {Varzi, Alberto and Thanner, Katharina and Scipioni, Roberto and Di Lecce, Daniele and Hassoun, Jusef and Dörfler, Susanne and Altheus, Holger and Kaskel, Stefan and Prehal, Christian and Freunberger, Stefan Alexander}, issn = {2664-1690}, keywords = {Battery, Lithium metal, Lithium-sulphur, Lithium-air, All-solid-state}, pages = {63}, publisher = {IST Austria}, title = {{Current status and future perspectives of Lithium metal batteries}}, doi = {10.15479/AT:ISTA:8067}, year = {2020}, } @article{8077, abstract = {Projection methods with a vanilla inertial extrapolation step for variational inequalities have been of interest to many authors recently due to the improved convergence speed contributed by the presence of the inertial extrapolation step. However, it is discovered that these projection methods with inertial steps lose the Fejér monotonicity of the iterates with respect to the solution, which is enjoyed by their corresponding non-inertial projection methods for variational inequalities. This lack of Fejér monotonicity means that projection methods with a vanilla inertial extrapolation step for variational inequalities do not always converge faster than their corresponding non-inertial projection methods. Also, it has recently been proved that projection methods with a vanilla inertial extrapolation step may provide convergence rates that are worse than the classical projected gradient methods for strongly convex functions. In this paper, we introduce projection methods with an alternated inertial extrapolation step for solving variational inequalities. We show that the sequence of iterates generated by our methods converges weakly to a solution of the variational inequality under some appropriate conditions. The Fejér monotonicity of the even subsequence is recovered in these methods and a linear rate of convergence is obtained. 
Numerical implementations of our methods, compared with some other inertial projection methods, show that our method is more efficient and outperforms some of these inertial projection methods.}, author = {Shehu, Yekini and Iyiola, Olaniyi S.}, issn = {0168-9274}, journal = {Applied Numerical Mathematics}, pages = {315--337}, publisher = {Elsevier}, title = {{Projection methods with alternating inertial steps for variational inequalities: Weak and linear convergence}}, doi = {10.1016/j.apnum.2020.06.009}, volume = {157}, year = {2020}, } @unpublished{8081, abstract = {Here, we employ micro- and nanosized cellulose particles, namely paper fines and cellulose nanocrystals, to induce hierarchical organization over a wide length scale. After processing them into carbonaceous materials, we demonstrate that these hierarchically organized materials outperform the best materials for supercapacitors operating with organic electrolytes reported in the literature in terms of specific energy/power (Ragone plot) while showing hardly any capacity fade over 4,000 cycles. The highly porous materials feature a specific surface area as high as 2500 m^2 g^−1 and exhibit pore sizes in the range of 0.5 to 200 nm as proven by scanning electron microscopy and N2 physisorption. The carbonaceous materials have been further investigated by X-ray photoelectron spectroscopy and Raman spectroscopy. Since paper fines are an underutilized side stream in any paper production process, they are a cheap and highly available feedstock to prepare carbonaceous materials with outstanding performance in electrochemical applications. }, author = {Hobisch, Mathias A. and Mourad, Eléonore and Fischer, Wolfgang J. and Prehal, Christian and Eyley, Samuel and Childress, Anthony and Zankel, Armin and Mautner, Andreas and Breitenbach, Stefan and Rao, Apparao M. and Thielemans, Wim and Freunberger, Stefan Alexander and Eckhart, Rene and Bauer, Wolfgang and Spirk, Stefan }, title = {{High specific capacitance supercapacitors from hierarchically organized all-cellulose composites}}, year = {2020}, } @article{8084, abstract = {The origin and functions of intermittent transitions among sleep stages, including brief awakenings and arousals, constitute a challenge to the current homeostatic framework for sleep regulation, which focuses on factors modulating sleep over large time scales. Here we propose that the complex micro-architecture characterizing sleep on scales of seconds and minutes results from intrinsic non-equilibrium critical dynamics. We investigate θ- and δ-wave dynamics in control rats and in rats where the sleep-promoting ventrolateral preoptic nucleus (VLPO) is lesioned (male Sprague-Dawley rats). We demonstrate that bursts in θ and δ cortical rhythms exhibit complex temporal organization, with long-range correlations and robust duality of power-law (θ-bursts, active phase) and exponential-like (δ-bursts, quiescent phase) duration distributions, features typical of non-equilibrium systems self-organizing at criticality. We show that such non-equilibrium behavior relates to anti-correlated coupling between θ- and δ-bursts, persists across a range of time scales, and is independent of the dominant physiologic state; indications of a basic principle in sleep regulation. Further, we find that VLPO lesions lead to a modulation of cortical dynamics resulting in altered dynamical parameters of θ- and δ-bursts and significant reduction in θ–δ coupling. 
Our empirical findings and model simulations demonstrate that θ–δ coupling is essential for the emerging non-equilibrium critical dynamics observed across the sleep–wake cycle, and indicate that VLPO neurons may have dual role for both sleep and arousal/brief wake activation. The uncovered critical behavior in sleep- and wake-related cortical rhythms indicates a mechanism essential for the micro-architecture of spontaneous sleep-stage and arousal transitions within a novel, non-homeostatic paradigm of sleep regulation.}, author = {Lombardi, Fabrizio and Gómez-Extremera, Manuel and Bernaola-Galván, Pedro and Vetrivelan, Ramalingam and Saper, Clifford B. and Scammell, Thomas E. and Ivanov, Plamen Ch.}, issn = {0270-6474}, journal = {Journal of Neuroscience}, number = {1}, pages = {171--190}, publisher = {Society for Neuroscience}, title = {{Critical dynamics and coupling in bursts of cortical rhythms indicate non-homeostatic mechanism for sleep-stage transitions and dual role of VLPO neurons in both sleep and wake}}, doi = {10.1523/jneurosci.1278-19.2019}, volume = {40}, year = {2020}, } @article{8091, abstract = {In the setting of the fractional quantum Hall effect we study the effects of strong, repulsive two-body interaction potentials of short range. We prove that Haldane’s pseudo-potential operators, including their pre-factors, emerge as mathematically rigorous limits of such interactions when the range of the potential tends to zero while its strength tends to infinity. In a common approach the interaction potential is expanded in angular momentum eigenstates in the lowest Landau level, which amounts to taking the pre-factors to be the moments of the potential. Such a procedure is not appropriate for very strong interactions, however, in particular not in the case of hard spheres. We derive the formulas valid in the short-range case, which involve the scattering lengths of the interaction potential in different angular momentum channels rather than its moments. Our results hold for bosons and fermions alike and generalize previous results in [6], which apply to bosons in the lowest angular momentum channel. Our main theorem asserts the convergence in a norm-resolvent sense of the Hamiltonian on the whole Hilbert space, after appropriate energy scalings, to Hamiltonians with contact interactions in the lowest Landau level.}, author = {Seiringer, Robert and Yngvason, Jakob}, issn = {15729613}, journal = {Journal of Statistical Physics}, pages = {448--464}, publisher = {Springer}, title = {{Emergence of Haldane pseudo-potentials in systems with short-range interactions}}, doi = {10.1007/s10955-020-02586-0}, volume = {181}, year = {2020}, } @inbook{8092, abstract = {Image translation refers to the task of mapping images from a visual domain to another. Given two unpaired collections of images, we aim to learn a mapping between the corpus-level style of each collection, while preserving semantic content shared across the two domains. We introduce xgan, a dual adversarial auto-encoder, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions. We exploit ideas from the domain adaptation literature and define a semantic consistency loss which encourages the learned embedding to preserve semantics shared across domains. We report promising qualitative results for the task of face-to-cartoon translation. 
The cartoon dataset we collected for this purpose, “CartoonSet”, is also publicly available as a new benchmark for semantic style transfer at https://google.github.io/cartoonset/index.html.}, author = {Royer, Amélie and Bousmalis, Konstantinos and Gouws, Stephan and Bertsch, Fred and Mosseri, Inbar and Cole, Forrester and Murphy, Kevin}, booktitle = {Domain Adaptation for Visual Understanding}, editor = {Singh, Richa and Vatsa, Mayank and Patel, Vishal M. and Ratha, Nalini}, isbn = {9783030306717}, pages = {33--49}, publisher = {Springer Nature}, title = {{XGAN: Unsupervised image-to-image translation for many-to-many mappings}}, doi = {10.1007/978-3-030-30671-7_3}, year = {2020}, } @article{8093, author = {Hippe, Andreas and Braun, Stephan Alexander and Oláh, Péter and Gerber, Peter Arne and Schorr, Anne and Seeliger, Stephan and Holtz, Stephanie and Jannasch, Katharina and Pivarcsi, Andor and Buhren, Bettina and Schrumpf, Holger and Kislat, Andreas and Bünemann, Erich and Steinhoff, Martin and Fischer, Jens and Lira, Sérgio A. and Boukamp, Petra and Hevezi, Peter and Stoecklein, Nikolas Hendrik and Hoffmann, Thomas and Alves, Frauke and Sleeman, Jonathan and Bauer, Thomas and Klufa, Jörg and Amberg, Nicole and Sibilia, Maria and Zlotnik, Albert and Müller-Homey, Anja and Homey, Bernhard}, issn = {15321827}, journal = {British Journal of Cancer}, pages = {942--954}, publisher = {Springer Nature}, title = {{EGFR/Ras-induced CCL20 production modulates the tumour microenvironment}}, doi = {10.1038/s41416-020-0943-2}, volume = {123}, year = {2020}, } @misc{8097, abstract = {Antibiotics that interfere with translation, when combined, interact in diverse and difficult-to-predict ways. Here, we explain these interactions by "translation bottlenecks": points in the translation cycle where antibiotics block ribosomal progression. To elucidate the underlying mechanisms of drug interactions between translation inhibitors, we generate translation bottlenecks genetically using inducible control of translation factors that regulate well-defined translation cycle steps. These perturbations accurately mimic antibiotic action and drug interactions, supporting that the interplay of different translation bottlenecks causes these interactions. We further show that growth laws, combined with drug uptake and binding kinetics, enable the direct prediction of a large fraction of observed interactions, yet fail to predict suppression. However, varying two translation bottlenecks simultaneously supports that dense traffic of ribosomes and competition for translation factors account for the previously unexplained suppression. These results highlight the importance of "continuous epistasis" in bacterial physiology.}, author = {Kavcic, Bor}, keywords = {Escherichia coli, antibiotic combinations, translation, growth laws, drug interactions, bacterial physiology, translation inhibitors}, publisher = {IST Austria}, title = {{Analysis scripts and research data for the paper "Mechanisms of drug interactions between translation-inhibiting antibiotics"}}, doi = {10.15479/AT:ISTA:8097}, year = {2020}, } @article{8099, abstract = {Sewall Wright developed FST for describing population differentiation and it has since been extended to many novel applications, including the detection of homomorphic sex chromosomes. However, there has been confusion regarding the expected estimate of FST for a fixed difference between the X‐ and Y‐chromosome when comparing males and females. 
Here, we attempt to resolve this confusion by contrasting two common FST estimators and explain why they yield different estimates when applied to the case of sex chromosomes. We show that this difference is true for many allele frequencies, but the situation characterized by fixed differences between the X‐ and Y‐chromosome is among the most extreme. To avoid additional confusion, we recommend that all authors using FST clearly state which estimator of FST their work uses.}, author = {Gammerdinger, William J and Toups, Melissa A and Vicoso, Beatriz}, issn = {1755-098X}, journal = {Molecular Ecology Resources}, number = {6}, pages = {1517--1525}, publisher = {Wiley}, title = {{Disagreement in FST estimators: A case study from sex chromosomes}}, doi = {10.1111/1755-0998.13210}, volume = {20}, year = {2020}, } @article{8101, abstract = {By rigorously accounting for mesoscale spatial correlations in donor/acceptor surface properties, we develop a scale-spanning model for same-material tribocharging. We find that mesoscale correlations affect not only the magnitude of charge transfer but also the fluctuations—suppressing otherwise overwhelming charge-transfer variability that is not observed experimentally. We furthermore propose a generic theoretical mechanism by which the mesoscale features might emerge, which is qualitatively consistent with other proposals in the literature.}, author = {Grosjean, Galien M and Wald, Sebastian and Sobarzo Ponce, Juan Carlos A and Waitukaitis, Scott R}, journal = {Physical Review Materials}, keywords = {electric charge, tribocharging, soft matter, granular materials, polymers}, number = {8}, publisher = {American Physical Society}, title = {{Quantitatively consistent scale-spanning model for same-material tribocharging}}, doi = {10.1103/PhysRevMaterials.4.082602}, volume = {4}, year = {2020}, } @article{8105, abstract = {Physical and biological systems often exhibit intermittent dynamics with bursts or avalanches (active states) characterized by power-law size and duration distributions. These emergent features are typical of systems at the critical point of continuous phase transitions, and have led to the hypothesis that such systems may self-organize at criticality, i.e. without any fine tuning of parameters. Since the introduction of the Bak-Tang-Wiesenfeld (BTW) model, the paradigm of self-organized criticality (SOC) has been very fruitful for the analysis of emergent collective behaviors in a number of systems, including the brain. Although considerable effort has been devoted in identifying and modeling scaling features of burst and avalanche statistics, dynamical aspects related to the temporal organization of bursts remain often poorly understood or controversial. Of crucial importance to understand the mechanisms responsible for emergent behaviors is the relationship between active and quiet periods, and the nature of the correlations. Here we investigate the dynamics of active (θ-bursts) and quiet states (δ-bursts) in brain activity during the sleep-wake cycle. We show the duality of power-law (θ, active phase) and exponential-like (δ, quiescent phase) duration distributions, typical of SOC, jointly emerge with power-law temporal correlations and anti-correlated coupling between active and quiet states. 
Importantly, we demonstrate that such temporal organization shares important similarities with earthquake dynamics, and propose that specific power-law correlations and coupling between active and quiet states are distinctive characteristics of a class of systems with self-organization at criticality.}, author = {Lombardi, Fabrizio and Wang, Jilin W.J.L. and Zhang, Xiyun and Ivanov, Plamen Ch}, issn = {2100-014X}, journal = {EPJ Web of Conferences}, publisher = {EDP Sciences}, title = {{Power-law correlations and coupling of active and quiet states underlie a class of complex systems with self-organization at criticality}}, doi = {10.1051/epjconf/202023000005}, volume = {230}, year = {2020}, } @article{8112, author = {Barton, Nicholas H}, issn = {1471-2970}, journal = {Philosophical Transactions of the Royal Society. Series B: Biological Sciences}, number = {1806}, publisher = {The Royal Society}, title = {{On the completion of speciation}}, doi = {10.1098/rstb.2019.0530}, volume = {375}, year = {2020}, } @unpublished{8125, abstract = {Context, such as behavioral state, is known to modulate memory formation and retrieval, but is usually ignored in associative memory models. Here, we propose several types of contextual modulation for associative memory networks that greatly increase their performance. In these networks, context inactivates specific neurons and connections, which modulates the effective connectivity of the network. Memories are stored only by the active components, thereby reducing interference from memories acquired in other contexts. Such networks exhibit several beneficial characteristics, including enhanced memory capacity, high robustness to noise, increased robustness to memory overloading, and better memory retention during continual learning. Furthermore, memories can be biased to have different relative strengths, or even gated on or off, according to contextual cues, providing a candidate model for cognitive control of memory and efficient memory search. An external context-encoding network can dynamically switch the memory network to a desired state, which we liken to experimentally observed contextual signals in prefrontal cortex and hippocampus. Overall, our work illustrates the benefits of organizing memory around context, and provides an important link between behavioral studies of memory and mechanistic details of neural circuits. SIGNIFICANCE: Memory is context dependent — both encoding and recall vary in effectiveness and speed depending on factors like location and brain state during a task. We apply this idea to a simple computational model of associative memory through contextual gating of neurons and synaptic connections. Intriguingly, this results in several advantages, including vastly enhanced memory capacity, better robustness, and flexible memory gating. Our model helps to explain (i) how gating and inhibition contribute to memory processes, (ii) how memory access dynamically changes over time, and (iii) how context representations, such as those observed in hippocampus and prefrontal cortex, may interact with and control memory processes.}, author = {Podlaski, William F. and Agnes, Everton J. 
and Vogels, Tim P}, booktitle = {bioRxiv}, pages = {30}, publisher = {Cold Spring Harbor Laboratory}, title = {{Context-modular memory networks support high-capacity, flexible, and robust associative memories}}, year = {2020}, } @article{8126, abstract = {Cortical areas comprise multiple types of inhibitory interneurons with stereotypical connectivity motifs, but their combined effect on postsynaptic dynamics has been largely unexplored. Here, we analyse the response of a single postsynaptic model neuron receiving tuned excitatory connections alongside inhibition from two plastic populations. Depending on the inhibitory plasticity rule, synapses remain unspecific (flat), become anti-correlated to, or mirror excitatory synapses. Crucially, the neuron’s receptive field, i.e., its response to presynaptic stimuli, depends on the modulatory state of inhibition. When both inhibitory populations are active, inhibition balances excitation, resulting in uncorrelated postsynaptic responses regardless of the inhibitory tuning profiles. Modulating the activity of a given inhibitory population produces strong correlations to either preferred or non-preferred inputs, in line with recent experimental findings showing dramatic context-dependent changes of neurons’ receptive fields. We thus confirm that a neuron’s receptive field doesn’t follow directly from the weight profiles of its presynaptic afferents.}, author = {Agnes, Everton J. and Luppi, Andrea I. and Vogels, Tim P}, issn = {1529-2401}, journal = {The Journal of Neuroscience}, number = {50}, pages = {9634--9649}, publisher = {Society for Neuroscience}, title = {{Complementary inhibitory weight profiles emerge from plasticity and allow attentional switching of receptive fields}}, doi = {10.1523/JNEUROSCI.0276-20.2020}, volume = {40}, year = {2020}, } @article{8127, abstract = {Mechanistic modeling in neuroscience aims to explain observed phenomena in terms of underlying causes. However, determining which model parameters agree with complex and stochastic neural data presents a significant challenge. We address this challenge with a machine learning tool which uses deep neural density estimators—trained using model simulations—to carry out Bayesian inference and retrieve the full space of parameters compatible with raw data or selected data features. Our method is scalable in parameters and data features and can rapidly analyze new data after initial training. We demonstrate the power and flexibility of our approach on receptive fields, ion channels, and Hodgkin–Huxley models. We also characterize the space of circuit configurations giving rise to rhythmic activity in the crustacean stomatogastric ganglion, and use these results to derive hypotheses for underlying compensation mechanisms. Our approach will help close the gap between data-driven and theory-driven models of neural dynamics.}, author = {Gonçalves, Pedro J. and Lueckmann, Jan-Matthis and Deistler, Michael and Nonnenmacher, Marcel and Öcal, Kaan and Bassetto, Giacomo and Chintaluri, Chaitanya and Podlaski, William F. and Haddad, Sara A. and Vogels, Tim P and Greenberg, David S. 
and Macke, Jakob H.}, issn = {2050-084X}, journal = {eLife}, publisher = {eLife Sciences Publications}, title = {{Training deep neural density estimators to identify mechanistic models of neural dynamics}}, doi = {10.7554/eLife.56261}, volume = {9}, year = {2020}, } @article{8130, abstract = {We study the dynamics of a system of N interacting bosons in a disc-shaped trap, which is realised by an external potential that confines the bosons in one spatial dimension to an interval of length of order ε. The interaction is non-negative and scaled in such a way that its scattering length is of order ε/N, while its range is proportional to (ε/N)^β with scaling parameter β∈(0,1]. We consider the simultaneous limit (N,ε)→(∞,0) and assume that the system initially exhibits Bose–Einstein condensation. We prove that condensation is preserved by the N-body dynamics, where the time-evolved condensate wave function is the solution of a two-dimensional non-linear equation. The strength of the non-linearity depends on the scaling parameter β. For β∈(0,1), we obtain a cubic defocusing non-linear Schrödinger equation, while the choice β=1 yields a Gross–Pitaevskii equation featuring the scattering length of the interaction. In both cases, the coupling parameter depends on the confining potential.}, author = {Bossmann, Lea}, issn = {0003-9527}, journal = {Archive for Rational Mechanics and Analysis}, number = {11}, pages = {541--606}, publisher = {Springer Nature}, title = {{Derivation of the 2d Gross–Pitaevskii equation for strongly confined 3d Bosons}}, doi = {10.1007/s00205-020-01548-w}, volume = {238}, year = {2020}, } @article{8131, abstract = {The possibility to generate construct valid animal models enabled the development and testing of therapeutic strategies targeting the core features of autism spectrum disorders (ASDs). At the same time, these studies highlighted the necessity of identifying sensitive developmental time windows for successful therapeutic interventions. Animal and human studies also uncovered the possibility to stratify the variety of ASDs in molecularly distinct subgroups, potentially facilitating effective treatment design. Here, we focus on the molecular pathways emerging as commonly affected by mutations in diverse ASD-risk genes, on their role during critical windows of brain development and the potential treatments targeting these biological processes.}, author = {Basilico, Bernadette and Morandell, Jasmin and Novarino, Gaia}, issn = {18790380}, journal = {Current Opinion in Genetics and Development}, number = {12}, pages = {126--137}, publisher = {Elsevier}, title = {{Molecular mechanisms for targeted ASD treatments}}, doi = {10.1016/j.gde.2020.06.004}, volume = {65}, year = {2020}, } @article{8132, abstract = {The WAVE regulatory complex (WRC) is crucial for assembly of the peripheral branched actin network constituting one of the main drivers of eukaryotic cell migration. Here, we uncover an essential role of the hematopoietic-specific WRC component HEM1 for immune cell development. Germline-encoded HEM1 deficiency underlies an inborn error of immunity with systemic autoimmunity, at the cellular level marked by WRC destabilization, reduced filamentous actin, and failure to assemble lamellipodia. Hem1⁻/⁻ mice display systemic autoimmunity, phenocopying the human disease. In the absence of Hem1, B cells become deprived of extracellular stimuli necessary to maintain the strength of B cell receptor signaling at a level permissive for survival of non-autoreactive B cells.
This shifts the balance of B cell fate choices toward autoreactive B cells and thus autoimmunity.}, author = {Salzer, Elisabeth and Zoghi, Samaneh and Kiss, Máté G. and Kage, Frieda and Rashkova, Christina and Stahnke, Stephanie and Haimel, Matthias and Platzer, René and Caldera, Michael and Ardy, Rico Chandra and Hoeger, Birgit and Block, Jana and Medgyesi, David and Sin, Celine and Shahkarami, Sepideh and Kain, Renate and Ziaee, Vahid and Hammerl, Peter and Bock, Christoph and Menche, Jörg and Dupré, Loïc and Huppa, Johannes B. and Sixt, Michael K and Lomakin, Alexis and Rottner, Klemens and Binder, Christoph J. and Stradal, Theresia E.B. and Rezaei, Nima and Boztug, Kaan}, issn = {24709468}, journal = {Science Immunology}, number = {49}, publisher = {AAAS}, title = {{The cytoskeletal regulator HEM1 governs B cell development and prevents autoimmunity}}, doi = {10.1126/sciimmunol.abc3979}, volume = {5}, year = {2020}, } @article{8133, abstract = {The molecular factors which control circulating levels of inflammatory proteins are not well understood. Furthermore, association studies between molecular probes and human traits are often performed by linear model-based methods which may fail to account for complex structure and interrelationships within molecular datasets. In this study, we perform genome- and epigenome-wide association studies (GWAS/EWAS) on the levels of 70 plasma-derived inflammatory protein biomarkers in healthy older adults (Lothian Birth Cohort 1936; n = 876; Olink® inflammation panel). We employ a Bayesian framework (BayesR+) which can account for issues pertaining to data structure and unknown confounding variables (with sensitivity analyses using ordinary least squares- (OLS) and mixed model-based approaches). We identified 13 SNPs associated with 13 proteins (n = 1 SNP each) concordant across OLS and Bayesian methods. We identified 3 CpG sites spread across 3 proteins (n = 1 CpG each) that were concordant across OLS, mixed-model and Bayesian analyses. Tagged genetic variants accounted for up to 45% of variance in protein levels (for MCP2, 36% of variance alone attributable to 1 polymorphism). Methylation data accounted for up to 46% of variation in protein levels (for CXCL10). Up to 66% of variation in protein levels (for VEGFA) was explained using genetic and epigenetic data combined. We demonstrated putative causal relationships between CD6 and IL18R1 with inflammatory bowel disease and between IL12B and Crohn’s disease. Our data may aid understanding of the molecular regulation of the circulating inflammatory proteome as well as causal relationships between inflammatory mediators and disease.}, author = {Hillary, Robert F. and Trejo-Banos, Daniel and Kousathanas, Athanasios and Mccartney, Daniel L. and Harris, Sarah E. and Stevenson, Anna J. and Patxot, Marion and Ojavee, Sven Erik and Zhang, Qian and Liewald, David C. and Ritchie, Craig W. and Evans, Kathryn L. and Tucker-Drob, Elliot M. and Wray, Naomi R. and Mcrae, Allan F. and Visscher, Peter M. and Deary, Ian J. and Robinson, Matthew Richard and Marioni, Riccardo E.}, issn = {1756994X}, journal = {Genome Medicine}, number = {1}, publisher = {Springer Nature}, title = {{Multi-method genome- and epigenome-wide studies of inflammatory protein levels in healthy older adults}}, doi = {10.1186/s13073-020-00754-1}, volume = {12}, year = {2020}, } @article{8134, abstract = {We prove an upper bound on the free energy of a two-dimensional homogeneous Bose gas in the thermodynamic limit.
We show that for a²ρ ≪ 1 and βρ ≳ 1, the free energy per unit volume differs from that of the non-interacting system by at most 4πρ²|ln(a²ρ)|⁻¹(2 − [1 − βc/β]₊²) to leading order, where a is the scattering length of the two-body interaction potential, ρ is the density, β is the inverse temperature, and βc is the inverse Berezinskii–Kosterlitz–Thouless critical temperature for superfluidity. In combination with the corresponding matching lower bound proved by Deuchert et al. [Forum Math. Sigma 8, e20 (2020)], this shows equality in the asymptotic expansion.}, author = {Mayer, Simon and Seiringer, Robert}, issn = {00222488}, journal = {Journal of Mathematical Physics}, number = {6}, publisher = {AIP}, title = {{The free energy of the two-dimensional dilute Bose gas. II. Upper bound}}, doi = {10.1063/5.0005950}, volume = {61}, year = {2020}, } @inproceedings{8135, abstract = {Discrete Morse theory has recently led to new developments in the theory of random geometric complexes. This article surveys the methods and results obtained with this new approach, and discusses some of its shortcomings. It uses simulations to illustrate the results and to form conjectures, getting numerical estimates for combinatorial, topological, and geometric properties of weighted and unweighted Delaunay mosaics, their dual Voronoi tessellations, and the Alpha and Wrap complexes contained in the mosaics.}, author = {Edelsbrunner, Herbert and Nikitenko, Anton and Ölsböck, Katharina and Synak, Peter}, booktitle = {Topological Data Analysis}, isbn = {9783030434076}, issn = {21978549}, pages = {181--218}, publisher = {Springer Nature}, title = {{Radius functions on Poisson–Delaunay mosaics and related complexes experimentally}}, doi = {10.1007/978-3-030-43408-3_8}, volume = {15}, year = {2020}, } @article{8138, abstract = {Directional transport of the phytohormone auxin is a versatile, plant-specific mechanism regulating many aspects of plant development. The recently identified plant hormones, strigolactones (SLs), are implicated in many plant traits; among others, they modify the phenotypic output of PIN-FORMED (PIN) auxin transporters for fine-tuning of growth and developmental responses. Here, we show in pea and Arabidopsis that SLs target processes dependent on the canalization of auxin flow, which involves auxin feedback on PIN subcellular distribution. D14 receptor- and MAX2 F-box-mediated SL signaling inhibits the formation of auxin-conducting channels after wounding or from artificial auxin sources, during vasculature de novo formation and regeneration. At the cellular level, SLs interfere with auxin effects on PIN polar targeting, constitutive PIN trafficking as well as clathrin-mediated endocytosis.
Our results identify a non-transcriptional mechanism of SL action, uncoupling auxin feedback on PIN polarity and trafficking, thereby regulating vascular tissue formation and regeneration.}, author = {Zhang, J and Mazur, E and Balla, J and Gallei, Michelle C and Kalousek, P and Medveďová, Z and Li, Y and Wang, Y and Prat, Tomas and Vasileva, Mina K and Reinöhl, V and Procházka, S and Halouzka, R and Tarkowski, P and Luschnig, C and Brewer, PB and Friml, Jiří}, issn = {2041-1723}, journal = {Nature Communications}, number = {1}, pages = {3508}, publisher = {Springer Nature}, title = {{Strigolactones inhibit auxin feedback on PIN-dependent auxin transport canalization}}, doi = {10.1038/s41467-020-17252-y}, volume = {11}, year = {2020}, } @article{8139, abstract = {Clathrin-mediated endocytosis (CME) is a crucial cellular process implicated in many aspects of plant growth, development, intra- and inter-cellular signaling, nutrient uptake and pathogen defense. Despite these significant roles, little is known about the precise molecular details of how it functions in planta. In order to facilitate the direct quantitative study of plant CME, here we review current routinely used methods and present refined, standardized quantitative imaging protocols which allow the detailed characterization of CME at multiple scales in plant tissues. These include: (i) an efficient electron microscopy protocol for the imaging of Arabidopsis CME vesicles in situ, thus providing a method for the detailed characterization of the ultra-structure of clathrin-coated vesicles; (ii) a detailed protocol and analysis for quantitative live-cell fluorescence microscopy to precisely examine the temporal interplay of endocytosis components during single CME events; (iii) a semi-automated analysis to allow the quantitative characterization of global internalization of cargos in whole plant tissues; and (iv) an overview and validation of useful genetic and pharmacological tools to interrogate the molecular mechanisms and function of CME in intact plant samples.}, author = {Johnson, Alexander J and Gnyliukh, Nataliia and Kaufmann, Walter and Narasimhan, Madhumitha and Vert, G and Bednarek, SY and Friml, Jiří}, issn = {0021-9533}, journal = {Journal of Cell Science}, number = {15}, publisher = {The Company of Biologists}, title = {{Experimental toolbox for quantitative evaluation of clathrin-mediated endocytosis in the plant model Arabidopsis}}, doi = {10.1242/jcs.248062}, volume = {133}, year = {2020}, } @article{8142, abstract = {Cell production and differentiation for the acquisition of specific functions are key features of living systems. The dynamic network of cellular microtubules provides the necessary platform to accommodate processes associated with the transition of cells through the individual phases of cytogenesis. Here, we show that the plant hormone cytokinin fine‐tunes the activity of the microtubular cytoskeleton during cell differentiation and counteracts microtubular rearrangements driven by the hormone auxin. The endogenous upward gradient of cytokinin activity along the longitudinal growth axis in Arabidopsis thaliana roots correlates with robust rearrangements of the microtubule cytoskeleton in epidermal cells progressing from the proliferative to the differentiation stage. 
Controlled increases in cytokinin activity result in premature re‐organization of the microtubule network from transversal to an oblique disposition in cells prior to their differentiation, whereas attenuated hormone perception delays cytoskeleton conversion into a configuration typical for differentiated cells. Intriguingly, cytokinin can interfere with microtubules also in animal cells, such as leukocytes, suggesting that a cytokinin‐sensitive control pathway for the microtubular cytoskeleton may be at least partially conserved between plant and animal cells.}, author = {Montesinos López, Juan C and Abuzeineh, A and Kopf, Aglaja and Juanes Garcia, Alba and Ötvös, Krisztina and Petrášek, J and Sixt, Michael K and Benková, Eva}, issn = {0261-4189}, journal = {The Embo Journal}, number = {17}, publisher = {Embo Press}, title = {{Phytohormone cytokinin guides microtubule dynamics during cell progression from proliferative to differentiated stage}}, doi = {10.15252/embj.2019104238}, volume = {39}, year = {2020}, } @techreport{8151, abstract = {The main idea behind the Core Project is to teach first-year students at IST scientific communication skills and let them practice by presenting their research within an interdisciplinary environment. Over the course of the first semester, students participated in seminars, where they shared their results with colleagues from other fields and took part in discussions on relevant subjects. The main focus during these sessions was on delivering the information in a simplified and comprehensible way, going into the very basics of a subject if necessary. At the end, the students were asked to present their research in written form to exercise their writing skills. The reports were gathered in this document. All of them were reviewed by the teaching assistants, and write-ups illustrating unique stylistic features and, in general, an outstanding level of writing skills were honorably mentioned in the section "Selected Reports".}, author = {Maslov, Mikhail and Kondrashov, Fyodor and Artner, Christina and Hennessey-Wesen, Mike and Kavcic, Bor and Machnik, Nick N and Satapathy, Roshan K and Tomanek, Isabella}, pages = {425}, publisher = {IST Austria}, title = {{Core Project Proceedings}}, year = {2020}, } @phdthesis{8155, abstract = {In the thesis we focus on the interplay of the biophysics and evolution of gene regulation. We start by addressing how the type of prokaryotic gene regulation – activation and repression – affects spurious binding to DNA, also known as transcriptional crosstalk. We propose that regulatory interference caused by excess regulatory proteins in the dense cellular medium – global crosstalk – could be a factor in determining which type of gene regulatory network is evolutionarily preferred. Next, we use a normative approach in eukaryotic gene regulation to describe minimal non-equilibrium enhancer models that optimize so-called regulatory phenotypes. We find a class of models that differ from standard thermodynamic equilibrium models by a single parameter that notably increases the regulatory performance. The next chapter addresses the question of genotype-phenotype-fitness maps of higher dimensional phenotypes. We show that our biophysically realistic approach allows us to understand how the mechanisms of promoter function constrain genotype-phenotype maps, and how they affect the evolutionary trajectories of promoters.
In the last chapter we ask whether the intrinsic instability of gene duplication and amplification provides a generic alternative to canonical gene regulation. Using mathematical modeling, we show that amplifications can tune gene expression in many environments, including those where transcription factor-based schemes are hard to evolve or maintain.}, author = {Grah, Rok}, issn = {2663-337X}, pages = {310}, publisher = {IST Austria}, title = {{Gene regulation across scales – how biophysical constraints shape evolution}}, doi = {10.15479/AT:ISTA:8155}, year = {2020}, } @phdthesis{8156, abstract = {We present solutions to several problems originating from geometry and discrete mathematics: existence of equipartitions, maps without Tverberg multiple points, and inscribing quadrilaterals. Equivariant obstruction theory is the natural topological approach to these types of questions. However, for the specific problems we consider it had yielded only partial or no results. We get our results by complementing equivariant obstruction theory with other techniques from topology and geometry.}, author = {Avvakumov, Sergey}, pages = {119}, publisher = {IST Austria}, title = {{Topological methods in geometry and discrete mathematics}}, doi = {10.15479/AT:ISTA:8156}, year = {2020}, } @article{8162, abstract = {In mammalian genomes, a subset of genes is regulated by genomic imprinting, resulting in silencing of one parental allele. Imprinting is essential for cerebral cortex development, but its prevalence and functional impact in individual cells are unclear. Here, we determined allelic expression in cortical cell types and established a quantitative platform to interrogate imprinting in single cells. We created cells with uniparental chromosome disomy (UPD) containing two copies of either the maternal or the paternal chromosome; hence, imprinted genes will be 2-fold overexpressed or not expressed. By genetic labeling of UPD, we determined cellular phenotypes and transcriptional responses to deregulated imprinted gene expression at unprecedented single-cell resolution. We discovered an unexpected degree of cell-type specificity and a novel function of imprinting in the regulation of cortical astrocyte survival. More generally, our results suggest functional relevance of imprinted gene expression in the glial astrocyte lineage and thus for generating cortical cell-type diversity.}, author = {Laukoter, Susanne and Pauler, Florian and Beattie, Robert J and Amberg, Nicole and Hansen, Andi H and Streicher, Carmen and Penz, Thomas and Bock, Christoph and Hippenmeyer, Simon}, issn = {0896-6273}, journal = {Neuron}, number = {6}, pages = {1160--1179.e9}, publisher = {Elsevier}, title = {{Cell-type specificity of genomic imprinting in cerebral cortex}}, doi = {10.1016/j.neuron.2020.06.031}, volume = {107}, year = {2020}, } @article{8163, abstract = {Fejes Tóth [3] studied approximations of smooth surfaces in three-space by piecewise flat triangular meshes with a given number of vertices on the surface that are optimal with respect to Hausdorff distance. He proves that this Hausdorff distance decreases in inverse proportion to the number of vertices of the approximating mesh if the surface is convex. He also claims that this Hausdorff distance is inversely proportional to the square of the number of vertices for a specific non-convex surface, namely a one-sheeted hyperboloid of revolution bounded by two congruent circles.
We refute this claim, and show that the asymptotic behavior of the Hausdorff distance is linear, that is, the same as for convex surfaces.}, author = {Vegter, Gert and Wintraecken, Mathijs}, issn = {1588-2896}, journal = {Studia Scientiarum Mathematicarum Hungarica}, number = {2}, pages = {193--199}, publisher = {AKJournals}, title = {{Refutation of a claim made by Fejes Tóth on the accuracy of surface meshes}}, doi = {10.1556/012.2020.57.2.1454}, volume = {57}, year = {2020}, } @article{8167, abstract = {The evolution of strong reproductive isolation (RI) is fundamental to the origins and maintenance of biological diversity, especially in situations where geographical distributions of taxa broadly overlap. But what is the history behind strong barriers currently acting in sympatry? Using whole-genome sequencing and single nucleotide polymorphism genotyping, we inferred (i) the evolutionary relationships, (ii) the strength of RI, and (iii) the demographic history of divergence between two broadly sympatric taxa of intertidal snail. Despite being cryptic, based on external morphology, Littorina arcana and Littorina saxatilis differ in their mode of female reproduction (egg-laying versus brooding), which may generate a strong post-zygotic barrier. We show that egg-laying and brooding snails are closely related, but genetically distinct. Genotyping of 3092 snails from three locations failed to recover any recent hybrid or backcrossed individuals, confirming that RI is strong. There was, however, evidence for a very low level of asymmetrical introgression, suggesting that isolation remains incomplete. The presence of strong, asymmetrical RI was further supported by demographic analysis of these populations. Although the taxa are currently broadly sympatric, demographic modelling suggests that they initially diverged during a short period of geographical separation involving very low gene flow. Our study suggests that some geographical separation may kick-start the evolution of strong RI, facilitating subsequent coexistence of taxa in sympatry. The strength of RI needed to achieve sympatry and the subsequent effect of sympatry on RI remain open questions.}, author = {Stankowski, Sean and Westram, Anja M and Zagrodzka, Zuzanna B. and Eyres, Isobel and Broquet, Thomas and Johannesson, Kerstin and Butlin, Roger K.}, issn = {1471-2970}, journal = {Philosophical Transactions of the Royal Society. Series B: Biological Sciences}, number = {1806}, publisher = {The Royal Society}, title = {{The evolution of strong reproductive isolation between sympatric intertidal snails}}, doi = {10.1098/rstb.2019.0545}, volume = {375}, year = {2020}, } @article{8168, abstract = {Speciation, that is, the evolution of reproductive barriers eventually leading to complete isolation, is a crucial process generating biodiversity. Recent work has contributed much to our understanding of how reproductive barriers begin to evolve, and how they are maintained in the face of gene flow. However, little is known about the transition from partial to strong reproductive isolation (RI) and the completion of speciation. We argue that the evolution of strong RI is likely to involve different processes, or new interactions among processes, compared with the evolution of the first reproductive barriers. The transition to strong RI may be brought about by changing external conditions, for example, following secondary contact.
However, the increasing levels of RI themselves create opportunities for new barriers to evolve, and for interaction or coupling among barriers. These changing processes may depend on genomic architecture and leave detectable signals in the genome. We outline outstanding questions and suggest that more theoretical and empirical work, considering both patterns and processes associated with strong RI, is needed to understand how speciation is completed.}, author = {Kulmuni, Jonna and Butlin, Roger K. and Lucek, Kay and Savolainen, Vincent and Westram, Anja M}, issn = {1471-2970}, journal = {Philosophical Transactions of the Royal Society. Series B: Biological sciences}, number = {1806}, publisher = {The Royal Society}, title = {{Towards the completion of speciation: The evolution of reproductive isolation beyond the first barriers}}, doi = {10.1098/rstb.2019.0528}, volume = {375}, year = {2020}, } @article{8169, abstract = {Many recent studies have addressed the mechanisms operating during the early stages of speciation, but surprisingly few studies have tested theoretical predictions on the evolution of strong reproductive isolation (RI). To help address this gap, we first undertook a quantitative review of the hybrid zone literature for flowering plants in relation to reproductive barriers. Then, using Populus as an exemplary model group, we analysed genome-wide variation for phylogenetic tree topologies in both early- and late-stage speciation taxa to determine how these patterns may be related to the genomic architecture of RI. Our plant literature survey revealed variation in barrier complexity and an association between barrier number and introgressive gene flow. Focusing on Populus, our genome-wide analysis of tree topologies in speciating poplar taxa points to unusually complex genomic architectures of RI, consistent with earlier genome-wide association studies. These architectures appear to facilitate the ‘escape’ of introgressed genome segments from polygenic barriers even with strong RI, thus affecting their relationships with recombination rates. Placed within the context of the broader literature, our data illustrate how phylogenomic approaches hold great promise for addressing the evolution and temporary breakdown of RI during late stages of speciation.}, author = {Shang, Huiying and Hess, Jaqueline and Pickup, Melinda and Field, David and Ingvarsson, Pär K. and Liu, Jianquan and Lexer, Christian}, issn = {14712970}, journal = {Philosophical Transactions of the Royal Society. Series B: Biological Sciences}, number = {1806}, publisher = {The Royal Society}, title = {{Evolution of strong reproductive isolation in plants: Broad-scale patterns and lessons from a perennial model group}}, doi = {10.1098/rstb.2019.0544}, volume = {375}, year = {2020}, } @article{8170, abstract = {Alignment of OCS, CS2, and I2 molecules embedded in helium nanodroplets is measured as a function of time following rotational excitation by a nonresonant, comparatively weak ps laser pulse. The distinct peaks in the power spectra, obtained by Fourier analysis, are used to determine the rotational, B, and centrifugal distortion, D, constants. For OCS, B and D match the values known from IR spectroscopy. For CS2 and I2, they are the first experimental results reported. The alignment dynamics calculated from the gas-phase rotational Schrödinger equation, using the experimental in-droplet B and D values, agree in detail with the measurement for all three molecules.
The rotational spectroscopy technique for molecules in helium droplets introduced here should apply to a range of molecules and complexes.}, author = {Chatterley, Adam S. and Christiansen, Lars and Schouder, Constant A. and Jørgensen, Anders V. and Shepperson, Benjamin and Cherepanov, Igor and Bighin, Giacomo and Zillich, Robert E. and Lemeshko, Mikhail and Stapelfeldt, Henrik}, issn = {10797114}, journal = {Physical Review Letters}, number = {1}, publisher = {American Physical Society}, title = {{Rotational coherence spectroscopy of molecules in helium nanodroplets: Reconciling the time and the frequency domains}}, doi = {10.1103/PhysRevLett.125.013001}, volume = {125}, year = {2020}, } @inbook{8173, abstract = {Understanding how the activity of membrane receptors and cellular signaling pathways shapes cell behavior is of fundamental interest in basic and applied research. Reengineering receptors to react to light instead of their cognate ligands allows for generating defined signaling inputs with high spatial and temporal precision and facilitates the dissection of complex signaling networks. Here, we describe fundamental considerations in the design of light-regulated receptor tyrosine kinases (Opto-RTKs) and appropriate control experiments. We also introduce methods for transient receptor expression in HEK293 cells, quantitative assessment of signaling activity in reporter gene assays, semiquantitative assessment of (in)activation time courses through Western blot (WB) analysis, and easy-to-implement light stimulation hardware.}, author = {Kainrath, Stephanie and Janovjak, Harald L}, booktitle = {Photoswitching Proteins}, editor = {Niopek, Dominik}, issn = {19406029}, pages = {233--246}, publisher = {Springer Nature}, title = {{Design and application of light-regulated receptor tyrosine kinases}}, doi = {10.1007/978-1-0716-0755-8_16}, volume = {2173}, year = {2020}, } @misc{8181, author = {Hauschild, Robert}, publisher = {IST Austria}, title = {{Amplified centrosomes in dendritic cells promote immune cell effector functions}}, doi = {10.15479/AT:ISTA:8181}, year = {2020}, } @inproceedings{8186, abstract = {Numerous methods have been proposed for probabilistic generative modelling of 3D objects. However, none of these is able to produce textured objects, which renders them of limited use for practical tasks. In this work, we present the first generative model of textured 3D meshes. Training such a model would traditionally require a large dataset of textured meshes, but unfortunately, existing datasets of meshes lack detailed textures. We instead propose a new training methodology that allows learning from collections of 2D images without any 3D information. To do so, we train our model to explain a distribution of images by modelling each image as a 3D foreground object placed in front of a 2D background. Thus, it learns to generate meshes that, when rendered, produce images similar to those in its training set. A well-known problem when generating meshes with deep networks is the emergence of self-intersections, which are problematic for many use-cases. As a second contribution, we therefore introduce a new generation process for 3D meshes that guarantees no self-intersections arise, based on the physical intuition that faces should push one another out of the way as they move. We conduct extensive experiments on our approach, reporting quantitative and qualitative results on both synthetic data and natural images.
These show that our method successfully learns to generate plausible and diverse textured 3D samples for five challenging object classes.}, author = {Henderson, Paul M and Tsiminaki, Vagia and Lampert, Christoph}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, location = {Virtual}, pages = {7498--7507}, publisher = {CVF}, title = {{Leveraging 2D data to learn textured 3D mesh generation}}, year = {2020}, } @article{8189, abstract = {Direct ethanol fuel cells (DEFCs) show a huge potential to power future electric vehicles and portable electronics, but their deployment is currently limited by the unavailability of proper electrocatalysts for the ethanol oxidation reaction (EOR). In this work, we engineer a new electrocatalyst by incorporating phosphorus into a palladium-tin alloy and demonstrate a significant performance improvement toward EOR. We first detail a synthetic method to produce Pd2Sn:P nanocrystals that incorporate 35% of phosphorus. These nanoparticles are supported on carbon black and tested for EOR. Pd2Sn:P/C catalysts exhibit mass current densities up to 5.03 A mgPd−1, well above those of Pd2Sn/C, PdP2/C and Pd/C reference catalysts. Furthermore, a twofold lower Tafel slope and a much longer durability are revealed for the Pd2Sn:P/C catalyst compared with Pd/C. The performance improvement is rationalized with the aid of density functional theory (DFT) calculations considering different phosphorus chemical environments. Depending on its oxidation state, surface phosphorus introduces sites with low energy OH− adsorption and/or strongly influences the electronic structure of palladium and tin to facilitate the oxidation of the acetyl to acetic acid, which is considered the EOR rate-limiting step. DFT calculations also point out that the durability improvement of the Pd2Sn:P/C catalyst is associated with the promotion of OH adsorption that accelerates the oxidation of intermediate poisoning COads, reactivating the catalyst surface.}, author = {Yu, Xiaoting and Liu, Junfeng and Li, Junshan and Luo, Zhishan and Zuo, Yong and Xing, Congcong and Llorca, Jordi and Nasiou, Déspina and Arbiol, Jordi and Pan, Kai and Kleinhanns, Tobias and Xie, Ying and Cabot, Andreu}, issn = {2211-2855}, journal = {Nano Energy}, number = {11}, publisher = {Elsevier}, title = {{Phosphorous incorporation in Pd2Sn alloys for electrocatalytic ethanol oxidation}}, doi = {10.1016/j.nanoen.2020.105116}, volume = {77}, year = {2020}, } @inproceedings{8191, abstract = {There has been a significant amount of research on hardware and software support for efficient concurrent data structures; yet, the question of how to build correct, simple, and scalable data structures has not yet been definitively settled. In this paper, we revisit this question from a minimalist perspective, and ask: what is the smallest amount of synchronization required for correct and efficient concurrent search data structures, and how could this minimal synchronization support be provided in hardware? To address these questions, we introduce memory tagging, a simple hardware mechanism which enables the programmer to "tag" a dynamic set of memory locations, at cache-line granularity, and later validate whether the memory has been concurrently modified, with the possibility of updating one of the underlying locations atomically if validation succeeds.
We provide several examples showing that this mechanism can enable fast and arguably simple concurrent data structure designs, such as lists, binary search trees, balanced search trees, range queries, and Software Transactional Memory (STM) implementations. We provide an implementation of memory tags in the Graphite multi-core simulator, showing that the mechanism can be implemented entirely at the level of L1 cache, and that it can enable non-trivial speedups versus existing implementations of the above data structures.}, author = {Alistarh, Dan-Adrian and Brown, Trevor A and Singhal, Nandini}, booktitle = {Annual ACM Symposium on Parallelism in Algorithms and Architectures}, isbn = {9781450369350}, location = {Virtual Event, United States}, number = {7}, pages = {37--49}, publisher = {ACM}, title = {{Memory tagging: Minimalist synchronization for scalable concurrent data structures}}, doi = {10.1145/3350755.3400213}, year = {2020}, } @inproceedings{8193, abstract = {Multiple-environment Markov decision processes (MEMDPs) are MDPs equipped with not one, but multiple probabilistic transition functions, which represent the various possible unknown environments. While the previous research on MEMDPs focused on theoretical properties for long-run average payoff, we study them with discounted-sum payoff and focus on their practical advantages and applications. MEMDPs can be viewed as a special case of Partially observable and Mixed observability MDPs: the state of the system is perfectly observable, but not the environment. We show that the specific structure of MEMDPs allows for more efficient algorithmic analysis, in particular for faster belief updates. We demonstrate the applicability of MEMDPs in several domains. In particular, we formalize the sequential decision-making approach to contextual recommendation systems as MEMDPs and substantially improve over the previous MDP approach.}, author = {Chatterjee, Krishnendu and Chmelik, Martin and Karkhanis, Deep and Novotný, Petr and Royer, Amélie}, booktitle = {Proceedings of the 30th International Conference on Automated Planning and Scheduling}, issn = {23340843}, location = {Nancy, France}, pages = {48--56}, publisher = {Association for the Advancement of Artificial Intelligence}, title = {{Multiple-environment Markov decision processes: Efficient analysis and applications}}, volume = {30}, year = {2020}, } @inproceedings{8194, abstract = {Fixed-point arithmetic is a popular alternative to floating-point arithmetic on embedded systems. Existing work on the verification of fixed-point programs relies on custom formalizations of fixed-point arithmetic, which makes it hard to compare the described techniques or reuse the implementations. In this paper, we address this issue by proposing and formalizing an SMT theory of fixed-point arithmetic. We present an intuitive yet comprehensive syntax of the fixed-point theory, and provide formal semantics for it based on rational arithmetic. We also describe two decision procedures for this theory: one based on the theory of bit-vectors and the other on the theory of reals. We implement the two decision procedures, and evaluate our implementations using existing mature SMT solvers on a benchmark suite we created. 
Finally, we perform a case study of using the theory we propose to verify properties of quantized neural networks.}, author = {Baranowski, Marek and He, Shaobo and Lechner, Mathias and Nguyen, Thanh Son and Rakamarić, Zvonimir}, booktitle = {Automated Reasoning}, isbn = {9783030510732}, issn = {16113349}, location = {Paris, France}, pages = {13--31}, publisher = {Springer Nature}, title = {{An SMT theory of fixed-point arithmetic}}, doi = {10.1007/978-3-030-51074-9_2}, volume = {12166}, year = {2020}, } @inproceedings{8195, abstract = {This paper presents a foundation for refining concurrent programs with structured control flow. The verification problem is decomposed into subproblems that aid interactive program development, proof reuse, and automation. The formalization in this paper is the basis of a new design and implementation of the Civl verifier.}, author = {Kragl, Bernhard and Qadeer, Shaz and Henzinger, Thomas A}, booktitle = {Computer Aided Verification}, isbn = {9783030532871}, issn = {0302-9743}, pages = {275--298}, publisher = {Springer Nature}, title = {{Refinement for structured concurrent programs}}, doi = {10.1007/978-3-030-53288-8_14}, volume = {12224}, year = {2020}, } @article{8196, abstract = {This paper aims to obtain a strong convergence result for a Douglas–Rachford splitting method with an inertial extrapolation step for finding a zero of the sum of two set-valued maximal monotone operators without any further assumption of uniform monotonicity on any of the involved maximal monotone operators. Furthermore, our proposed method is easy to implement and the inertial factor in our proposed method is a natural choice. Our method of proof is of independent interest. Finally, some numerical implementations are given to confirm the theoretical analysis.}, author = {Shehu, Yekini and Dong, Qiao-Li and Liu, Lu-Lu and Yao, Jen-Chih}, issn = {1389-4420}, journal = {Optimization and Engineering}, publisher = {Springer Nature}, title = {{New strong convergence method for the sum of two maximal monotone operators}}, doi = {10.1007/s11081-020-09544-5}, year = {2020}, } @unpublished{8198, abstract = {In this work, we investigate how the critical driving amplitude at the Floquet MBL-to-ergodic phase transition differs between smooth and non-smooth driving over a wide range of driving frequencies. To this end, we study numerically a disordered spin-1/2 chain which is periodically driven by a sine or a square-wave drive, respectively. In both cases, the critical driving amplitude increases monotonically with the frequency, and at large frequencies, it is identical for the two drives in the appropriate normalization. However, at low and intermediate frequencies the critical amplitude of the square-wave drive depends strongly on the frequency, while that of the cosine drive is almost constant in a wide frequency range. By analyzing the density of drive-induced resonances from a Fourier-space perspective, we conclude that this difference is due to resonances induced by the higher harmonics which are present (absent) in the Fourier spectrum of the square-wave (sine) drive. Furthermore, we suggest a numerically efficient method to estimate the frequency dependence of the critical driving amplitudes for different drives, based on measuring the density of drive-induced resonances.}, author = {Diringer, Asaf A.
and Gulden, Tobias}, booktitle = {arXiv}, publisher = {arXiv}, title = {{Robustness of the Floquet many-body localized phase in the presence of a smooth and a non-smooth drive}}, year = {2020}, } @article{8199, abstract = {We investigate a mechanism to transiently stabilize topological phenomena in long-lived quasi-steady states of isolated quantum many-body systems driven at low frequencies. We obtain an analytical bound for the lifetime of the quasi-steady states which is exponentially large in the inverse driving frequency. Within this lifetime, the quasi-steady state is characterized by maximum entropy subject to the constraint of a fixed number of particles in the system's Floquet-Bloch bands. In such a state, all the non-universal properties of these bands are washed out, hence only the topological properties persist.}, author = {Gulden, Tobias and Berg, Erez and Rudner, Mark Spencer and Lindner, Netanel}, issn = {2542-4653}, journal = {SciPost Physics}, publisher = {SciPost Foundation}, title = {{Exponentially long lifetime of universal quasi-steady states in topological Floquet pumps}}, doi = {10.21468/scipostphys.9.1.015}, volume = {9}, year = {2020}, } @article{8203, abstract = {Using inelastic cotunneling spectroscopy we observe a zero field splitting within the spin triplet manifold of Ge hut wire quantum dots. The states with spin ±1 in the confinement direction are energetically favored by up to 55 μeV compared to the spin 0 triplet state because of the strong spin–orbit coupling. The reported effect should be observable in a broad class of strongly confined hole quantum-dot systems and might need to be considered when operating hole spin qubits.}, author = {Katsaros, Georgios and Kukucka, Josip and Vukušić, Lada and Watzinger, Hannes and Gao, Fei and Wang, Ting and Zhang, Jian-Jun and Held, Karsten}, issn = {1530-6984}, journal = {Nano Letters}, number = {7}, pages = {5201--5206}, publisher = {ACS Publications}, title = {{Zero field splitting of heavy-hole states in quantum dots}}, doi = {10.1021/acs.nanolett.0c01466}, volume = {20}, year = {2020}, } @article{8225, abstract = {Birch pollen allergy is among the most prevalent pollen allergies in Northern and Central Europe. This IgE-mediated disease can be treated with allergen immunotherapy (AIT), which typically gives rise to IgG antibodies inducing tolerance. Although the main mechanisms of AIT are known, questions regarding possible Fc-mediated effects of IgG antibodies remain unanswered. This can mainly be attributed to the unavailability of appropriate tools, i.e., well-characterised recombinant antibodies (rAbs). We hereby aimed at providing human rAbs of several classes for mechanistic studies and as possible candidates for passive immunotherapy. We engineered IgE, IgG1, and IgG4 sharing the same variable region against the major birch pollen allergen Bet v 1 using Polymerase Incomplete Primer Extension (PIPE) cloning. We tested IgE functionality and IgG blocking capabilities using appropriate model cell lines. In vitro studies showed IgE engagement with FcεRI and CD23 and Bet v 1-dependent degranulation. Overall, we hereby present fully functional, human IgE, IgG1, and IgG4 sharing the same variable region against Bet v 1 and showcase possible applications in first mechanistic studies. Furthermore, our IgG antibodies might be useful candidates for passive immunotherapy of birch pollen allergy.}, author = {Köhler, Verena K. and Crescioli, Silvia and Fazekas-Singer, Judit and Bax, Heather J.
and Hofer, Gerhard and Pranger, Christina L. and Hufnagl, Karin and Bianchini, Rodolfo and Flicker, Sabine and Keller, Walter and Karagiannis, Sophia N. and Jensen-Jarolim, Erika}, issn = {1422-0067}, journal = {International Journal of Molecular Sciences}, number = {16}, publisher = {MDPI}, title = {{Filling the antibody pipeline in allergy: PIPE cloning of IgE, IgG1 and IgG4 against the major birch pollen allergen Bet v 1}}, doi = {10.3390/ijms21165693}, volume = {21}, year = {2020}, } @article{8226, author = {Gotovina, Jelena and Bianchini, Rodolfo and Fazekas-Singer, Judit and Herrmann, Ina and Pellizzari, Giulia and Haidl, Ian D. and Hufnagl, Karin and Karagiannis, Sophia N. and Marshall, Jean S. and Jensen‐Jarolim, Erika}, issn = {0105-4538}, journal = {Allergy}, publisher = {Wiley}, title = {{Epinephrine drives human M2a allergic macrophages to a regulatory phenotype reducing mast cell degranulation in vitro}}, doi = {10.1111/all.14299}, year = {2020}, } @article{8248, abstract = {We consider the following setting: suppose that we are given a manifold M in ℝ^d with positive reach. Moreover, assume that we have an embedded simplicial complex A without boundary, whose vertex set lies on the manifold, is sufficiently dense and such that all simplices in A have sufficient quality. We prove that if, locally, interiors of the projection of the simplices onto the tangent space do not intersect, then A is a triangulation of the manifold, that is, they are homeomorphic.}, author = {Boissonnat, Jean-Daniel and Dyer, Ramsay and Ghosh, Arijit and Lieutier, Andre and Wintraecken, Mathijs}, issn = {0179-5376}, journal = {Discrete and Computational Geometry}, publisher = {Springer Nature}, title = {{Local conditions for triangulating submanifolds of Euclidean space}}, doi = {10.1007/s00454-020-00233-9}, year = {2020}, } @article{8250, abstract = {Antibiotics that interfere with translation, when combined, interact in diverse and difficult-to-predict ways. Here, we explain these interactions by “translation bottlenecks”: points in the translation cycle where antibiotics block ribosomal progression. To elucidate the underlying mechanisms of drug interactions between translation inhibitors, we generate translation bottlenecks genetically using inducible control of translation factors that regulate well-defined translation cycle steps. These perturbations accurately mimic antibiotic action and drug interactions, supporting that the interplay of different translation bottlenecks causes these interactions. We further show that growth laws, combined with drug uptake and binding kinetics, enable the direct prediction of a large fraction of observed interactions, yet fail to predict suppression. However, varying two translation bottlenecks simultaneously supports that dense traffic of ribosomes and competition for translation factors account for the previously unexplained suppression. These results highlight the importance of “continuous epistasis” in bacterial physiology.}, author = {Kavcic, Bor and Tkačik, Gašper and Bollenbach, Tobias}, issn = {2041-1723}, journal = {Nature Communications}, publisher = {Springer Nature}, title = {{Mechanisms of drug interactions between translation-inhibiting antibiotics}}, doi = {10.1038/s41467-020-17734-z}, volume = {11}, year = {2020}, } @unpublished{8253, abstract = {Brains process information in spiking neural networks. Their intricate connections shape the diverse functions these networks perform.
In comparison, the functional capabilities of models of spiking networks are still rudimentary. This shortcoming is mainly due to the lack of insight and practical algorithms to construct the necessary connectivity. Any such algorithm typically attempts to build networks by iteratively reducing the error compared to a desired output. But assigning credit to hidden units in multi-layered spiking networks has remained challenging due to the non-differentiable nonlinearity of spikes. To avoid this issue, one can employ surrogate gradients to discover the required connectivity in spiking network models. However, the choice of a surrogate is not unique, raising the question of how its implementation influences the effectiveness of the method. Here, we use numerical simulations to systematically study how essential design parameters of surrogate gradients impact learning performance on a range of classification problems. We show that surrogate gradient learning is robust to different shapes of underlying surrogate derivatives, but the choice of the derivative’s scale can substantially affect learning performance. When we combine surrogate gradients with a suitable activity regularization technique, robust information processing can be achieved in spiking networks even at the sparse activity limit. Our study provides a systematic account of the remarkable robustness of surrogate gradient learning and serves as a practical guide to model functional spiking neural networks.}, author = {Zenke, Friedemann and Vogels, Tim P}, booktitle = {bioRxiv}, title = {{The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks}}, year = {2020}, } @article{8261, abstract = {Dentate gyrus granule cells (GCs) connect the entorhinal cortex to the hippocampal CA3 region, but how they process spatial information remains enigmatic. To examine the role of GCs in spatial coding, we measured excitatory postsynaptic potentials (EPSPs) and action potentials (APs) in head-fixed mice running on a linear belt. Intracellular recording from morphologically identified GCs revealed that most cells were active, but activity level varied over a wide range. Whereas only ∼5% of GCs showed spatially tuned spiking, ∼50% received spatially tuned input. Thus, the GC population broadly encodes spatial information, but only a subset relays this information to the CA3 network. Fourier analysis indicated that GCs received conjunctive place-grid-like synaptic input, suggesting code conversion in single neurons. GC firing was correlated with dendritic complexity and intrinsic excitability, but not extrinsic excitatory input or dendritic cable properties. Thus, functional maturation may control input-output transformation and spatial code conversion.}, author = {Zhang, Xiaomin and Schlögl, Alois and Jonas, Peter M}, issn = {0896-6273}, journal = {Neuron}, number = {6}, pages = {1212--1225}, publisher = {Elsevier}, title = {{Selective routing of spatial information flow from input to output in hippocampal granule cells}}, doi = {10.1016/j.neuron.2020.07.006}, volume = {107}, year = {2020}, } @article{8268, abstract = {Modern scientific instruments produce vast amounts of data, which can overwhelm the processing ability of computer systems. Lossy compression of data is an intriguing solution, but comes with its own drawbacks, such as potential signal loss, and the need for careful optimization of the compression ratio. 
In this work, we focus on a setting where this problem is especially acute: compressive sensing frameworks for interferometry and medical imaging. We ask the following question: can the precision of the data representation be lowered for all inputs, with recovery guarantees and practical performance? Our first contribution is a theoretical analysis of the normalized Iterative Hard Thresholding (IHT) algorithm when all input data, meaning both the measurement matrix and the observation vector, are quantized aggressively. We present a variant of low precision normalized IHT that, under mild conditions, can still provide recovery guarantees. The second contribution is the application of our quantization framework to radio astronomy and magnetic resonance imaging. We show that lowering the precision of the data can significantly accelerate image recovery. We evaluate our approach on telescope data and samples of brain images using CPU and FPGA implementations, achieving up to a 9x speedup with negligible loss of recovery quality.}, author = {Gurel, Nezihe Merve and Kara, Kaan and Stojanov, Alen and Smith, Tyler and Lemmin, Thomas and Alistarh, Dan-Adrian and Puschel, Markus and Zhang, Ce}, issn = {19410476}, journal = {IEEE Transactions on Signal Processing}, pages = {4268--4282}, publisher = {IEEE}, title = {{Compressive sensing using iterative hard thresholding with low precision data representation: Theory and applications}}, doi = {10.1109/TSP.2020.3010355}, volume = {68}, year = {2020}, } @article{8271, author = {He, Peng and Zhang, Yuzhou and Xiao, Guanghui}, issn = {17529867}, journal = {Molecular Plant}, number = {9}, pages = {1238--1240}, publisher = {Elsevier}, title = {{Origin of a subgenome and genome evolution of allotetraploid cotton species}}, doi = {10.1016/j.molp.2020.07.006}, volume = {13}, year = {2020}, } @inproceedings{8272, abstract = {We study turn-based stochastic zero-sum games with lexicographic preferences over reachability and safety objectives. Stochastic games are standard models in control, verification, and synthesis of stochastic reactive systems that exhibit both randomness as well as angelic and demonic non-determinism. Lexicographic order allows one to consider multiple objectives with a strict preference order over the satisfaction of the objectives. To the best of our knowledge, stochastic games with lexicographic objectives have not been studied before. We establish determinacy of such games and present strategy and computational complexity results. For strategy complexity, we show that lexicographically optimal strategies exist that are deterministic and memory is only required to remember the already satisfied and violated objectives. For a constant number of objectives, we show that the relevant decision problem is in NP∩coNP, matching the currently known bound for single objectives; and in general the decision problem is PSPACE-hard and can be solved in NEXPTIME∩coNEXPTIME. We present an algorithm that computes the lexicographically optimal strategies via a reduction to computation of optimal strategies in a sequence of single-objective games.
We have implemented our algorithm and report experimental results on various case studies.}, author = {Chatterjee, Krishnendu and Katoen, Joost P and Weininger, Maximilian and Winkler, Tobias}, booktitle = {International Conference on Computer Aided Verification}, isbn = {9783030532901}, issn = {16113349}, pages = {398--420}, publisher = {Springer Nature}, title = {{Stochastic games with lexicographic reachability-safety objectives}}, doi = {10.1007/978-3-030-53291-8_21}, volume = {12225}, year = {2020}, } @article{8283, abstract = {Drought and salt stress are the main environmental cues affecting the survival, development, distribution, and yield of crops worldwide. MYB transcription factors play a crucial role in plants’ biological processes, but the function of pineapple MYB genes is still obscure. In this study, one of the pineapple MYB transcription factors, AcoMYB4, was isolated and characterized. The results showed that AcoMYB4 is localized in the cell nucleus, and its expression is induced by low temperature, drought, salt stress, and hormonal stimulation, especially by abscisic acid (ABA). Overexpression of AcoMYB4 in rice and Arabidopsis enhanced plant sensitivity to osmotic stress; it led to an increase in the number of stomata on leaf surfaces and a lower germination rate under salt and drought stress. Furthermore, in AcoMYB4 OE lines, the membrane oxidation index, free proline, and soluble sugar contents were decreased. In contrast, electrolyte leakage and malondialdehyde (MDA) content increased significantly due to membrane injury, indicating higher sensitivity to drought and salinity stresses. In addition, both the expression levels and activities of several antioxidant enzymes were decreased, indicating lower antioxidant activity in AcoMYB4 transgenic plants. Moreover, under osmotic stress, overexpression of AcoMYB4 inhibited ABA biosynthesis through a decrease in the transcription of the genes responsible for ABA synthesis (ABA1 and ABA2) and of the ABA signal transduction factor ABI5. These results suggest that AcoMYB4 negatively regulates osmotic stress responses by attenuating cellular ABA biosynthesis and signal transduction pathways.}, author = {Chen, Huihuang and Lai, Linyi and Li, Lanxin and Liu, Liping and Jakada, Bello Hassan and Huang, Youmei and He, Qing and Chai, Mengnan and Niu, Xiaoping and Qin, Yuan}, issn = {14220067}, journal = {International Journal of Molecular Sciences}, number = {16}, publisher = {MDPI}, title = {{AcoMYB4, an Ananas comosus L. MYB transcription factor, functions in osmotic stress through negative regulation of ABA signaling}}, doi = {10.3390/ijms21165727}, volume = {21}, year = {2020}, } @article{8284, abstract = {Multiple resistance and pH adaptation (Mrp) antiporters are multi-subunit Na+ (or K+)/H+ exchangers representing an ancestor of many essential redox-driven proton pumps, such as respiratory complex I. The mechanism of coupling between ion or electron transfer and proton translocation in this large protein family is unknown. Here, we present the structure of the Mrp complex from Anoxybacillus flavithermus solved by cryo-EM at 3.0 Å resolution. It is a dimer of seven-subunit protomers with 50 trans-membrane helices each. Surface charge distribution within each monomer is remarkably asymmetric, revealing probable proton and sodium translocation pathways. 
On the basis of the structure, we propose a mechanism in which the coupling between sodium and proton translocation is facilitated by a series of electrostatic interactions between a cation and key charged residues. This mechanism is likely to be applicable to the entire family of redox proton pumps, where electron transfer to substrates replaces cation movements.}, author = {Steiner, Julia and Sazanov, Leonid A}, issn = {2050084X}, journal = {eLife}, publisher = {eLife Sciences Publications}, title = {{Structure and mechanism of the Mrp complex, an ancient cation/proton antiporter}}, doi = {10.7554/eLife.59407}, volume = {9}, year = {2020}, } @article{8285, abstract = {We demonstrate the utility of optical cavity generated spin-squeezed states in free space atomic fountain clocks in ensembles of 390 000 ⁸⁷Rb atoms. Fluorescence imaging, correlated to an initial quantum nondemolition measurement, is used for population spectroscopy after the atoms are released from a confining lattice. For a free fall time of 4 milliseconds, we resolve a single-shot phase sensitivity of 814(61) microradians, which is 5.8(0.6) decibels (dB) below the quantum projection limit. We observe that this squeezing is preserved as the cloud expands to a roughly 200 μm radius and falls roughly 300 μm in free space. Ramsey spectroscopy with 240 000 atoms at a 3.6 ms Ramsey time results in a single-shot fractional frequency stability of 8.4(0.2)×10⁻¹², 3.8(0.2) dB below the quantum projection limit. The sensitivity and stability are limited by the technical noise in the fluorescence detection protocol and the microwave system, respectively.}, author = {Malia, Benjamin K. and Martínez-Rincón, Julián and Wu, Yunfan and Hosten, Onur and Kasevich, Mark A.}, issn = {10797114}, journal = {Physical Review Letters}, number = {4}, title = {{Free space Ramsey spectroscopy in rubidium with noise below the quantum projection limit}}, doi = {10.1103/PhysRevLett.125.043202}, volume = {125}, year = {2020}, } @inproceedings{8286, abstract = {We consider the following dynamic load-balancing process: given an underlying graph G with n nodes, in each step t ≥ 0, one unit of load is created and placed at a randomly chosen graph node. In the same step, the chosen node picks a random neighbor, and the two nodes balance their loads by averaging them. We are interested in the expected gap between the minimum and maximum loads at nodes as the process progresses, and its dependence on n and on the graph structure. Variants of the above graphical balanced allocation process have been studied previously by Peres, Talwar, and Wieder [Peres et al., 2015], and by Sauerwald and Sun [Sauerwald and Sun, 2015]. These authors left open the question of characterizing the gap in the case of cycle graphs in the dynamic case, where weights are created during the algorithm’s execution. For this case, the only known upper bound is 𝒪(n log n), following from a majorization argument due to [Peres et al., 2015], which analyzes a related graphical allocation process. In this paper, we provide an upper bound of 𝒪(√n log n) on the expected gap of the above process for cycles of length n. We introduce a new potential analysis technique, which enables us to bound the difference in load between k-hop neighbors on the cycle, for any k ≤ n/2. We complement this with a "gap covering" argument, which bounds the maximum value of the gap by bounding its value across all possible subsets of a certain structure, and recursively bounding the gaps within each subset. 
We provide analytical and experimental evidence that our upper bound on the gap is tight up to a logarithmic factor.}, author = {Alistarh, Dan-Adrian and Nadiradze, Giorgi and Sabour, Amirmojtaba}, booktitle = {47th International Colloquium on Automata, Languages, and Programming}, isbn = {9783959771382}, issn = {18688969}, location = {Virtual, Online; Germany}, publisher = {Schloss Dagstuhl - Leibniz-Zentrum für Informatik}, title = {{Dynamic averaging load balancing on cycles}}, doi = {10.4230/LIPIcs.ICALP.2020.7}, volume = {168}, year = {2020}, } @inproceedings{8287, abstract = {Reachability analysis aims at identifying states reachable by a system within a given time horizon. This task is known to be computationally expensive for linear hybrid systems. Reachability analysis works by iteratively applying continuous and discrete post operators to compute states reachable according to continuous and discrete dynamics, respectively. In this paper, we enhance both of these operators and make sure that most of the involved computations are performed in low-dimensional state space. In particular, we improve the continuous-post operator by performing computations in high-dimensional state space only for time intervals relevant for the subsequent application of the discrete-post operator. Furthermore, the new discrete-post operator performs low-dimensional computations by leveraging the structure of the guard and assignment of a considered transition. We illustrate the potential of our approach on a number of challenging benchmarks.}, author = {Bogomolov, Sergiy and Forets, Marcelo and Frehse, Goran and Potomkin, Kostiantyn and Schilling, Christian}, booktitle = {Proceedings of the International Conference on Embedded Software}, keywords = {Reachability, Hybrid systems, Decomposition}, location = {Virtual}, title = {{Reachability analysis of linear hybrid systems via block decomposition}}, year = {2020}, } @misc{8294, abstract = {Automated root growth analysis and tracking of root tips.}, author = {Hauschild, Robert}, publisher = {IST Austria}, title = {{RGtracker}}, doi = {10.15479/AT:ISTA:8294}, year = {2020}, } @unpublished{8307, abstract = {Classic Byzantine fault-tolerant consensus protocols forfeit liveness in the face of asynchrony in order to preserve safety, whereas most deployed blockchain protocols forfeit safety in order to remain live. In this work, we achieve the best of both worlds by proposing a novel abstraction called the finality gadget. A finality gadget allows transactions to always optimistically commit but informs the clients that these transactions might be unsafe. As a result, a blockchain can execute transactions optimistically and only commit them after they have been sufficiently and provably audited. In this work, we formally model the finality gadget abstraction, prove that it is impossible to solve it deterministically in full asynchrony (even though it is stronger than consensus) and provide a partially synchronous protocol which is currently securing a major blockchain. In this way, we show that the protocol designer can decouple safety and liveness in order to speed up recovery from failures. 
We believe that there can be other types of finality gadgets that provide weaker safety guarantees (e.g., probabilistic ones) in order to gain more efficiency; the appropriate choice can depend on the probability that the network is not in synchrony.}, author = {Stewart, Alistair and Kokoris Kogias, Eleftherios}, booktitle = {arXiv}, title = {{GRANDPA: A Byzantine finality gadget}}, year = {2020}, } @article{8308, abstract = {Many-body localization provides a mechanism to avoid thermalization in isolated interacting quantum systems. The breakdown of thermalization may be complete, when all eigenstates in the many-body spectrum become localized, or partial, when the so-called many-body mobility edge separates localized and delocalized parts of the spectrum. Previously, De Roeck et al. [Phys. Rev. B 93, 014203 (2016)] suggested a possible instability of the many-body mobility edge in energy density. The local ergodic regions—so-called “bubbles”—resonantly spread throughout the system, leading to delocalization. In order to study such an instability mechanism, in this work we design a model featuring a many-body mobility edge in particle density: the states at small particle density are localized, while increasing the density of particles leads to delocalization. Using numerical simulations with matrix product states, we demonstrate the stability of many-body localization with respect to small bubbles in large dilute systems for experimentally relevant timescales. In addition, we demonstrate that processes where the bubble spreads are favored over processes that lead to resonant tunneling, suggesting a possible mechanism behind the observed stability of the many-body mobility edge. We conclude by proposing experiments to probe the particle density mobility edge in the Bose-Hubbard model.}, author = {Brighi, Pietro and Abanin, Dmitry A. and Serbyn, Maksym}, issn = {2469-9969}, journal = {Physical Review B}, number = {6}, publisher = {American Physical Society}, title = {{Stability of mobility edges in disordered interacting systems}}, doi = {10.1103/physrevb.102.060202}, volume = {102}, year = {2020}, } @article{8318, abstract = {Complex I is the first and the largest enzyme of respiratory chains in bacteria and mitochondria. The mechanism which couples spatially separated transfer of electrons to proton translocation in complex I is not known. Here we report five crystal structures of the T. thermophilus enzyme in complex with NADH or quinone-like compounds. We also determined cryo-EM structures of major and minor native states of the complex, differing in the position of the peripheral arm. Crystal structures show that binding of quinone-like compounds (but not of NADH) leads to a related global conformational change, accompanied by local re-arrangements propagating from the quinone site to the nearest proton channel. Normal mode and molecular dynamics analyses indicate that these are likely to represent the first steps in the proton translocation mechanism. Our results suggest that quinone binding and chemistry play a key role in the coupling mechanism of complex I.}, author = {Gutierrez-Fernandez, Javier and Kaszuba, Karol and Minhas, Gurdeep S. and Baradaran, Rozbeh and Tambalo, Margherita and Gallagher, David T. 
and Sazanov, Leonid A}, issn = {20411723}, journal = {Nature Communications}, number = {1}, publisher = {Springer Nature}, title = {{Key role of quinone in the mechanism of respiratory complex I}}, doi = {10.1038/s41467-020-17957-0}, volume = {11}, year = {2020}, } @article{8319, abstract = {We demonstrate that releasing atoms into free space from an optical lattice does not deteriorate cavity-generated spin squeezing for metrological purposes. In this work, an ensemble of 500 000 spin-squeezed atoms in a high-finesse optical cavity with near-uniform atom-cavity coupling is prepared, released into free space, recaptured in the cavity, and probed. Up to ∼10 dB of metrologically relevant squeezing is retrieved for 700 μs free-fall times, and decaying levels of squeezing are realized for up to 3 ms free-fall times. The degradation of squeezing results from loss of atom-cavity coupling homogeneity between the initial squeezed state generation and final collective state readout. A theoretical model is developed to quantify this degradation, and the model is experimentally validated.}, author = {Wu, Yunfan and Krishnakumar, Rajiv and Martínez-Rincón, Julián and Malia, Benjamin K. and Hosten, Onur and Kasevich, Mark A.}, issn = {24699934}, journal = {Physical Review A}, number = {1}, publisher = {APS}, title = {{Retrieval of cavity-generated atomic spin squeezing after free-space release}}, doi = {10.1103/PhysRevA.102.012224}, volume = {102}, year = {2020}, } @article{8320, abstract = {The genetic code is considered to use five nucleic bases (adenine, guanine, cytosine, thymine and uracil), which form two pairs for encoding information in DNA and two pairs for encoding information in RNA. Nevertheless, in recent years several artificial base pairs have been developed in attempts to expand the genetic code. Employment of these additional base pairs increases the information capacity and variety of DNA sequences, and provides a platform for the site-specific, enzymatic incorporation of extra functional components into DNA and RNA. As a result of the development of such expanded systems, many artificial base pairs have been synthesized and tested under various conditions. Following many stages of enhancement, unnatural base pairs have been modified to eliminate their weak points, qualifying them for specific research needs. Moreover, the first attempts to create a semi-synthetic organism containing DNA with unnatural base pairs seem to have been successful. This further extends the possible applications of these kinds of pairs. Herein, we describe the most significant qualities of unnatural base pairs and their current applications.}, author = {Mukba, S. A. and Vlasov, Petr and Kolosov, P. M. and Shuvalova, E. Y. and Egorova, T. V. and Alkalaeva, E. Z.}, issn = {16083245}, journal = {Molecular Biology}, number = {4}, pages = {475--484}, publisher = {Springer Nature}, title = {{Expanding the genetic code: Unnatural base pairs in biological systems}}, doi = {10.1134/S0026893320040111}, volume = {54}, year = {2020}, } @article{8321, abstract = {The genetic code is considered to use five nucleic bases (adenine, guanine, cytosine, thymine and uracil), which form two pairs for encoding information in DNA and two pairs for encoding information in RNA. Nevertheless, in recent years several artificial base pairs have been developed in attempts to expand the genetic code. 
Employment of these additional base pairs increases the information capacity and variety of DNA sequences, and provides a platform for the site-specific, enzymatic incorporation of extra functional components into DNA and RNA. As a result of the development of such expanded systems, many artificial base pairs have been synthesized and tested under various conditions. Following many stages of enhancement, unnatural base pairs have been modified to eliminate their weak points, qualifying them for specific research needs. Moreover, the first attempts to create a semi-synthetic organism containing DNA with unnatural base pairs seem to have been successful. This further extends the possible applications of these kinds of pairs. Herein, we describe the most significant qualities of unnatural base pairs and their current applications.}, author = {Mukba, S. A. and Vlasov, Petr and Kolosov, P. M. and Shuvalova, E. Y. and Egorova, T. V. and Alkalaeva, E. Z.}, issn = {00268984}, journal = {Molekuliarnaia biologiia}, number = {4}, pages = {531--541}, publisher = {Russian Academy of Sciences}, title = {{Expanding the genetic code: Unnatural base pairs in biological systems}}, doi = {10.31857/S0026898420040126}, volume = {54}, year = {2020}, } @inproceedings{8322, abstract = {Reverse firewalls were introduced at Eurocrypt 2015 by Mironov and Stephens-Davidowitz as a method for protecting cryptographic protocols against attacks on the devices of the honest parties. In a nutshell: a reverse firewall is placed outside of a device and its goal is to “sanitize” the messages sent by it, in such a way that a malicious device cannot leak its secrets to the outside world. It is typically assumed that the cryptographic devices are attacked in a “functionality-preserving way” (i.e., informally speaking, the functionality of the protocol remains unchanged under these attacks). In their paper, Mironov and Stephens-Davidowitz construct a protocol for passively-secure two-party computations with firewalls, leaving the extension of this result to stronger models as an open question. In this paper, we address this problem by constructing a protocol for secure computation with firewalls that has two main advantages over the original protocol from Eurocrypt 2015. Firstly, it is a multiparty computation protocol (i.e., it works for an arbitrary number n of parties, not just for 2). Secondly, it is secure in much stronger corruption settings, namely in the active corruption model. More precisely: we consider an adversary that can fully corrupt up to 𝑛−1 parties, while the remaining parties are corrupted in a functionality-preserving way. 
Our core techniques are malleable commitments and malleable non-interactive zero-knowledge, which in particular allow us to create a novel protocol for multiparty augmented coin-tossing into the well with reverse firewalls (based on a protocol of Lindell from Crypto 2001).}, author = {Chakraborty, Suvradip and Dziembowski, Stefan and Nielsen, Jesper Buus}, booktitle = {Advances in Cryptology – CRYPTO 2020}, isbn = {9783030568795}, issn = {16113349}, location = {Santa Barbara, CA, United States}, pages = {732--762}, publisher = {Springer Nature}, title = {{Reverse firewalls for actively secure MPCs}}, doi = {10.1007/978-3-030-56880-1_26}, volume = {12171}, year = {2020}, } @article{8323, author = {Pach, János}, issn = {14320444}, journal = {Discrete and Computational Geometry}, pages = {571--574}, publisher = {Springer Nature}, title = {{A farewell to Ricky Pollack}}, doi = {10.1007/s00454-020-00237-5}, volume = {64}, year = {2020}, } @article{8325, abstract = {Let F: ℤ²→ℤ be the pointwise minimum of several linear functions. The theory of smoothing allows us to prove that under certain conditions there exists the pointwise minimal function among all integer-valued superharmonic functions coinciding with F “at infinity”. We develop such a theory to prove the existence of so-called solitons (or strings) in a sandpile model studied by S. Caracciolo, G. Paoletti, and A. Sportiello. Thus we take a step towards understanding the phenomenon of the identity in the sandpile group for planar domains, where solitons appear according to experiments. We prove that sandpile states, defined using our smoothing procedure, move without changing shape when we apply the wave operator (which is why we call them solitons), and can interact, forming triads and nodes.}, author = {Kalinin, Nikita and Shkolnikov, Mikhail}, issn = {14320916}, journal = {Communications in Mathematical Physics}, number = {9}, pages = {1649--1675}, publisher = {Springer Nature}, title = {{Sandpile solitons via smoothing of superharmonic functions}}, doi = {10.1007/s00220-020-03828-8}, volume = {378}, year = {2020}, }
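As a concrete illustration of the surrogate gradient technique summarized in the Zenke and Vogels entry above: the core trick is to keep a hard, non-differentiable spike in the forward pass while substituting a smooth surrogate derivative in the backward pass. The following is a minimal sketch, assuming PyTorch; the class name, the fast-sigmoid surrogate shape, and the scale value 10.0 are illustrative choices, not the exact settings studied in that paper.

import torch

class SurrogateSpike(torch.autograd.Function):
    # Forward: a hard, non-differentiable spike (Heaviside step of the
    # membrane potential). Backward: gradients flow through a smooth
    # surrogate derivative instead; here a fast sigmoid, one of several
    # shapes to which the paper reports learning is robust.
    scale = 10.0  # illustrative surrogate slope; per the abstract, this
                  # scale affects learning more than the surrogate's shape

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.scale * v.abs() + 1.0) ** 2
        return grad_output * surrogate

# Usage: gradients reach the membrane potential despite the hard threshold.
v = torch.randn(5, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()
print(v.grad)  # nonzero everywhere, largest near the threshold v = 0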
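The dynamic averaging process analyzed in the Alistarh, Nadiradze, and Sabour entry above is also simple to simulate directly. Below is a minimal sketch of the process itself (not of the paper's potential analysis); the cycle length, step count, and seed are arbitrary illustrative parameters.

import random

def gap_on_cycle(n=64, steps=20000, seed=0):
    # In each step, one unit of load lands on a uniformly random node of an
    # n-cycle; that node then averages its load with a random neighbor.
    rng = random.Random(seed)
    load = [0.0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        load[i] += 1.0
        j = (i + rng.choice((-1, 1))) % n
        load[i] = load[j] = (load[i] + load[j]) / 2.0
    return max(load) - min(load)  # the gap between max and min loads

print(gap_on_cycle())

Averaging over many seeds and several values of n gives an empirical view of how the gap grows with the cycle length, which is the quantity the paper bounds by 𝒪(√n log n).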