Dataset schema:
  category           stringclasses (107 values)
  title              stringlengths (15 to 179)
  question_link      stringlengths (59 to 147)
  question_body      stringlengths (53 to 33.8k)
  answer_html        stringlengths (0 to 28.8k)
  __index_level_0__  int64 (0 to 1.58k)
quantum mechanics
List of the basic quantum mechanical variables
https://physics.stackexchange.com/questions/66592/list-of-the-basic-quantum-mechanical-variables
<p>Is there a list of basic quantum variables/attributes that all quantum particles have?</p> <p>Ex. An electron has charge, position, speed, momentum, etc. Is there a complete list of these variables?</p> <p>I would figure not all quantum particles share the same set of variables? A photon has position and an electron has position, but a photon does not have a charge and an electron does, though I guess it is said a photon has neutral charge.</p>
<p>I will collate my comments here:</p> <p>The <a href="http://en.wikipedia.org/wiki/Standard_Model" rel="nofollow noreferrer">Standard Model</a> of particle physics encapsulates almost all the experimental evidence to date about particles and their interactions. It consists of two branches: the particles and their observed symmetries under interactions, and the quantum mechanical mathematical tools that give the interaction probabilities for desired reactions between them.</p> <p><img src="https://i.sstatic.net/AzBcl.png" alt="standard model"></p> <blockquote> <p>The Standard Model of elementary particles, with the three generations of matter, gauge bosons in the fourth column and the Higgs boson in the fifth.</p> </blockquote> <p>We see from the list of elementary particles that masses, spins, and charges are intrinsic attributes. Among the charges one should include the three QCD colors and also lepton number and baryon number.</p> <p>The symmetries that have been established experimentally led to the <a href="http://en.wikipedia.org/wiki/Standard_Model_%28mathematical_formulation%29" rel="nofollow noreferrer">specific mathematical model.</a></p> <p>Mathematical models need variables; these are the four spacetime dimensions (x,y,z,t) and the energy-momentum vector (p_x,p_y,p_z,E), which describe the kinematics of interacting particles. Based on these variables, quantum mechanical equations represent the dynamics of interacting elementary particles.</p>
100
quantum mechanics
Spectral Decomposition Grover operator
https://physics.stackexchange.com/questions/66813/spectral-decomposition-grover-operator
<p>I am studying the Grover operator. Let $x_0$ be the marked element and let $t_f = \pi/(2\omega)$. The measurement of the register in the computational basis returns $x_0$ with probability</p> <p>$$p_{x_0}(t_f)=|\langle x_0|U^{t_f}|D\rangle|^2=1-1/N$$</p> <p>Where $|D\rangle$ is the diagonal state, $U$ the Grover operator, $\cos \omega = 1-2/N$ and $N=2^n$.</p> <p>My lecture notes say $p_{x_0}(t_f)$ is a lower bound for $p_{x_0}(\lfloor t_f\rfloor)$. $\lfloor t_f\rfloor=\lfloor (\pi/4)\sqrt{N}\rfloor + O(1/\sqrt{N})$ was obtained by taking the asymptotic expansion in $N$. My question is: why is $p_{x_0}(t_f)$ a lower bound for $p_{x_0}(\lfloor t_f\rfloor)$?</p>
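Not part of the question, but the quantities involved are easy to play with numerically. A minimal sketch of the Grover iteration (all names illustrative); for this $N$ the success probability after $\lfloor (\pi/4)\sqrt{N} \rfloor$ iterations sits above $1-1/N$:

```python
import numpy as np

# Minimal Grover simulation (illustrative sketch, not from the post).
# |D> is the uniform superposition; the marked element is x0 = 0.
n = 6                                    # qubits
N = 2 ** n
x0 = 0

D = np.full(N, 1 / np.sqrt(N))           # diagonal (uniform) state |D>
oracle = np.eye(N)
oracle[x0, x0] = -1                      # flips the sign of |x0>
diffusion = 2 * np.outer(D, D) - np.eye(N)
U = diffusion @ oracle                   # one Grover iteration

state = D.copy()
t_opt = int(np.floor(np.pi / 4 * np.sqrt(N)))   # ~ floor((pi/4) sqrt(N))
for _ in range(t_opt):
    state = U @ state

p_x0 = state[x0] ** 2                    # success probability
print(t_opt, p_x0)
```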
101
quantum mechanics
Consequences of Quantum Mechanics
https://physics.stackexchange.com/questions/68498/consequences-of-quantum-mechanics
<p>First of all, this question is going to seem a bit like philosophy, but know that vague and purposeless wandering is certainly not what I'm trying to propose here.<br> Also, the reason I didn't post in philosophy communities is that they certainly know a lot less (if anything) about quantum mechanics than most of you here. </p> <p>My question: I have heard several times that the results of quantum mechanics (the double-slit experiment, for instance) challenge our logic. One example of that is the famous physicist Lawrence Krauss. </p> <p>He keeps saying that after our discoveries in quantum mechanics, "logic" is becoming flawed. He wears a 2 + 2 = 5 T-shirt to support his case. </p> <p>Does anyone know exactly what physicists are referring to when they introduce the word "logic" to say it shouldn't be taken for granted? Would "logic" be common sense? Would "logic" be sensory experience? If yes, then I'd highly agree. While we can't aurally experience frequencies outside the 20 Hz to 20 kHz range, we are certain that they do exist. </p> <p>On the other hand, if "logic" refers to our deductive and inductive logic, then I don't understand.<br> How could we say that the result of many years of deductive and inductive logic (from Thales to Newton to our modern science) points out that deductive and inductive logic is flawed?<br> Mathematics also relies heavily on deductive logic. Mathematics is also the language in which physics is formally expressed.<br> Denying or not taking for granted even our "deductive logic" would mean not taking Mathematics for granted, which would end Physics. </p>
<p>On the topic of the <em>actual</em> consequences of QM, <a href="/questions/65397/quantum-mechanics-and-everyday-nature">here</a> are answers with a few things that cannot be explained <em>without</em> QM.</p> <p>That aside...</p> <p>There is a golden rule that one should recite before all theoretical physics studies:</p> <blockquote> <p><a href="http://wiki.lesswrong.com/wiki/Egan%27s_law" rel="nofollow">"It all adds up to Normality"</a>.<br> - <em>Greg Egan</em>, <em>Quarantine</em></p> </blockquote> <p>The thing with Quantum Mechanics and "logic" is that humans are really bad at QM and really good at "logic".</p> <p>In fact, "logic" is an actual area of study in psychology: <a href="http://en.wikipedia.org/wiki/Na%C3%AFve_physics" rel="nofollow">Naïve Physics</a>.</p> <p>Naive Physics (forgive me not using umlauts) is an all-right approximation of anthropically scaled physical phenomena: object permanence, an exclusion principle based on volume, absolute time, a primitive notion of gravity...</p> <p>This is what humans think with, every day, on an instinctive level.</p> <p>Over time, you see refinements of Naive Physical concepts in the works of <a href="http://en.wikipedia.org/wiki/Aristotelian_physics" rel="nofollow">Aristotle</a> and Newton.</p> <p>Object permanence, volume exclusion, gravity, absolute time.</p> <p>Then relativity comes along and throws absolute time out the window. Believe me, people cried "Logic is meaningless" when relativity was new too.</p> <p>Then comes Quantum Mechanics: throw away volume exclusion and, to a degree, even the intuitive notion of Object Permanence.</p> <p>There is one other concept of Naive Physics which QM seemingly violates: looking is a free action. Suddenly you change things by "looking".</p> <p>So people do indeed cry out "Logic is broken." But it isn't, because Logic has nothing to do with QM.</p> <p>Logic, and indeed all of mathematics, is about Axioms and Theorems and the steps of inference in between. 
QM's apparent weirdness has no effect on me using the <a href="http://en.wikipedia.org/wiki/Peano_Arithmetic" rel="nofollow">Peano Axioms</a> to say:</p> <p>$$ \begin{array}{l l}\vdash &amp; 0 = 0 \\ \vdash&amp; \forall x, y : S(x) = S(y) \iff x = y \\ \vdash&amp; \forall x : x + 0 = 0 + x = x \\ \vdash&amp; \forall x, y : x + S(y) = S(x) + y \\ \vdash&amp; 2 = S(S(0)) \\ \vdash&amp; 5 = S(S(S(S(S(0))))) \\ \vdash&amp; 2 + 2 = 2 + S(S(0)) = S(2) + S(0) = S(S(2)) + 0 = S(S(2)) \\ \vdash&amp; 2 + 2 \not = 5 \\ &amp; \iff S(S(S(S(0)))) \not = S(S(S(S(S(0))))) \\ &amp; \iff S(S(S(0))) \not = S(S(S(S(0)))) \\ &amp; \iff S(S(0)) \not = S(S(S(0))) \\ &amp; \iff S(0) \not = S(S(0)) \\ &amp; \iff 0 \not = S(0) \\ \end{array} $$</p> <p>Capital-L Logic is set in mathematical stone and to say that QM's "mysteries" imply $2 + 2 = 5$ is juvenile, trivially untrue, and shows that the speaker hasn't really understood anything at all. It is like saying:</p> <blockquote> <p>Boy am I bad at abstracting from my everyday life and interactions, that I cannot even sit down and actually learn QM. I am going to put that on a T-shirt!</p> </blockquote>
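For what it's worth, the arithmetic claims in the derivation above can be checked mechanically; a sketch in Lean 4:

```lean
-- The Peano-style claims above, checked mechanically (Lean 4 sketch).
example : 2 + 2 = 4 := rfl        -- 2 + 2 reduces to S (S 2)
example : 2 + 2 ≠ 5 := by decide  -- S is injective and 0 ≠ S 0
```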
102
quantum mechanics
Is a permutation of coordinates or labels really equivalent?
https://physics.stackexchange.com/questions/68673/is-a-permutation-of-coordinates-or-labels-really-equivalent
<p>To construct an N-body anti-symmetric wave function, some derivations start with the requirement that the N-body wave function should be anti-symmetric under a permutation of coordinates, while other derivations start with the requirement that the total wave function should be anti-symmetric under a permutation of labels or states, for example <a href="http://en.wikipedia.org/wiki/Identical_particles" rel="nofollow">this</a> derivation. I can see the equivalence is trivial if you assume you can freely change the order of the wavefunctions. This is true in the case where the wavefunctions are just scalars, but what if you are dealing with multicomponent wavefunctions, for example relativistic wavefunctions, which are 4x1 matrices?</p> <p>A simple example of my problem is the following. Suppose we want to write down a general 2-body wavefunction in terms of single particle wavefunctions, \begin{align} \Psi(x_1,x_2) \propto \varphi_{a}(x_1)\varphi_b(x_2) \end{align} where $x_i$ denote the spatial, spin, ... coordinates, and $a,b$ denote single particle eigenstates. If we permute the coordinates in order to derive an antisymmetric wavefunction we get, \begin{align} \frac{1}{\sqrt{2}} \left(\varphi_{a}(x_1)\varphi_b(x_2) - \varphi_{a}(x_2)\varphi_b(x_1) \right) \end{align} while a permutation in states gives, \begin{align} \frac{1}{\sqrt{2}} \left(\varphi_{a}(x_1)\varphi_b(x_2) - \varphi_{b}(x_1)\varphi_a(x_2) \right) \end{align} Obviously the above expressions are identical for scalar wavefunctions. 
When I try to check the normalisation of the two particle wavefunction (assuming the single particle states are orthonormal and general $N \times 1$ matrices) I get for the label permutation, \begin{align} \int | \Psi(x_1,x_2) |^{2} &amp;= \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) - \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1)\varphi_{b}(x_1)\varphi_a(x_2) \\ &amp;- \frac{1}{2} \int \varphi_{a}^{\dagger}(x_2) \varphi_{b}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) + \frac{1}{2} \int \varphi_{a}^{\dagger}(x_2) \varphi_{b}^{\dagger}(x_1)\varphi_{b}(x_1)\varphi_a(x_2) \\ &amp;= \frac{1}{2} - 0 - 0 + \frac{1}{2} = 1 \end{align} which is the expected result (note that the integral is assumed to integrate over all continuous degrees of freedom, and sum over the discrete ones). For the coordinate permutation however I get, \begin{align} \int | \Psi(x_1,x_2) |^{2} &amp;= \frac{1}{2} \int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1) \varphi_{a}(x_1)\varphi_b(x_2) - \frac{1}{2}\int \varphi_{b}^{\dagger}(x_2) \varphi_{a}^{\dagger}(x_1)\varphi_{a}(x_2)\varphi_b(x_1) \\ &amp;- \frac{1}{2}\int \varphi_{b}^{\dagger}(x_1) \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1)\varphi_b(x_2) + \frac{1}{2}\int \varphi_{b}^{\dagger}(x_1) \varphi_{a}^{\dagger}(x_2)\varphi_{a}(x_2)\varphi_b(x_1) \\ \end{align} where it is not clear to me why the cross terms, \begin{align} \propto \int \textrm{d}x_1 \textrm{d}x_2 \left[ \varphi_{b}^{\dagger}(x_1) \varphi_b(x_2) \right] \left[ \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1) \right] \end{align} should vanish. (Even if they did, my main question still stands, namely whether you can always freely change the order of the wavefunctions.) 
Note that the choice of permutation in coordinates or labels is equivalent to the choice of whether to expand along rows or columns in a <a href="http://en.wikipedia.org/wiki/Slater_determinant" rel="nofollow">Slater determinant</a>.</p> <p>EDIT: upon request, a more elaborate version. Suppose we want to write down a general wave function for 2 identical fermions $\Psi$. This wavefunction has to be antisymmetric. Suppose the single particle states are given by $$ \varphi_{\alpha_i}(r_i,s_i) \,\,\,\,\,\ i \in \{1,2\} $$ Where $\alpha_i$ denotes the quantum numbers of a state, while $r_i,s_i$ denote the spatial and spin coordinates. A way to antisymmetrize an $N$-body wavefunction in terms of single particle wavefunctions is often introduced via the concept of a Slater determinant. In the case of two particles a Slater determinant looks like \begin{align} \Psi_{\alpha_1,\alpha_2}(r_1,s_1,r_2,s_2) = \frac{1}{\sqrt{2}} \left| \begin{array}{c c} \varphi_{\alpha_1}(r_1,s_1) &amp; \varphi_{\alpha_2}(r_1,s_1) \\ \varphi_{\alpha_1}(r_2,s_2) &amp; \varphi_{\alpha_2}(r_2,s_2) \end{array} \right|. \end{align} Now we can calculate this determinant by expanding along a row or along a column, which is equivalent to the choice of whether to permute the quantum numbers of the single particle states or their coordinates. More explicitly, expanding the matrix along the first row (or equivalently, permuting the labels of the single particle states) we get, $$ \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right). $$ While an expansion along the first column gives (equivalent to a coordinate permutation), $$ \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right). $$ The two expressions above certainly look to be equivalent. 
However if we consider the wave functions to be $N$-component vectors I get some strange results. Suppose we want to calculate the normalization of the two particle wave function. We would then write the following $$ \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) $$ Note that I have introduced the $\dagger$ instead of the complex conjugate $*$ as the normalization should give a scalar. If we now replace $\Psi$ with the expression for the "label" permutation case we get, \begin{align*} \sum_{s_1} \sum_{s_2} &amp; \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\ &amp;= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right)^{\dagger} \\ &amp; \times \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) \\ &amp;= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left( \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \right. \\ &amp; \,\, - \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \\ &amp; \,\, - \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \\ &amp; \,\, \left. 
+ \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) \\ &amp;= \frac{1}{2} \sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi_{\alpha_2}(r_2,s_2) \\ &amp;-\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2) \\ &amp;-\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_2}(r_2,s_2) \\ &amp;+\frac{1}{2}\sum_{s_1} \int \textrm{d}r_1 \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi_{\alpha_2}(r_1,s_1) \sum_{s_2} \int \textrm{d}r_2 \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2) \end{align*} Note that we have used the general expression $(AB)^{\dagger} = B^{\dagger} A^{\dagger}$ as well as the fact that the product $\varphi^{\dagger} \varphi$ contracts to a scalar and can be safely dragged through the expressions. Now if one assumes the single particle states are orthonormal we have the following relation, \begin{align} \sum_{s_i}\int \textrm{d} r_i \varphi^{\dagger}_{\alpha_m}(r_i,s_i) \varphi_{\alpha_n}(r_i,s_i) = \delta_{m,n} \end{align} So we get, \begin{align} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 &amp;\Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\ &amp; =\frac{1}{2} \delta_{\alpha_1,\alpha_1} \delta_{\alpha_2,\alpha_2} - \frac{1}{2} \delta_{\alpha_1,\alpha_2} \delta_{\alpha_2,\alpha_1} - \frac{1}{2} \delta_{\alpha_2,\alpha_1} \delta_{\alpha_1,\alpha_2} + \frac{1}{2} \delta_{\alpha_2,\alpha_2} \delta_{\alpha_1,\alpha_1} \\ &amp;=1 \end{align} which is to be expected. 
If we start from the second expression for $\Psi$ however we get the following, \begin{align*} \sum_{s_1} \sum_{s_2} &amp; \int \textrm{d}r_1 \textrm{d}r_2 \Psi^{\dagger}(r_1,s_1,r_2,s_2) \Psi(r_1,s_1,r_2,s_2) \\ &amp;= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right)^{\dagger} \\ &amp; \times \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right) \\ &amp;= \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d}r_1 \textrm{d}r_2 \left( \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \right. \\ &amp; \,\, - \varphi^{\dagger}_{\alpha_2}(r_2,s_2) \varphi^{\dagger}_{\alpha_1}(r_1,s_1) \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \\ &amp; \,\, - \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) \\ &amp; \,\, \left. + \varphi^{\dagger}_{\alpha_2}(r_1,s_1) \varphi^{\dagger}_{\alpha_1}(r_2,s_2) \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right) \\ \end{align*} It is clear that the first and the fourth term will give $1/2 + 1/2 = 1$ as they are exactly the same as in the previous expression. However, I do not see why the cross terms should necessarily be zero. So it would seem that starting from the different antisymmetrization expressions one gets different results. Hence my phrase "order matters", as the following appears to be true $$ \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) \neq \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_1}(r_2,s_2)\varphi_{\alpha_2}(r_1,s_1) \right) $$ A final note about the Dirac notation. 
If one would start from the Dirac formalism and introduce $ | \Psi \rangle$ as $$ | \Psi \rangle = \frac{1}{\sqrt{2}} \left( | \alpha_1 \rangle | \alpha_2 \rangle - | \alpha_2 \rangle | \alpha_1 \rangle \right) $$ A projection into coordinate space gives the "label" permutation expression, $$ \langle r_1,s_1 ; r_2,s_2 | \Psi \rangle = \frac{1}{\sqrt{2}} \left( \langle r_1,s_1| \alpha_1 \rangle \langle r_2,s_2 | \alpha_2\rangle - \langle r_1,s_1| \alpha_2 \rangle \langle r_2,s_2 | \alpha_1\rangle \right) \\ = \frac{1}{\sqrt{2}} \left(\varphi_{\alpha_1}(r_1,s_1)\varphi_{\alpha_2}(r_2,s_2) - \varphi_{\alpha_2}(r_1,s_1)\varphi_{\alpha_1}(r_2,s_2) \right) $$ The expression for the normalization starting from the Dirac formalism gives, $$ \langle \Psi| \Psi \rangle = \frac{1}{2} \left( \langle \alpha_1 | \alpha_1 \rangle \langle \alpha_2 | \alpha_2 \rangle - \langle \alpha_1 | \alpha_2 \rangle \langle \alpha_2 | \alpha_1 \rangle - \langle \alpha_2 | \alpha_1 \rangle \langle \alpha_1 | \alpha_2 \rangle + \langle \alpha_2 | \alpha_2 \rangle \langle \alpha_1 | \alpha_1 \rangle \right) $$ Using the unit identities: $1 = \sum_s | s \rangle \langle s |$ and, $1 = \int \textrm{d} r | r \rangle \langle r |$ we get, $$ \langle \Psi| \Psi \rangle = \frac{1}{2} \sum_{s_1} \sum_{s_2} \int \textrm{d} r_1 \int \textrm{d} r_2 \\ \left( \langle \alpha_1 | r_1,s_1 \rangle \langle r_1,s_1 | \alpha_1 \rangle \langle \alpha_2 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_2 \rangle - \langle \alpha_1 | r_1,s_1 \rangle \langle r_1,s_1 | \alpha_2 \rangle \langle \alpha_2 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_1 \rangle - \langle \alpha_2 | r_1,s_1 \rangle \langle r_1,s_1| \alpha_1 \rangle \langle \alpha_1 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_2 \rangle + \langle \alpha_2 | r_1,s_1 \rangle \langle r_1,s_1| \alpha_2 \rangle \langle \alpha_1 | r_2,s_2 \rangle \langle r_2,s_2| \alpha_1 \rangle \right) $$ Which is when using $ \langle r_i, s_i | \alpha_i \rangle = \varphi_{\alpha_i}(r_i,s_i)$ 
exactly the same as the normalization expression I get for the label permutation case, which works out nicely to $1$ as expected.</p>
<p>You had the following cross term for the case of coordinate permutation:</p> <p>\begin{align} \propto \int \textrm{d}x_1 \textrm{d}x_2 \left[ \varphi_{b}^{\dagger}(x_1) \varphi_b(x_2) \right] \left[ \varphi_{a}^{\dagger}(x_2) \varphi_{a}(x_1) \right]. \end{align}</p> <p>However, this is not how you take the inner product of $\varphi_{a}(x_{1})\varphi_{b}(x_{2})$ and $\varphi_{a}(x_{2})\varphi_{b}(x_{1})$.</p> <p>$\varphi_{a}(x_{1})$ and $\varphi_{b}(x_{1})$ belong to the Hilbert space of particle 1, which I denote $\mathcal{H}_{1}$, and similarly, $\varphi_{a}(x_{2}), \varphi_{b}(x_{2}) \in \mathcal{H}_{2}$. The objects $\varphi_{a}(x_1) \varphi_{b}(x_2)$ and $\varphi_{a}(x_2) \varphi_{b}(x_1)$ are members of $\mathcal{H}_{1}\otimes\mathcal{H}_{2}$, which is the tensor product of $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$. Furthermore, as $\varphi_{a}(x_{1})$ and $\varphi_{b}(x_{2})$ belong to different Hilbert spaces, $\varphi_{a}(x_1) \varphi_{b} (x_2)$ and $\varphi_{b} (x_2)\varphi_{a}(x_1)$ really mean the same thing: there is no ordering issue even if $\varphi$ is an $N$-component vector.</p> <p>When taking the inner product of the two objects $\varphi_{a}(x_{1})\varphi_{b}(x_{2})$ and $\varphi_{a}(x_{2})\varphi_{b}(x_{1})$ in $\mathcal{H}_{1}\otimes\mathcal{H}_{2}$, you should contract the parts in $\mathcal{H}_{1}$ together, and the parts in $\mathcal{H}_{2}$ together. Then, $$ \int dx_{1} dx_{2} \big[\varphi_{a}(x_{1})\varphi_{b}(x_{2})\big]^{\dagger} \varphi_{a}(x_{2})\varphi_{b}(x_{1}) = \left[\int dx_{1} \varphi_{a}(x_{1})^{\dagger} \varphi_{b}(x_{1})\right]\left[\int dx_{2} \varphi_{b}(x_{2})^{\dagger} \varphi_{a}(x_{2})\right] = 0. $$</p>
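A quick numerical sketch of this point (illustrative code, not from the answer): with two orthonormal multicomponent states stored as flattened grid-by-component vectors, the correctly paired contraction $\langle a|b\rangle\langle b|a\rangle$ vanishes even though each state carries several spinor components.

```python
import numpy as np

# Sketch: the cross term vanishes once the Hilbert-space contractions are
# paired correctly, even for multicomponent (spinor-like) wavefunctions.
rng = np.random.default_rng(0)
M, K = 50, 4                          # grid points, spinor components

# Two orthonormal K-component single-particle states on the grid,
# obtained from the QR decomposition of a random complex matrix.
Q, _ = np.linalg.qr(rng.normal(size=(M * K, 2)) + 1j * rng.normal(size=(M * K, 2)))
phi_a, phi_b = Q[:, 0], Q[:, 1]       # flattened (grid, component) vectors

def ip(f, g):
    """<f|g>: sum over grid points AND spinor components."""
    return np.vdot(f, g)

# Contracting the H1 parts together and the H2 parts together factorizes
# the cross term into <a|b><b|a>, which is zero for orthonormal states:
cross = ip(phi_a, phi_b) * ip(phi_b, phi_a)
print(abs(cross))                     # ~ 0
```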
103
quantum mechanics
What is the volume of electron?
https://physics.stackexchange.com/questions/68717/what-is-the-volume-of-electron
<p>I know that the electron has mass and that it is a particle (a body which has only mass and whose size is negligible), but can we ever calculate the volume of the electron? If yes, how much is it? If no, why not?</p>
104
quantum mechanics
Quantization for particle in a box problem
https://physics.stackexchange.com/questions/70410/quantization-for-particle-in-a-box-problem
<p>Consider the particle in a box problem in QM. The crux of why QM is able to explain the physical phenomenon is not just the theory, but also being able to impose boundary conditions, which eventually result in quantization. Now in the particle in a 1-d box problem, the wave function is assumed to be zero at the boundaries. It has been said that this is imposed so that the wave function is continuous. Okay, but what about differentiability? In order for the wave function to satisfy the Schrödinger equation, we also need differentiability, right? If we assume only the left (one-sided) derivative to exist, we could as well have assumed only left (one-sided) continuity. For continuity, we assume it should hold from both sides, but for differentiability we need only one side? They also say the slope must be continuous. I don't see any rationale behind these conditions!</p>
<p>Differentiability of the wave function is only required where changes in the potential are finite. If your potential is infinite (as it is outside the infinitely deep potential well which you describe) the Hamiltonian is ill-defined there anyway.</p> <p>Another case where you can have an infinite potential is a $\delta$-distribution as a potential; there again you will find that the wave function must be continuous but not differentiable (the difference between the left-side and right-side derivatives is given by the strength of the $\delta$-potential in this case).</p>
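The $\delta$-potential case can be checked numerically. A minimal sketch in units $m=\hbar=1$ (the strength `lam` is an arbitrary illustrative value), using the known bound state $\psi(x)=\sqrt{\kappa}\,e^{-\kappa|x|}$ with $\kappa=\lambda$: the wave function is continuous at $x=0$, but its slope jumps by $-2\lambda\,\psi(0)$, set by the strength of the $\delta$-potential.

```python
import numpy as np

# Sketch (units m = hbar = 1): the bound state of V(x) = -lam * delta(x)
# is psi(x) = sqrt(kappa) * exp(-kappa*|x|) with kappa = lam.  Its slope
# jumps at x = 0 by -2*lam*psi(0), as the answer describes.
lam = 1.3
kappa = lam

def psi(x):
    return np.sqrt(kappa) * np.exp(-kappa * np.abs(x))

h = 1e-6                                  # finite-difference step
slope_right = (psi(h) - psi(0)) / h       # one-sided derivatives at x = 0
slope_left = (psi(0) - psi(-h)) / h
jump = slope_right - slope_left

print(jump, -2 * lam * psi(0))            # the two agree
```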
105
quantum mechanics
Potentials in Feynman path integral II
https://physics.stackexchange.com/questions/71667/potentials-in-feynman-path-integral-ii
<p>I am still working on the Feynman path integral, more specifically on the case of a free particle with an infinite potential wall, i.e. the quantum system defined by the Hamiltonian </p> <p>$$H_1 = \frac{\mathbf{P}^2}{2m} + V(\mathbf{Q})$$</p> <p>where $V(\mathbf{Q})$ is the potential defined by</p> <p>$$ V(\mathbf{Q})=\left\{ \begin{array}{cc} \infty, &amp; \mathbf{Q} \leq b \\ 0, &amp; \mathbf{Q}&gt;b. \\ \end{array} \right. $$</p> <p>It is clear that a general solution for the free particle case is given by $$\psi_E(q) = A_1 e^{i\frac{\sqrt{2mE}}{\hbar}q} + A_2 e^{-i\frac{\sqrt{2mE}}{\hbar}q}.$$ Using the fact that $\psi_E(0)=0$ (taking $b=0$), I get that $A_2=-A_1$, so the general solution is $$A \sin\left(\frac{\sqrt{2mE}}{\hbar}q\right) = A \sin(kq)$$ where $k$ is defined as $$k=\frac{\sqrt{2mE}}{\hbar}.$$ The problem I have is the calculation of the constant $A$. In some references they say this constant should be $$A = \sqrt{\frac{2m}{\pi\hbar^2 k}}$$ and in other references I find $A=2$ or $A=-2i$,...</p> <p>Could you please help me to understand this point? Thanks.</p>
106
quantum mechanics
Where does the change in energy come from when trapping a photon between mirrors?
https://physics.stackexchange.com/questions/71840/where-does-the-change-in-energy-come-from-when-trapping-a-photon-between-mirrors
<p>You have a photon traveling with E=hf and you trap it between two perfectly reflecting mirrors (like a QM particle in a box). The photon has to make a standing wave between the mirrors and its spatial frequency is dependent on the distance between the mirrors, L. Its time frequency, and hence energy, are derived from the spatial frequency; therefore E is dependent on L. Where does the change in energy come from?</p>
<p>When you change the separation between the mirrors you are doing [positive or negative] mechanical work against the radiation pressure of the photon. This work is the source of energy here. The situation is completely analogous to the classical gas (or a single particle) under a piston. </p> <p>More explicitly:</p> <p>1) The frequency of the photon, for the lowest-mode standing wave, is $\nu = \frac{c}{2L}$;</p> <p>2) The momentum associated with the photon is $p = \frac{E}{c} = \frac{h \nu}{c}$;</p> <p>3) The pressure on the mirror is $P = 2 \nu p = \frac{2h}{c} \nu^2 = \frac{h c}{2} \frac{1}{L^2}$ (the photon strikes a given mirror $\nu = \frac{c}{2L}$ times per second, transferring momentum $2p$ per strike);</p> <p>4) The mechanical work (per unit area) done against this pressure in moving the mirror from infinity to $L$ is $W = -\int_{\infty}^L P\,dL' = \frac{h c}{2} \frac{1}{L}$.</p> <p>The latter can be recognized as $h \nu$, which is the photon energy $E$.</p>
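Steps 1) to 4) are simple enough to verify numerically; a sketch (the 1 m cutoff stands in for infinity, and the separation `L` is an arbitrary illustrative value):

```python
import numpy as np

# Numeric sketch of the answer's steps: integrating the radiation pressure
# P(L') = h*c / (2 L'^2) from the final separation L out to "infinity"
# recovers the photon energy h*nu = h*c/(2L).
h = 6.62607015e-34        # Planck constant, J s
c = 2.99792458e8          # speed of light, m/s
L = 1e-6                  # final mirror separation, m

Lp = np.geomspace(L, 1.0, 200_001)     # 1 m plays the role of infinity
P = h * c / (2 * Lp**2)                # radiation pressure at separation L'
work = np.sum((P[:-1] + P[1:]) / 2 * np.diff(Lp))   # trapezoid rule

nu = c / (2 * L)
print(work, h * nu)                    # both ~ h*c/(2L)
```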
107
quantum mechanics
QM - calculating expectation value for velocity of an electron
https://physics.stackexchange.com/questions/73445/qm-calculating-expectation-value-for-velocity-of-an-electron
<p>How do we calculate the expectation value for speed? I have heard that we must first calculate the expectation value for kinetic energy. Could someone please explain a bit what options we have?</p>
<p>Calculating the expectation value of the kinetic energy will give you $\langle v^2 \rangle$. This is how it's done:</p> <p>$$\langle v^2 \rangle = \frac{2}{m}\langle T \rangle=\frac{2}{m}\langle \psi|\hat T|\psi\rangle=\frac{2}{m}\int\psi^*(x)\hat T \psi(x)dx$$</p> <p>Where you should write $T$ (the kinetic energy) as an <a href="http://en.wikipedia.org/wiki/Hamiltonian_operator" rel="nofollow">operator</a>. This can be done by writing it as a function of $x$ and $p$, and then replacing $p$ with its operator.</p> <p><strong>However this is not what we call the expectation value of speed</strong>. To calculate the expectation value of velocity, we calculate the expectation value of the momentum:</p> <p>$$\langle v \rangle = \frac{\langle p\rangle}{m}=\frac{1}{m}\int \psi^*(x)p\psi(x)dx$$</p> <p>which can be calculated either by transforming $\psi$ to momentum space, or by replacing $p$ with its <a href="http://en.wikipedia.org/wiki/Momentum_operator" rel="nofollow">operator</a> $-\mathfrak i \hbar\frac{\partial}{\partial x}$.</p> <p>An important thing to note is: $\langle v^2 \rangle \ne \langle v\rangle^2$ in general. E.g. consider the symmetric harmonic potential, where $\langle v \rangle=0$ but $\langle v^2 \rangle &gt; 0$. The difference is called the <a href="http://en.wikipedia.org/wiki/Variance" rel="nofollow">variance</a> $\sigma_v^2=\langle v^2 \rangle-\langle v \rangle^2$.</p>
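A small numerical sketch of this distinction (illustrative, in units $m=\hbar=\omega=1$), using the harmonic-oscillator ground state mentioned at the end: $\langle v \rangle = 0$ by symmetry while $\langle v^2 \rangle = 1/2$.

```python
import numpy as np

# Sketch (m = hbar = omega = 1): for the harmonic-oscillator ground state
# <v> = 0 by symmetry, while <v^2> = 2<T>/m = 1/2, so <v^2> != <v>^2.
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)   # normalized ground state

dpsi = np.gradient(psi, dx)                # d(psi)/dx
d2psi = np.gradient(dpsi, dx)              # d^2(psi)/dx^2

# <v> = <p>/m with p = -i hbar d/dx; for a real psi this vanishes
v_mean = (np.sum(psi * (-1j) * dpsi) * dx).real

# <v^2> = <p^2>/m^2 = integral of psi * (-psi'') dx
v2_mean = np.sum(psi * (-d2psi)) * dx

print(v_mean, v2_mean)                     # ~ 0.0 and ~ 0.5
```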
108
quantum mechanics
What is a ket of a vector with a bra of another one?
https://physics.stackexchange.com/questions/74995/what-is-a-ket-of-a-vector-with-a-bra-of-another-one
<p>Suppose we have an orthonormal basis $\{ \psi \}$ of a finite-dimensional Hilbert space; what is the butterfly operator of a sum of the $\psi$, say $\psi_i +\psi_j$?</p> <p>By linearity of "taking the dual" we must have, writing $B_i$ for the butterfly $\lvert\psi_i\rangle\langle\psi_i\rvert$, $\lvert\psi_i +\psi_j\rangle\langle\psi_i +\psi_j \rvert = B_i + B_j + \lvert \psi_i\rangle\langle\psi_j\rvert + \lvert\psi_j\rangle\langle\psi_i\rvert$, but I have never seen expressions like $\lvert\psi_i\rangle\langle\psi_j\rvert$ when $i$ and $j$ are different.</p> <p>What happens when we relax orthonormality to get a general basis? When we impose $\langle\psi_j\lvert\psi_i\rangle = \delta_{ij}$, what is the subspace the operator maps onto? How can one visualize it?</p> <p>What if we have another basis $\{ \phi \}$ and impose $\langle\phi_j\lvert \psi_i \rangle = \delta_{ij}$; what does it mean to look at $\lvert\phi_j\rangle\langle\psi_i\rvert$? Can we say more than "it is projecting onto $\lvert\phi_j\rangle$ when we take a ket and onto $\langle\psi_i\rvert$ when we take a bra"?</p>
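These outer products are easy to inspect numerically; an illustrative sketch in $\mathbb{C}^3$ (not from the post), showing the four-term expansion and the action of $\lvert\psi_i\rangle\langle\psi_j\rvert$:

```python
import numpy as np

# Sketch: |psi_i><psi_j| in a finite orthonormal basis, and the expansion
# of the "butterfly" of psi_i + psi_j into four outer products.
e = np.eye(3)                           # orthonormal basis of C^3
i, j = 0, 1

def B(u, v):
    """The operator |u><v| as a matrix (outer product with conjugation)."""
    return np.outer(u, v.conj())

lhs = B(e[i] + e[j], e[i] + e[j])
rhs = B(e[i], e[i]) + B(e[j], e[j]) + B(e[i], e[j]) + B(e[j], e[i])
print(np.allclose(lhs, rhs))            # the four-term expansion holds

# |psi_i><psi_j| sends |psi_j> to |psi_i> and kills anything orthogonal:
print(B(e[i], e[j]) @ e[j])             # -> e[i]
print(B(e[i], e[j]) @ e[2])             # -> zero vector
```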
109
quantum mechanics
A projector equal to its own conjugate by a unitary
https://physics.stackexchange.com/questions/75282/a-projector-equal-to-its-own-conjugate-by-a-unitary
<p>For a projector $p$, in finite dimension say, and some unitaries $u, v$, does $upu^\dagger = vpv^\dagger$ imply $u = v$? Intuitively, can we not say that a unitary is a matrix permuting the basis, and since $p$ is diagonal then obviously $u$ is $v$? But what about an exact proof?</p> <p>What if, further, $p = upu^\dagger = vpv^\dagger$?</p>
<p>The answer is no, your result does not follow.</p> <p>To understand why, suppose we have some $N\times N$ projector $\mathbf{P}$, with a range space $\operatorname{Ran}(\mathbf{P})$ (the subspace that the projector projects into) of dimension $\dim(\operatorname{Ran}(\mathbf{P})) = n$ and kernel (nullspace) $\ker(\mathbf{P})$ of dimension $\dim(\ker(\mathbf{P})) = m$ so that $N = n+m$. Intuitively, you can then do any nontrivial unitary transformation that unitarily transforms the range space alone, or any nontrivial unitary transformation that transforms the kernel alone, or any unitary transformation that is the product of the two, and the projector will have the same matrix.</p> <p>To see this in detail, choose a basis $\{X_1, X_2, \cdots, X_n\}$ to span $\operatorname{Ran}(\mathbf{P})$ and basis $\{Y_1, Y_2, \cdots, Y_m\}$ to span $\ker(\mathbf{P})$. In this basis:</p> <p>$$\mathbf{P} = \operatorname{diag}[\overbrace{1,\cdots,\,1}^{n\,\mathrm{terms}},\,\underbrace{0,\cdots,\,0}_{m\,\mathrm{terms}}] = \left(\begin{array}{cc}\mathbf{1}_{n\times n}&amp;\mathbf{0}_{n\times m}\\\mathbf{0}_{m\times n}&amp;\mathbf{0}_{m\times m}\end{array}\right)$$</p> <p>where I have partitioned the matrix into the obvious blocks. Now consider any similarity transformation where we conjugate by a matrix of the form:</p> <p>$$\left(\begin{array}{cc}\mathbf{U}_{n\times n} &amp; \mathbf{0}_{n\times m}\\ \mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}\end{array}\right)$$</p> <p>where $\mathbf{U}_{n\times n}$ and $\mathbf{U}_{m\times m}$ are any unitary $n\times n$ and $m \times m$ matrices you can think of. 
We do the straightforward block-partitioned matrix calculation as follows:</p> <p>$$\begin{array}{lcl}\left(\begin{array}{cc}\mathbf{U}_{n\times n} &amp; \mathbf{0}_{n\times m}\\ \mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}\end{array}\right)\, \mathbf{P}\, \left(\begin{array}{cc}\mathbf{U}_{n\times n}^\dagger &amp; \mathbf{0}_{n\times m}\\ \mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}^\dagger \end{array}\right) &amp;= &amp; \\ \left(\begin{array}{cc}\mathbf{U}_{n\times n} &amp; \mathbf{0}_{n\times m}\\ \mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}\end{array}\right)\, \left(\begin{array}{cc}\mathbf{1}_{n\times n}&amp;\mathbf{0}_{n\times m}\\\mathbf{0}_{m\times n}&amp;\mathbf{0}_{m\times m}\end{array}\right)\, \left(\begin{array}{cc}\mathbf{U}_{n\times n}^\dagger &amp; \mathbf{0}_{n\times m}\\ \mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}^\dagger \end{array}\right) &amp;=&amp; \\ \left(\begin{array}{cc}\mathbf{U}_{n\times n}\mathbf{1}_{n\times n}\mathbf{U}_{n\times n}^\dagger &amp; \mathbf{0}_{n\times m}\\\mathbf{0}_{m\times n} &amp; \mathbf{U}_{m\times m}\mathbf{0}_{m\times m}\mathbf{U}_{m\times m}^\dagger \end{array}\right) &amp;=&amp; \left(\begin{array}{cc}\mathbf{1}_{n\times n}&amp;\mathbf{0}_{n\times m}\\\mathbf{0}_{m\times n}&amp;\mathbf{0}_{m\times m}\end{array}\right) = \mathbf{P}\end{array}$$ </p> <p>thus retrieving $\mathbf{P}$ and showing that there are huge classes of similarity transformations that will leave $\mathbf{P}$ invariant.</p> <p>Even in the $2\times 2$ case, where $\operatorname{Ran}(\mathbf{P})$ and $\ker(\mathbf{P})$ are one-dimensional (so there is no eigenvector "degeneracy" that can be used) we can still put $\mathbf{U}_{1\times1} = \exp(i\phi)$ for any real phase angle $\phi$.</p>
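A quick numerical sanity check of this block structure (a sketch in NumPy; the QR-based random-unitary helper is just one convenient construction):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 2, 3                          # dims of Ran(P) and ker(P)
N = n + m

P = np.diag([1.0] * n + [0.0] * m)   # projector in the adapted basis

def random_unitary(d):
    """Random unitary from the QR decomposition of a complex Gaussian matrix."""
    Q, R = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))   # fix the column phases

# Block-diagonal unitary acting separately on Ran(P) and ker(P)
U = np.block([[random_unitary(n), np.zeros((n, m))],
              [np.zeros((m, n)), random_unitary(m)]])

print(np.allclose(U @ P @ U.conj().T, P))   # True, although U != identity
```

Any such $U$, and any product of two of them, conjugates $\mathbf{P}$ back to itself.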
110
quantum mechanics
Does the Nakajima-Zwanzig equation preserve the trace of the projected density matrix?
https://physics.stackexchange.com/questions/77869/does-the-nakajima-zwanzig-equation-preserve-the-trace-of-the-projected-density-m
<p>Looking at the <a href="http://en.wikipedia.org/wiki/Nakajima-Zwanzig_equation" rel="nofollow">Nakajima-Zwanzig equation</a>, which gives the time evolution of a projection $\cal {P} \rho$ of a full density matrix $\rho$, I am wondering if the trace of $\cal P \rho$ is preserved under time evolution.</p> <p>The terms like $LX \stackrel{def}= \dfrac{i}{\hbar} [X,H]$ have a null trace, but it seems there is no reason why terms like $\mathcal {P} L X$ necessarily have a null trace, so it seems that the trace of the projected density matrix is not necessarily conserved.</p> <p>Is this correct?</p>
<p>If you use the canonical choice $\mathcal{P}X = \mathrm{Tr}_B(X)\otimes\rho_B$, where $B$ denotes the irrelevant degrees of freedom in the total Hilbert space $\mathbb{H} = \mathbb{H}_A \otimes \mathbb{H}_B$, and $\rho_B$ is an arbitrary reference state on $\mathbb{H}_B$, then $$ \mathrm{Tr} (\mathcal{P} L X )= \mathrm{Tr} (\mathrm{Tr}_B(LX)\otimes\rho_B) = \mathrm{Tr}_A(\mathrm{Tr}_B(LX))\times\mathrm{Tr}_B(\rho_B) = \mathrm{Tr}(LX) = 0. $$ I expect similar reasoning will show that the full Nakajima-Zwanzig (NZ) equation is trace-preserving. This must be the case since it is formally equivalent to the von-Neumann equation. However, usually one makes some approximation to the NZ equation, such as truncating the time propagator for the irrelevant subspace $e^{\mathcal{Q}Lt}$ to zeroth or maybe lowest non-trivial order in $L$ (the first order term typically vanishes). In this case there is no guarantee of a physical (completely positive) evolution. However, it still looks like even such a truncated NZ equation must be trace-preserving by the same argument, since it is formulated in terms of commutators that vanish under the trace operation. </p> <p>I don't know how to prove it for other definitions of the projection $\mathcal{P}$, but the choice I have used is by far the most common (if not the only) one in use in the open quantum systems literature. </p>
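The first identity is easy to verify numerically. Here is a NumPy sketch with $\hbar = 1$; the maximally mixed reference state $\rho_B$ and the random Hermitian operators are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 2, 3                     # relevant (A) and irrelevant (B) dimensions

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return M + M.conj().T

H = rand_herm(dA * dB)            # total Hamiltonian on H_A (x) H_B
X = rand_herm(dA * dB)            # arbitrary operator

def L(X):
    """Liouvillian L X = i [X, H], with hbar = 1."""
    return 1j * (X @ H - H @ X)

def partial_trace_B(X):
    return np.trace(X.reshape(dA, dB, dA, dB), axis1=1, axis2=3)

rho_B = np.eye(dB) / dB           # reference state on H_B (maximally mixed here)

def P(X):
    """Canonical projection P X = Tr_B(X) (x) rho_B."""
    return np.kron(partial_trace_B(X), rho_B)

print(abs(np.trace(L(X))))        # ~0: a commutator is traceless
print(abs(np.trace(P(L(X)))))     # ~0: the projection preserves that trace
```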
111
quantum mechanics
Probability amplitude in basic quantum mechanics
https://physics.stackexchange.com/questions/79058/probability-amplitude-in-basic-quantum-mechanics
<p>I came across this proportionality statement in my quantum mechanics notebook: $\psi(x,t)$ is proportional to </p> <p>$$ \begin{align} \cos(kx - wt) &amp;= \exp(i(kx-wt)) + \exp(-i(kx-wt)) \\ &amp;= \exp (i(kx-wt)) \end{align} $$ I looked through most commonly used textbooks for quantum mechanics and I couldn't find this in them. Can you help me figure this out? Thanks.</p>
<p>The equality $\exp(i(kx-wt)) + \exp(-i(kx-wt)) = \exp (i(kx-wt))$ is incorrect (the left-hand side equals $2\cos(kx-wt)$); perhaps you recorded something else incorrectly too?</p> <p>Generally, wavefunctions that go as $e^{i(kx - \omega t)}$ represent travelling waves and are used in calculating transmission/reflection coefficients for potentials, while a wavefunction that behaves like $\sin(kx)$ represents a standing wave. I'm not sure which one you wrote down.</p>
112
quantum mechanics
Hermitian Operators in time and Measurements
https://physics.stackexchange.com/questions/81773/hermitian-operators-in-time-and-measurements
<p>Consider an observable that can be described by a hermitian operator $A$ . No explicit relationship with time is given. What would happen to the probability if the quantity is measured a few days later? </p>
<p>The time evolution of observables in the Schrodinger picture is determined by the wave function. So, the operator itself can say nothing about the time evolution. To get this information one has to know the Hamiltonian of the system and solve the time-dependent Schrodinger equation. In turn, the Schrodinger equation requires knowledge of the initial conditions. So, how the system was prepared at the initial moment will influence the final result.</p> <p>Also, you can use the property that a Hermitian operator has real eigenvalues, $\hat{O}\psi(\mathbf{r})=a\psi(\mathbf{r})$:</p> <p>$$\int d \mathbf{r}\psi^*(\mathbf{r})\hat{O}\psi(\mathbf{r})=a \int d \mathbf{r}\psi^*(\mathbf{r})\psi(\mathbf{r})=a$$</p>
113
quantum mechanics
Quantum Regime of Particles in Solids
https://physics.stackexchange.com/questions/81872/quantum-regime-of-particles-in-solids
<p>On my midterm today, I read that when the deBroglie wavelength of a particle exceeds the spacing between the particles in a solid or liquid, the particles begin to behave quantum dynamically. Why is this? I thought a larger deBroglie wavelength implied a less quantum mechanical behavior.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Debroglie_Wavelength" rel="nofollow noreferrer">de Broglie wavelength</a></p> <p><img src="https://i.sstatic.net/Vj3Q1.png" alt="debrogliewave"></p> <blockquote> <p>where lambda is the wavelength, p the momentum of the particle, E its energy and f the frequency, in the proposition that the particle may appear as a wave.</p> </blockquote> <p>These relations allow one to estimate whether a quantum mechanical entity will behave as a classical particle (a billiard ball) or as a probability wave. Accurate solutions of the quantum mechanical equations justify this view.</p> <p>Lambda has units of length, and the probability of finding the particle within this length is high. If lambda is larger than the lattice spacing of the solid, for example, one will not be able to use the particle picture, but will have to consider quantum mechanical solutions that encompass more than one atom of the structure. (Though you do not say in what situation a single particle would be found in a solid; an electron in a metal?)</p> <p>Think of the particle passing through a slit. If its de Broglie wavelength is smaller than the width of the slit, it will act as a classical particle; if it is larger, the probability of hitting the wall is high.</p>
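For a rough sense of scale, here is the estimate for a thermal electron (a Python sketch; the $E \approx \tfrac{3}{2}k_B T$ energy and the ~0.3 nm lattice spacing are illustrative round numbers):

```python
import math

# lambda = h / p = h / sqrt(2 m E); SI units throughout.
h = 6.626e-34        # Planck constant, J s
m_e = 9.109e-31      # electron mass, kg
k_B = 1.381e-23      # Boltzmann constant, J / K

def de_broglie(E):
    """de Broglie wavelength (m) of an electron with kinetic energy E (J)."""
    return h / math.sqrt(2 * m_e * E)

E_thermal = 1.5 * k_B * 300          # ~(3/2) k T at room temperature
lam = de_broglie(E_thermal)
print(lam * 1e9)                     # ~6 nm, far larger than ~0.3 nm spacing
```

So a thermal electron in a solid spans many lattice sites and must be treated quantum mechanically.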
114
quantum mechanics
How to derive the Bethe stopping power formula
https://physics.stackexchange.com/questions/80525/how-to-derive-the-bethe-stopping-power-formula
<p>I need the derivation of Bethe formula for stopping power, but I can't see the corresponding paper to this matter.</p> <blockquote> <p>Application of Ordinary Space-Time Concepts in Collision Problems and Relation of Classical Theory to Born's Approximation. E. J. Williams. <a href="http://dx.doi.org/10.1103/RevModPhys.17.217" rel="nofollow"><em>Rev. Mod. Phys.</em> <strong>17</strong> no. 2-3 (1945) pp. 217-226</a>.</p> </blockquote> <p>Can anyone help me about this paper please?</p>
115
quantum mechanics
Ehrenfest&#39;s theorem on Gaussians
https://physics.stackexchange.com/questions/82002/ehrenfests-theorem-on-gaussians
<p>Considering the free evolution of a Gaussian wave packet, is it possible to use Ehrenfest's theorem to determine the average value of momentum given that of position? </p> <p>And I imply the simplified version of the theorem, namely $\frac{\mathrm d}{\mathrm d t}\langle x \rangle = \frac{1}{m}\langle p \rangle$, where $x$ and $p$ are the position and momentum, respectively. </p>
<p>Yes. For free evolution $\langle p \rangle$ is conserved, so Ehrenfest's theorem in the form you quote, $\frac{\mathrm d}{\mathrm d t}\langle x \rangle = \frac{1}{m}\langle p \rangle$, can be read in either direction for a Gaussian wave packet: the average momentum follows from the motion of the packet's centre, $\langle p \rangle = m \frac{\mathrm d}{\mathrm d t}\langle x \rangle$, and conversely the centre moves at the constant velocity $\langle p \rangle / m$. </p>
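As a numerical illustration (a NumPy sketch with $\hbar = m = 1$; the grid and packet parameters are arbitrary), exact free evolution in momentum space confirms that the slope of $\langle x \rangle(t)$ equals $\langle p \rangle / m$:

```python
import numpy as np

# Grid and a Gaussian packet with mean momentum p0 (units with hbar = m = 1)
Npts, Lbox = 2048, 200.0
x = np.linspace(-Lbox / 2, Lbox / 2, Npts, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(Npts, d=dx)

p0, sigma = 1.5, 2.0
psi0 = np.exp(-(x ** 2) / (4 * sigma ** 2) + 1j * p0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * dx)

def evolve(psi, t):
    """Exact free evolution: multiply by exp(-i k^2 t / 2) in momentum space."""
    return np.fft.ifft(np.exp(-1j * k ** 2 * t / 2) * np.fft.fft(psi))

def mean_x(psi):
    return np.sum(x * np.abs(psi) ** 2) * dx

# Central-difference slope of <x>(t) at t = 5
t, dt = 5.0, 1e-3
slope = (mean_x(evolve(psi0, t + dt)) - mean_x(evolve(psi0, t - dt))) / (2 * dt)
print(slope)   # ~1.5 = p0, i.e. d<x>/dt = <p>/m
```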
116
quantum mechanics
Quantum Box and Quantum Number
https://physics.stackexchange.com/questions/86505/quantum-box-and-quantum-number
<p>How many quantum numbers are needed to describe a stationary state of a particle in a multi-dimensional quantum box (say 73)?</p>
<p>Generalizing from 1d: for each of the $d$ dimensions you have one independent momentum variable $k_i$ with $i\in\{1..d\}$. The boundary conditions quantize them, so a stationary state is labelled by $d$ quantum numbers $(n_1,\dots,n_d)$: for your example, 73 of them. </p> <p>The <em>energy</em>, on the other hand, is fixed by a single positive integer, since for a free particle $$ E \propto k_1^2+k_2^2+\cdots+ k_d^2 \propto n_1^2+n_2^2+\cdots+ n^2_d\equiv n \in \mathbb{N},$$ though this integer is typically degenerate: several tuples $(n_1,\dots,n_d)$ can share the same $n$.</p>
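A short enumeration (Python sketch for $d = 3$) illustrates why the energy integer alone is degenerate while the full tuple $(n_1,\dots,n_d)$ is not:

```python
from collections import defaultdict
from itertools import product

# 3-d box: states with the same n1^2 + n2^2 + n3^2 share one energy level,
# so the energy integer alone does not identify the state; you need all d
# quantum numbers (73 of them for a 73-dimensional box).
levels = defaultdict(list)
for n in product(range(1, 8), repeat=3):
    levels[sum(ni ** 2 for ni in n)].append(n)

print(levels[14])   # all 6 permutations of (1, 2, 3): a 6-fold degenerate level
```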
117
quantum mechanics
Decomposition of two particle wavefunction into product of single-particle wavefunctions
https://physics.stackexchange.com/questions/87807/decomposition-of-two-particle-wavefunction-into-product-of-single-particle-wavef
<p>Suppose you prepare a two-particle system such that $\Psi(\vec{r}_1,\vec{r}_2, t_0) = \Psi_1(\vec{r}_1, t_0)\Psi_2(\vec{r}_2, t_0)$.</p> <p>So then, initially $\Psi(\vec{r}_1,\vec{r}_2,t_0) - \Psi_1(\vec{r}_1, t_0)\Psi_2(\vec{r}_2, t_0) = 0$</p> <p>But now you let the system evolve in time and $\Psi(\vec{r}_1,\vec{r}_2, t)$ can now no longer be decomposed into a product of two single-particle wavefunctions.</p> <p>What, then, does $\Psi(\vec{r}_1,\vec{r}_2, t) - \Psi_1(\vec{r}_1, t)\Psi_2(\vec{r}_2, t)$ physically represent?</p>
<p>You might be interested by the correlation between two observables relatives to particles $1$ and $2$, for instance $X_1$ and $X_2$ : </p> <p>$corr(X_1,X_2) = \dfrac{cov(X_1, X_2)}{\sigma_{X_1}\sigma_{X_2}} = \dfrac{\langle X_1X_2 \rangle - \langle X_1 \rangle \langle X_2 \rangle }{\sqrt{\langle X_1^2 \rangle \langle X_2^2 \rangle}} \tag{1}$</p> <p>The covariance $cov(X_1,X_2)$ may be written : </p> <p>$cov(X_1,X_2) = \langle X_1X_2 \rangle - \langle X_1 \rangle \langle X_2 \rangle \\=\int d^3 \vec r_1 d^3 \vec r_2 ~ (|\psi(\vec r_1,\vec r_2)|^2 - |\psi_1( \vec r_1)|^2|\psi_2(\vec r_2)|^2) ~(x_1x_2)\tag{2}$</p> <p>When the particles $1$ and $2$ are independent, you have the factorization $\psi(\vec r_1,\vec r_2) = \psi_1(\vec r_1) \psi_2(\vec r_2)$, so, all the correlations between observables relative to particles $1$ and $2$ are zero, as wished.</p> <p>So, the expression $(|\psi(\vec r_1,\vec r_2)|^2 - |\psi_1( \vec r_1)|^2|\psi_2(\vec r_2)|^2)$ might be seen as a general (observable-independent) measure of correlations (more precisely covariance) at the point ($\vec r_1, \vec r_2$)</p>
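A discretized toy example (NumPy sketch; the correlated Gaussian $\psi$ is an arbitrary illustrative choice) shows the covariance of expression $(2)$ being nonzero precisely because $\psi$ does not factorize:

```python
import numpy as np

# Toy model: discretized 1D positions for each particle
x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
X1, X2 = np.meshgrid(x, x, indexing="ij")

# A correlated (non-product) Gaussian wavefunction
alpha = 0.8
psi = np.exp(-(X1 ** 2 + X2 ** 2 - 2 * alpha * X1 * X2) / 2)
prob = np.abs(psi) ** 2
prob /= prob.sum() * dx ** 2                 # normalize the joint density

# The marginals play the role of |psi_1|^2 and |psi_2|^2
p1 = prob.sum(axis=1) * dx
p2 = prob.sum(axis=0) * dx

# Eq. (2): integrate x1 * x2 against (joint - product of marginals)
cov = np.sum((prob - np.outer(p1, p2)) * X1 * X2) * dx ** 2
print(cov)   # nonzero (about 1.1 here) because psi does not factorize
```

Setting `alpha = 0` makes $\psi$ a product and drives the covariance to zero.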
118
quantum mechanics
Total angular momentum operator forms a complete set? (Clebsch-Gordan coefficients)
https://physics.stackexchange.com/questions/91071/total-angular-momentum-operator-forms-a-complete-set-clebsch-gordan-coefficien
<p>While introducing Clebsch-Gordan coefficients, they state that the operators: $$ \vec{J_1}^2,\vec{J_2}^2,J_{1z},J_{2z}$$ form a complete set of compatible observables. Which means that there is no degeneracy in their common eigenspaces. </p> <p>What I wonder is, how it follows that (if the above operators form indeed a complete set), that also $$\vec{J}^2,\vec{J_1}^2,\vec{J_2}^2,J_z$$ with $$\vec{J} = \vec{J_1}+\vec{J_2}$$ forms a complete set. Which is what they claim in the textbook. I see that the operators in both sets are compatible (i.e. they commute) and that $J_{1z}$ and $J_{2z}$ are not compatible with the second set. Does it follow from this information that the second set of observables is complete, if the first is?</p>
<p>It does, yes. The reason for this is central to the mathematical theory which undergirds quantum mechanics.</p> <p>We are used to thinking about vectors as little arrows with lengths and directions, but that is not complete nor is it always useful. A particle's wave function (and indeed its spin) are also vectors but in a more general way. The defining characteristic of vectors that we should hold on to is that they "behave the same" in every reference frame (i.e. if you rotate or translate a collection of vectors, the resulting vectors will be expressed differently, but their relative lengths and directions will be the same).</p> <p>The same basic principle applies in quantum mechanics. The state of a system can be described in "momentum space" or "position space" or "energy space" but the <strong>state itself</strong> does not depend on how it is expressed. You may be familiar with the "identity operator" $\Bbb I=\sum_n |\psi_n\rangle\langle\psi_n|$, where $|\psi_n\rangle$ can be any orthonormal basis you like. This is called the identity operator because by applying it to some state all you accomplish is expanding that state in the $\psi$-basis.</p> <p>The same idea works for spin. You have some state of the system described by the individual total and $z$-component angular momenta of two particles ($|l_1,l_2,m_1,m_2\rangle$), and you are going to express it in another basis ($|l_{tot},l_1,l_2,m_{tot}\rangle$). Your question about whether the basis is complete could be rephrased this way "Is all of the information I had about the system in my first basis still available in my second basis?" </p> <p>To show that this is the case, you only need to show that every operator of the new basis commutes with every other operator in the new basis and that the total number of operators in the new basis is the same number as before (i.e. that the state is specified by the same number of quantum numbers). In this case, both of these conditions are trivial to check. 
(This is the same basic procedure as when you change coordinate bases in a regular vector space: you have to make sure that the dimension of the new basis is the same as the dimension of the old basis.) Another way to check that the new basis works is to show that there <strong>does not exist</strong> any other operator that commutes with <strong>every</strong> operator of the basis. While that's possible, it is also tedious.</p> <p>By way of pointing out that this analogy to traditional vector spaces is actually exact in this case (it isn't actually an analogy), realize two things:</p> <p>First, the spin-state space is a finite-dimensional vector space since spin is quantized (i.e. spin states can be exactly represented as column vectors).</p> <p>Second, the Clebsch-Gordan Coefficients are exactly those numbers which characterize the following transformation: $$ |l_{tot},m_{tot},l_1,l_2\rangle=\sum_{m_1, m_2}C \ |l_1,m_1,l_2,m_2\rangle $$ And by recalling the identity operator that I mentioned above, we can deduce that the Clebsch-Gordan Coefficients must be given by $$ C=\langle l_1,m_1,l_2,m_2|l_{tot},m_{tot},l_1,l_2\rangle $$</p>
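For concreteness, the coefficients themselves can be computed symbolically, e.g. with SymPy's `CG` class (shown here for two spin-1/2 particles):

```python
from sympy import S, sqrt
from sympy.physics.quantum.cg import CG

# <j1 m1; j2 m2 | J M> for two spin-1/2 particles: the triplet state
# |1, 0> = (|up, down> + |down, up>) / sqrt(2), so each coefficient is 1/sqrt(2).
c = CG(S(1) / 2, S(1) / 2, S(1) / 2, -S(1) / 2, 1, 0).doit()
print(c)   # sqrt(2)/2
```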
119
quantum mechanics
Does measurement prevent tunneling?
https://physics.stackexchange.com/questions/92520/does-measurement-prevent-tunneling
<p>Does observation collapse the wave function, thus preventing an object from tunneling to a classically forbidden region?</p> <p>If I understand correctly, observation causes objects to collapse into the state in which they were observed, so there will no longer be a probability of finding them elsewhere. Then if I am somehow measuring the position of an object, does that mean it can't tunnel as long as I continue to observe it? For example, can my hand tunnel to the moon, even though I am looking at it right now, thus measuring its position? How does measurement affect the possibility of tunneling?</p>
<p>Prior to measurement you can have a wavefunction with some contribution beyond a potential barrier. Thus, if you measure the position of the particle, you have some probability of finding that it has tunnelled to the other side of the barrier. Observing the particle's position doesn't stop it from tunnelling; it checks where the particle is. </p> <p>That said, after measuring the position of a particle it becomes a position eigenstate and the wavefunction becomes a delta function at whatever position you measured. If left unobserved, the wavefunction will evolve in time and spread out again. </p> <p>If the position of the particle is measured often enough, the wavefunction never gets a chance to evolve in time and spread out, which does stop it from tunnelling (this is the quantum Zeno effect). However, the time it takes a wavefunction to spread out may be very short, making it difficult to measure quickly enough to stop it from spreading between measurements.</p>
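The frequent-measurement limit described in the last paragraph can be sketched with a toy two-level model (not literal position measurement, but the same mechanism of measuring before the state has spread):

```python
import numpy as np

# Toy Zeno effect for a two-level system: under a resonant drive (Rabi
# frequency omega), the survival amplitude after a time t/N is
# cos(omega*t/(2N)), so N evenly spaced measurements give a survival
# probability of cos(omega*t/(2N))**(2N), which tends to 1 as N grows.
omega, t = 1.0, np.pi   # with a single measurement the state has fully flipped

survival = [np.cos(omega * t / (2 * N)) ** (2 * N) for N in (1, 10, 100, 1000)]
print(survival)   # increases towards 1: frequent measurement freezes the state
```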
120
quantum mechanics
Difference: Fermi wave length vs. phase-breaking length?
https://physics.stackexchange.com/questions/93471/difference-fermi-wave-length-vs-phase-breaking-length
<p>I am reading a quantum transport book, where they often mention the phase-breaking length and the Fermi wavelength. I have looked them up and found that:</p> <p><strong>Phase-breaking length</strong> = length over which an electron retains its phase.</p> <p><strong>Fermi wavelength</strong> = wavelength associated with the maximum energy of an electron (the Fermi energy). This is often comparable to the distance between 2 electrons.</p> <p>What I couldn't find, however (and what I am interested in), is:</p> <p>1) <em>What is the difference between the Fermi wavelength and the phase-breaking length?</em> </p> <p>2) <em>In which transport regimes do they play a role?</em></p>
<p>You have the correct definitions for the two lengths. From them you can see that the two are not connected to each other, so as posed neither of your questions quite makes sense. </p> <p>The Fermi wavelength is a property of a Fermi gas; the phase-breaking length (also called the coherence length) is a property of a coherent gas. Bosons have a coherence length as well, not only fermions, whereas only fermions have a Fermi wavelength... </p> <p>Quantum effects are associated with the coherence length, i.e. quantum effects emerge at length scales below the coherence length. Typically, if you have a mesoscopic structure shorter than the coherence length, then you can see coherent effects (Aharonov-Bohm and the quantum Hall effect, among others). The coherence length depends strongly on the interactions in the system, the temperature, ...</p> <p>There is not a lot of physics associated with the Fermi wavelength, since this is usually the smallest length scale. It is usually of the order of magnitude of the distance between atoms/electrons in a solid. It tells you that there are essentially no occupied states with energy much higher than the Fermi energy in a degenerate fermionic gas. The Fermi wavelength does <em>not</em> depend strongly on temperature, interactions, ... </p>
121
quantum mechanics
Approximating evolution as occurring in a two-dimensional subspace
https://physics.stackexchange.com/questions/93589/approximating-evolution-as-occurring-in-a-two-dimensional-subspace
<p>Suppose you have a quantum system with a Hamiltonian having some number (greater than 2, possibly infinite) of eigenfunctions, and that the system is prepared in the ground state.</p> <p>When can you approximate it as by two-level system (using just the ground state and first excited state)? Is there some property that will make it better approximated by a two-level system (e.g., something like a bigger energy gap between the second and third energy levels)?</p>
<p>It depends very much on the hamiltonian and the external potential. There is a huge class of possible situations and of possible behaviours, and many of those do lend themselves quite often to two-level approximations.</p> <p>The most common, I think, is when you have a weak perturbation which oscillates at the right frequency to couple only two levels and leave out the others. In this scheme, you have a hamiltonian with a discrete spectrum (e.g. an atom), and you start off in the ground state. Then, if </p> <ul> <li>your perturbation is weak, so only single-photon transitions can occur, </li> <li>your perturbation is tuned exactly to the energy difference between the ground and some excited state (not necessarily the first), and most importantly</li> <li>its bandwidth is smaller than the distance to any neighbouring states, so it has essentially no power at those frequency components,</li> </ul> <p>you can essentially ignore all other levels and simply treat your system as having two levels. This scheme, or variations to include more levels with suitably-tuned additional lasers, is essentially everywhere in quantum optics and any quantum information processing with matter systems.</p> <p>Another possible scheme is to have a perturbation that can be strong enough to appreciably shift the energy levels of your initial hamiltonian, but that varies slowly enough. In the adiabatic limit of infinitely slow variations, you will stay in the "dressed" ground state, which is the ground state of the total hamiltonian at each particular instant. As you slowly increase the rate of change of the perturbation, you begin to get <a href="http://homepage.univie.ac.at/mario.barbatti/papers/review/landau-zener.pdf" rel="nofollow">Landau-Zener transitions</a> to the first excited state at the points where the two levels are closest. 
If the second excited state is far away enough, then you can increase the nonadiabaticity strongly enough that you get significant transfer of population between the ground and first excited states, while still ignoring all other levels.</p> <p>Further than that there are, of course, more situations where you can reduce the dimensionality of interest of your problem to two or only a few states, but they get more and more specific. In general, when one is faced with a difficult quantum mechanical problem, a large part of the solution process is finding the subspaces where most of the population is, and coming up with schemes to reduce the dimensionality to something more manageable; once you do that you're essentially in a position to easily solve whatever's left to do.</p>
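A minimal numerical illustration of the first scheme (NumPy sketch; the rotating-frame Hamiltonian, couplings, and parameter values are illustrative assumptions): a drive resonant with the 0-1 transition that also reaches a detuned third level leaves that level essentially empty when the detuning dwarfs the Rabi frequency.

```python
import numpy as np

# Rotating-frame 3-level Hamiltonian (hbar = 1): the drive is resonant on the
# 0 <-> 1 transition with Rabi frequency Omega and also couples 1 <-> 2, but
# level 2 is detuned by Delta.
Omega = 0.1

def max_leakage(Delta, times):
    """Maximum population reaching level 2, starting from the ground state."""
    H = np.array([[0.0,       Omega / 2, 0.0      ],
                  [Omega / 2, 0.0,       Omega / 2],
                  [0.0,       Omega / 2, Delta    ]])
    w, V = np.linalg.eigh(H)                  # spectral decomposition of H
    c0 = V.T @ np.array([1.0, 0.0, 0.0])
    return max(abs(V @ (np.exp(-1j * w * t) * c0))[2] ** 2 for t in times)

times = np.linspace(0.0, 200.0, 401)
print(max_leakage(10.0, times))   # tiny: the two-level approximation holds
print(max_leakage(0.0, times))    # order one: the third level cannot be ignored
```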
122
quantum mechanics
When combining two quantum states is there any rules that say its going to be in a mixed or pure state?
https://physics.stackexchange.com/questions/95793/when-combining-two-quantum-states-is-there-any-rules-that-say-its-going-to-be-in
<p>I have two quantum systems, a probe and a target that have been entangled. The probe is prepared in a mixed state and so has the target. Is the combined system therefore a mixed state?</p> <p>If I had prepared my target in a pure state and probe in a mixed is the resulting combined system still a mixed state?</p>
<p>So, if I understand you correctly, you have a system with two parts, the probe and the target, i.e. the density matrices $\rho_{probe}$ and $\rho_{target}$ are the reduced density matrices of a state $\rho$ describing the whole system.</p> <p>Let us first consider the case that you don't have entanglement. If the two states are not correlated in any way, the combined state is just the product of the two density matrices. Hence, given $\rho_{probe}$ and $\rho_{target}$, the combined system will be $\rho_{probe}\otimes \rho_{target}$. Then we obtain:</p> <ul> <li>two pure states combine to a pure state</li> <li>a mixed state and a pure state combine to a mixed state</li> <li>two mixed states combine to a mixed state</li> </ul> <p>The reason is that pure states are (let's stick to finite dimensional systems for simplicity; everything should also be true in infinite dimensions) rank-one density matrices and the rank is multiplicative. Basically the same holds true for classical correlations (i.e. when the combined system is separable).</p> <p>Now, if the combined system has entanglement, hence the state is not a product, you need to be more careful. Let's first consider the case that one state is pure (wlog $\rho_{probe}$ is pure) and the other is mixed. In this case, the state $\rho$ of the two systems will always be mixed. The reason is that if $\rho$ were pure, we would have $S(\rho_{probe})=S(\rho_{target})$, i.e. the two von Neumann entropies would be equal. Since $S(\rho_{probe})=0$ as it is pure and $S(\rho_{target})\neq 0$ as it is mixed, this cannot happen. In fact, when $\rho_{probe}$, which is a reduced density matrix of $\rho$, is pure, then the whole system must be in a product state, i.e. there cannot be any entanglement at all!</p> <p>So if you have genuine entanglement in $\rho$, then both reduced density matrices must be mixed.
In this case, the above reasoning tells you that if both target and probe are mixed, the state of the whole system may well be pure; this can happen when $S(\rho_{target})=S(\rho_{probe})$. The easiest example is given by a bipartite maximally entangled pure state, since then each of the two parts (i.e. probe and target) is in a maximally mixed state.</p>
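The last statement is easy to verify numerically (NumPy sketch with the Bell state $|\Phi^+\rangle$):

```python
import numpy as np

# Maximally entangled two-qubit state |Phi+> = (|00> + |11>) / sqrt(2)
phi = np.zeros(4, dtype=complex)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi.conj())

# The total state is pure: purity Tr(rho^2) = 1
print(np.trace(rho @ rho).real)               # 1.0

# ...yet the reduced state of either part is maximally mixed: Tr_B(rho) = I/2
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A.real)                             # [[0.5, 0.0], [0.0, 0.5]]
```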
123
quantum mechanics
Superposition of electron and positron particle states
https://physics.stackexchange.com/questions/31793/superposition-of-electron-and-positron-particle-states
<p>Let $b_k^\dagger ,b_k$ represent the creation and annihilation operators for an electron in state $k$. Let $d_j^\dagger ,d_j$ represent the same for a positron in state $j$. And let $|0\rangle$ represent the vacuum.</p> <p>Is it possible to have a state described by $ \left( b_k^\dagger + re^{i\theta} d_k^\dagger \right)|0\rangle $? I include the $re^{i\theta}$ for generality.</p> <p>How do I interpret such a state? If I make measurements of the number of particles, the energy, the momentum, the charge, etc... what would I observe?</p> <p>The question of how many particles is easy. The answer is 1. (On that note, can we have superpositions of states with differing number of particles?)</p> <p>What the energy and momentum are depends on what the labels $k$ and $j$ mean.</p> <p>But what about charge? What would I measure for charge? If we enclosed the system in a box that measured the electric field, what would we get for $\oint{\vec{E}}\cdot d\vec{A}$?</p> <p>Thanks!</p>
<p>The Vacuum sector Hilbert space is generated by the action of $U(1)$ invariant operators $\overline{\hat\psi(x)}\Gamma\hat\psi(x)$ (and their Fourier transforms, where $\Gamma$ is an arbitrary Dirac matrix) on the Lorentz and translation invariant vacuum vector, which does not allow the state you ask about to be constructed because $U(1)$ invariant operators do not change the charge. One can also discuss this in terms of superselection rules, which is, however, an alternative presentation of $U(1)$ (or other) invariance of observables. [Strictly speaking, note that $\hat\psi(x)$ and its Fourier transform are operator-valued distributions, not operators, which introduces issues that anyone who might be concerned about such details can fill in.]</p> <p>The question of how many particles there are in a given state is best answered in aggregate: what is the aggregate number of electrons minus the number of positrons, which for the vacuum sector is zero. One can create other Hilbert spaces, with different aggregate numbers of electrons/positrons, which can then be used to construct mixed states, but not superpositions.</p> <p>There is, however, a lot else that could be said.</p>
124
quantum mechanics
Is this paragraph on probabilities of sub atomic particles accurate?
https://physics.stackexchange.com/questions/34035/is-this-paragraph-on-probabilities-of-sub-atomic-partials-accurate
<p>I am working on a concept for something and I want to make sure I understand something clearly before I start on everything else. Note, my project is more about the interactions of elements of complex systems rather than physics. I'm just using this paragraph as an example, not doing a project on quantum mechanics. Is this paragraph accurate:</p> <p>The universe is built on probabilities. In the whole scope of the universe, absolute mathematics and absolute certainty do not exist. It's incorrect to say 1 + 1 = 2; the more accurate way to say it is that 1 + 1 probably equals 2, but not always. It is all based on the probability that what you expect to happen, will. It's quantum probabilities that dictate that a group of subatomic particles, at this exact moment in time, line up precisely in a certain way that allows an atom to exist. That atom lives in an environment (dictated by other probabilities) that has a calculable probability to grab onto another atom and form an object with mass. That object was hammered into a shape by a craftsman trusting that the probabilities dictating the actions of the subatomic particles allow it to not crack, or disintegrate, or break the tool. That object is now sitting on your desk holding your soft drink in the form of a can. The subatomic particles that are almost insignificantly small form a thread of interaction that leads up to that can sitting on your desk. If that thread breaks at any point, that can ceases to exist as you know it and your soda is running all over your desk. </p> <p>Thanks everyone, I'm not sure if this falls in the scope of this site, but I can't think of any other place to ask this question.</p>
<blockquote> <p>absolute certainty do not exist.</p> </blockquote> <p>I'm absolutely certain that I exist and more, I am absolutely certain that Existence exists and that I am aware of it.</p> <p>It is no more incorrect to say that 1 + 1 = 2 than it is incorrect to say that all bachelors are unmarried men.</p> <p>To say that there is some non-zero probability that there is a married bachelor is to reveal an ignorance of the concept labelled by "bachelor". Likewise, to say 1 + 1 does not always equal 2 is to reveal an ignorance of the mathematical concepts in the statement "1 + 1 = 2".</p> <p>Also please note the near self-refuting nature of your entire paragraph. You start out with the statements that absolute certainty does not exist and that all is probabilities. But then, you go on to make a number of statements without any qualification (or recognition?) whatsoever that these statements, by your previous opening statements and including those, must not always be true.</p>
125
quantum mechanics
Beginning with an arbitrary classical equation for energy, how do I get the QM Hamiltonian?
https://physics.stackexchange.com/questions/35773/beginning-with-an-arbitrary-classical-equation-for-energy-how-do-i-get-the-qm-h
<p>For linear momentum I can use the de Broglie equation, but what about energy in terms of moment of inertia or some other form? </p>
126
quantum mechanics
Does a particle lose its (location) wavefunction if its location is measured exactly?
https://physics.stackexchange.com/questions/35827/does-a-particle-lose-its-location-wavefunction-if-its-location-is-measured-exa
<p>As the title says, does a particle lose its location wavefunction if its location is measured exactly (I know this would be impossible in reality)?</p> <p>Also, in reality, if one measures a particle, does the wavefunction of a particle become something different from original afterwards?</p>
<p>The particle doesn't "lose" its location wavefunction, rather its wavefunction changes to one which is sharply peaked around the location which results from the measurement. In the Copenhagen interpretation, the wavefunction changes smoothly with time until a measurement, such as this one, is performed, after which it abruptly changes to a new one. The wavefunction represents the particle state, and in the language of states, the state evolves unitarily until a measurement is performed, at which time it abruptly changes to an eigenstate of the operator representing the quantity being measured.</p> <p>Note that there are different pictures to this Copenhagen one, in particular ones involving <a href="http://en.wikipedia.org/wiki/Quantum_decoherence#Measurement" rel="nofollow">decoherence</a>, which do a more thorough job of describing what happens to the wavefunctions of the measuring apparatus, measured system and environment. These show how the <em>appearance</em> of a sudden change in the particle wavefunction can arise, whilst in reality, everything continues to evolve unitarily.</p>
127
quantum mechanics
Is emission/absorption of a photon lossy?
https://physics.stackexchange.com/questions/35343/is-emission-absorption-of-a-photon-lossy
<p>I recall vaguely that energy is absorbed/radiated in packets called quanta. Quanta were what are now known as photons. </p> <p>What I'm curious about - Is absorption/radiation vis-a-vis photon lossy? Do the total number of photons exactly match the energy acquired/released?</p>
<p>It is not a lossy process: for there to be a loss in one place, there would have to be a gain somewhere else. When the atom releases the photon, the recoil of the atom carries away a small amount of energy, but 100% of the energy lost by the atom is accounted for by the photon plus that recoil; nothing is dissipated.</p>
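To make the bookkeeping concrete, here is a small numerical sketch (not from the answer; the transition energy and atomic mass below are hypothetical) of how the released energy is shared between the photon and the recoiling atom:

```python
# Hypothetical numbers: a 2 eV transition in an atom of mass ~23 u.
E_transition_eV = 2.0
M_c2_eV = 23 * 931.494e6        # rest energy M c^2 of the atom, in eV

# Non-relativistic recoil energy carried by the atom: E_r = E_photon^2 / (2 M c^2)
E_recoil_eV = E_transition_eV**2 / (2 * M_c2_eV)
E_photon_eV = E_transition_eV - E_recoil_eV

# Nothing is lost: photon energy plus recoil energy equals the transition energy.
print(E_photon_eV + E_recoil_eV, E_recoil_eV)
```

The recoil share is of order $10^{-10}$ of the photon energy here, which is why it is usually negligible.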
128
quantum mechanics
Energy is quantized
https://physics.stackexchange.com/questions/38433/energy-is-quantized
<p>How can energy be quantized if we can measure energies like 1.56364, 5.7535, or 6423.654 kilojoules, with decimals? Thanks.</p> <p>Also, doesn't quantization mean that energy comes in indivisible units, so that you cannot divide, say, one such unit of energy?</p>
<p>Typically, in quantum mechanics, bound states are quantized and free/scattering states are not. This is because bound states, by the mere fact that they're confined to a certain region, have to satisfy certain boundary conditions, and these conditions cannot be satisfied over a continuous range of energies. </p> <p>The classic example of this is the infinite square well potential, where $V(x) = 0$ if $0&lt;x&lt;a$, and $V(x) =\infty$ elsewhere. Then, the particle will have zero probability of appearing outside of the well, and will have to satisfy the zero-potential Schrödinger equation $E\psi = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi$ inside of the well. For simplicity, we'll only consider one-dimensional motion.</p> <p>In this case, we see right away that the basis solutions have to be of the form $\psi = A\sin\left(\frac{\sqrt{2mE}}{\hbar}x+\phi\right)$, and we also know that the wave function must be continuous and that it is restricted to be zero for $x&lt;0$ and $x&gt;a$. We can satisfy the first boundary condition by choosing $\phi=0$, but the second one is not satisfied for all values of the energy. Instead, it is necessary that $\frac{\sqrt{2mE}}{\hbar}a=n\pi$, where $n$ is some integer. Thus, the allowed energies of 'pure' states of this system are quantized, and take the values $E_{n} = \frac{n^{2}\pi^{2}\hbar^{2}}{2ma^{2}}$. </p> <p>For any other bound state, you will find yourself using similar logic about boundary conditions, albeit with much, much more complexity. Note, however, that we can also construct a general state out of the energy eigenstates, $\Psi = \sum a_{n}\psi_{n}$, and that the expectation value of the energy of $\Psi$ will be $\sum|a_{n}|^{2}E_{n}$, so the "average" energy of a state is still allowed to be continuous (and in the case of the infinite square well, can actually take any value greater than the ground state energy).</p>
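As a quick numerical illustration of the result above (a sketch; the electron mass and the 1 nm well width are chosen purely for concreteness), the spectrum is discrete and scales as $n^2$:

```python
import math

hbar = 1.054571817e-34   # J s
m = 9.109e-31            # kg, electron mass (illustrative)
a = 1e-9                 # m, well width (illustrative)

def E(n):
    """Energy of the n-th infinite-square-well level: E_n = n^2 pi^2 hbar^2 / (2 m a^2)."""
    return n**2 * math.pi**2 * hbar**2 / (2 * m * a**2)

# Only these discrete energies are allowed, and E_n / E_1 = n^2.
ratios = [E(n) / E(1) for n in (1, 2, 3, 4)]
print(ratios)  # ~[1, 4, 9, 16]
```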
129
quantum mechanics
Something I don&#39;t understand in Quantum Mechanics
https://physics.stackexchange.com/questions/39540/something-i-dont-understand-in-quantum-mechanics
<p>I've just started on QM and I'm puzzled with a lot of new ideas in it.</p> <p>1.On a recent lecture I've attended, there is an equation says: $\langle q'|\sum q|q\rangle \langle q|q' \rangle =\sum q \delta(q,q')$</p> <p>I don't understand why $\langle q'|q\rangle \langle q|q' \rangle =\delta (q,q')$</p> <p>Can you explain this equation for me?</p> <p>2.Actually, I'm still not clear about the bra-ket notation. I've learnt the bra and the ket could be considered as vectors. Then what are the elements of the vectors?</p> <p>Thank you very much!</p>
<ol> <li><p>The equation is true if $|q\rangle$, $|q'\rangle$ are chosen from an orthonormal set of vectors, such as an eigenbasis of an operator. Then, by definition, $\langle q|q' \rangle = \delta_{q,q'}$.</p></li> <li><p>$| q \rangle$ just denotes some vector labeled $q$ in some Hilbert space. The dimension equals the number of distinct classical states that your system can be in.</p></li> </ol>
130
quantum mechanics
Is there a quantum state for a large system
https://physics.stackexchange.com/questions/41740/is-there-a-quantum-state-for-a-large-system
<p>My understanding of quantum mechanics is that the state of a system is represented by a vector in multidimensional complex vector space. Is there, in principal, a state vector that represents a large, classical object such as, say, a cheeseburger, at an instant in time? If so, what is the physical meaning of that "state"?</p>
<p>Quantum states of macroscopic systems are routinely considered in statistical mechanics. They are used to derive both the thermodynamic properties of macroscopic materials and the way those materials deform and respond to external forces. </p> <p>However, these macroscopic quantum states are never described by state vectors (pure states) but always by density matrices (mixed states).</p> <p>In practice, quantum derivations are restricted to simple and fairly homogeneous materials, because of the difficulty of working numerically with more complex states. But in principle there is no limit on the size of system to which quantum mechanics applies; in particular, it would apply to a cheeseburger if one were to model it as an $N$-particle system with $N$ of the order of $10^{25}$. For example, it is applied to derive the conditions of the hydrodynamic reactive flow in the interior of the sun. (Although the sun's apparent size is similar to that of a cheeseburger, its true size is much bigger.) </p> <p>The state of a quantum system (no matter whether consisting of a single qubit or of $10^{25}$ atoms) describes all properties of the system that can possibly be measured.</p>
131
quantum mechanics
How can I prove this inequality?
https://physics.stackexchange.com/questions/44218/how-can-i-prove-this-inequality
<p>Prove that $$ \lambda _{1}\lambda _{2}^{*}\varphi _{1}\varphi _{2}^{*}+\lambda _{1}^{*}\lambda _{2}\varphi _{1}^{*}\varphi _{2} \leq \left | \lambda _{1} \right |\left | \lambda _{2} \right |\left \{ \left | \varphi _{1} \right |^{2}+\left | \varphi _{2} \right |^{2} \right \} $$ where all symbols are complex numbers.</p> <p>I encountered this while trying to prove that the set of all square integrable functions form a vector space.</p>
<p>OP's inequality (v3) is obvious if either $\lambda_1=0$ or $\lambda_2=0$, so we may assume that $\lambda_1\neq 0$ and $\lambda_2\neq 0$.</p> <p>Define $$\phi_1:=\sqrt{\frac{\lambda_1\lambda_2^*}{|\lambda_1\lambda_2|}}\varphi_1,$$</p> <p>and</p> <p>$$\phi_2:=\sqrt{\frac{\lambda_1^*\lambda_2}{|\lambda_1\lambda_2|}}\varphi_2.$$</p> <p>Then OP's inequality becomes</p> <p>$$2{\rm Re}(\phi_1\phi_2^*) \leq |\phi_1|^2 + |\phi_2|^2, $$</p> <p>or equivalently </p> <p>$$ |\phi_1-\phi_2 |^2 \geq 0, $$ </p> <p>which is true.</p>
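A quick numerical spot-check of the inequality (not part of the proof, just a sanity check): sample random complex values and verify $\lambda_1\lambda_2^*\varphi_1\varphi_2^* + \text{c.c.} \leq |\lambda_1||\lambda_2|\left(|\varphi_1|^2+|\varphi_2|^2\right)$.

```python
import random

random.seed(0)

def rand_c():
    return complex(random.uniform(-1, 1), random.uniform(-1, 1))

violations = 0
for _ in range(1000):
    l1, l2, p1, p2 = rand_c(), rand_c(), rand_c(), rand_c()
    # The LHS is a term plus its complex conjugate, i.e. 2 Re(l1 l2* p1 p2*).
    lhs = 2 * (l1 * l2.conjugate() * p1 * p2.conjugate()).real
    rhs = abs(l1) * abs(l2) * (abs(p1)**2 + abs(p2)**2)
    if lhs > rhs + 1e-12:
        violations += 1
print(violations)  # 0
```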
132
quantum mechanics
Origin of exchange interactions
https://physics.stackexchange.com/questions/46185/origin-of-exchage-interactions
<p>Can someone explain to me the origin of the exchange interaction between two electrically charged spin 1/2 fermions? Quantitative or qualitative accepted. </p>
<p>The wave function is antisymmetric under exchange of (all) the coordinates of each electron (we'll just call them electrons since that's shorter than "two electrically charged spin 1/2 fermions" and equivalent). We'll write the wave function as: \begin{align} \Psi(1,2) &amp;= \psi_1(\mathbf{r}_1)\psi_2(\mathbf{r}_2)|s_1s_2\rangle -\psi_1(\mathbf{r}_2)\psi_2(\mathbf{r}_1)|s_2s_1\rangle. \end{align} (Check that this is antisymmetric under $\mathbf{r}_1 \leftrightarrow \mathbf{r}_2$ &amp; $s_1 \leftrightarrow s_2$.)</p> <p>Now let's calculate the force between these due to some two-body operator $V(\mathbf{r}_1,\mathbf{r}_2)$ (that might depend on spin). It's proportional to (I'm ignoring normalization): \begin{align} &amp;\int d^3r_1 d^3r_2 \Psi^\dagger(1,2)V(\mathbf{r}_1,\mathbf{r}_2)\Psi(1,2)\\ &amp;= 2\int d^3r_1 d^3r_2 \Big\{ |\psi(\mathbf{r}_1)|^2|\psi(\mathbf{r}_2)|^2 \langle s_1 s_2|V(\mathbf{r}_1,\mathbf{r}_2)|s_1 s_2\rangle\\ &amp;- \mbox{Re}[\psi_1^*(\mathbf{r}_1)\psi_2(\mathbf{r}_1)\psi_2^*(\mathbf{r}_2)\psi_1(\mathbf{r}_2) \langle s_1 s_2|V(\mathbf{r}_1,\mathbf{r}_2)|s_2 s_1\rangle]\Big\} \end{align} The second term is the exchange term. Note that the square of the wave functions in the first term are proportional to particle densities at $\mathbf{r}_1$ and $\mathbf{r}_2$. While the second term has different wave functions, $\psi^*_1(\mathbf{r}_1)\psi_2(\mathbf{r}_1)$ at the point $\mathbf{r}_1$.</p> <p>The short answer is: "The origin of the exchange interaction is the property of definite parity under coordinate exchange of the wave function." (Often the answer is given that it's the Pauli principle. But this is incomplete. Systems of bosons with internal coordinates (flavor, spin, etc.) enjoy the exchange interaction too. Prove this.) Note too that this holds for any two-body operator.</p>
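The "check that this is antisymmetric" exercise can be done numerically. Here is a minimal sketch with made-up single-particle orbitals and a toy stand-in for the spin amplitude (none of these functions are from the answer); it only illustrates that exchanging all coordinates, position and spin together, flips the sign of $\Psi$:

```python
import math

# Made-up orbitals, purely for illustration.
def psi1(x): return math.exp(-x**2)
def psi2(x): return x * math.exp(-x**2)

def spin(s1, s2):
    # Toy amplitude standing in for |s1 s2>, with s = 0 (up) or 1 (down).
    return (1.0 if s1 == 0 else 2.0) * (1.0 if s2 == 0 else 3.0)

def Psi(x1, s1, x2, s2):
    # The antisymmetrized combination from the answer.
    return psi1(x1) * psi2(x2) * spin(s1, s2) - psi1(x2) * psi2(x1) * spin(s2, s1)

# Exchanging ALL coordinates (positions and spins) flips the sign:
a = Psi(0.3, 0, 0.7, 1)
b = Psi(0.7, 1, 0.3, 0)
print(abs(a + b) < 1e-15)  # True
```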
133
quantum mechanics
Differential of Quantum mean value or expectation value
https://physics.stackexchange.com/questions/46722/differential-of-quantum-mean-value-or-expectation-value
<p>How does one take the differential of a quantum mean (expectation) value of a Hermitian operator, $$d\langle \hat A\rangle,$$ or its time evolution, $$\frac {d\langle \hat A\rangle}{dt}?$$</p> <blockquote> <p>What is the problem here? Let me be a little more specific, in three steps. 1. In classical physics the differential of momentum is: $$dp=d(mv)=m\,dv$$ 2. In relativistic physics the differential of momentum is: $$dp=d(mv)=m\,dv + v\,dm$$ 3. What about the differential of the relativistic quantum mean value of the momentum operator? $$d\langle \hat P\rangle=?$$</p> </blockquote>
<p>In fact, you missed another type of force:</p> <p>In Lagrangian mechanics there is a scalar potential field $V$ whose negative gradient is the force: $$F=-\nabla V$$</p> <p>and this is exactly what we are dealing with in QM (Ehrenfest's theorem):</p> <p>$$\frac {d \langle \hat P\rangle}{dt}=\langle \hat F\rangle=\langle -\nabla V\rangle$$</p>
134
quantum mechanics
Quantum mechanics in macroscopic systems
https://physics.stackexchange.com/questions/51570/quantum-mechanics-in-macroscopic-systems
<p>I don't understand the superposition principle in quantum mechanics or the collapse of wave-function (I think it's impossible for me to understand it) My question is: </p> <p>Is it possible to demonstrate the quantum mechanical behaviour (Superposition and wavefunction collapse, etc.) in some macroscopic systems under certain conditions so it may be better to understand it? Is it possible to make the wavefunctions of atoms in some material more or less coherent?</p>
<p>You've asked two questions. Firstly is it possible to see superposition in macroscopic systems? The answer is yes. <a href="http://www.scientificamerican.com/article.cfm?id=quantum-microphone" rel="nofollow noreferrer">This article</a> describes making a tiny "tuning fork" that can be put into a superposition of different vibrational states.</p> <p>Secondly you ask "Is it possible to make the wavefunctions of atoms in some material more or less coherent?". Again the answer is yes, and a good example is a <a href="http://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_condensate" rel="nofollow noreferrer">Bose-Einstein condensate</a>.</p> <p>As <a href="https://physics.stackexchange.com/users/17609/gugg">Gugg</a> points out, there is a good Wikipedia article on <a href="https://en.wikipedia.org/wiki/Macroscopic_quantum_phenomena" rel="nofollow noreferrer">Macroscopic Quantum phenomena</a>.</p>
135
quantum mechanics
Quantum mechanics, what&#39;s possible?
https://physics.stackexchange.com/questions/51627/quantum-mechanics-whats-possible
<p>There is a thread in Physicsforums.com which states due to Quantum Mechanics, if you wait long enough diamonds will appear in your pocket, it also states it's possible for all your atoms to spontaneously re-arrange themselves so you turn into a Boeing airplane. Surely this is fiction?</p>
<blockquote> <p>There is a thread in Physicsforums.com which states due to Quantum Mechanics, if you wait long enough diamonds will appear in your pocket, it also states it's possible for all your atoms to spontaneously re-arrange themselves so you turn into a Boeing airplane. Surely this is fiction?</p> </blockquote> <p>No, it could be possible. That's why we say that in Quantum Mechanics we deal with probabilities. As Gell-Mann said: <em>That which is not forbidden is mandatory.</em></p> <p>But at our scale, the probability that this type of effect occurs is so small that it's negligible.</p>
136
quantum mechanics
Intuitive meaning of the Hilbert Space formalism
https://physics.stackexchange.com/questions/52252/intuitive-meaning-of-the-hilbert-space-formalism
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://physics.stackexchange.com/questions/48469/intuitive-meaning-of-hilbert-space-formalism">Intuitive meaning of Hilbert Space formalism</a> </p> </blockquote> <p>I am totally confused about the Hilbert space formalism of Quantum Mechanics. Can somebody please elaborate on the following points:</p> <ol> <li>the observables are given by self-adjoint operators on the Hilbert space.</li> <li>the Gelfand-Naimark theorem implies a duality between states and observables</li> <li>what's the significance of the spectral decomposition theorem in this context?</li> <li>What does the Hilbert space itself correspond to, and why are states given as functionals on the Hilbert space? I need a real picture of this.</li> </ol>
137
quantum mechanics
How do particles, such as electrons become visible?
https://physics.stackexchange.com/questions/53422/how-do-particles-such-as-electrons-become-visible
<p>Quantum mechanics says that atoms are invisible - they do not have some specified location, only a probability distribution. So, how can we see them? If there is to be particle-antiparticle annihilation (or other interactions), the particles must have a fixed location, right? So, is this process just random? Is it impossible to know whether a pair will annihilate, but only know the probability?</p>
<p>Firstly, let's address your statement that</p> <blockquote> <p>Quantum mechanics says that atoms are invisible.</p> </blockquote> <p>I would say that this is misleading <em>at best</em>. Quantum mechanics says something more along the lines of "before we make a measurement of the position of a particle, we can only know the probability that, if we were to make a measurement, the particle would be found in some localized region." This is an example of what is called the <a href="http://en.wikipedia.org/wiki/Born_rule" rel="nofollow">Born rule</a>. Once you make a measurement of the position of a particle, you can say with certainty that the particle is where you measured it to be, but you cannot have the same sort of certainty before the measurement is made.</p> <p>When speaking of particle annihilation, we can certainly say that if we observed an annihilation to occur at a certain position, then the two particles that participated in the annihilation were both there during the <a href="http://en.wikipedia.org/wiki/Annihilation" rel="nofollow">annihilation</a> event, even though we may have had only probabilistic information about their positions before the event.</p> <p>Hope that helps!</p> <p>Cheers!</p>
138
quantum mechanics
Given a state function of a particle, can we determine its mass?
https://physics.stackexchange.com/questions/53429/given-a-state-function-of-a-particle-can-we-determine-its-mass
<p>The quantum state of a system is supposed to contain all the information that can be obtained about the system such as its energy, momentum...etc.</p> <p>So I have 2 questions:</p> <p>1-If someone gave us a quantum state of a single particle, can we tell of what mass it is?</p> <p>2-Another question is that, given a quantum state can one tell what the Hamiltonian is?</p>
<blockquote> <p>1-If someone gave us a quantum state of a single particle, can we tell of what mass it is?</p> </blockquote> <p>If you know the energy and momentum, yes, you have the mass. </p> <p><img src="https://i.sstatic.net/8P3Kx.png" alt="mass"></p> <blockquote> <p>2-Another question is that, given a quantum state can one tell what the Hamiltonian is?</p> </blockquote> <p>It depends on how you are given the information about the quantum state. If it is given in terms of mathematical functions, for example Bessel functions, then a reasonable guess might give you the Hamiltonian. If it is measurements, as in high energy physics experiments, fits need to be made to various Hamiltonian guesses and the goodness of each fit deduced. At the moment particle physics is trying to fit the Standard Model and the process seems to be converging.</p> <p>If what you are given is an instantaneous wave function of one particle, no: one needs to accumulate the probability distribution from many measurements.</p>
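To make the first point concrete, a minimal sketch (with illustrative numbers for an electron) of recovering the mass from the relativistic invariant $m^2c^4 = E^2 - (pc)^2$:

```python
# Illustrative values: an electron with rest energy 0.511 MeV and pc = 1 MeV.
m_c2 = 0.511                    # MeV
pc = 1.0                        # MeV
E = (m_c2**2 + pc**2) ** 0.5    # total energy of this particle

# Given a measured (E, p), the mass follows uniquely from the invariant:
m_recovered_c2 = (E**2 - pc**2) ** 0.5
print(m_recovered_c2)  # ~0.511
```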
139
quantum mechanics
What happens if an atom absorbs a photon of energy higher than first excited state but lower than second excited state?
https://physics.stackexchange.com/questions/53790/what-happens-if-an-atom-absorbs-a-photon-of-energy-higher-than-first-excited-sta
<p>Since the energy levels of atoms are quantized, I was wondering what happens if an electron is hit by a photon whose energy is higher than electron's first excited state but lower than second excited state. Does it excite to the first excited state? If yes, what happens to the remaining energy?</p>
<p>Let the electron ground state have energy $E_g$, let the first excited state have energy $E_1$, and let the second excited state have energy $E_2$.</p> <p>Let the energy of the photon be given by $E_p = hf$.</p> <p>Now it isn't the energy of the excited states that is important in transitions, but the <strong>energy differences</strong> between states.</p> <p>So instead I'll assume you mean the following condition: $(E_1 - E_g) &lt; hf &lt; (E_2 - E_g)$.</p> <p>Now as the atom absorbs $hf$, it causes the electron to transition to the first excited state. There is therefore excess energy of $hf - (E_1 - E_g)$. This excess energy is either converted to kinetic energy of the atom, or is re-emitted as a new photon with lower frequency and energy $E' = hf'$.</p>
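A tiny numerical sketch of this bookkeeping (the level energies and the photon energy below are hypothetical):

```python
# Hypothetical level energies in eV.
E_g, E_1, E_2 = 0.0, 10.2, 12.1
hf = 11.0   # photon energy, eV, between the two transition energies

# The condition from the answer: (E_1 - E_g) < hf < (E_2 - E_g).
assert (E_1 - E_g) < hf < (E_2 - E_g)

# Energy left over after the g -> 1 transition; it goes into kinetic
# energy of the atom or a re-emitted lower-energy photon.
excess = hf - (E_1 - E_g)
print(excess)  # ~0.8 eV
```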
140
quantum mechanics
Force analysis of silver atom in Stern–Gerlach experiment
https://physics.stackexchange.com/questions/59491/force-analysis-of-silver-atom-in-stern-gerlach-experiment
<p>In this experiment we only consider the force in the z direction, but according to Maxwell's equations the $\vec B$ field gradient does not exist exclusively in the z direction. So why don't we see splitting in other directions?</p>
<p>The point made by Otto Stern in the <a href="http://positron.physik.uni-halle.de/F-Praktikum/PDF/39_zphys1921v7_249_stern.pdf" rel="nofollow">original publication (German)</a> is that the contribution of $\frac{\partial \vec{B}}{\partial x}$ and $\frac{\partial \vec{B}}{\partial y}$ can be neglected when averaging over a suitable period of time.</p> <p>In that case the force (the negative gradient of the potential energy $-\vec{\mu}\cdot\vec{B}$) is only relevant along the z-axis, but technically you are then talking about a time-average of the force.</p>
141
quantum mechanics
Indistinguishable particles in quantum mechanics
https://physics.stackexchange.com/questions/59570/indistinguishable-particles-in-quantum-mechanics
<p>If you have two particles of the same species, quantum mechanics says that $\Phi_{m_{1},x_{1},p_{1},m_{2},x_{2},p_{2}}=\alpha\Phi_{m_{2},x_{2},p_{2},m_{1},x_{1},p_{1}}$. But I don't understand why $\alpha$ doesn't depend on $x$, $p$. If $\alpha$ depended on $SO(3)$ invariants such as $x^2$, $x\cdot p$, $p^2$, etc., it would be the same in all reference frames, so why does one require that it doesn't depend on these variables? Even if it depends on $p$, $x$, $\alpha$ is a phase factor, so it doesn't affect anything; why should this be important?</p> <p>EDIT: I figured out the answer to the second question, for $\alpha$ is a complex number that carries no indices, so it cannot change.</p>
<p>Let us step back for a moment to answer your question.</p> <p>We consider a system of $n$ indistinguishable particles. What does that mean? Let $S_n$ be the set of all permutations of $n$ elements, and let $\sigma \in S_n$. Then if $P(\sigma)$ is the (unitary) operator representing $\sigma$ on the $n$-particle Hilbert space $\cal H$, the property of "indistinguishability" means that the two vectors $|\psi\rangle$ and $P(\sigma)|\psi\rangle$ represent the same physical state, and this should be true for <em>any</em> state $|\psi\rangle \in {\cal H}$. In other words, we must have $$ P(\sigma) = e^{i \chi(\sigma)} {\mathbf 1} $$ where $\chi(\sigma)$ is some real number.</p> <p>If I understand your question correctly, what you ask is why the number $\chi(\sigma)$ couldn't instead be an operator depending on the momentum operator $\hat {\vec P}$ or the position operator $\hat {\vec R}$ (for example). But then $P(\sigma)$ would not be proportional to the identity operator $\mathbf 1$, violating the above conclusion.</p> <p>I hope this helps!</p>
142
quantum mechanics
Is a blackbody real or imagined?
https://physics.stackexchange.com/questions/59894/is-a-blackbody-real-or-imagined
<p>In my reading of blackbody radiation I am always asked to imagine this or that body being a perfect absorber or emitter of radiation, and I am always left with the impression that a blackbody exists only as a theoretical construct. But is it? Or can one be constructed and tested? </p>
<p>A <a href="http://en.wikipedia.org/wiki/Black_body" rel="nofollow">blackbody</a> is only theoretical. In other words, it is an idealization: no real body has been observed with such perfect emissivity. I think the Wiki article covers this well. Its first statement:</p> <blockquote> <p><em>A blackbody is an idealized....</em></p> </blockquote> <p>With the help of such a body emitting radiation, we can compare the spectrum of a real body against the family of blackbody spectra and thereby determine its temperature.</p> <p>For example, the temperature of the sun, of molten iron, of us, or of any other <em>hot</em> body can be determined by observing its spectrum and comparing it with blackbody spectra. The spectrum closely matches that of a blackbody at one particular temperature.</p> <p><em>Thanks to Max Planck</em> ;-)</p>
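As a concrete sketch of this temperature-matching idea (using Wien's displacement law, which follows from Planck's blackbody spectrum; the solar peak wavelength below is approximate):

```python
# Wien's displacement law for an ideal blackbody: lambda_peak * T = b.
b = 2.897771955e-3      # Wien's displacement constant, m*K
lambda_peak = 502e-9    # m, approximate peak of the solar spectrum

T = b / lambda_peak     # temperature of the best-matching blackbody
print(round(T))  # ~5772 K, close to the Sun's surface temperature
```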
143
quantum mechanics
Free 1d proton in magnetic field
https://physics.stackexchange.com/questions/62061/free-1d-proton-in-magnetic-field
<h2>Question Statement</h2> <blockquote> <p>Consider a proton which has spin $1/2$ that is free to move throughout all locations $-\infty&lt;x&lt;\infty$. A magnetic field of constant magnitude $B_{\circ}$ is applied perpendicular to the $x$ axis. Let the Hamiltonian be given by: $$ \mathcal{H}=\frac{\hat{p}^2_{x}}{2m_{p}} - \mu_{p}\hat{S}_{z}B_{\circ} $$ What are the stationary states of this system, the energies of these states and the degeneracy of the states?</p> </blockquote> <p>I'm not 100% clear on how to deal with Hamiltonians that operate on position space <em>and</em> spin space. From what I can tell, operators act on one or another, but not both. If that's true, then I would say that the spatial component of the wavefunction is just a plane wave $e^{ikx}$ since there's no potential energy. From there you could impose box normalization, which would make the energies just the energies of an infinite square well of length $L$ but it doesn't seem right to have to introduce the parameter $L$ like that. Is this approach correct?</p> <p>The spin component is the vector $(\chi_{+}(t),\chi_{-}(t))^{T}$, which solves $- \mu_{p}\hat{S}_{z}B_{\circ}|\chi\rangle=i\hbar\frac{\partial}{\partial t}|\chi\rangle$. It seems like this would contribute some energy to the system, but I'm not sure how to find it. Is there any energy associated with this part? </p>
<p>One way to think about this is that the particle has two completely separate degrees of freedom, one associated to translations (i.e. position, momentum) and one associated to spin. You are right that in this case the two parts of the Hamiltonian act only on one or the other of these degrees of freedom. The operator $\hat{p}_x \equiv \hat{p}_x \otimes \hat{1}$ acts as the identity (it does nothing) on the spin degrees of freedom. Likewise, the operator $\hat{S}_z$ should technically be written as $\hat{1}\otimes \hat{S}_z$, because it is the identity on the translational degrees of freedom.</p> <p>The state of the system is a direct product of translational and spin states:</p> <p>$$|\Psi\rangle = |\psi\rangle \otimes |\chi\rangle $$</p> <p>where $$\langle x |\psi \rangle = \psi(x)$$ is the wavefunction in the position representation, while $$|\chi\rangle = a |\uparrow\rangle + b|\downarrow\rangle$$ is the spin state, with $a$ and $b$ some complex numbers satisfying $|a|^2 +|b|^2 = 1$.</p> <p>If you want to normalise the position wavefunction, you need to introduce a length scale. The usual procedure in this situation is to introduce periodic boundary conditions, i.e. $\psi(x) = \psi(x+L)$. This choice has the advantage of preserving the translation invariance so that the eigenstates look the same, unlike box boundary conditions. Then one takes $L\rightarrow \infty$ at the end of the calculation, if desired.</p> <p>The spin surely gives a contribution to the energy. You know that it wants to align its magnetic moment with the magnetic field, so states where the spin is aligned (say $|\downarrow\rangle$) must have lower energy than anti-aligned states (say $|\uparrow\rangle$). The Hamiltonian is a sum of terms that act on the spin and translation Hilbert spaces separately, i.e. you can write $$ \hat{H} = \hat{H}_x \otimes \hat{1} + \hat{1} \otimes \hat{H}_{spin}. 
$$ A general eigenstate is just a product of translational and spin eigenstates, and you can see that the eigenvalues just add: $$ (\hat{H}_x \otimes \hat{1} + \hat{1} \otimes \hat{H}_{spin}) |\Psi\rangle = \hat{H}_x|\psi\rangle \otimes \hat{1}|\chi\rangle + \hat{1}|\psi\rangle \otimes \hat{H}_{spin}|\chi\rangle = (E_x + E_{spin}) |\psi\rangle\otimes |\chi\rangle,$$ where $E_{spin}$ is proportional to an eigenvalue of <a href="http://en.wikipedia.org/wiki/Pauli_matrices" rel="nofollow">the operator $\hat{S}_z$</a>.</p>
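A toy numerical check of this structure (the matrices below are illustrative stand-ins, not the actual operators): the spectrum of $\hat{H}_x \otimes \hat{1} + \hat{1} \otimes \hat{H}_{spin}$ consists of all sums $E_x + E_{spin}$.

```python
import numpy as np

Hx = np.diag([0.0, 1.0, 4.0])        # a few "translational" levels (toy)
Hspin = np.diag([-0.5, 0.5])         # Zeeman-split spin levels (toy)

# H = Hx (x) 1 + 1 (x) Hspin, built with Kronecker products.
H = np.kron(Hx, np.eye(2)) + np.kron(np.eye(3), Hspin)

expected = sorted(ex + es for ex in (0.0, 1.0, 4.0) for es in (-0.5, 0.5))
got = sorted(np.linalg.eigvalsh(H))
print(np.allclose(got, expected))  # True
```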
144
quantum mechanics
Franck-Hertz experiment and different jumps
https://physics.stackexchange.com/questions/65514/frank-hertz-experiment-and-different-jumps
<p>Why is it assumed that in this experiment the jump will be between the second and the first states? Couldn't it be that, when the electrons have enough energy, an atom absorbs enough to get to the third state and then jumps only to the second, emitting less energetic light?</p> <p>Isn't this right? Or is it just that the number of electrons that do this is very small compared to the ones that excite the atom only to the second level? I am saying this because I've seen the experiment, and a blue-violet glow is clearly visible when the potential is very high. Unless I have super powers, I cannot explain this, because the light emitted from the second to the first level is 4.9 eV, equivalent to light of $\lambda \approx$ 250 nm, clearly out of the visible spectrum.</p>
<p>You might be interested in <a href="http://grundpraktikum.physik.uni-saarland.de/scripts/What_really_happens.pdf" rel="nofollow">this article</a>, that goes into some detail on what actually happens in the Franck-Hertz experiment.</p> <p>To answer your specific question, once you have electrons ricocheting around the mercury atoms, you'll have a significant population of atoms with various degrees of excitation just due to random thermal collisions. The atomic velocities follow a Maxwell-Boltzmann distribution, and as I recall the Franck-Hertz experiment is done at several hundred degrees (to vaporise the mercury), so even with no accelerating voltage a few mercury atoms will be excited by collisions. Once you've ramped up the voltage enough to excite the ground to triplet state you'll have a small but significant population of atoms in more highly excited states that can emit light in the visible region. My guess is that if you pass the light from the experiment through a spectroscope you'd see lots of lines over a wide range of wavelengths.</p>
145
quantum mechanics
Expectation value - Zettili vs Griffiths
https://physics.stackexchange.com/questions/65779/expectation-value-zetilli-vs-griffith
<p>I know that an inner product between two vectors is defined like: </p> <p>$$\langle a | b\rangle = {a_1}^\dagger b_1+{a_2}^\dagger b_2+\dots$$</p> <p>but because transposing a single component such as $a_1$ leaves it unchanged, the dagger reduces to complex conjugation and the above simplifies to: </p> <p>$$\langle a | b\rangle = \overline{a_1} b_1+\overline{a_2} b_2+\dots$$</p> <p>where $\overline{a_1}$ is the complex conjugate of $a_1$. Furthermore, we can similarly define an inner product for two complex functions like this: </p> <p>$$\langle f | g \rangle = \int\limits_{-\infty}^\infty \overline{f} g\, dx$$</p> <hr> <p>In <a href="http://rads.stackoverflow.com/amzn/click/0131118927" rel="nofollow">Griffiths' book</a> <em>(page 96)</em> there is an equation which describes the expectation value, and we can write it as an <em>inner product of the function $\Psi$ with $\widehat{x} \Psi$</em>: </p> <p>\begin{align*} \langle x \rangle = \int\limits_{-\infty}^{\infty}\overline{\Psi}\,\,\widehat{x}\Psi\,\,dx = \int\limits_{-\infty}^{\infty} \overline{\Psi}\,\,(\widehat{x}\Psi)\,\, dx \equiv \underbrace{\langle\Psi |\widehat{x} \Psi \rangle}_{\rlap{\text{expressed as an inner product}}} \end{align*}</p> <p>In <a href="http://rads.stackoverflow.com/amzn/click/0470026790" rel="nofollow">Zettili's book</a> <em>(page 173)</em> the expectation value is defined as a fraction: </p> <p>\begin{align*} \langle \widehat{x} \rangle = \frac{\langle\Psi | \widehat{x} | \Psi \rangle}{\langle \Psi | \Psi \rangle} \end{align*}</p> <p><strong>Main question:</strong> I know the meaning of the definition in Griffiths' book, but I simply have no clue what Zettili is talking about. What does this fraction mean, and how is it connected to the definition in Griffiths' book? </p> <p><strong>Sub question:</strong> I noticed that in Zettili's book they write the expectation value as $\langle \widehat{x}\rangle$ while Griffiths writes it as $\langle x \rangle$. Who is right and who is wrong? Does it matter? I think Griffiths is right, but please express your opinion.</p>
<p>If the wave function $\Psi$ is normalized, then $\langle\Psi|\Psi\rangle$ should equal 1. Griffiths' definition assumes the wave function is already normalized, while Zettili accounts for all possibilities by dividing out the normalization constant. So if the wave function $\Psi$ is normalized, Zettili's definition will reduce to Griffiths' definition.</p> <p>As for the sub question, that's just a matter of notation, which is just a matter of convention, which doesn't really have an objective "right" and "wrong." They're both right as long as they're consistent with the rest of the notation in their respective books.</p>
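A small numerical sketch of this point (the Gaussian packet below is deliberately left unnormalized): the ratio $\langle\Psi|\hat x|\Psi\rangle/\langle\Psi|\Psi\rangle$ gives the same expectation value whether or not $\Psi$ is normalized.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = np.exp(-(x - 1.0)**2)        # unnormalized packet centred at x = 1

num = np.sum(np.conj(psi) * x * psi).real * dx   # <Psi| x |Psi>
den = np.sum(np.conj(psi) * psi).real * dx       # <Psi|Psi>  (not 1 here)
x_mean = num / den

print(round(x_mean, 6))  # ~1.0, the centre of the packet
```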
146
quantum mechanics
QM the superposition principle
https://physics.stackexchange.com/questions/66096/qm-the-superposition-principle
<p>In <a href="http://www.goodreads.com/book/show/8514649-quantum-mechanics" rel="nofollow">Zettili's</a> book the author says that we can interpret an inner product $\langle x | \psi(t) \rangle$ as a wave function $\psi (x,t)$, and I understand this.</p> <p>Next he talks about how a state of the system $|\psi(t)\rangle$ can also be represented by a superposition of wave functions, which he writes like this: </p> <p>$$\underbrace{|\psi\rangle}_{\llap{\text{Where is the time dependence?}}} = \sum_i a_i |\psi_i\rangle$$</p> <p><strong>Q1:</strong> I don't understand this. Where did the time dependence go? Should I write it like this: </p> <p>$$|\psi(t)\rangle = \sum_i a_i |\psi_i(t)\rangle$$</p> <p><strong>Q2:</strong> The vectors $|\psi_i (t)\rangle$ aren't the eigenvectors, right? I don't know how I can connect this to an eigenvalue equation...</p>
<p><strong>A1:</strong> Yes, you are right. He means to include the time dependence in |ψi⟩ itself; that is, |ψi⟩ is understood to denote |ψi(t)⟩.</p> <p><strong>A2:</strong> Why are they not? You mean eigenvectors of the Hamiltonian, right? Then just apply the Hamiltonian operator to your |ψi(t)⟩ (which amounts to taking a time derivative of |ψi(t)⟩). The time dependence sits in the exponent, so |ψi(t)⟩ remains an eigenvector.</p>
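A quick finite-dimensional check of both answers (a sketch with an arbitrary random Hermitian matrix standing in for the Hamiltonian, $\hbar = 1$): each $e^{-iE_i t}|\psi_i\rangle$ stays an eigenvector of $H$, and the magnitudes $|a_i|$ of the expansion coefficients are time independent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                  # a generic Hermitian "Hamiltonian"
E, V = np.linalg.eigh(H)                  # eigenvalues E_i, eigenvectors |psi_i>

a = np.array([0.5, 0.5, 0.5, 0.5])        # expansion coefficients a_i
t = 1.7                                   # some time (hbar = 1)

# |psi_i(t)> = e^{-i E_i t} |psi_i>; each one stays an eigenvector of H
for i in range(n):
    psi_i_t = np.exp(-1j * E[i] * t) * V[:, i]
    assert np.allclose(H @ psi_i_t, E[i] * psi_i_t)

# |psi(t)> = sum_i a_i |psi_i(t)>; the |a_i| are time independent
psi_t = V @ (a * np.exp(-1j * E * t))
recovered = V.conj().T @ psi_t            # projects back onto a_i e^{-i E_i t}
assert np.allclose(np.abs(recovered), np.abs(a))
```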
147
quantum mechanics
What is the most general unitary that commutes with a one dimensional projector in a finite dimensional Vector Space
https://physics.stackexchange.com/questions/66097/what-is-the-most-general-unitary-that-commutes-with-a-one-dimensional-projector
<p>Given a Hilbert space of finite dimension $N$ with an orthonormal basis $\mathcal{B}=\{|0\rangle,\ldots,|N-1\rangle \}$ what is the most general unitary operation that commutes with the projector onto one of the basis elements (say $|0\rangle$), \emph{i.e.}, what is the most general $\mathcal{U}$ such that $\mathcal{U}^\dagger \mathcal{U}=\mathcal{U} \mathcal{U}^\dagger=\mathbb{I}$ and $\left[\mathcal{U},|0\rangle \langle 0| \right]\equiv \mathcal{U} |0\rangle \langle 0|- |0\rangle \langle 0|\mathcal{U}=0$. A partial answer to the question is: \begin{eqnarray*} \mathcal{U(\lambda,\theta,\gamma)}&amp;=&amp;\exp\left(i G(\lambda,\theta,\gamma )\right)\\ G(\lambda,\theta,\gamma)&amp;=&amp;\sum_{k=1}^{N-1}\lambda_k |k \rangle \langle k|\\ &amp;&amp;+\sum_{k=1}^{N-1}\sum_{l&gt;k}^{N-1} \theta_{l,k} \left(|k \rangle \langle l|+|l \rangle \langle k|\right)\\ &amp;&amp;+i\sum_{k=1}^{N-1}\sum_{l&gt;k}^{N-1} \gamma_{l,k} \left(|k \rangle \langle l|-|l \rangle \langle k|\right) \end{eqnarray*} For $N=2$ it is indeed the answer to the question, but I am not sure for higher dimensions.</p>
<p>As <a href="https://physics.stackexchange.com/a/66100/12623">Lagerbaer pointed out</a>, working with finite dimension can be done in matrix representation. The projector on $|0\rangle$ has only the top left element nonzero, $\hat{\Pi}_0 = \mathrm{diag}(1,0,0,\ldots)$ and we want a unitary $\hat{U}$ that commutes with the projector, $[\hat{U},\hat{\Pi}_0] = 0$.</p> <p>Let us start with dimension $N = 2$. We find that both off-diagonal elements must be zero (as Lagerbaer argues) but the other diagonal element can be arbitrary, as it is always multiplied by zero. We therefore have $\hat{U} = \mathrm{diag}(a,b)$, where $a, b$ are chosen so that $\hat{U}$ is unitary.</p> <p>Similar analysis can be done for higher dimensions. The argument of Lagerbaer can be used and you can find out that the first row and first column must be zero (except for the diagonal term). But it says nothing about the rest of the matrix! Once again, we have $\hat{U} = \mathrm{diag}(a,b)$ but this time $b$ is a $N-1$-dimensional matrix and again, $a,b$ have to fulfil unitarity condition.</p> <p>In conclusion, the unitary factors into two blocks, one for the state at which the projector projects and other for the remaining $N-1$ states. In order to satisfy unitarity, $a = e^{i\phi}$ and $b$ must be unitary.</p>
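The block structure is easy to verify numerically for, say, $N=5$ (a sketch: the $(N-1)$-dimensional block here is just a random unitary obtained from a QR decomposition, and the top-left entry is an arbitrary phase $e^{i\phi}$):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

# Random (N-1)x(N-1) unitary block from a QR decomposition (a common trick)
Z = rng.normal(size=(N-1, N-1)) + 1j * rng.normal(size=(N-1, N-1))
B, _ = np.linalg.qr(Z)

phi = 0.7
U = np.zeros((N, N), dtype=complex)
U[0, 0] = np.exp(1j * phi)   # a = e^{i phi} on the projected state
U[1:, 1:] = B                # arbitrary unitary on the remaining N-1 states

P = np.zeros((N, N)); P[0, 0] = 1.0   # projector |0><0|

assert np.allclose(U.conj().T @ U, np.eye(N))   # U is unitary
assert np.allclose(U @ P, P @ U)                # [U, P] = 0
```

Any mixing between $|0\rangle$ and the rest of the space (a nonzero entry in the first row or column off the diagonal) breaks the commutator, which is exactly the answer's argument.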
148
quantum mechanics
Electron-hole symmetry in H and He
https://physics.stackexchange.com/questions/5083/electron-hole-symmetry-in-h-and-he
<p>I'm contemplating particle-hole symmetry, and as an example I am looking at either an electron moving along a hypothetical lattice of hydrogen ions, or a hole moving along a hypothetical lattice of helium atoms. </p> <p>According to some lecture notes I found, the hopping integral I get when I treat this in a tight-binding ansatz is positive for holes and negative for electrons, and now I want to see why this is so. </p> <p>For electrons, I can understand the result: The atomic wavefunctions are just the 1s hydrogen wavefunctions, and they are positive everywhere, so the matrix element with the kinetic energy operator gives something negative. </p> <p>Now, for holes I am not sure how I would have to proceed. Can I just say that the relevant Hamiltonian for holes is the negative of the Hamiltonian for electrons because, after all, a hole with energy $E$ would be an electron with energy $-E$ missing?</p> <p>Of, if that's wrong, how would I come to the conclusion that the hopping integral for electrons is negative and for holes it's positive?</p>
<p>Yes, a hole with energy $E$ is the same as an electron with a negative energy $E$ missing - that's why it's called a hole and that's how Paul Dirac first encountered it in the relativistic context (in the form of positrons).</p> <p>A positively-charged positron may look more "particle-like" but one may describe it as the very same holes in the otherwise filled sea of negative-energy electron states.</p> <p>In both cases - semiconductors and positrons - you may assume that the negatively-charged electrons are the only "real" particles. However, you will always derive the existence of positively-charged holes that behave "just like electrons". </p> <p>If you find states such that the energy $E$ is an increasing function of the momentum, the system will first try to fill the low-momentum, low-energy states, and you may add additional higher-energy, higher-momentum particles (electrons).</p> <p>But some part of the spectrum may have the property that the energy $E$ is a decreasing function of the momentum $k$. In that case, the electron states with a higher value of momentum are filled first, and you're adding them "inwards". This is counterintuitive, so it's more logical to exchange the convention for what we mean by a "filled state" and what we mean by an "empty state". </p> <p>Once we do so, we also change the charge and energy of the particle in each state. Consequently, we will deal with positive-charge holes whose energy $E$ behaves just like $-E$ of the original electrons, and increases with $k$ just like for ordinary electron states. The only difference will be in the charge.</p>
149
quantum mechanics
Why is it necessary to represent Schrodinger&#39;s equation as a partial differential equation?
https://physics.stackexchange.com/questions/5224/why-is-it-necessary-to-represent-schrodingers-equation-as-a-partial-differentia
<p>The Schrodinger equation governs the possible time evolution of a wave function, expressed as a partial differential equation. Isn't this equivalent to the simpler equation</p> <p>$$\omega = \hbar k^2/2m$$</p> <p>i.e. any wave function that satisfies this dispersion relation will also satisfy Schrodinger's equation, and the equation above is a lot easier to understand? </p>
<p>Your equation is the right solution to Schrödinger's equation in the momentum-energy representation. However, it's only that simple for Schrödinger's equation with no potential, $V(x,y,z)=0$. </p> <p>If it's zero, the solution (or, similarly, the reformulation of the equation) is as easy as the algebraic relationship you wrote - but it's also uninteresting for the same reason. The interesting cases have e.g. the Coulomb potential $k/r$ or the harmonic oscillator potential $kx^2/2$ and they can't be "solved" in the simple way you sketched. For a nonzero potential, the problem is genuinely equivalent to a partial differential equation.</p> <p>However, that doesn't mean that it's the only way in which the problem may be formulated or solved. Both the harmonic oscillator and the Hydrogen atom may be solved (i.e. their spectrum may be found) algebraically, by the creation and annihilation operators in the harmonic oscillator case, or by a hidden $SO(4)$ symmetry in the Hydrogen atom case.</p> <p>A general Schrödinger's equation in quantum mechanics is really an ordinary differential equation for the state vector; the "spatial derivatives" only appear as the action of particular operators on the Hilbert space. Some of these operators - e.g. the momenta in the position representations - are conveniently represented as partial derivatives with respect to spatial coordinates but that's only the case if we use a "continuous basis" for the Hilbert space.</p>
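For the free ($V=0$) case, the equivalence can be checked directly with finite differences (an illustrative sketch with $\hbar = m = 1$ and an arbitrary $k$): the plane wave satisfies the free Schrödinger equation precisely because $\omega = \hbar k^2/2m$.

```python
import cmath

hbar, m, k = 1.0, 1.0, 2.5
omega = hbar * k**2 / (2 * m)     # the dispersion relation in question

def psi(x, t):
    # Plane wave obeying the dispersion relation
    return cmath.exp(1j * (k * x - omega * t))

x0, t0, h = 0.7, 0.3, 1e-4
# Central finite differences for d/dt and d^2/dx^2
dpsi_dt = (psi(x0, t0 + h) - psi(x0, t0 - h)) / (2 * h)
d2psi_dx2 = (psi(x0 + h, t0) - 2 * psi(x0, t0) + psi(x0 - h, t0)) / h**2

# Free Schrodinger equation: i hbar psi_t + (hbar^2 / 2m) psi_xx = 0
residual = 1j * hbar * dpsi_dt + (hbar**2 / (2 * m)) * d2psi_dx2
assert abs(residual) < 1e-4
```

With a potential term $V(x)\,\psi$ added, no single algebraic relation between $\omega$ and $k$ makes the residual vanish, which is the point of the answer.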
150
quantum mechanics
What are the statistics of three to five bosons?
https://physics.stackexchange.com/questions/7245/what-are-the-statistics-of-three-to-five-bosons
<p>This should be a very easy question. If you look at the bottom of "Identical Particles" in Wikipedia, you see Table 1, which gives you the two particle statistics for bosons, fermions and distinguishable particles. The problem is to extend this table for three, four and five particles, or give an equivalent formula. I think I know what the answer is for fermions and distinguishable particles, but what about bosons?</p> <p>My first guess is it would always be evenly distributed, but the many body limit does not equal the classical result, so that can't be right. Or can it? </p> <p>What is the formula, and the first few rows of the "Pascals triangle for bosons"? </p>
<p>Dear Jim, it depends on how many different states - boxes - you have for those particles. If you still have 2 states, just like in the Wikipedia example, then for fermions, the probability is 0 everywhere - it's impossible to put more than 2 fermions into 2 states.</p> <p>For distinguishable particles, each particle has 50% odds to be in state 0 and 50% odds to be in state 1. So for $N$ distinguishable particles, there is a $(N {\rm\, \, choose\, \,} k)/2^N$ chance for $k$ of them to be in state 0 and the remaining $N-k$ of them to be in state 1. Note that the odds sum to one if you sum over $k$ from $0$ to $N$.</p> <p>For $N$ bosons, all separations into state 0 ($k$ particles there) and state 1 ($N-k$ particles there) are possible, and each of those separations corresponds to only one multi-particle state. So the odds are $1/(N+1)$ for each possibility - a direct generalization of the table that has $1/3$ in all columns for the $N=2$ bosons. So yes, this is probably what you mean by "equally distributed".</p> <p>The Fermi-Dirac and Bose-Einstein distributions - and both of them, with the same chances - can only be reduced to the Maxwell-Boltzmann distribution if the number of particles is high (that's not really essential) but, more importantly, the number of states in which they can be found is much higher still. Whenever there is a significant chance that the particles want to share a state etc., the classical Maxwell-Boltzmann approximation is inapplicable. So the toy model with two states 0,1 clearly doesn't allow you to approximate the Bose-Einstein or Fermi-Dirac distributions by the Maxwell-Boltzmann one. This is particularly clear for the fermions because the Fermi-Dirac probabilities can't even be calculated: there is <em>no way</em> to arrange more than two fermions in two states.</p> <p>The Maxwell-Boltzmann distribution is applicable when the density of particles (number of particles divided by the number of states) is low. 
That's why the Maxwell-Boltzmann $\exp(-E/kT)$ is applicable as a replacement for high values of energy $E$ - because this is where a small number of particles will be found, so their arrangement won't be sensitive to their being bosons (collectivists who love siblings in the same state, more than the love between the distinguishable particles) or their being fermions (individualists who absolutely hate to share the state with others). For states at low $E$, that's where the concentration of particles per state is expected to be high, which is why you get the totally new quantum behaviors - such as the Bose-Einstein condensate in the ground state and/or the Fermi liquid - which obviously can't be well described by the distinguishable statistics.</p>
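The $N$-particle generalization of the Wikipedia table can be tabulated directly (a sketch covering only the two-state toy model from the answer):

```python
from math import comb

def distinguishable(N):
    # P(k particles in state 0) for N distinguishable coin-flip particles
    return [comb(N, k) / 2**N for k in range(N + 1)]

def bosons(N):
    # Each occupation split (k, N-k) is ONE multiparticle state -> flat odds
    return [1 / (N + 1) for k in range(N + 1)]

# N = 2 reproduces the Wikipedia table: (1/4, 1/2, 1/4) vs (1/3, 1/3, 1/3)
assert distinguishable(2) == [0.25, 0.5, 0.25]
assert bosons(2) == [1/3, 1/3, 1/3]

# Both distributions are normalized for any N
for N in (3, 4, 5):
    assert abs(sum(distinguishable(N)) - 1) < 1e-12
    assert abs(sum(bosons(N)) - 1) < 1e-12
```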
151
quantum mechanics
Charge distribution in positronium
https://physics.stackexchange.com/questions/8944/charge-distribution-in-positronium
<p>Inspired by this: <a href="https://physics.stackexchange.com/questions/8937/electrical-neutrality-of-atoms">Electrical neutrality of atoms</a></p> <p>If I have a wavefunction of the 'reduced mass coordinate' for a hydrogen like atom made from an electron and a positron, what is the spatial charge distribution?</p> <p>When we solve the hydrogen atom, we change into coordinates of the center of mass, and the separation distance with the reduced mass. Here, the masses of the constituent particles are the same. So the center of mass is equidistant from the positron and electron, and so discussing r and -r is just swapping the particles. Since the probability distribution for all the energy levels of the hydrogen atom are symmetric to inversion (images can be seen here <a href="http://panda.unm.edu/Courses/Finley/P262/Hydrogen/WaveFcns.html" rel="nofollow noreferrer">http://panda.unm.edu/Courses/Finley/P262/Hydrogen/WaveFcns.html</a> ), this seems to say no matter what energy level positronium is in, the charge distribution is neutral? Since the energy level basis is complete, this seems to say we can't polarize a positronium atom without dissociating it!? This doesn't make sense to me, so I'm probably making a big mistake here.</p>
<p>Okay, we have the center of mass coordinate $r_{cm} = (r_e + r_p)/2$, and the reduced mass coordinate $r = r_e - r_p$. So given the wavefunction $\psi(r_{cm},r)$, what you are asking for is just a change of basis from $|r_{cm},r\rangle$ to $|r_e,r_p\rangle$. So you just need to consider</p> <p>$$\langle r_e,r_p|r_{cm},r\rangle = \delta\left(r_{cm}-(r_e+r_p)/2\right) \ \delta\left(r-(r_e-r_p)\right)$$</p> <p><br> To get at what appears to be confusing you, let's focus on how positronium can be polarized. The spherical harmonics have a definite even or odd parity under inversion, and so <em>yes</em>, the square of the wavefunction (probability density) is invariant under inversion. But this does not mean all <em>superpositions</em> of these harmonics will have this property.</p> <p>Consider for instance just an $s + p_z$ orbital. On one side the wavefunction amplitudes will add constructively, while on the other they will add destructively. The density is no longer symmetric under inversion. This is the standard chemistry description of orbitals, so a good way to get follow-up reading is probably searching for <a href="http://en.wikipedia.org/wiki/Hybrid_orbital" rel="nofollow">hybrid orbitals</a>.</p> <p>Here's a link I found with some images for you:<br> <a href="http://www.uwosh.edu/faculty_staff/gutow/Orbitals/N/What_are_hybrid_orbitals.shtml" rel="nofollow">http://www.uwosh.edu/faculty_staff/gutow/Orbitals/N/What_are_hybrid_orbitals.shtml</a></p>
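The $s + p_z$ asymmetry can be seen with one line of arithmetic along the $z$-axis (a sketch using hydrogen-like $1s \sim e^{-r}$ and $2p_z \sim z\,e^{-r/2}$ shapes with the normalization constants dropped, since they don't affect the symmetry argument):

```python
import math

# Hydrogen-like amplitudes evaluated on the z-axis (atomic units):
def phi_1s(z):
    return math.exp(-abs(z))          # 1s ~ e^{-r}, even under z -> -z

def phi_2pz(z):
    return z * math.exp(-abs(z) / 2)  # 2p_z ~ z e^{-r/2}, odd under z -> -z

z = 1.5
up = (phi_1s(z) + phi_2pz(z)) ** 2      # density at +z: constructive
down = (phi_1s(-z) + phi_2pz(-z)) ** 2  # density at -z: destructive

assert abs(phi_1s(z) ** 2 - phi_1s(-z) ** 2) < 1e-12    # each pure density
assert abs(phi_2pz(z) ** 2 - phi_2pz(-z) ** 2) < 1e-12  # IS inversion symmetric
assert up > down    # but the s + p_z hybrid density is not
```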
152
quantum mechanics
Quick question on the ionization energy and the selection rule
https://physics.stackexchange.com/questions/8991/quick-question-on-the-ionization-energy-and-the-selection-rule
<p>So I am looking through my book and it says ".... the order of the excited states is exactly the same order (3p-4s-3d-4p)".</p> <p>But now I am looking at a question in the book and it says "Is 3d to 4s transition possible? Why or why not?"</p> <p>My answer to this question is: No it can't be because it doesn't abide by the selection rule because the difference in the orbital quantum numbers is 2 instead of 1.</p> <p>Now if this is the case, why does the above statement have the oder of (3p-4s-3d-4p) if we know there cannot be a transition between the 4s and 3d state like this order says?</p>
<p><em>Order of excited states</em> means ordered by their energy: 3p is lower than 4s, which is lower than 3d, which is lower than 4p. As you correctly point out, the transition from 4s to 3d is forbidden by a selection rule concerning the angular momentum. So 3d being higher in energy than 4s has nothing to do with there being or not being a transition between them.</p> <p>Note: Nothing is ever <em>really</em> forbidden. You might not have a dipole transition from 4s to 3d, but you might have an electric quadrupole, magnetic dipole, or higher multipole transition. However, these have much lower intensity.</p>
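The bookkeeping in the question can be captured by a tiny helper (a simplified sketch that only encodes the electric-dipole rule $\Delta l = \pm 1$ and ignores spin and $m_l$):

```python
def dipole_allowed(n1, l1, n2, l2):
    # Electric-dipole selection rule on the orbital quantum number:
    # allowed only when |delta l| = 1.  (Hydrogen-like labels assumed;
    # m_l and spin rules are deliberately left out of this sketch.)
    return abs(l1 - l2) == 1

# Labels: s -> l=0, p -> l=1, d -> l=2
assert dipole_allowed(3, 1, 4, 0)       # 3p -> 4s: allowed
assert not dipole_allowed(3, 2, 4, 0)   # 3d -> 4s: forbidden, delta l = 2
assert dipole_allowed(3, 2, 4, 1)       # 3d -> 4p: allowed
```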
153
quantum mechanics
Bell Tests using position measurement
https://physics.stackexchange.com/questions/9170/bell-tests-using-position-measurement
<p>I don't know about all the details of Bell tests using methods like parametric down conversion, but at least in some versions of the EPR paradox you get two photons moving apart in opposite directions. I wonder if you can look for detection coincidences by using photographic plates instead of coincidence counters? The idea would be that if you had two photographic plates on opposite sides of the source, you would get some instances where you would have a perfect matchup of reduced silver crystals, or dots. And that if you inserted crossed polarizers in front of the plates, the coincidences would disappear. I wonder if this analysis is correct, and if so, whether such experiments have been done or proposed? </p>
<p>Yes, something like that is possible, and they've done it in Anton Zeilinger's lab. It's not a Bell's inequality test, and to get it to work it's more complicated than what you're describing, but: they do see spatial (i.e. position) correlations/interference disappear or not based on measurement of one of the entangled particles. </p> <p>It's described in a nice review paper (which has a bunch of other nice EPR/Bell experiments): Rev. Mod. Phys. 71 S288 (1999)</p>
154
quantum mechanics
Tunneling Rate Constant
https://physics.stackexchange.com/questions/9506/tunneling-rate-constant
<p>I am trying to "decode"/derive an expression for the macroscopic rate constant for the tunneling of protons through a potential energy barrier that I read in a journal article: $$ k_{\rm tun}(T)=(2\pi\hbar)^{-1}\int_0^{V_{\rm max}} Q(V,T) P_{\rm tun}(V)\ dV. $$ So basically: the authors say work out the probability of tunneling at each point up the potential energy barrier (from 0 to $V_{\rm max}$), multiply this by the Boltzmann factor ($Q(V,T)$), integrating over all energies (i.e. we are taking a Boltzmann average )and then convert from energy in J to rate by multiplying by $(2\pi\hbar)^{-1}$. </p> <p>The authors use $(2\pi\hbar)^{-1}$ to convert from an energy in J to a rate in per second. This implies that the relationship between energy and frequency being used is: $$ E=\hbar\omega $$ I thought this just applied to electromagnetic radiation and free particles not in a potential. Is it OK to use this relation for protons tunneling in a potential? Or should you use some info about the shape of the potential?</p> <p>Many thanks</p> <p>N26</p>
<p>$E=\hbar\omega$ is a totally universal formula that holds for all particles and everywhere in quantum mechanics. Schrödinger's equation guarantees that. The same question was being answered yesterday:</p> <blockquote> <p><a href="https://physics.stackexchange.com/questions/9457/gravitational-wave-energy/9460">Gravitational wave energy</a></p> </blockquote>
155
quantum mechanics
What are the conditions for decoherence to be irreversible?
https://physics.stackexchange.com/questions/10201/what-are-the-conditions-for-decoherence-to-be-irreversible
<p>Spin echo experiments have been able to reverse the motions of all the molecules in a gas in statistical mechanics in the manner of Loschmidt. The Fermi-Ulam-Pasta model has solutions with a single mode dispersing, only to recohere after quite some time has elapsed. Can the same thing happen for decoherence? What are the conditions fyor decoherence to be irreversible?</p>
<p>An article you might be interested in: <a href="http://www.physics.arizona.edu/~cronin/Research/Lab/some%20decoherence%20refs/RBH97.pdf" rel="nofollow">http://www.physics.arizona.edu/~cronin/Research/Lab/some%20decoherence%20refs/RBH97.pdf</a></p>
156
quantum mechanics
Why is $\frac{dx}{dt}=0$ in this average momentum calculation?
https://physics.stackexchange.com/questions/9705/why-is-fracdxdt-0-in-this-average-momentum-calculation
<p>In the following excerpt from S. Gasiorowicz's <em>Quantum Physics</em>, he derives an expression for the average momentum of a free particle. $\psi(x,t)$ is the wave function of a free particle, $\psi^*$ denotes its complex conjugate.</p> <blockquote> <p>We try the following: since classically,</p> <p>$$ p = mv = m\frac{dx}{dt} $$</p> <p>we shall write</p> <p>$$ &lt;p&gt; = m\frac{d}{dt}&lt;x&gt; = m\frac{d}{dt}\int{dx \psi^*(x,t) x \psi(x,t)} $$</p> <p>This yields</p> <p>$$ &lt;p&gt; = m\int_{-\infty}^\infty{dx\left( \frac{\partial\psi^*}{\partial t} x \psi + \psi^* x \frac{\partial\psi}{\partial t} \right)} $$</p> <p>Note that there is no $dx/dt$ under the integral sign. The only quantity that varies with time is $\psi(x,t)$, and it is this variation that gives rise to a change in $x$ with time.</p> </blockquote> <p>I seem to have trouble understanding the difference between the position $x$ and the average position $&lt;x&gt;$. Why can it be assumed that $\frac{dx}{dt}=0$? What <em>is</em> x?</p>
<p>The confusion seems to stem from a) not understanding what kind of objects you are dealing with and b) usual custom of not writing (all) arguments of functions when they are understood.</p> <p>To clarify a) note that the position operator $\hat x$ <strong>does not</strong> depend on time, and so also its kernel $\left&lt; x \right | \hat{x} \left | x&#39; \right&gt; = x\delta(x-x&#39;)$ with respect to "position vectors" $\left | x \right &gt;$ also doesn't depend on time. This $x$ is the one that is present in your integral. So in particular ${{\rm d} x \over {\rm d} t} = 0$.</p> <p>On the other hand, the average of the operator $\hat A$ in the state $\psi$ (which depends on time) obviously depends on the state $\psi$: $\left&lt; \hat {A} \right&gt; := \left&lt; \psi \right | \hat {A} \left | \psi \right&gt;$ and so if you perform averages on a family of vectors $\psi(t)$ so also the average will depend on time. In your case, this should be written $\left&lt; \hat{x} \right &gt; (t)$ to make it obvious that one is dealing with a function of time. But this dependence is usually understood and omitted.</p>
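To see that $\langle x \rangle$ moves even though $x$ itself carries no time dependence, here is a numerical sketch (a particle in a box rather than Gasiorowicz's free particle, with $\hbar = m = L = 1$; the same logic applies): all of the time dependence of $\langle x \rangle$ enters through $\psi(x,t)$ under the integral.

```python
import cmath, math

# Superpose the two lowest box eigenstates; <x>(t) oscillates because psi does.
n = 20001
dx = 1.0 / (n - 1)
xs = [i * dx for i in range(n)]
E1, E2 = math.pi ** 2 / 2, 4 * math.pi ** 2 / 2   # E_n = n^2 pi^2 / 2

def mean_x(t):
    total = 0.0
    for x in xs:
        phi1 = math.sqrt(2) * math.sin(math.pi * x)
        phi2 = math.sqrt(2) * math.sin(2 * math.pi * x)
        psi = (phi1 * cmath.exp(-1j * E1 * t)
               + phi2 * cmath.exp(-1j * E2 * t)) / math.sqrt(2)
        total += abs(psi) ** 2 * x    # integrand psi* x psi; x itself is static
    return total * dx

x0 = mean_x(0.0)
xh = mean_x(math.pi / (E2 - E1))   # half an oscillation period later

assert abs(x0 - xh) > 0.3          # <x> genuinely moved ...
assert abs(x0 + xh - 1.0) < 1e-3   # ... symmetrically about the box centre
```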
157
quantum mechanics
Wave function normalization
https://physics.stackexchange.com/questions/11740/wave-function-normalization
<p>A book by C. J. Ballhausen led me to believe that a quick way to check that I performed step operators properly was by observing that the "wave function should appear normalized," but I have found some issue applying this in practice and believe it is due to my misunderstanding of the underlying physics; I'm trying to understand what C.J.B. meant by that and if it applies in my case.</p> <p>In his case he was observing two equivalent electrons. Let's say they are two equivalent $p$ electrons for the sake of example. (He was actually considering $d$ electrons and I can provide the specifics of that if it will help.) There are several unperturbed functions which can be described by symbols such as $(1^+ 0^+)$, in which case the first number means $m_l$ of the 1st 2p electron, the next symbol indicates its spin and so forth.</p> <p>For the term $ ^1 D : M_L = 2, M_S=0: (1^+ 1^-) $ is an eigenfunction of $p^2$ configuration that is known. Using a step down operator on the angular momentum gives: $M_L=1 : (2)^{(-1/2)} [ (1^+ 0^-) - (1^- 0^+) ]$. Here I get the impression that what we observe appears normalized because squaring the coefficients gives unity. I realize one could in principle perform $\int \psi^* \psi d \tau$, but I do not suspect that is what he means.</p> <p>Now if we apply the step operator again we get $M_L=0: (6)^{(-1/2)} [ (1^+ -1^-) - (1^- -1^+) + 2(0^+ 0^-) ]$. Here the square of the coefficients is decidedly unity. His examples given in the book also happen to go to unity; is this just coincidence or is it going to always be true? </p> <p>My specific example is as follows, starting with a $d^3$ configuration: $$\psi(L,M_L,S,M_S)=\psi(5,5,\frac{1}{2},\frac{1}{2})=(2^+,2^-,1^+)$$</p> <p>Applying the lowering operators gives $\sqrt{10} \psi(5,4,\frac{1}{2},\frac{1}{2}) = -\sqrt{4} (2^+,1^+,1^+) - \sqrt{4} (2^+,1^+,1^-) + \sqrt{6}(2^+,2^-,0^+)$ and the coefficients go to unity as expected. 
Above you will notice that the ordering in the first and third terms has changed and an odd permutation brings about a change in sign. Also notice the first term must be equal to zero by Pauli Principle. Dividing out we get,</p> <p>$$\psi(5,4,\frac{1}{2},\frac{1}{2}) = \sqrt{3/5} (2^+,2^-,0^+) - \sqrt{2/5} (2^+,1^+,1^-)$$</p> <p>You'll notice that the coefficients squared sum up to 1, so all appears normalized and well. Now we apply the lowering operator again to give $\sqrt{(L-M_L+1)(L+M_L)} = \sqrt{(5-4+1)(5+4)} = \sqrt{(2)(9)} = \sqrt{18}$ times the function for $M_L=3$ $^1 H$. </p> <p>Applying to the RHS using $\sqrt{(l-m_l+1)(l+m_l)}$. We are working with $d$ orbitals, therefore $l=2$. So for the case of $m_l=2$ we get $\sqrt{(2-2+1)(2+2)}=\sqrt{4}$ and for $m_l=1$ we get $\sqrt{(2-1+1)(2+1)} = \sqrt{6}$ and finally for $m_l=0$ we get $\sqrt{(2-0+1)(2+0)}=\sqrt{(3)(2)} = \sqrt{6}$. Applying this gives</p> <p>$$ \sqrt{18} \psi(5,3) = \sqrt{3/5} [ \sqrt{4} (1^+, 2^-, 0^+) + \sqrt{4} (2^+,1^-,0^+) + \sqrt{6} (2^+, 2^-, -1^+)] $$ <br/> $ - \sqrt{2/5} [ \sqrt{4} (1^+,1^+,1^-) + \sqrt{6} (2^+,0^+, 1^-) + \sqrt{6} (2^+,1^+,0^-) ]$</p> <p>Simplification results in:</p> <p>$$ \sqrt{18} \psi(5,3) = \sqrt{12/5} (1^+, 2^-, 0^+) + \sqrt{12/5} (2^+,1^-,0^+) + \sqrt{18/5} (2^+, 2^-, -1^+)$$ <br/> $ - \sqrt{8/5}(1^+,1^+,1^-) - \sqrt{12/5} (2^+,0^+, 1^-) - \sqrt{12/5}(2^+,1^+,0^-) $ <br/></p> <p>The fourth term cannot exist by the Pauli Principle, so we have instead,</p> <p>$$ \sqrt{18} \psi(5,3) = \sqrt{12/5} (1^+, 2^-, 0^+) + \sqrt{12/5} (2^+,1^-,0^+)$$ <br/> $ + \sqrt{18/5} (2^+, 2^-, -1^+) - \sqrt{12/5} (2^+,0^+, 1^-) - \sqrt{12/5}(2^+,1^+,0^-) $ <br/></p> <p>Now we need to fix the ordering of the first term and the fourth term to give,</p> <p>$$ \sqrt{18} \psi(5,3) = -\sqrt{12/5} (2^-, 1^ +, 0^+) + \sqrt{12/5} (2^+,1^-,0^+) + \sqrt{18/5} (2^+, 2^-, -1^+)$$ <br/> $ + \sqrt{12/5} (2^+,1^-, 0^+) - \sqrt{12/5}(2^+,1^+,0-) $ <br/></p> <p>Now we divide thru by $\sqrt{18} $ the 
like terms yielding,</p> <p>$$ \psi(5,3) = -\sqrt{12/90} (2^-, 1^+, 0^+) + \sqrt{12/90} (2^+,1^-,0^+) + \sqrt{18/90} (2^+, 2^-, -1^+)$$ <br/> $+ \sqrt{12/90} (2^+,1^-, 0^+) - \sqrt{12/90}(2^+,1^+,0^-) $ <br/></p> <p>Our problem is that 12/90 + 12/90 + 18/90 + 12/90 + 12/90 = 66/90 = 11/15, instead of 15/15. I'm sure my mistake is stupid somewhere; can somebody point out where I've gone wrong?</p>
<p>It was just an arithmetic error:<br> $$\psi(5,3) = -\sqrt{12/90} (2^-, 1^+, 0^+) + \sqrt{12/90} (2^+,1^-,0^+) + \sqrt{18/90} (2^+, 2^-, -1^+)$$<br> $$+ \sqrt{12/90} (2^+,1^-, 0^+) - \sqrt{12/90}(2^+,1^+,0^-)$$<br> needs to be simplified as the second and fourth terms are the same. One has:<br> $$\psi(5,3) = -\sqrt{12/90} (2^-, 1^+, 0^+) + 2\sqrt{12/90} (2^+,1^-,0^+) + \sqrt{18/90} (2^+, 2^-, -1^+)$$<br> $$ - \sqrt{12/90}(2^+,1^+,0^-)$$<br> which is normalized:<br> $$12/90 + 4(12/90) + 18/90 + 12/90 = 1.$$</p>
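For completeness, exact rational arithmetic confirms both the corrected tally and the questioner's $11/15$ (a check with Python's `fractions`, my own sketch):

```python
from fractions import Fraction as F

# Squared coefficients of the corrected psi(5,3): the doubled middle term
# contributes (2*sqrt(12/90))^2 = 4 * 12/90.
weights = [F(12, 90), 4 * F(12, 90), F(18, 90), F(12, 90)]
assert sum(weights) == 1    # properly normalized

# The questioner's tally forgot to combine the two identical terms first:
wrong = [F(12, 90), F(12, 90), F(18, 90), F(12, 90), F(12, 90)]
assert sum(wrong) == F(66, 90)   # = 11/15, the reported mismatch
```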
158
quantum mechanics
Heisenberg&#39;s Uncertainty Forms
https://physics.stackexchange.com/questions/12636/heisenbergs-uncertainty-forms
<p>Can Heisenberg's Uncertainty principle be rewritten in terms of energy density</p> <p>writing $$\Delta E \Delta T \geqslant \hbar/2$$ in factors of energy density $\Delta \sigma \text{ }= \frac{3\Delta E}{4\pi r^3}$</p> <p>I get $$\Delta \sigma \frac{3\text{$\Delta $T}}{4\pi r^3} \geqslant \hbar/2$$</p> <p>does $$\frac{3\text{$\Delta $T}}{4\pi r^3}$$ represent something?</p> <p>Or does it make more sense to write $$\text{$\Delta $x} \frac{\text{$\Delta $E}}{c^2} \geqslant \frac{\hbar}{2}$$</p> <p>EDIT after noting Willie's comment I've corrected the form to $\frac{\hbar}{2}$</p> <p>However, now I see c is constant and can be moved to the right side - is it mathematical correct to write $$\text{$\Delta $x} \text{$\Delta $E} \geqslant \frac{\hbar c^2}{2}$$ or does this violate the Cauchy–Schwarz inequality? This is the question I was trying to understand at the beginning.</p>
<p>It seems that you are missing what $\Delta T$ is here. It is not an interval of time; it is a time uncertainty, which is not what is usually called $\Delta T$. The most standard application of this uncertainty principle is a linewidth: the inequality tells you that the lifetime of the upper state sets a lower bound on the energy dispersion. The longer the lifetime, the more precise your energy measurement can be.</p> <p>As a consequence, there is no point in converting this $\Delta T$ to a $\Delta x$; it would not mean what you expect.</p> <p>The same holds for $\Delta E$: it is not the actual energy of some object.</p>
159
quantum mechanics
What is the basic postulate on which QM rests
https://physics.stackexchange.com/questions/13639/what-is-the-basic-postulate-on-which-qm-rests
<p>What is the basic postulate on which QM rests. Is it that the position of a particle can only be described only in the probabilistic sense given by the state function $\psi(r)$ ? We can even go ahead and abandon the particle formalism as well. So what is the QM all about ? A probabilistic description of the physical world ? and nothing more ?</p>
<p>Existence of non-compatible observables: measuring one of them (say, coordinate) leads to an unavoidable uncertainty in the result of a subsequent measurement of the other (say, momentum). This is the essence of the Heisenberg uncertainty principle in the kinematics of your system. There is a detailed discussion along these lines in the beginning of the Quantum Mechanics volume (volume III) in the Course of Theoretical Physics by Landau and Lifshitz. Any measurable (physical) system, be it particle, atom or anything else, is quantum only if you can identify a manifestation of Heisenberg uncertainty principle (non-commutativity of observables).</p>
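Non-commutativity can be exhibited concretely with truncated harmonic-oscillator matrices (a sketch with $\hbar = m = \omega = 1$; in a finite $n \times n$ truncation $[\hat x, \hat p] = i\,\mathbb{1}$ holds except in the last diagonal entry, an artifact of cutting off the infinite-dimensional space):

```python
import numpy as np

# Truncated ladder operators: a|k> = sqrt(k)|k-1>
n = 8
a = np.diag(np.sqrt(np.arange(1, n)), k=1)   # annihilation operator (real)
x = (a + a.T) / np.sqrt(2)                   # x = (a + a^dag)/sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)              # p = i (a^dag - a)/sqrt(2)

comm = x @ p - p @ x
assert not np.allclose(comm, np.zeros((n, n)))        # x and p do not commute
assert np.allclose(comm[:-1, :-1], 1j * np.eye(n-1))  # = i*1 away from cutoff
```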
160
quantum mechanics
weight function and the metric
https://physics.stackexchange.com/questions/15051/weight-function-and-the-metric
<p>The weight function comes from Dirac's book, PRINCIPLES OF QUANTUM MECHANICS. On page 66,he says that sometimes it is more convenient not to normalise the eigenvectors, i.e. $$\langle\xi_1&#39;...\xi_u&#39;|\xi_1&#39;&#39;...\xi_u&#39;&#39;\rangle=\rho&#39;^{-1}\delta(\xi_1&#39;-\xi_1&#39;&#39;)...\delta(\xi_u&#39;-\xi_u&#39;&#39;)$$</p> <p>Thus we have $$\iiint|\xi_1&#39;...\xi_u&#39;\rangle\langle\xi_1&#39;...\xi_u&#39;|\rho&#39;d\xi_1&#39;...d\xi_u&#39;=1$$</p> <p>Here Dirac says that $\rho&#39;d\xi_1&#39;...d\xi_u&#39;$ is the weight attached to the volume element of the space of the variables $\xi_1&#39;,...,\xi_u&#39;$.</p> <p>If we consider the space of the variables $\xi_1&#39;,...,\xi_u&#39;$ as a Riemann manifold $M$ with metric $g$, then the volume element of $(M,g)$ is $\sqrt{\det(g)}d\xi_1&#39;...d\xi_u&#39;$. Does the two volume element equal? i.e. do we have that $\rho&#39;=\sqrt{\det(g)}$?</p> <p>In his book,he then gives an example where the $\xi$'s are the polar and azimuthal angles $\theta,\phi$ giving a direction in three-dimensional space. In this case we have $\rho&#39;=\sin{\theta&#39;}$. And if we calculate the $\sqrt{\det(g)}$ of the unit sphere(in coordinates $\theta,\phi$),it is $\sin{\theta&#39;}$,too!</p> <p>Is it just coincidence? Or is there an explanation?</p> <p>Would someone be kind enough to give me some hints on this? Thank you very much!</p>
<p>Without a separate definition of a volume element, you can just take Dirac's $\rho$ to define a volume element on the configuration space, and it works as such, because of the second formula you wrote.</p> <p>But in the example, where you are transforming rectangular coordinates to different coordinates, the volume form from Dirac's $\rho$ coincides with the volume form from the metric just because you choose it that way. The normalization of states is up to you, as always.</p> <p>$$ dx^i = {\partial x^i \over \partial q^j} dq^j$$</p> <p>and the reverse transformation is nonsingular:</p> <p>$$ dq^i = {\partial q^i \over \partial x^j} dx^j$$</p> <p>So the induced metric on q is given by the dot product below:</p> <p>$$ g_{ij}\, dq^i dq^j = \delta_{kl}\, {\partial x^k \over \partial q^i} {\partial x^l \over \partial q^j}\, dq^i dq^j$$</p> <p>The volume form transforms as the wedge product of the $dx^i$:</p> <p>$$ dV_x = \left|\det {\partial x^i\over \partial q^j}\right| dV_q$$</p> <p>Now you can transform the completeness relation for x:</p> <p>$$ \int d^dx\, |x\rangle\langle x| = \int d^dq\,\sqrt{|\det g|}\, |x(q)\rangle\langle x(q)| $$</p> <p>And if you want to replace the $|x\rangle$ states with $|q\rangle$ states, you need to choose a normalization for the $|q\rangle$ states. This allows you to absorb the volume factor into the normalization of the states. 
But you can use any normalization you want for $q$,</p> <p>$$ \langle q|q&#39;\rangle = \rho(q)\,\delta^d(q-q&#39;) $$</p> <p>and then the completeness relation will be</p> <p>$$ \int d^dq\, \sqrt{|\mathrm{det}\, g|}\, |x(q)\rangle\langle x(q)| = \int d^dq\, {1\over \rho(q)}\, |q\rangle\langle q| $$</p> <p>It is convenient to choose $\rho(q)=\sqrt{|\mathrm{det}\, g|}^{-1}$, because then the $\rho$ absorbs the volume factor, and the states $|q\rangle$ are the same as the states $|x(q)\rangle$ (these are not just the position delta-functions, because a change of variables changes delta functions).</p> <h3>In terms of explicit wavefunctions in 1d</h3> <p>If you write the $|x\rangle$ ket explicitly as a delta function wavefunction</p> <p>$$\psi(x&#39;) = \delta(x&#39;-x)$$</p> <p>and you write the normalized $|q\rangle$ ket explicitly as a delta function wavefunction in $q$,</p> <p>$$\psi(q&#39;) = \delta(q&#39;-q)$$</p> <p>then you can understand the change of variables formula from the derivative factors that go into a delta function under a change of variables. This is tedious, and reproduces the above, so I won't do it, but it's good for building up the intuition about where all the factors go.</p> <h3>Momentum space 2 pi</h3> <p>One nice, but trivial, use of this is to absorb the $2\pi$ constant from Fourier transforms into the momentum state normalization. The completeness relation for momentum space then reads</p> <p>$$\int {d^3 p\over (2\pi)^3} |p\rangle\langle p| = \int dp\, |p\rangle\langle p|$$ </p> <p>where, on the right, the convention for the measure absorbs a factor of $1/2\pi$ into each $dp$. Then delta functions for $p$'s are</p> <p>$$ \delta(p-p&#39;) = (2\pi)^3 \delta(p_x - p_x&#39;) \delta(p_y-p_y&#39;) \delta(p_z -p_z&#39;)$$</p> <p>The transformation is very simple, because it is just a scaling, and it is a good way to practice Dirac's formalism until you know where the $2\pi$'s go without thinking. This trivial case is probably the best way to internalize the measure issues. 
This is also the best convention for Fourier transforms, and is almost universal in physics.</p> <p>In engineering and mathematics, people often normalize the Fourier transform so that it can be thought of as taking a function space to itself, and if you do it twice (up to conjugation), you get the same function back. This introduces $\sqrt{2\pi}$ factors into the transform. These square-root factors are what you multiply the basis states $|p\rangle$ by in order to move from this convention to the normalized p-state convention.</p> <p>The Dirac formalism is designed to effectively get rid of these square roots, which are very annoying, by putting them into the integration measure, where they belong.</p> <h3>Relativistic normalization</h3> <p>The best example is relativistic normalization, where you use non-normalized basis vectors for parametrizing momentum states in relativistic collisions. Here the states $|p\rangle$ in the nonrelativistic normalization obey</p> <p>$$ \int d^3p |p\rangle\langle p| = 1 $$ </p> <p>But in the relativistic theory $d^3p$ is not a relativistic invariant. The p states are restricted to a mass-shell hyperbola, where $p^2=m^2$ (the four-dimensional length of the energy-momentum), and while $p_x,p_y,p_z$ are good coordinates for the hyperbola, the hyperbola isn't flat, and the amount of invariant area per unit p-volume is different at different p's.</p> <p>The invariant measure is easiest to work out using a delta-function, since</p> <p>$$ \int d^4p\;\; \delta(p^2 - m^2) f(p) = \int {d^3 p \over 2|p_0|} f(p) = \int dp_3 f(p) $$</p> <p>So the right relativistically invariant state normalization is to make</p> <p>$$ \int {d^3 p \over 2|p_0|}\, |p\rangle\langle p| = 1 $$</p> <p>or equivalently:</p> <p>$$ \langle p | p&#39; \rangle = 2|p_0|\,\delta^3(p-p&#39;)$$</p> <p>And this can be achieved by multiplying the states by the square root of $2p_0$. 
That this is the right way to normalize can be seen from the usual free field expansion in terms of creation and annihilation operators</p> <p>$$ \phi(x) = \int {d^3p \over (2\pi)^{3/2} \sqrt{2p_0}} \left( a_p e^{-ip\cdot x} + a^{\dagger}_p e^{ip\cdot x} \right) $$ </p> <p>This formula is incomprehensible only because the momentum states are not normalized correctly for relativity and for absorbing $2\pi$ factors. But using relativistic $2\pi$ normalization, the matrix elements of $a$ and $a^{\dagger}$ become simple, and this becomes</p> <p>$$ \phi(x) = \int dp_3\;\left( a(p) e^{-ip\cdot x} + a^{\dagger}(p) e^{ip\cdot x} \right) $$</p> <p>where the relativistically normalized creation and annihilation operators create a relativistically normalized, $2\pi$-normalized, state. The integration measure is relativistically invariant and absorbs the $2\pi$ factors. This is the correct way to expand relativistic fields in operators, but not a single QFT book does it to my knowledge.</p>
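A quick numerical illustration (not in the original answer) of the "$2\pi$ goes with $dp$" convention: with the inverse transform written as $\int \frac{dp}{2\pi}\, e^{ipx}\tilde f(p)$, there are no stray prefactors. A sketch with a Gaussian, whose transform is known in closed form:

```python
import numpy as np

# f(x) = exp(-x^2) has ftilde(p) = int dx e^{-ipx} f(x) = sqrt(pi) exp(-p^2/4).
# With the 2*pi absorbed into the p-measure, f(x) = int dp/(2*pi) e^{ipx} ftilde(p).
x = np.linspace(-12, 12, 1001)
p = np.linspace(-12, 12, 1001)
dx, dp = x[1] - x[0], p[1] - p[0]
f = np.exp(-x**2)

# forward transform by direct quadrature
ft = (f[None, :] * np.exp(-1j * p[:, None] * x[None, :])).sum(axis=1) * dx
assert np.allclose(ft.real, np.sqrt(np.pi) * np.exp(-p**2 / 4), atol=1e-8)

# inverse transform with the 1/(2*pi) carried by the measure
f_back = (ft[None, :] * np.exp(1j * x[:, None] * p[None, :])).sum(axis=1) * dp / (2 * np.pi)
assert np.allclose(f_back.real, f, atol=1e-8)
print("Fourier inversion with the dp/(2 pi) measure: no stray prefactors")
```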
161
quantum mechanics
separation of variables
https://physics.stackexchange.com/questions/15949/separation-of-variables
<p>I'm a math student who's dabbled a little in physics, and one thing I'm a little confused by is separation of variables. Specifically, consider the following simple example: I have a Hamiltonian $H$ which can be written as $H_x + H_y + H_z$ depending only on $x,y$, and $z$ , respectively, and I want to find the eigenfunctions. Now, it is clear that the product of eigenfunctions of $H_x, H_y$, and $H_z$ will be eigenfunctions for $H$. But why can <em>all</em> eigenfunctions be expressed this way?</p> <p>I suspect this is not even strictly true, but can somebody give a physics-flavored plausibility argument for why we only need to concern ourselves with separable eigenfunctions?</p>
<p>Actually, it's not true that all eigenfunctions are separable. Consider the 3D isotropic harmonic oscillator, whose Hamiltonian is a sum of three 1D SHO Hamiltonians,</p> <p>$$H = \frac{p^2}{2m} + \frac{m\omega^2 r^2}{2} = \sum_{i\in\{x,y,z\}}\hbar\omega\biggl(a_i^\dagger a_i + \frac{1}{2}\biggr) = H_x + H_y + H_z$$</p> <p>You can create eigenfunctions of $H$ by multiplying the eigenfunctions of the individual 1D SHOs. But because there are three identical dimensions you get degeneracies, for example</p> <p>$$E_{0,0,1} = E_{0,1,0} = E_{1,0,0}$$</p> <p>Since these eigenvalues are equal, any linear combination of the corresponding eigenfunctions will still be an eigenfunction of $H$. But an arbitrary combination like, say, $\frac{1}{\sqrt{3}}(\psi_{0,0,1} + \psi_{0,1,0} - \psi_{1,0,0})$, can't be factored into the form $X(x)Y(y)Z(z)$.</p> <p>What you can say is that all eigenfunctions of $H$ with a given eigenvalue $E_{\{n\}}$ constitute a subspace of the Hilbert space, and for each such subspace, it is possible to choose a complete basis of functions which can be factorized into $X(x)Y(y)Z(z)$. If you're looking for a proof of that fact, I couldn't give it to you offhand, since it's something physicists tend to take for granted (when necessary), but I wouldn't be surprised if you could find it discussed on the <a href="http://math.stackexchange.com">math site</a>. It'd definitely be in books on linear algebra or probably PDE analysis.</p>
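The non-separability claim can be illustrated numerically (my addition, using the 2D oscillator for brevity): a sampled function $F(x,y)$ is of product form $X(x)Y(y)$ exactly when its grid matrix is an outer product, i.e. has rank 1.

```python
import numpy as np

# psi0, psi1: the n = 0, 1 oscillator eigenfunctions (unnormalized).
# The degenerate combination psi0(x) psi1(y) + psi1(x) psi0(y) is still an
# eigenfunction of the 2D isotropic SHO, but it has rank 2 on a grid,
# so it cannot be factored as X(x) Y(y).
x = np.linspace(-5, 5, 201)
psi0 = np.exp(-x**2 / 2)
psi1 = x * np.exp(-x**2 / 2)

product = np.outer(psi0, psi1)                        # separable: rank 1
mixed = np.outer(psi0, psi1) + np.outer(psi1, psi0)   # degenerate mix: rank 2

assert np.linalg.matrix_rank(product, tol=1e-10) == 1
assert np.linalg.matrix_rank(mixed, tol=1e-10) == 2
print("psi01 is separable (rank 1); psi01 + psi10 is not (rank 2)")
```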
162
quantum mechanics
functional determinant and WKB approximation
https://physics.stackexchange.com/questions/16477/functional-determinant-and-wkb-approximation
<p>Let $H$ be a Hamiltonian in one dimension. I would like to evaluate the functional determinant $\det(E-H)$.</p> <p>I believe that $\det(E-H)= C\exp(iN(E))$, where $N(E)$ is the number of energy levels less than a given number $E$.</p> <p>My steps:</p> <ol> <li><p>I use the identity $\log\det(E-H)=\operatorname{Tr}\log(E-H)$.</p></li> <li><p>I replace the sum $\sum_{n} \log(E-E_{n})$ by an integral over phase space, $\iint_{D}dp\,dx\, \log(E-p^{2}-V(x))$.</p></li> <li><p>I take the derivative with respect to $E$.</p></li> <li><p>I use the identity $\int_{-\infty}^{\infty} \frac{dx}{x^{2}-a^{2}} = \frac{\pi i}{a}$.</p></li> <li><p>I use the Bohr-Sommerfeld quantization condition $\int_{C}dx\, (E-V(x))^{1/2} = (n(E)+1/2)h$.</p></li> <li><p>I integrate with respect to $E$ again.</p></li> <li><p>I take the exponential.</p></li> </ol> <p>Is this semiclassically correct? :) Thanks.</p>
<p>The formula doesn't work. Most of the manipulations are formally OK, although it is probably best to start right at step 3--- the derivative of the logarithm of the determinant is the (trace of the) Green's function, which is better behaved than the determinant itself.</p> <p>Step 5 is incorrect--- there is no reduction using the WKB condition, because the quantity $\sqrt{E-V}$ is in the denominator, and the integration is unbounded. The correct semiclassical expansion for the Green's function is given by the Gutzwiller trace formula.</p> <p>The best way to check all this is to try it out on the harmonic oscillator. The formula you give doesn't work, although the semiclassical bit is nice. The semiclassical HO Green's function is</p> <p>$$\int dp\, dq\, {1\over E - p^2 - q^2} $$</p> <p>which is elementary (up to being divergent--- you can move $E$ a little), and evaluates to $\log(E) \pm i\pi$ for $E&gt;0$, where $\pm$ means either add, or subtract, or ignore, depending on how you deal with the divergent point, plus a divergent constant, which is irrelevant.</p>
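To make the "check it on the harmonic oscillator" suggestion concrete, here is a small numeric comparison (my addition) of the phase-space level count against the exact HO spectrum, using $H = p^2 + q^2$ with $\hbar = 1$ so that $E_n = 2n+1$:

```python
import numpy as np

# Phase-space area below energy E for H = p^2 + q^2 is pi*E, so the
# semiclassical count is N(E) ~ pi*E / (2*pi*hbar) = E/2 (hbar = 1).
# The exact levels are E_n = 2n + 1, giving floor((E + 1)/2) levels below E.
for E in (10.5, 100.5, 1000.5):
    exact = np.floor((E + 1) / 2)
    semiclassical = E / 2
    assert abs(exact - semiclassical) <= 1.0  # agree up to the O(1) Maslov shift
    print(f"E = {E}: exact count {int(exact)}, semiclassical {semiclassical}")
```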
163
quantum mechanics
Is decoherence due to coarse graining or coupling with the environment?
https://physics.stackexchange.com/questions/16297/is-decoherence-due-to-coarse-graining-or-coupling-with-the-environment
<p>In the literature, sometimes one reads that decoherence is due to the coupling of the system to the external environment, and sometimes one reads that it is due to coarse graining over the microscopic degrees of freedom. Are these two different cases of decoherence, or is one more fundamental than the other?</p>
<p>The more conventional way is to describe decoherence as being due to the "coupling to the environmental degrees of freedom" that are traced over. However, the "environmental degrees of freedom" may also include geometrically internal degrees of freedom of a physical system such as a cat – unmeasurably complicated correlations in the properties of the individual atoms.</p> <p>Because the degrees of freedom you may track – e.g. whether or not a cat is alive – become entangled with many degrees of freedom you can't track – e.g. a jungle of phonons propagating through the cat – you may derive that the density matrix for the latter becomes quickly diagonalized. In this picture, the phonons' degrees of freedom would be considered "environment" by those who say that decoherence is due to the coupling to the environment. The other group wouldn't call it the environment. Instead, it would refer to coarse-graining in which all microstates of a cat which are alive, regardless of the state of the phonons etc., are clumped together. One would derive the appearance of the non-pure density matrix via another approximation but the qualitative outcome would be the same: decoherence.</p>
164
quantum mechanics
Does frequentism require exponentially many trials in some cases?
https://physics.stackexchange.com/questions/17189/does-frequentism-require-exponentially-many-trials-in-some-cases
<p>Frequentism is the philosophy that probabilities are statistical in the sense that they give the limiting frequency ratios of outcomes as the number of trials is large enough. For tiny probabilities like exponentially small probabilities, would this require exponentially many trials?</p> <p>I know, I know, you are thinking if the probability of an outcome is exponentially small, then we won't expect it to happen in any trial if the number of trials is some reasonable practical number. This overlooks the situation where say the possible outcomes are some long strings of characters taken from some alphabet in the computer science sense, and the probability distribution is such that the probability for any particular string is exponentially small. In such a case, if someone were to specify some specific string in advance, and the number of trials is some realistic number, no one would expect to find that string in any trial outcome. On the other hand, if you were to look at the trial outcomes, they would all correspond to strings which if chosen a priori, would be considered practically improbable. In practice, what experimenters do in such a situation is conduct statistical randomness tests upon the strings obtained, but does this take us out of the realm of frequentism?</p> <p>Can frequentism be saved in such a case?</p>
<p>One way to address this Question is to respond that <em>frequentism</em> is <em>not</em> the standard interpretation of probability in Physics, so it doesn't have to be saved. See Section 3.3 of <a href="http://plato.stanford.edu/entries/probability-interpret/" rel="nofollow">this Stanford Encyclopedia of Philosophy page</a>, for example, for an account of the woes of frequentism.</p> <p>Instead, some form of Statistical hypothesis testing is the <em>de facto</em> standard. That's why one hears that an experiment is consistent or not consistent with a given (probabilistic) Physical theory at "the 3 sigma level", or "at 5 sigmas", etc., which is a concise way of reporting levels of statistical significance. If experimental results fall outside 5 sigmas, that is very strong but not absolutely compelling motivation to introduce a different probabilistic model that makes predictions that are more consistent with the statistics of experimental results.</p> <p><a href="http://plato.stanford.edu/entries/epistemology-bayesian/" rel="nofollow">Bayesian analysis</a> fits into this understanding as a formulaic way to modify a given statistical hypothesis on the basis of experimental data, on the assumption that the statistical model is broadly correct, it's just the parameters that have to be determined. If the structure of a probabilistic model is not in sympathy with the data, however, Bayesian rules will not suggest, for example, a different space-time structure that will work better.</p>
165
quantum mechanics
Pauli Matrices in orthogonal space
https://physics.stackexchange.com/questions/18018/pauli-matrices-in-orthogonal-space
<p>In some literature there is reference to $\tau$ matrices which are the same pauli matrices in an orthogonal space. I have not seen any explicit constructions of this anywhere. Could someone tell me or point to literature on how to find the matrix elements of these $\tau$ matrices. </p>
<p>In Exercises 3D, 3E and 6C, Georgi uses the word <em>orthogonal</em> in a non-standard sense. Basically, he means <em>independent copies of sigma matrices that act in different spaces.</em> In detail, first let us define the $gl(2,\mathbb{C})$ Lie algebra as the span of the sigma matrices and the unit matrix $\sigma_0:={\bf 1}_{2\times 2}$, </p> <p>$$ gl(2,\mathbb{C})~:=~ {\rm span}_{\mathbb{C}}\{\sigma_0, \sigma_1, \sigma_2, \sigma_3 \}. $$</p> <p>Then Georgi is considering another 'orthogonal' copy of the Lie algebra, call it</p> <p>$$ gl(2,\mathbb{C})^{\prime}~:=~ {\rm span}_{\mathbb{C}}\{\tau_0, \tau_1, \tau_2, \tau_3 \}. $$</p> <p>And then he is basically considering the tensor product</p> <p>$$ gl(2,\mathbb{C})\otimes gl(2,\mathbb{C})^{\prime}$$</p> <p>as a new $4 \times 4=16$ dimensional Lie algebra with Lie bracket</p> <p>$$ [\sigma_i\otimes\tau_a, \sigma_j\otimes\tau_b]~:=~\sigma_i\sigma_j\otimes\tau_a\tau_b - \sigma_j\sigma_i\otimes\tau_b\tau_a. \qquad (1)$$</p> <p>There exist well-known formulas to reduce products $\sigma_i\sigma_j=\sum_k f_{ij}^k\sigma_k$ of sigma matrices, etc., so that the rhs. of eq. (1) again belongs to the Lie algebra. In this sense, the $\sigma$'s and the $\tau$'s commute.</p>
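To make the "orthogonal copies" concrete, one can build the $4\times4$ representatives $\sigma_i\otimes\mathbf{1}$ and $\mathbf{1}\otimes\tau_j$ explicitly with Kronecker products and verify that the two copies commute (a sketch, not from the original answer):

```python
import numpy as np

# The tau's are just a second, identical copy of the Pauli matrices acting
# on the other tensor factor; sigma_i (x) 1 and 1 (x) tau_j then commute.
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
I2 = np.eye(2)

for si in sigma:
    for tj in sigma:            # tau_j has the same matrix elements as sigma_j
        A = np.kron(si, I2)     # sigma_i on the first space
        B = np.kron(I2, tj)     # tau_j on the second space
        assert np.allclose(A @ B - B @ A, 0)
print("[sigma_i (x) 1, 1 (x) tau_j] = 0 for all i, j")
```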
166
quantum mechanics
Does an interaction of entangled particles with each-other cause decoherence?
https://physics.stackexchange.com/questions/19464/does-an-interaction-of-entangled-particles-with-each-other-cause-decoherence
<p>I'll apologize in advance if this is not an appropriate place for my question. My background is not in physics, and my understanding of quantum mechanics is extremely rudimentary at best, so I hope you'll be forgiving of my newbish question.</p> <p>Given a system of entangled particles (eg, 2 or more electrons), possibly in a superposition state: if the particles interact with each-other, what effect does this have on their quantum state? Is their state now determined (but perhaps unknown until observed)?</p>
<p>That depends on the interaction. Consider two spins interacting with a Heisenberg type interaction</p> <p>$$H = -J \vec{S}_1 \cdot \vec{S}_2$$</p> <p>which basically means that the spins want to be parallel if $J &gt; 0$ and anti-parallel if $J &lt; 0$. </p> <p>For anti-ferromagnetic coupling, $J &lt; 0$, the ground state is a singlet, which for spin-1/2 will look like </p> <p>$$\frac{1}{\sqrt{2}} \left[ |\uparrow \downarrow\rangle - |\downarrow \uparrow\rangle \right]$$</p> <p>which is one of the famous Bell-states, i.e., it's entangled. </p> <p>So here we have interaction and the ground-state is entangled.</p> <p>What if we had ferromagnetic coupling? Then the ground-state is degenerate. It could be the state above but with a + sign in the superposition, which would again be an entangled state, or it could be either $|\uparrow \uparrow\rangle$ or $|\downarrow \downarrow\rangle$ and these two states are not entangled. </p> <p>So interactions "with each other" can either lead to an entangled or disentangled state.</p>
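A small numerical check of the claim (my addition): diagonalizing $H = -J\,\vec S_1\cdot\vec S_2$ for two spin-1/2's with $J&lt;0$ indeed gives the singlet as the unique ground state, and its reduced density matrix is maximally mixed:

```python
import numpy as np

# Two spin-1/2's, S = sigma/2; H = -J S1.S2 with antiferromagnetic J < 0.
sx = np.array([[0, 1], [1, 0]], complex) / 2
sy = np.array([[0, -1j], [1j, 0]], complex) / 2
sz = np.array([[1, 0], [0, -1]], complex) / 2

J = -1.0
H = -J * sum(np.kron(s, s) for s in (sx, sy, sz))

evals, evecs = np.linalg.eigh(H)   # ascending eigenvalues
ground = evecs[:, 0]

# ground state is the singlet (|01> - |10>)/sqrt(2), up to a phase
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)
assert abs(abs(singlet @ ground.conj()) - 1) < 1e-12

# reduced density matrix of spin A (trace out spin B) is maximally mixed
rho = np.outer(ground, ground.conj()).reshape(2, 2, 2, 2)
rho_A = np.einsum('ibjb->ij', rho)
assert np.allclose(rho_A, np.eye(2) / 2)
print("AF ground state = singlet; rho_A = I/2 (maximally entangled)")
```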
167
quantum mechanics
What are the practical applications of decoherence?
https://physics.stackexchange.com/questions/20400/what-are-the-practical-applications-of-decoherence
<p>Let me clarify this question somewhat. I know decoherence is ubiquitous in nature and explains the emergence of a classical world from quantum physics. My question is really about how a knowledge of how decoherence actually works can be put to use in a practical application. An application we can't design in the absence of such a knowledge, even though decoherence is still happening all the time.</p> <p>Thanks</p>
<p>Since very weak interactions are sufficient to significantly decohere a quantum system, <strong>quantum systems can potentially be used as very sensitive force sensors if their decoherence is monitored</strong>. This monitoring can take the form of interferometric measurements in which the fringe visibility is measured as a function of time or some experimental parameter. The main challenge would presumably be to isolate the system well enough from all the other interactions which also cause decoherence but which you don't want to measure. </p> <p>A google search for "decoherence microscopy" will bring up some proposals. Another good starting point are the papers of the group of Markus Arndt, e.g. "Quantum interference of clusters and molecules" in Rev.Mod.Phys., in particular the section on "interference-assisted measurements".</p>
168
quantum mechanics
How to interpret the derivative in the momentum operator in quantum mechanics?
https://physics.stackexchange.com/questions/23036/how-to-interpret-the-derivative-in-the-momentum-operator-in-quantum-mechanics
<p>Given a stationary 1-D wave function $\psi(x)$, how is the derivative in the momentum operator interpreted?</p> <p>$$ \int_{-\infty}^\infty \psi^*(x) \hat{p} \psi(x) dx = \int_{-\infty}^\infty \psi^*(x) (-i\hbar\nabla) \psi(x) dx $$</p> <p>Should the integral be interpreted as</p> <p>$$-i\hbar\int_{-\infty}^\infty \psi^*(x) \psi&#39;(x) dx$$</p> <p>where $\psi&#39;(x)$ is the derivative with respect to $x$?</p>
<p>Yes. In mathematics, the symbol $f&#39;$ means $$ f&#39;(x) = \frac{{\rm d}f}{{\rm d}x} = \lim_{\epsilon\to 0} \frac{f(x+\epsilon)-f(x)}{\epsilon} $$ This notation using ${\rm d}$ was introduced by Leibniz; the notation with the prime was introduced by Lagrange.</p> <p>You also ask whether there is some problem with noncommutativity. There is absolutely nothing noncommutative in the integrals you write down. They're integrals with ordinary functions and their derivatives. The only objects that don't commute in quantum mechanics are observables i.e. operators such as $\hat x$ and $\hat p$. </p> <p>However, the derivative $\psi&#39;$ isn't really an operator. More precisely, it is proportional to the operator $\hat p$ acting on the wave function $\psi$ but $\psi$ isn't an operator so you can't really move it to the left side from the derivative.</p> <p>If we have things like $V&#39;(x)$ in quantum mechanics, the derivative of the potential energy, then the potential energy $V(x)$ may be interpreted as an operator. Then $V&#39;(x)$ may be rewritten as $$ V&#39;(x) = [\frac{\rm d}{{\rm d}x},V(x)] = \frac{i}{\hbar} [\hat p,\hat V(x)] $$ However, when we choose the explicit integrals over $x$ etc., they always mean the operations with ordinary commuting functions as they do in the calculus. One only has noncommuting objects when we write the actions of the operators more abstractly, in a way that doesn't depend on the representation of the wave function.</p>
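A numerical version of this integral (my addition): for a Gaussian wave packet $\psi(x)\propto e^{ikx}e^{-x^2/2}$, the expression $-i\hbar\int\psi^*\psi'\,dx$ should return $\hbar k$.

```python
import numpy as np

# <p> = -i*hbar * int psi*(x) psi'(x) dx for psi ~ exp(i k x) exp(-x^2/2).
hbar, k = 1.0, 2.5
x = np.linspace(-10, 10, 20001)
dx = x[1] - x[0]
psi = np.exp(1j * k * x) * np.exp(-x**2 / 2)
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)   # normalize

dpsi = np.gradient(psi, dx)                   # psi'(x) by central differences
p_expect = (np.conj(psi) * (-1j * hbar) * dpsi).sum() * dx

assert abs(p_expect.real - hbar * k) < 1e-4
assert abs(p_expect.imag) < 1e-8
print("numerical <p> is close to hbar*k =", hbar * k)
```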
169
quantum mechanics
In $H_2^+$, what is the Hamiltonian of the movement of the electron?
https://physics.stackexchange.com/questions/23569/in-h-2-what-is-the-hamiltonian-of-the-movement-of-the-electron
<p>An electron is orbiting two protons. With the Born-Oppenheimer approximation that the protons do not move, I'd write the Hamiltonian of the electron's movement as:</p> <p>$$ \mathbf{H} = -\frac{\hbar^2}{2m}\nabla^2 + E_p$$</p> <p>with</p> <p>$$E_p = -\frac{e^2}{4\pi \epsilon_0}\left(\frac{1}{r_1}+\frac{1}{r_2}\right) $$</p> <p>as the potential caused by the protons, with $r_1$ and $r_2$ denoting distances to the protons. Apparently it should be</p> <p>$$E_p = -\frac{e^2}{4\pi \epsilon_0}\left(\frac{1}{r_1}+\frac{1}{r_2}-\frac{1}{r_0}\right), $$</p> <p>where $r_0$ denotes the distance between the protons. How can that term be explained?</p>
<p>The $\frac{e^2}{4\pi\epsilon_0}\frac{1}{r_0}$ term appears in the potential for the electron motion, as Luboš and Vijay point out, to keep the whole energy accounting in place so that the nuclear motion can be properly quantized. The key point is that this potential does not involve the electron coordinates, so that as far as the electronic wavefunction is concerned it acts like a constant and therefore does not affect the solution to the electronic eigenvalue equation.</p> <p>If you do include that term, and write the potential energy of the molecule as $$E_p=-\frac{e^2}{4\pi\epsilon_0}\left(\frac{1}{r_1}+\frac{1}{r_2}-\frac{1}{r_0}\right)$$ then you can write the potential for the nuclear coordinates directly as $$E_n=\langle\psi(\mathbf{R}_1,\mathbf{R}_2)|E_p|\psi(\mathbf{R}_1,\mathbf{R}_2)\rangle$$ where the total wavefunction is split into electronic and nuclear parts as $$\langle \mathbf{r},\mathbf{R}_1,\mathbf{R}_2|\Psi\rangle=\langle\mathbf{r}|\psi(\mathbf{R}_1,\mathbf{R}_2)\rangle\langle\mathbf{R}_1,\mathbf{R}_2|\phi\rangle.$$ The Schrödinger equation for the nuclear coordinates is then $$\left(\frac{\mathbf{p}_1^2}{2M}+\frac{\mathbf{p}_2^2}{2M}+E_n\right)|\phi\rangle=E|\phi\rangle.$$</p>
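The key point above, that a term containing no electron coordinates only shifts the electronic energies and leaves the wavefunctions untouched, can be checked with a toy finite-difference Hamiltonian (my sketch; 1D, $\hbar = m = 1$, arbitrary potential and shift):

```python
import numpy as np

# A term with no electron coordinates (like e^2/(4 pi eps0 r0)) acts as a
# constant c in the electronic Hamiltonian: it shifts every eigenvalue by c
# and leaves the eigenfunctions unchanged. Toy 1D finite-difference check.
n = 400
x = np.linspace(-10, 10, n)
dx = x[1] - x[0]
V = 0.5 * x**2                 # any potential will do
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(V)

c = 3.7                        # the coordinate-independent term
E1, U1 = np.linalg.eigh(H)
E2, U2 = np.linalg.eigh(H + c * np.eye(n))

assert np.allclose(E2, E1 + c)                          # energies shift by c
assert np.allclose(np.abs(U2[:, 0]), np.abs(U1[:, 0]))  # ground state unchanged
print("a coordinate-independent term shifts energies but not eigenfunctions")
```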
170
quantum mechanics
Pertinence of the wave function of the universe, or complete description of system with massive number of dof
https://physics.stackexchange.com/questions/24032/pertinence-of-the-wave-function-of-the-universe-or-complete-description-of-syst
<p>I have heard a couple of times about the concept of the wave function of the universe, an object that would capture every degree of freedom inside it (every particle, me, even you, dear reader, etc.), and it has always sounded fallacious or at least non-pertinent: what would be the point of using that gigantic object to describe our universe? From my first classes in statistical mechanics, I learned that there is no point in trying to monitor $10^{23}$ or more degrees of freedom, and that we need to look for emergent pertinent quantities (pressure, temperature, etc.). Even more, I am now reading an article where the author (there is no point in giving you the title/name) takes the example of a local QFT completely describing the Solar system, so roughly $10^{60}$ degrees of freedom, plus the possibility that an observer could be monitoring all of those in real time, which is totally unrealistic. Does this make sense to you?</p>
<p>Yes, it is a logically positivistically meaningless notion, so in an absolute sense it is complete bullshit--- you can't measure the wavefunction of the universe, nor give a sense to the idea that it is A and not B when the overlap of A and B is nonzero. But it is <em>useful</em> bullshit, as a figure of speech, used as a conceptual aid, to get you to understand how the Everett interpretation works, how cosmology can give a complete description, how the dynamics of vacuum selection could be working, and how a nonsymmetric universe can emerge from symmetric initial conditions. The point is that it is an imprecise but good crutch for the intuition, like the idea of an infinitesimal quantity in mathematics, or the idea of negative coupling phi-4 theory in quantum field theory, or a bazillion other things which have precise analogs, but don't need precise analogs to be useful as figures of speech.</p> <p>Here is a question which is clarified by considering the wavefunction of the universe. Suppose the laws of physics are rotationally invariant, and the universe started out completely rotationally symmetric. Would we observe a symmetric state?</p> <p>The answer is no, because the wavefunction of the universe would be in a superposition of states which would be symmetric, but no observer's perception would be symmetric. Now to make this precise, you could go to an observer's point of view, and consider the universe relative to this observer, and see that it is not symmetric. Or you could say that you collapsed the wavefunction of the universe by looking around, or whatever you prefer. The end result is talking about sense-impressions, but the philosophical crutch allows you to understand that this does not require a breaking of symmetry in any fundamental law or initial condition.</p> <p>The wavefunction of the universe is also used to give predictions on the likelihood of different states in quantum gravity. 
Here the point is to make sure that we are not making models whose a priori probability is too low to be plausible. In this context, the wavefunction of the universe is a useful figure of speech.</p> <p>It is not a mistake to use positivistically unverifiable notions, so long as you always know how to translate this into sense impressions at the end, so that you eliminate the metaphysical-looking things.</p>
171
quantum mechanics
what is difference between these two expectation values?
https://physics.stackexchange.com/questions/30155/what-is-difference-between-these-two-expectation-values
<p>What is the difference between the two expectation values $\langle \hat A \hat B\rangle$ and $\langle \hat B \hat A\rangle$, where $\hat A$ and $\hat B$ are two operators?</p>
<p>$$\langle \hat{A}\hat{B} \rangle -\langle \hat{B}\hat{A} \rangle = \langle \hat{A}\hat{B}-\hat{B}\hat{A} \rangle$$</p> <p>So it is simply the expectation of the commutator.</p>
172
quantum mechanics
Feynman&#39;s &#39;diamond jumping out of a box&#39; parody, how would this work?
https://physics.stackexchange.com/questions/30172/feynmans-diamond-jumping-out-of-a-box-parody-how-would-this-work
<p>I have been told that Feynman deduced from a path integral formulation an equation that predicts the amount of time it would take for a diamond to 'jump' out of a box:</p> <p>$t &gt; \dfrac{x \Delta{x} m}{ h} $</p> <p>where $x$ is the size of the box, $\Delta x$ is the distance the diamond must travel to leave the box and $m$ the mass of the diamond.</p> <p>How would it be possible for a diamond to leave the box? What are the processes that happen over the stupendous time gap to allow the diamond to do this?</p>
<p>The diamond must become quantum as a unit, and the wave function of the quantum diamond must then disperse sufficiently to extend outside the box. At that point the diamond as a whole unit has a probability of jumping outside the box.</p> <p>The first criterion is by far the most difficult, because it can only be achieved by keeping the diamond in total and absolute information isolation from the rest of the universe. That is... unlikely to say the least. If even a single photon or phonon &quot;detects&quot; its location, then from that point forward the diamond is classical in the sense that the photon has pinpointed its location.</p> <p>The second criterion is just abysmally slow. Because even a small diamond has a lot of mass, its wave function disperses very, very slowly.</p> <p>Both of these criteria can be expressed in terms of path integrals, which provide a precise way to quantify the issues I only described conceptually.</p> <hr /> <p><strong>Addendum 2012-06-16</strong></p> <p>@OllyPrice very reasonably asked for clarifications on:</p> <blockquote> <p>(1) what does total isolation from the universe mean?, and</p> <p>(2) what does it mean that the diamond must be kept quantum as a unit?</p> </blockquote> <p>The most concrete way I can think of to quantify isolation is that the diamond can neither emit nor receive particles of matter or energy.</p> <p>Keeping out particles of matter is the easier part of that, since it means you &quot;just&quot; need the most absolute vacuum ever created, including removal of all high-energy particles such as cosmic rays. Preventing energy exchanges is much, much harder. The vacuum keeps you from exchanging phonons (sound quanta), so you get two (matter and phonons) for the price of one with that one. Your suspension system would need to be phonon-free, however, at least if you do the experiment here on earth.</p> <p>That leaves mostly electromagnetic radiation. 
Radio frequencies whose wavelengths are a lot larger than the distance you want the diamond to jump are not a big problem, although if you get enough of them you may start locating the diamond too well and thus &quot;lose coherence&quot; as they say these days (it's the same idea).</p> <p>So, that leaves mostly the higher frequencies of very short microwaves through infrared and light. Infrared is going to be the biggest issue, so you want both the diamond and the cavity in which you do the experiment to be <em>cold</em>, as close to absolute zero as possible, to prevent stray space-locating infrared photons from traveling in either direction.</p> <p>... and having said all that, I must also say: Hmm! That portfolio of preconditions is not <em>quite</em> as impossible as I had always assumed. So, someone could possibly do this for real in a high-end lab, using something like, say, a bacterium (huge!)... or much better, something a lot smaller, such as a low-end nanodiamond. The suspension system for the earthbound version would be the trickiest part. Hmm. Maybe a single embedded electron charge? No, much easier: A superconducting particle over a dish magnet. Or maybe even better: A very small piece of <a href="http://en.wikipedia.org/wiki/Pyrolitic_graphite" rel="nofollow noreferrer">pyrolytic carbon</a> similarly suspended by a magnetic field, though I don't know for sure what would happen to that material's extreme form of <a href="http://en.wikipedia.org/wiki/Diamagnetic" rel="nofollow noreferrer">diamagnetism</a> at such low temperatures.</p>
The covalent bonds of the diamond keep its internal components aligned strongly with each other, and so unable to deviate over time into less certain relationships. That's not to say you can't get some weird stuff going on internally, but it won't in general be capable of disrupting the diamond's internal structure.</p> <p>Notice the inverse relationship here between isolation and cohesion (bonding).</p> <p>That is, anything that bonds two objects together physically (phonon-mediated) or via information keeps them from &quot;going quantum&quot; relative to each other. Conversely, it is the lack of such bonding (isolation) that enables relative quantum behavior.</p> <p>When the universe as a whole is one of those two partners, &quot;relative quantum behavior&quot; becomes a pretty one-sided concept, since we in the universe just stay classical.</p> <p>However, for something like two very small objects isolated both from each other and from the universe, the concept of relative quantum behavior becomes a testable idea. That would be where <em>each</em> system sees the <em>other</em> as being the quantum one, whenever they finally do interact with each other.</p> <p>(Fair warning: I just now made up the phrase &quot;relative quantum behavior&quot;; it's not standard. However, the fully quantum frameworks for studying systems such as <a href="http://en.wikipedia.org/wiki/Positronium" rel="nofollow noreferrer">positronium</a> would necessarily contain an equivalent concept, since for example in positronium the electron and positron are necessarily equally &quot;quantum&quot; relative to each other, whereas in hydrogen it's easy to approximate the proton as being classical. 
But I don't know if the idea has ever been explored as a stand-alone concept, especially as it would apply to larger systems.)</p> <p>Finally, and even more interesting (to me at least) is the idea that one might be able to encapsulate a &quot;hot spot&quot; within a sufficiently large, cold piece of matter. By recording over time, this <em>internal observer</em> could watch itself even as the diamond as a whole &quot;goes quantum&quot; and starts doing things like being in two places at once.</p> <p>The idea that a classical observer could nonetheless still be subject to quantum non-locality at a larger level just fascinates me, in part because it's so counter-intuitive to how we usually view quantum mechanics.</p>
173
quantum mechanics
Partial decoherence from interaction between two qubits
https://physics.stackexchange.com/questions/409753/partial-decoherence-from-interaction-between-two-qubits
<p>$\renewcommand{\ket}[1]{\left\lvert #1 \right \rangle}$</p> <p>If a quantum system $A$ becomes entangled with another quantum system $B$, then $A$ can no longer be described by a pure quantum state. For example, given a Bell state $$ \ket{00} + \ket{11} \, , $$ the state of either qubit by itself is a classical mixture of $\ket{0}$ and $\ket{1}$. In this case where we have complete entanglement, the state of each qubit uniquely determines the state of the other and the quantum coherence of each individual qubit is completely lost. However, if the two qubits interact only weakly, the effect on each individual qubit state should be equivalent to a weak measurement and result in only partial loss of coherence.</p> <p>Starting with an initial product state of two qubits $$ \ket{\psi(0)} = \frac{1}{2} ( \underbrace{\ket{0} + \ket{1}}_A ) \otimes (\underbrace{\ket{0} + \ket{1}}_B) \, , $$ can we show that physical interaction through an interaction Hamiltonian $$ H/\hbar = \frac{g}{2} \left( \sigma_x \otimes \sigma_x + \sigma_y \otimes \sigma_y \right) = g \left( \begin{array}{cccc} 0 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 \end{array} \right) $$ can result in partial decoherence of qubit $A$?</p> <p>We use the basis state ordering $\{ \ket{00}, \ket{01}, \ket{10}, \ket{11} \}$ implicit in the definition of the Kronecker product. Note, therefore, that qubit $A$ is the most significant bit in the two-qubit kets. Furthermore, in the chosen basis state ordering, $\ket{\psi(0)}$ is represented as $$ \frac{1}{2} \left( \begin{array}{c} 1 \\ 1 \\ 1 \\ 1 \end{array} \right) \, . $$</p>
<p><span class="math-container">$\renewcommand{\ket}[1]{\left \lvert #1 \right \rangle}$</span> The propagator for this Hamiltonian is <span class="math-container">$$ U(t) = \exp(-iHt/\hbar) = \left( \begin{array}{cccc} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; \cos(gt) &amp; -i \sin(gt) &amp; 0 \\ 0 &amp; -i \sin(gt) &amp; \cos(gt) &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \end{array} \right) \, . $$</span> Applying <span class="math-container">$U(t)$</span> to the initial state <span class="math-container">$\ket{\psi(0)}$</span>, we get <span class="math-container">$$ \ket{\psi(t)} = U \ket{\psi(0)} = \frac{1}{2} \left( \begin{array}{c} 1 \\ e^{-i g t} \\ e^{-i g t} \\ 1 \end{array} \right) \, . $$</span> The density matrix is therefore <span class="math-container">$$ \rho(t) = \frac{1}{4} \left( \begin{array}{cccc} 1 &amp; e^{igt} &amp; e^{igt} &amp; 1 \\ e^{-igt} &amp; 1 &amp; 1 &amp; e^{-igt} \\ e^{-igt} &amp; 1 &amp; 1 &amp; e^{-igt} \\ 1 &amp; e^{igt} &amp; e^{igt} &amp; 1 \end{array} \right) \, . $$</span> Now take the partial trace over qubit <span class="math-container">$B$</span> (the least significant bit). Doing this leaves the density matrix of qubit <span class="math-container">$A$</span> (the most significant bit) as <span class="math-container">$$ \rho_A(t) = \frac{1}{2} \left( \begin{array}{cc} 1 &amp; \cos(gt) \\ \cos(gt) &amp; 1 \end{array} \right) \, . $$</span> After a time <span class="math-container">$t = \pi/(2g)$</span>, the coherence of qubit <span class="math-container">$A$</span> is completely gone. This time is precisely the time it takes for the two qubits to "swap" their excitations, i.e. for the process <span class="math-container">$$\ket{1} \otimes \ket{0} \rightarrow \ket{0} \otimes \ket{1} \, .$$</span> Note, however, that because of the initial state where both qubits are in a superposition of excited and not excited, we observe in <span class="math-container">$\rho_A(t)$</span> no change in population of the two levels.
Instead, we see only an oscillation in the qubit's coherences.</p> <hr> <p>Thanks to <a href="https://physics.stackexchange.com/users/8563/emilio-pisanty">Emilio Pisanty</a> for help with this question and answer.</p>
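The partial-trace calculation above is easy to check numerically. The sketch below (plain numpy, with ħ = 1 and arbitrary test values of g and t of my choosing) builds the propagator by eigendecomposition, evolves the product state, and traces out qubit B:

```python
import numpy as np

g, t = 1.0, 0.7   # coupling and time, arbitrary test values (hbar = 1)

# Interaction Hamiltonian in the basis ordering {|00>, |01>, |10>, |11>}
H = g * np.array([[0, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]], dtype=complex)

# Propagator U(t) = exp(-iHt), via eigendecomposition since H is Hermitian
evals, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * evals * t)) @ V.conj().T

psi0 = 0.5 * np.ones(4, dtype=complex)    # (|0> + |1>) ⊗ (|0> + |1>) / 2
psi_t = U @ psi0
rho = np.outer(psi_t, psi_t.conj())       # full two-qubit density matrix

# Partial trace over qubit B (the least significant bit):
# reshape to (a, b, a', b') and contract the b indices
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# The off-diagonal coherence of qubit A should come out as cos(gt)/2
```

The final `rho_A` reproduces the matrix derived above, with the coherence falling to zero at t = π/(2g).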
174
quantum mechanics
Difference between state function and eigenfunction
https://physics.stackexchange.com/questions/425519/difference-between-state-function-and-eigenfunction
<p>Is there any difference when we say in quantum mechanics the <em>eigenfunction</em> of an operator and the <em>state function</em>? Can we use the two terms interchangeably?</p>
175
quantum mechanics
Does color filter change the frequency of light?
https://physics.stackexchange.com/questions/426844/does-color-filter-change-the-frequency-of-light
<p>If I light a white LED bulb and then put a color filter around it (for example, a red filter or a violet filter), will it change the frequency of the light?</p>
<p>The frequency will not be changed, but rather the color that is reflected/transmitted (depending on the nature of the filter). If you observe the white light THROUGH the filter and it appears to be red, then the filter either reflects or absorbs the other wavelengths while it lets the red ones through. The frequencies you are observing are the same ones that were emitted by the diode; the rest have simply been scattered or absorbed elsewhere.</p>
176
quantum mechanics
How light source effects the nature of electron in double slit experiment
https://physics.stackexchange.com/questions/430914/how-light-source-effects-the-nature-of-electron-in-double-slit-experiment
<p>I know that electrons behave as particles instead of waves once observed under a light source. But what I really want to know is how the photons force the electrons to deviate from their wave character.</p>
177
quantum mechanics
Error in paper showing internal inconsistency in QM?
https://physics.stackexchange.com/questions/431053/error-in-paper-showing-internal-inconsistency-in-qm
<p>In a recent <a href="https://www.nature.com/articles/s41467-018-05739-8.pdf" rel="nofollow noreferrer">paper</a>, “Quantum theory cannot consistently describe the use of itself”, D. Frauchiger and R. Renner describe a modified Schrödinger's Cat <em>gedankenexperiment</em>, in which there are two boxes (A and B), containing observers (Oa and Ob). Observer Oa does a quantum measurement to obtain a 1 or 0 result. He sets the spin of a particle up or down, corresponding to 1 or 0 respectively, and sends the particle to Ob. Ob measures the spin of the particle and records the result. </p> <p>In the meantime, there are two external observers outside the boxes, Ea and Eb. Ea measures the state of the whole box A, while Eb measures the state of the whole box B. All of the observers are assumed to know how the experiment is set up. The authors of the paper assert that the measurements made by Ea and Eb do not always agree, and that therefore quantum mechanics is internally inconsistent.</p> <p>It seems to me that there may be a fundamental error in the reasoning described in the paper. The error would be in the assumption that Oa can send a particle, whose state accurately reflects the state of Oa, to Ob. It seems that this would amount to a violation of the no-cloning theorem in quantum mechanics, because in essence the sent particle's quantum state would be a clone of the quantum state initially measured by Oa. </p> <p>I'd be grateful if someone in the PhysicsSE community could shed some light on this.</p>
178
quantum mechanics
How do you calculate the probabilities associated with eigenfunctions of a wave function?
https://physics.stackexchange.com/questions/433443/how-do-you-calculate-the-probabilities-associated-with-eigenfunctions-of-a-wave
<p>I'm watching <a href="https://www.youtube.com/watch?v=EftlcEdaO_8&amp;list=PLy73XOgPrzuYW-RBSJz8IbdanXgIUuwsR&amp;index=17" rel="nofollow noreferrer">Lecture 03-05 of the MIT 3.024 lecture series on Electronic, Optical and Magnetic Properties of Materials</a> by Polina Anikeeva, specifically the discussion from the 23:30 mark onwards.</p> <p>In her lecture Prof. Anikeeva states that if we do not know the probabilities of the different eigenvalues of momentum, then we can measure the momentum many, many times and approximate the probabilities from that. However, before that, she mentions that any measurement causes the collapse of the wavefunction into one of the eigenfunctions associated with the measured value. From that point onwards, the new wave function describing the particle is that eigenfunction.</p> <p>My question is: won't repeated measurements of momentum change the probability distribution, rather than giving us the probability distribution of the original wave function?</p>
<p>Yes, the wavefunction does change when you make a measurement, and hence the probability of the system ending up in a given eigenstate. You would need to reset the experiment each time as you made these repeated measurements. If you just repeatedly measured the same system without resetting it, then you would always get the same answer (assuming no time evolution), because the wavefunction is now exactly one of the eigenstates, and so measuring it will return that eigenstate with 100% probability.</p>
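A toy simulation makes the distinction concrete. The sketch below (plain Python; the 50/50 qubit and the `measure` helper are my own illustrative choices, not from the lecture) compares resetting the state before each shot with repeatedly measuring the same collapsed state:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(p0, rng):
    """One projective measurement: returns 0 with probability p0, else 1."""
    return 0 if rng.random() < p0 else 1

p0 = 0.5   # |psi> = (|0> + |1>)/sqrt(2), so P(0) = P(1) = 1/2

# Resetting the system before every shot samples the original distribution
shots = [measure(p0, rng) for _ in range(10_000)]

# Without resetting, the first shot collapses the state onto an eigenstate;
# with no time evolution, every later shot just repeats that outcome
first = measure(p0, rng)
collapsed_p0 = 1.0 if first == 0 else 0.0
repeats = [measure(collapsed_p0, rng) for _ in range(100)]
```

With resets the shot histogram approaches the 50/50 distribution of the original state; without resets, every repeated shot equals the first one.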
179
quantum mechanics
Distinguishability and energy of a system
https://physics.stackexchange.com/questions/440262/distinguishability-and-energy-of-a-system
<p>I'm studying distinguishability in quantum mechanics but I'm confused with the calculation of energies.</p> <p>Suppose we are given a Hamiltonian for 1 particle with two possible sites<br> <span class="math-container">$$ H = \begin{bmatrix} 0 &amp; t \\ t &amp; 0 \end{bmatrix}$$</span></p> <p>With this Hamiltonian we have of course two eigenstates <span class="math-container">$\psi_S$</span> and <span class="math-container">$\psi_A$</span>. Assume this particle now also has spin 1/2 and is under the same influence of the Hamiltonian introduced above. Then the new 4x4 Hamiltonian for 1 particle is</p> <p><span class="math-container">$$ H' = \begin{bmatrix} 1 &amp; 0 \\0 &amp; 1 \end{bmatrix} \otimes \begin{bmatrix} 0 &amp; t \\t &amp; 0 \end{bmatrix} $$</span></p> <p>Where the first part of the tensor product corresponds to the spin space and the second to the site space (Could someone tell me if this is correct?).</p> <p>Now assume instead of one particle, we have two DISTINGUISHABLE particles. This is where I get confused. Since we have two quantum systems not interacting with each other, the eigenstates for the two-particle system may be given by tensor products of the 1-particle states. In other words, since we had 4 eigenstates for the one-particle system, in the two-distinguishable-particle system we have 16 eigenstates, composed of all possible combinations. Also if this is the case, the Hamiltonian for the 2-particle (distinguishable) system is given by <span class="math-container">$H' \otimes H'$</span> (I'm not sure this is correct, although I would assume that since both systems don't interact the Hamiltonian acts independently on each particle). If this is the case then the eigenvalues will be of order <span class="math-container">$t^2$</span>.
For example consider <span class="math-container">$\psi_{S \uparrow} \otimes \psi_{S\uparrow}$</span>.</p> <p>Yet I can think of an argument that they should only be of order <span class="math-container">$t$</span>. If we have two particles that don't interact with each other, then the total energy will be just the sum of the energies of both systems. Thus for <span class="math-container">$\psi_{S \uparrow} \otimes \psi_{S\uparrow}$</span> the energy will be <span class="math-container">$2t$</span>. I'm very confused about this and any guidance will be greatly appreciated.</p>
<p>The Hamiltonian of composite system is not a tensor product, but a sum</p> <p><span class="math-container">$$ H_{comp} = H'_1 + H'_2 $$</span></p> <p>where <span class="math-container">$H_1'$</span> acts on the part of state vector that describes the first system, and <span class="math-container">$H_2'$</span> acts on the part that describes the second one.</p> <p>This is consistent with your argument that total energy of noninteracting two systems should be sum of their energies.</p> <p>The tensor product is present in the composite Hamiltonian in the sense that</p> <p><span class="math-container">$$ H_1' = H' \otimes 1_2 $$</span></p> <p><span class="math-container">$$ H_2' = 1_1 \otimes H' $$</span></p> <p>where <span class="math-container">$1_1$</span> is unit matrix acting on the coordinates of the first system, and <span class="math-container">$1_2$</span> is unit matrix acting on the coordinates of the second system.</p>
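This is easy to verify numerically. The sketch below (plain numpy, using the 2x2 site Hamiltonian from the question with t = 1, spin omitted for brevity) builds the composite Hamiltonian as a Kronecker sum and checks that its eigenvalues are sums of the single-particle energies ±t, i.e. of order t rather than t²:

```python
import numpy as np

t = 1.0
H = np.array([[0.0, t],
              [t, 0.0]])   # single-particle site Hamiltonian, eigenvalues ±t
I = np.eye(2)

# Composite Hamiltonian of two noninteracting particles:
# H_comp = H ⊗ 1 + 1 ⊗ H  (each term acts on one particle only)
H_comp = np.kron(H, I) + np.kron(I, H)

evals = np.sort(np.linalg.eigvalsh(H_comp))
# Every eigenvalue is a sum e_i + e_j of single-particle energies:
# -t-t, -t+t, t-t, t+t  ->  -2t, 0, 0, +2t
```

Had we used `np.kron(H, H)` instead (the tensor *product* of the Hamiltonians), the eigenvalues would have been products of order t², which is exactly the confusion in the question.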
180
quantum mechanics
Seeming energy paradox in quantum system?
https://physics.stackexchange.com/questions/441452/seeming-energy-paradox-in-quantum-system
<p>Imagine an electron in an atomic or oscillator potential - any bound electron state. The WF has a bell-like shape fading at infinity - where the classical energy of the electron by all means greatly exceeds the state's energy eigenvalue. Now, if we place a trap for the electron at this distance and finally get the electron trapped there - we will end up with the electron energy much higher than its energy eigenvalue, and this seems to violate the energy conservation principle. Where is the mistake?</p> <p>== The following is an update regarding the &quot;leaking into the trap&quot; model ==</p> <p>The WF is a complex amplitude of the probability TO FIND A PARTICLE IN A SPECIFIC SPACE LOCATION. How do we measure probability? We place a trap. A good trap is one the beast is unaware of. And it's a shutter-like device - when the shutter closes, the particle is either in or out. So we activate the trap by closing the shutter and find our particle trapped. A violation of energy conservation? OR - does it follow that closing the trap is not an energy-free process? Then closing the trap interferes with the particle state, pumping it with energy? And what do we mean then by the probability density amplitude of the WF? (We can't measure it!)</p>
<p>I guess you need first to offer a description of the trap. For example, if this trap is just a deep potential well at a significant distance from the minimum of the initial potential, then the electron can indeed tunnel into the trap, but if the minimum of the trap well is below the minimum of the initial potential, the energy conservation will not be violated. If, however, the minimum of the trap is too high, the electron just will not tunnel there. </p>
181
quantum mechanics
Planetary-sized pure quantum states
https://physics.stackexchange.com/questions/442432/planetary-sized-pure-quantum-states
<p>Picture a planet wandering intergalactic space. Such a planet would only couple to vacuum fluctuations and the cosmic microwave background. (Ignore stray hydrogen atoms.)</p> <p>If this planet started as a pure quantum state, how fast would that state lose its coherence?</p> <p>In such a system, clearly there are many more degrees of freedom that are isolated from the environment compared with those coupled to the outside. So I want to know if those isolated DOF somehow protect the purity of the quantum state.</p>
<p>I am just going to quote Schlosshauer as being pertinent to this question and discussion in comments.</p> <p>Reference: Decoherence and the Quantum-to-Classical transition (page 84):</p> <p>To summarize, we have distinguished three different cases for the type of preferred pointer states emerging from interactions with the environment:</p> <ol> <li>The quantum-measurement limit. When the evolution of the system is dominated by <span class="math-container">$H_{int}$</span>, i.e. by the interaction with the environment, the preferred states will be eigenstates of <span class="math-container">$H_{int}$</span> (and thus often eigenstates of position).</li> <li>The quantum limit of decoherence. When the environment is slow and the self-Hamiltonian <span class="math-container">$H_S$</span> dominates the evolution of the system, a case frequently encountered in the microscopic domain, the preferred states will be energy eigenstates, i.e., eigenstates of <span class="math-container">$H_S$</span></li> <li>The intermediary regime. When the evolution of the system is governed by <span class="math-container">$H_{int}$</span> and <span class="math-container">$H_S$</span> in roughly equal strengths, the resulting preferred states will represent a compromise between the first two cases. For instance in quantum Brownian motion the interaction Hamiltonian <span class="math-container">$H_{int}$</span> describes monitoring of the position of the system. However, through the intrinsic dynamics induced by <span class="math-container">$H_S$</span> this monitoring also leads to indirect decoherence in momentum. This combined influence of <span class="math-container">$H_{int}$</span> and <span class="math-container">$H_S$</span> results in the emergence of preferred states localized in phase space, i.e. in both position and momentum.</li> </ol>
182
quantum mechanics
Bose Einstein Condensate
https://physics.stackexchange.com/questions/447059/bose-einstein-condensate
<p>In a Bose-Einstein condensate photons stop moving, but with reference to what frame of reference? Photons move at a constant speed in all reference frames, so what happens to Maxwell's equations at zero kelvin?</p>
<p>In a Bose-Einstein condensate the electrical field of the light interacts with the condensate and the photons and the condensate become entangled. Once this happens the light forms a quasiparticle called a <a href="https://en.wikipedia.org/wiki/Polariton" rel="nofollow noreferrer">polariton</a>. The velocity is no longer the velocity of the photon but instead it is the velocity of the polariton. The polariton behaves like a massive particle and therefore it moves at speeds less than <span class="math-container">$c$</span>.</p> <p>The speed referred to in these experiments is the speed measured in the rest frame of the condensate, which is generally the laboratory frame since the condensate is usually at rest in the laboratory.</p> <p>It sounds very odd that in these experiments the speed of light can be reduced to a few metres per second, but the underlying principle is really no different from the slowing of light that occurs in media like glass or water. In both cases it is the interaction between the photons and the medium that causes the slowing. It's just that in a BEC the interaction is many orders of magnitude stronger than in e.g. glass so the slowing is much more drastic.</p>
183
quantum mechanics
How can we use usual notations in relational quantum mechanics?
https://physics.stackexchange.com/questions/449641/how-can-we-use-usual-notations-in-relational-quantum-mechanics
<p>In this <a href="https://arxiv.org/abs/quant-ph/9609002" rel="nofollow noreferrer">relational quantum mechanics paper</a> Rovelli tries to reconstruct the theory while avoiding the notion of a state associated with a particle. He says that the only correct thing would be to speak of the information that an observer O has about a system S. He also says that it is an interpretation of quantum mechanics. I wonder if we could use the usual notations in the spirit of RQM by always writing a subscript O on the state of the observed system (<span class="math-container">$ \phi_O$</span>). Please look at this <a href="https://physics.stackexchange.com/questions/130849/what-is-the-philosophy-behind-relational-quantum-mechanics">answer</a>: we have a two-level system S observed by P and Q. It is written to consider the states of O + P + S, and we get things like uuu and ddd. It is possible to rewrite this as <span class="math-container">$u_O. u_P.u$</span>, but I have a problem with the third term. It seems to be &quot;absolute&quot;. I think that there are additional rules to use in the notations.</p>
184
quantum mechanics
Quantum numbers in spherical symmetric potential
https://physics.stackexchange.com/questions/450330/quantum-numbers-in-spherical-symmetric-potential
<p>Can we prove that the principal quantum number <span class="math-container">$n$</span> and the azimuthal quantum number <span class="math-container">$l$</span> satisfy the relation <span class="math-container">$l=0,1,...,n-1$</span> in any spherically symmetric potential <span class="math-container">$V(r)$</span>, or does this apply only to the Coulomb potential?</p> <p>I would appreciate references to books or articles.</p>
<p>This relationship does not hold for all spherically-symmetric potentials. For example, for the <a href="https://en.m.wikipedia.org/wiki/Quantum_harmonic_oscillator#Example:_3D_isotropic_harmonic_oscillator" rel="nofollow noreferrer">3D harmonic oscillator</a> the relationship between <span class="math-container">$n$</span> and <span class="math-container">$l$</span> is</p> <p><span class="math-container">$$l=0,2,4,...,n$$</span></p> <p>for even <span class="math-container">$n$</span> and</p> <p><span class="math-container">$$l=1,3,5,...,n$$</span></p> <p>for odd <span class="math-container">$n$</span>.</p>
185
quantum mechanics
Deriving time-dependent Hamiltonian
https://physics.stackexchange.com/questions/452481/deriving-time-dependent-hamiltonian
<p>Consider a two-state system <span class="math-container">$\rho$</span> with some Hamiltonian <span class="math-container">$H$</span>. I am interested in the specifics of the system's time evolution.</p> <p>The article I am reading gives me the following time evolution:</p> <p><span class="math-container">$$\lvert \psi(t) \rangle = \cos(\Omega t) e^{i \omega_1 t} \lvert 1 \rangle + \sin(\Omega t) e^{i \omega_2 t} \lvert 2 \rangle $$</span></p> <p>This makes it fairly obvious that the corresponding Hamiltonian for this unitary transformation is time-dependent. </p> <p>Can anyone help me derive the Hamiltonian and show me how it's done?</p> <p>As far as I know, for time-dependent Hamiltonians that commute with themselves at different times we have the relation</p> <p><span class="math-container">$$U = e^{-\frac{i}{\hbar} \int H(t) dt}$$</span></p> <p>I've looked but wasn't able to find help. Any help will be greatly appreciated (:</p>
<p>You can directly use Schrödinger's equation: <span class="math-container">$$ i \hbar \frac{d}{dt} \lvert\psi(t)\rangle = H \lvert\psi(t)\rangle $$</span></p> <p>Start with the given time evolution: <span class="math-container">$$ \begin{align} \lvert\psi(t)\rangle &amp;= \cos(\Omega t) e^{i \omega_1 t} \lvert 1\rangle \\ &amp;+ \sin(\Omega t) e^{i \omega_2 t} \lvert 2\rangle \tag{1} \end{align} $$</span></p> <p>Heading for Schrödinger's equation, you can straightforwardly calculate its time derivative: <span class="math-container">$$ \begin{align} i \hbar \frac{d}{dt} \lvert\psi(t)\rangle = &amp;- i\hbar \Omega \sin(\Omega t) e^{i\omega_1 t} \lvert 1\rangle \\ &amp;- \hbar \omega_1 \cos(\Omega t) e^{i\omega_1 t} \lvert 1\rangle \\ &amp;+ i\hbar \Omega \cos(\Omega t) e^{i\omega_2 t} \lvert 2\rangle \\ &amp;- \hbar \omega_2 \sin(\Omega t) e^{i\omega_2 t} \lvert 2\rangle \tag{2} \end{align} $$</span></p> <p>Now let's make the most general approach for a time-dependent Hamiltonian in the given two-state system: <span class="math-container">$$ \begin{align} H(t) &amp;= H_{11}(t) \lvert 1\rangle \langle 1 \rvert \\ &amp;+ H_{12}(t) \lvert 1\rangle \langle 2 \rvert \\ &amp;+ H_{21}(t) \lvert 2\rangle \langle 1 \rvert \\ &amp;+ H_{22}(t) \lvert 2\rangle \langle 2 \rvert \end{align} $$</span></p> <p>Apply this Hamiltonian to the given time evolution (1), use the orthonormality relations between <span class="math-container">$\lvert 1\rangle$</span> and <span class="math-container">$\lvert 2\rangle$</span>, and you get:</p> <p><span class="math-container">$$ \begin{align} H(t) \lvert\psi(t)\rangle &amp;= H_{11}(t) \cos(\Omega t) e^{i\omega_1 t} \lvert 1\rangle \\ &amp;+ H_{12}(t) \sin(\Omega t) e^{i\omega_2 t} \lvert 1\rangle \\ &amp;+ H_{21}(t) \cos(\Omega t) e^{i\omega_1 t} \lvert 2\rangle \\ &amp;+ H_{22}(t) \sin(\Omega t) e^{i\omega_2 t} \lvert 2\rangle \tag{3} \end{align} $$</span></p> <p>According to Schrödinger's equation you can equate (2) and (3).
By comparing the coefficients you get the final result: <span class="math-container">$$ \begin{align} H_{11}(t) &amp;= - \hbar \omega_1 \\ H_{12}(t) &amp;= -i\hbar \Omega e^{ i(\omega_1 - \omega_2) t} \\ H_{21}(t) &amp;= +i\hbar \Omega e^{-i(\omega_1 - \omega_2) t} \\ H_{22}(t) &amp;= - \hbar \omega_2 \end{align} $$</span> Notice that the Hamiltonian turned out to be Hermitian as it should be.</p>
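As a sanity check (my own addition, not part of the original answer), one can verify numerically that the derived H(t) reproduces the given state via Schrödinger's equation, using a central finite difference for the time derivative (ħ = 1, arbitrary test parameters):

```python
import numpy as np

Omega, w1, w2 = 0.9, 1.3, 2.1   # arbitrary test parameters (hbar = 1)

def psi(t):
    """The given two-component state |psi(t)>."""
    return np.array([np.cos(Omega * t) * np.exp(1j * w1 * t),
                     np.sin(Omega * t) * np.exp(1j * w2 * t)])

def H(t):
    """The Hamiltonian derived above (hbar = 1); note H21 = conj(H12)."""
    H12 = -1j * Omega * np.exp(1j * (w1 - w2) * t)
    return np.array([[-w1, H12],
                     [np.conj(H12), -w2]])

# Check i d|psi>/dt = H(t)|psi> at a sample time via central differences
t0, h = 0.37, 1e-6
dpsi = (psi(t0 + h) - psi(t0 - h)) / (2 * h)
lhs = 1j * dpsi
rhs = H(t0) @ psi(t0)
```

If the derivation is right, `lhs` and `rhs` agree to within the finite-difference error at any sample time.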
186
quantum mechanics
Photoelectric Effect (and Electric Current)
https://physics.stackexchange.com/questions/452798/photoelectric-effect-and-electric-current
<blockquote> <p><strong>When intuition fails: photons to the rescue!</strong><br> When experiments were performed to look at the effect of light amplitude and frequency, the following results were observed:</p> <ul> <li>The kinetic energy of photoelectrons increases with light frequency.</li> <li>Electric current remains constant as light frequency increases.</li> <li>Electric current increases with light amplitude.</li> <li>The kinetic energy of photoelectrons remains constant as light amplitude increases.</li> </ul> </blockquote> <p>This article about the photoelectric effect notes that the kinetic energy of photoelectrons increases with light frequency, while the electric current remains constant as the frequency increases.</p> <p>I'm fairly sure that an increasing light frequency causes an increase in the energy of the photon (and thus the increase in the energy of the photoelectron). However, I'm not sure what electric current means in this experiment, or why it stays constant.</p>
<p><a href="https://physics.stackexchange.com/questions/333813/are-number-of-photons-in-an-incident-radiation-proportional-to-its-intensity">This question and its answer</a> may help explain the relationship between intensity and number of photons. As Thomas Fritsch pointed out, electric current is the number of electrons emitted per unit time. Assuming the photons' frequency and hence energy is high enough to knock an electron loose, more incident photons per unit time means more photoelectrons per unit time, which is a higher current.</p> <p>EDIT: The second answer to the linked question is helpful for your comment. From the first answer we know that intensity is proportional to the product of number of photons and photon energy: <span class="math-container">$$I\propto n\nu$$</span> where <span class="math-container">$I$</span> is intensity, <span class="math-container">$n$</span> is number, and <span class="math-container">$\nu$</span> is frequency. From the second we know that intensity is proportional to the amplitude squared: <span class="math-container">$$I\propto E^2$$</span> where <span class="math-container">$I$</span> is intensity and <span class="math-container">$E$</span> is amplitude (I use E here because the first answer in the other question already used A for area.) Therefore <span class="math-container">$$E^2\propto n\nu$$</span> That shows how to relate amplitude to the other parameters. <br><br>Increasing frequency <em>does</em> increase intensity, but it doesn't increase <span class="math-container">$n$</span>, the number of photons, so it doesn't increase the number of photoelectrons and hence doesn't increase the current.</p> <p>EDIT 2: Above I gave the relationship between intensity and the amplitude of a classical wave. Intensity <span class="math-container">$I$</span> is defined as power per unit area, and power <span class="math-container">$P$</span> is rate of energy flow. 
Each photon has energy <span class="math-container">$h \nu$</span>, so if the rate at which photons land on your detector is <span class="math-container">$R$</span> and the area of your detector is <span class="math-container">$A$</span>, then <span class="math-container">$$I=\frac{P}{A}=\frac{Rh\nu}{A}$$</span></p>
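To get a feel for the numbers, here is a back-of-the-envelope calculation (the beam parameters are my own illustrative values, not from the question) for the photon arrival rate R of a 1 mW red beam:

```python
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s

wavelength = 650e-9     # red light, m
nu = c / wavelength     # frequency, ~4.6e14 Hz
E_photon = h * nu       # energy per photon, ~3.1e-19 J

P = 1e-3                # beam power: 1 mW
R = P / E_photon        # photons per second hitting the detector
# R works out to roughly 3e15 photons/s. Doubling P doubles R (more
# photoelectrons, more current); doubling nu at fixed P halves R
# (same or less current, but each photoelectron is more energetic).
```

This illustrates the answer's point: current tracks the photon *rate*, while photoelectron kinetic energy tracks the photon *energy*.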
187
quantum mechanics
Why the available parities of the infinite well solutions change as we change the boundaries positions?
https://physics.stackexchange.com/questions/453228/why-the-available-parities-of-the-infinite-well-solutions-change-as-we-change-th
<p>So, I have noticed something in the solutions of the infinite quantum well and I don't quite understand it. The solutions are of the form</p> <p><span class="math-container">\begin{equation} \phi_{n}(x) = A\cos(kx)+B\sin(kx) \end{equation}</span></p> <p>If the boundaries of the well are at <span class="math-container">$x=0$</span> and <span class="math-container">$x=L$</span> then the boundary condition <span class="math-container">$\phi_{n}(x=0)=0$</span> leads to <span class="math-container">$A=0$</span>, meaning that we only have odd-parity solutions available.</p> <p>If, however, the boundaries of the well are at <span class="math-container">$x=-a$</span> and <span class="math-container">$x=a$</span> we have</p> <p><span class="math-container">\begin{eqnarray} A\cos(ka)+B\sin(ka)=0 \\ A\cos(ka)-B\sin(ka)=0 \end{eqnarray}</span></p> <p>which leads us to</p> <p><span class="math-container">\begin{eqnarray} A\cos(ka)=0 \\ B\sin(ka)=0 \end{eqnarray}</span></p> <p>giving rise to odd and even parity solutions.</p> <p>How come we gain one type of solution by moving the boundary? Shouldn't the system be the same independently of where we decide to put the origin of our system?</p> <p>Thank you very much.</p>
<p>The <em>energies</em> will be the same but the solutions - of course - will not. The simplest example is to compare <span class="math-container">$\cos(x)$</span>, which is even, with <span class="math-container">$\sin(x)$</span>, which is odd. If you just translate by <span class="math-container">$\pi/2$</span>, then <span class="math-container">$\cos(x-\pi/2)=\sin(x)$</span>: by simply translating the origin you change the parity of the function. Of course, a sine function is just a cosine function shifted by <span class="math-container">$\pi/2$</span>, so displacing (or shifting) the origin does not affect the <em>shape</em> of the solution although it does affect its parity.</p> <p>The parity depends quite fundamentally on where you place your origin, since parity is a reflection about a reference point: if your well extends from <span class="math-container">$0$</span> to <span class="math-container">$2a$</span>, it is <em>not</em> symmetric, but if it extends from <span class="math-container">$-a$</span> to <span class="math-container">$a$</span>, then it certainly is symmetric.</p> <p>In fact, it is entirely possible to solve this problem with the well from <span class="math-container">$0$</span> to <span class="math-container">$2a$</span> without using parity arguments. The parity operation allows you to connect the boundary conditions, thus dividing the work by <span class="math-container">$2$</span>, but the cost is that you must solve for the even and odd solutions separately, so there is no real savings in terms of work, although there is additional insight into the solutions.</p>
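The shift-of-origin effect can be seen directly by writing the standard eigenfunctions of the [0, 2a] well, sin(nπx/(2a)), in the shifted coordinate centered on the well. A small numerical sketch (numpy, with a = 1 chosen for illustration):

```python
import numpy as np

a = 1.0
x = np.linspace(-a, a, 201)   # symmetric grid about the well center

def phi(n, x):
    """Eigenfunctions of the [0, 2a] well evaluated in the shifted
    coordinate: phi_n(x) = sin(n*pi*(x + a)/(2a)), vanishing at x = -a, +a."""
    return np.sin(n * np.pi * (x + a) / (2 * a))

# About the center x = 0 the solutions alternate in parity:
# n = 1 is even (it is a shifted cosine), n = 2 is odd (a shifted sine)
n1_even = np.allclose(phi(1, x), phi(1, -x))
n2_odd = np.allclose(phi(2, x), -phi(2, -x))
```

The same functions written against the [0, 2a] origin are all sines, with no definite parity about x = 0, which is exactly the answer's point.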
188
quantum mechanics
Ehrenfest theorem and the distinction between moment ordinal 1 and moment ordinal 2 of the measured probability distributions
https://physics.stackexchange.com/questions/459903/ehrenfest-theorem-and-the-distinction-between-moment-ordinal-1-and-moment-ordina
<p>The proof of the statement of Ehrenfest's theorem in the Schrödinger picture does not depend on the state vector. However, <a href="https://en.wikipedia.org/wiki/Ehrenfest_theorem" rel="nofollow noreferrer">Wikipedia claims</a> that:</p> <blockquote> <p>for states that are highly localized in space, the expected position and momentum will approximately follow classical trajectories</p> </blockquote> <p>Here is my question: Is it not the case that by the proof of Ehrenfest's theorem, <strong>for all states</strong> and potentials we have the first moments (i.e. the expectation values) of the <span class="math-container">$ \nabla V$</span> and <span class="math-container">$ \hat x$</span> operators being related by <span class="math-container">$$ m \left(\frac{\mathrm{d}^2}{{\mathrm{d}t}^2} \left&lt; \hat x \right&gt; \right) ~=~ -\left&lt; \nabla V \right&gt; \,? $$</span></p> <p>On the other hand, it is possible to prepare systems in states that are initially localized but get more and more delocalized with time, which for the position operator could be described in terms of the second moments (the variances) of the position operator as <span class="math-container">$$ \left&lt;{\left(\Delta x\right)}^2\right&gt;_t \left&lt;{\left(\Delta x\right)}^2\right&gt;_0 ~\le~ \frac{\hbar^2t^2}{4m^2} $$</span> Here, the subscripts of the variances refer to the time variable.</p> <p>So we can say that there are states in which we can prepare our systems to be in, whose probability distributions in the measured values of <span class="math-container">$ \hat x$</span> at time <span class="math-container">$t=0$</span> and at time <span class="math-container">$t&gt;t_0$</span> can have the respective variances satisfying the uncertainty relation specified above.
On the other hand, for this system, and for a system in any state, the expectation value of <span class="math-container">$\hat x$</span> at any given time satisfies <span class="math-container">$$ m \left(\frac{\mathrm{d}^2}{{\mathrm{d}t}^2} \left&lt; \hat x \right&gt; \right) ~=~ -\left&lt; \nabla V \right&gt; \,. $$</span> Is there any error in what I have said so far?</p> <p>So, is the popular account, including the Wikipedia one, conflating the variances with the expectations of the measured probability distributions of the appropriate observables?</p>
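A quick numerical sanity check of the spreading bound (my own sketch, not part of the question; it assumes a free Gaussian wavepacket and natural units ħ = m = 1):

```python
import numpy as np

hbar, m = 1.0, 1.0   # natural units (assumption for this sketch)
sigma0 = 0.5         # initial position spread, arbitrary illustrative value

t = np.linspace(0.0, 10.0, 200)

# Width of a freely spreading Gaussian wavepacket:
#   sigma(t)^2 = sigma0^2 * (1 + (hbar*t / (2*m*sigma0^2))^2)
sigma_t = sigma0 * np.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2)) ** 2)

# The product of the spreads at times t and 0 always satisfies
#   sigma(t) * sigma(0) >= hbar * t / (2 m)
product = sigma_t * sigma0
bound = hbar * t / (2.0 * m)
assert np.all(product >= bound)
```

Meanwhile ⟨x⟩ of the same free packet moves in a straight line, exactly as the Ehrenfest theorem (with V = 0) demands, so the growth of the variance and the classical motion of the mean coexist without contradiction.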
189
quantum mechanics
Does the &quot;particle exchange&quot; operator have any validity?
https://physics.stackexchange.com/questions/464019/does-the-particle-exchange-operator-have-any-validity
<p>In introductory quantum mechanics books, the treatment of identical particles often introduces a "particle exchange" operator. This operator, when applied to a multi-particle wave-function, exchanges the positions of two identical particles.</p> <p>However, it seems to me that this is a non-physical thing. Particles can't really "exchange positions", can they? Does such an operator really have validity?</p>
<p>If I'm reading your question right, I think you're having a relatively common issue. Feynman himself had the same issue when confronted with a creation operator for an electron. "How can an electron really be created? It violates the conservation of charge!"</p> <p>The point is that not every operator needs to represent a physically realizable time evolution. Quite often, you just want to consider the formal properties of an abstract operator, regardless of whether it has any direct meaning itself. (In fact, at the level of quantum field theory, it certainly doesn't -- the particle exchange operator does nothing whatsoever, because the particles are identical. The simpler formalism of nonrelativistic quantum mechanics doesn't have that built in, which is why it's useful to consider the exchange operator.)</p> <p>You've already seen this attitude earlier in quantum mechanics. For example, the position operator <span class="math-container">$\hat{x}$</span> is not a "physical thing", in the sense that if you have a state <span class="math-container">$|\psi \rangle$</span>, <span class="math-container">$\hat{x} |\psi \rangle$</span> is not necessarily a sensible physical state. Instead <span class="math-container">$\hat{x}$</span> is useful because it represents an observable quantity. In quantum mechanics, operators are our basic tools. Some represent observables, some represent physical time evolutions, and some represent neither, and there's nothing wrong with those last ones.</p>
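To see that the exchange operator is a perfectly well-defined formal object whether or not it corresponds to a physical process, here is a small sketch (my own illustration, not from the original answer) for two spin-1/2 particles, where exchange is just the SWAP matrix on C² ⊗ C²:

```python
import numpy as np

# Two spin-1/2 particles: basis |s1 s2> of C^2 (x) C^2.
# The exchange operator permutes the tensor factors: SWAP |a b> = |b a>.
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# SWAP is Hermitian and unitary, so its eigenvalues are +1 and -1:
vals = np.linalg.eigvalsh(SWAP)
assert np.allclose(np.sort(vals), [-1, 1, 1, 1])

# The antisymmetric (singlet) state is the -1 eigenvector:
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)
assert np.allclose(SWAP @ singlet, -singlet)
```

Nothing here required the exchange to be physically implementable; the ±1 eigenvalue structure is exactly what the symmetrization postulate for bosons and fermions uses.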
190
quantum mechanics
which waves give rise to interference pattern in a double slit diffraction experiment
https://physics.stackexchange.com/questions/467873/which-waves-give-rise-to-interference-pattern-in-a-double-slit-diffraction-exper
<p>In a double-slit experiment, the fringes obtained on the screen are due to the superposition of the single-slit diffraction from each slit and the double-slit interference pattern.</p> <p>When we talk about single-slit diffraction, we say the single slit is divided into a large number of sub-slits, and the waves originating from them superimpose to produce the diffraction pattern on the screen. This means all the rays originating from each of the slits interfere among themselves.</p> <p>So which waves emerging from the slits superimpose to give rise to the double-slit interference pattern?</p>
191
quantum mechanics
As electrons are present in many places at the same time so how can it not violate conservation of energy?
https://physics.stackexchange.com/questions/469062/as-electrons-are-present-in-many-places-at-the-same-time-so-how-can-it-not-viola
<p>I was just wondering: if, according to quantum mechanics, electrons are present in many places at the same time, then, given Einstein's relation <span class="math-container">$$E = mc^2,$$</span> doesn't this violate energy conservation?</p> <p><strong>Edit</strong></p> <p>By energy conservation I mean the following: electrons can be at many places at the same time, and electrons have mass. So with E = mc^2, the mass at the different places would be m = (m1 + m2 + m3 + ...). How can this equation be true now? As a whole system, E must be the same both before we measure the electron (m1c^2 + m2c^2 + m3c^2 + ...) and after we measure the electron (mc^2).</p> <p><strong>Edit 2</strong></p> <p>Thanks a lot, I really appreciate all of your efforts to enlighten me. Just one more question: if the electron is actually not at any location before the measurement, then where does that energy go? How can the energy be the same before and after the measurement?</p>
<blockquote> <p>according to quantum mechanics electrons are present in many places at the same time</p> </blockquote> <p>This is definitely not what QM says at all. This is usually stated in pop-science articles to explain QM to the layperson, but this is not what the theory says. The electron is actually not at any location until measured, and the mean value of measurements of many identically prepared systems will follow classical results such as energy conservation.</p> <p>Essentially the electron is not located at multiple places, so its mass isn't spread across space and there is nothing wrong here.</p>
192
quantum mechanics
Electrons Fired One at a Time in Double-slit Experiment
https://physics.stackexchange.com/questions/469718/electrons-fired-one-at-a-time-in-double-slit-experiment
<p>I have read that electrons fired individually through two or more slits still form an interference pattern. I think this may be due to the fact that moving electrons produce electromagnetic waves (like in a transmitter aerial), and EM waves move electrons (like in your TV aerial). While each electron can only pass through one slit, the wave it produces could pass through all slits simultaneously, forming an interference pattern. Could the individual electrons be getting deflected by their own EM wave and following the beams to form a pattern?</p>
<p>The wavefunction of an electron is intrinsic and dependent on its mass and momentum; it does not have any important connection to the electron's electromagnetic field, at least in the context of a double-slit interferometer.</p> <p>The electron's wavefunction, which represents (the square root of) the probability density for finding the electron at any given place and time, actually passes through both slits. When the electron is detected, it is detected at one point location so does not produce an interference pattern. But when the detected locations of a lot of individual electrons are plotted out, you will see the interference pattern in the plot.</p>
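The last paragraph can be simulated directly. The sketch below (my own illustration, with arbitrary dimensionless slit parameters) draws single-point arrival positions from |ψ|² for a far-field two-slit intensity; each hit is one dot, but the histogram of many hits shows the fringes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Far-field two-slit intensity (Fraunhofer form, dimensionless units):
#   I(x) ~ cos^2(pi*d*x) * sinc^2(a*x), slit separation d, slit width a
d, a = 10.0, 2.0
x = np.linspace(-1.0, 1.0, 4001)
intensity = np.cos(np.pi * d * x) ** 2 * np.sinc(a * x) ** 2
p = intensity / intensity.sum()

# Each electron is detected at ONE point, drawn from |psi|^2 (Born rule)
hits = rng.choice(x, size=50_000, p=p)

# The interference pattern appears only in the histogram of many hits
counts, _ = np.histogram(hits, bins=200, range=(-1.0, 1.0))
interior = counts[1:-1]
n_peaks = np.sum((interior > counts[:-2]) & (interior > counts[2:]))
assert n_peaks >= 10  # many bright fringes, not a single smooth bump
```

With these parameters the cos² factor gives a fringe spacing of 0.1, so a 200-bin histogram over [-1, 1] resolves roughly twenty bright fringes under the single-slit envelope.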
193
quantum mechanics
Can different types of properties be entangled with each other? Say, the spin of particle A with the polarization of particle B?
https://physics.stackexchange.com/questions/476238/can-different-types-of-properties-be-entangled-with-each-other-say-the-spin-of
<p>Does this question make sense? Can measuring the spin of one entangled particle 'determine' the polarization of the other?</p>
<p>When two particles become entangled, the new whole system is described by a common wavefunction. In your case this system contains both particles, so this wavefunction describes not just their spins but all of their characteristics.</p> <p>So yes, if they are entangled, measuring particle A's spin will have an effect on the measurement of particle B's polarization.</p> <p>It is not the spin of the particles that is entangled; it is the whole system of the particles that is entangled, and that includes all of the particles' characteristics.</p> <p>If you entangle two electrons, put them in a singlet state (spins entangled), and then let one of the electrons through a Stern-Gerlach magnet, this will determine that particle's position depending on its spin. So now one electron's spin is entangled with the other one's position.</p> <p>You can view this as if any property of one particle could be entangled with any property of the other, but as I wrote, the whole system is already entangled; all you do is select certain measurements in certain experiments.</p> <p>Please see here:</p> <p><a href="https://physics.stackexchange.com/questions/61748/is-it-only-the-spin-of-a-particle-that-can-be-entangled-with-another-particles-s?rq=1">Is it only the spin of a particle that can be entangled with another particles spin?</a></p>
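As a minimal sketch of "measuring A determines B" (my own illustration, using spins along a common axis rather than spin vs. polarization), sampling joint outcomes of a singlet state via the Born rule gives perfectly anticorrelated results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Singlet state of two spin-1/2 particles in the joint z-basis
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
singlet = (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

# Born rule: probabilities of the four joint outcomes |up up>, |up dn>, ...
probs = np.abs(singlet) ** 2
outcomes = rng.choice(4, size=10_000, p=probs)
a, b = outcomes // 2, outcomes % 2   # particle A's and B's individual results

# Measuring A fixes B: along the same axis they always disagree
assert np.all(a != b)
```

Only the two anticorrelated outcomes |up dn> and |dn up> ever occur, which is the sense in which a measurement on A "determines" the result at B.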
194
quantum mechanics
not separable electron state
https://physics.stackexchange.com/questions/485410/not-separable-electron-state
<p>Consider the electron spin operator. It acts on the Hilbert space <span class="math-container">$\mathbb{C}^2$</span>. Next, the electron position operator acts on the space <span class="math-container">$\mathbb{L}^2$</span>. Can we describe all electron features in one, "joint" Hilbert space? Certainly, both spaces can be combined into the tensor product <span class="math-container">$\mathbb{C}^2 \otimes \mathbb{L}^2 $</span>, but this looks like a composition of two subsystems. If this tensor product is a legitimate construction, then the electron states can be divided into separable and entangled. Can a particle be entangled with itself? </p>
<p>It is certainly possible to entangle the position state of a particle with its spin state. This is exactly what a Stern-Gerlach apparatus does, producing quantum correlations between position and spin. If you do not observe the output of the Stern-Gerlach experiment directly (and so do not collapse the state vector), the output of the apparatus is precisely an entangled spin-position state.</p>
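A toy version of this (my own sketch, idealizing the two output beams as a two-level "path" degree of freedom): after the apparatus the state is spin-path entangled, which shows up as a maximally mixed reduced spin state:

```python
import numpy as np

# Idealized Stern-Gerlach output: spin up goes on the upper path,
# spin down on the lower path:
#   |psi> = (|up>|upper> + |down>|lower>) / sqrt(2)  in  C^2 (x) C^2
up, dn = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, up) + np.kron(dn, dn)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())  # density matrix of the full pure state

# Partial trace over the path factor (indices: spin, path, spin', path')
rho_spin = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# A maximally mixed reduced state from a pure global state is the
# signature of spin-path entanglement
assert np.allclose(rho_spin, np.eye(2) / 2)
```

The full state is pure, yet the spin alone carries no definite polarization direction; all the information sits in the spin-path correlations, which is exactly what "entangled with itself" means here.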
195
quantum mechanics
Finding the eigenfunctions of the operator $x$
https://physics.stackexchange.com/questions/485951/finding-the-eigenfunctions-of-the-operator-x
<p>On pg 104 of "Introduction to Quantum Mechanics" by Griffiths, we are asked to find the eigenfunctions of the <span class="math-container">$x$</span> operator. Hence, we have to find functions such that <span class="math-container">$$x f(x)=\lambda f(x)$$</span> I have used the notation <span class="math-container">$\lambda$</span> instead of <span class="math-container">$y$</span> because it is less confusing for me. Clearly, any function that satisfies <span class="math-container">$f(x)=0$</span> for <span class="math-container">$x\neq \lambda$</span> will be an eigenfunction. However, Griffiths claims that the only eigenfunction is <span class="math-container">$\delta(x-\lambda)$</span>. Why is this true?</p>
<p>Here is a formal wisecrack to reassure you: work in momentum space.</p> <p>Up to normalization constants that do not matter that much for your un-normalizable wave function, consider <span class="math-container">$$ f_\lambda (x)=\langle x| f_\lambda\rangle= \int dp \langle x|p\rangle \langle p|f_\lambda\rangle = \int \frac{dp}{\sqrt{2\pi \hbar}} e^{ixp/\hbar} \langle p|f_\lambda\rangle ~. $$</span></p> <p>Now your starting point was <span class="math-container">$$ \hat x | f_\lambda\rangle = \lambda |f_\lambda \rangle , $$</span> and the momentum representation of <span class="math-container">$\hat x$</span> is but <span class="math-container">$$ \hat x= \int dp ~|p\rangle ( i\hbar \partial_p )\langle p| ~, $$</span> so that <span class="math-container">$$ \int dp ~|p\rangle ( i\hbar \partial_p )\langle p|f_\lambda\rangle =\lambda |f_\lambda\rangle.$$</span></p> <p>Multiply on the left by <span class="math-container">$\langle p'|$</span>, collapse the δ-function, and relabel <em>p'</em> to <em>p</em>, to get <span class="math-container">$$ i \hbar \partial_p \langle p|f_\lambda\rangle= \lambda \langle p|f_\lambda\rangle. $$</span></p> <p>You may solve this by <span class="math-container">$$ \langle p|f_\lambda\rangle \propto e^{-i \lambda p/\hbar } , $$</span> readily leading to your <span class="math-container">$$ f_\lambda (x)= \int \frac{dp}{\sqrt{2\pi \hbar}} e^{i(x-\lambda) p/\hbar} \propto \sqrt{\hbar }~~\delta (x-\lambda) ~. $$</span></p> <p>Dirac, sublimely slyly, all but does something equivalent in his book, on the basis of his magnificent <em>standard ket</em>, the translationally invariant momentum-space ket. I reckon Griffiths should be more humble in his implicit characterizations there.</p>
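A numerical footnote to the derivation (my own sketch, not part of the momentum-space argument): a narrow normalized Gaussian centered at λ is an approximate eigenfunction of the position operator, and the residual of the eigenvalue equation shrinks with the width, consistent with δ(x − λ) as the limiting eigenfunction:

```python
import numpy as np

lam = 1.3                          # target eigenvalue (arbitrary)
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

def eigen_residual(eps):
    # L2-normalized Gaussian of width eps, approximating delta(x - lam)
    f = np.exp(-(x - lam) ** 2 / (2.0 * eps ** 2))
    f /= np.sqrt(np.sum(f ** 2) * dx)
    # L2 norm of the residual of the eigenvalue equation x f = lam f
    return np.sqrt(np.sum(((x - lam) * f) ** 2) * dx)

res = [eigen_residual(e) for e in (0.5, 0.1, 0.02)]
assert res[0] > res[1] > res[2]   # residual -> 0 as the Gaussian narrows
```

Analytically the residual for width eps is eps/√2, so it vanishes only in the distributional limit, which is why no normalizable function is an exact eigenfunction of x̂.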
196
quantum mechanics
What is the mechanism when a photon is absorbed causing an electron orbital expansion to a larger orbital?
https://physics.stackexchange.com/questions/489054/what-is-the-mechanism-when-a-photon-is-absorbed-causing-an-electron-orbital-expa
<p>We know a photon is absorbed when an electron's orbit expands, but what is the mechanism?</p> <p>We know that an electron is not simply a particle, so the photon can't be converted into only kinetic energy.</p> <p>Is the photon converted into space, thus increasing the orbital size?</p>
<p>The photon gives the electron the energy it needs to move further away from the nucleus, against the attractive electrostatic force. In doing so, it increases the electron’s electrostatic potential energy, making it <em>less negative</em>. The kinetic energy of the electron actually <em>decreases</em>, because when the electron is further away from the nucleus it moves more slowly.</p> <p>Let’s consider a specific example with the two lowest energy states of hydrogen. In the <span class="math-container">$n=1$</span> state, the electron has +13.6 eV of kinetic energy and -27.2 eV of potential energy, for a total energy of -13.6 eV. When a photon knocks it into the <span class="math-container">$n=2$</span> state, it has +3.4 eV of kinetic energy, -6.8 eV of potential energy, and -3.4 eV of total energy.</p> <p>The photon transferred 10.2 eV of energy to the electron. Of this, 20.4 eV went into changing its PE, and -10.2 eV went into changing its KE.</p> <p>The values I am talking about for the electron’s KE and PE are <em>quantum-mechanical expectation values</em> for these quantities. You are correct that the electron does not behave as a classical particle.</p> <p>In the quantum-mechanical treatment, the primary “mechanism” of excitation is that the electric field associated with the photon interacts with the electric dipole moment of the atom. This is very similar to the classical analysis of an antenna. The electric dipole moment is due to the nucleus being positively charged and the electrons being negatively charged.</p> <p>The photon is really interacting with the <em>atom</em>, not just with the <em>electron</em>. But because the nucleus barely moves compared with the electron, the kinetic energy is mainly that of the electron, and the potential energy of their electrostatic interaction is, for simplicity, often ascribed to the electron.</p>
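The energy bookkeeping in this answer follows from the virial theorem for a Coulomb potential, ⟨KE⟩ = −Eₙ and ⟨PE⟩ = 2Eₙ. A quick check (my own sketch, using Eₙ = −13.6 eV / n²):

```python
# Virial theorem for a 1/r potential: <KE> = -E_n, <PE> = 2 * E_n
RY = 13.6  # eV, magnitude of the hydrogen ground-state energy

def level(n):
    E = -RY / n ** 2
    return {"E": E, "KE": -E, "PE": 2 * E}

e1, e2 = level(1), level(2)
assert abs(e1["KE"] - 13.6) < 1e-9 and abs(e1["PE"] + 27.2) < 1e-9
assert abs(e2["KE"] - 3.4) < 1e-9 and abs(e2["PE"] + 6.8) < 1e-9

# Absorbed photon energy for 1 -> 2, split between the PE and KE changes
dE = e2["E"] - e1["E"]
assert abs(dE - 10.2) < 1e-9                      # photon supplies 10.2 eV
assert abs((e2["PE"] - e1["PE"]) - 20.4) < 1e-9   # PE rises by 20.4 eV
assert abs((e2["KE"] - e1["KE"]) + 10.2) < 1e-9   # KE drops by 10.2 eV
```

The split reproduces the numbers quoted in the answer: the PE gain is exactly twice the photon energy, with the KE loss making up the difference.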
197
quantum mechanics
How is the mass of an electron distributed?
https://physics.stackexchange.com/questions/502287/how-is-the-mass-of-an-electron-distributed
<p>Since the position of an electron is not determined, how is its mass distributed? In other words, how does an electron curve spacetime?</p>
198
quantum mechanics
Understanding quantum superposition of molecules
https://physics.stackexchange.com/questions/507168/understanding-quantum-superposition-of-molecules
<p>How can the results from <a href="https://www.nature.com/articles/s41567-019-0663-9.epdf?referrer_access_token=P6jczrHmpk_eNzSKdu3YCdRgN0jAjWel9jnR3ZoTv0MQ7n-_yUmyvBsKfkN6FLBu98G0Nrx30Fucun-h3w8WX9IlwWelmQLTVb70WHA4Y1pxsdPmhmq4QvZ3kk0dRycJILQYYHZN26qmO72UylFXKFgM9i70iUUDcZdPwblhMv0RzePEbjpRDvq9ofxAtGf2St2MnNM6St8f1-6gD-QwYzkZvijVVq7IL4Ooez8pU-m8K5t0y9M2Ww6qUYOP6EjDvaJWUyBsWCPgXP_iPn6uR2RPrOq7S9u0r_erbzusUs9zISPrmJElxNpIumoS7WKuIh_4lrrD-KVLNbjxbWL7sw%3D%3D&amp;tracking_referrer=www.scientificamerican.com" rel="nofollow noreferrer">this paper</a>, <em>Quantum superposition of molecules beyond 25 kDa</em> (Fein et al., 2019), best be understood in the context of molecules? Here's a layperson <a href="https://www.scientificamerican.com/article/giant-molecules-exist-in-two-places-at-once-in-unprecedented-quantum-experiment/" rel="nofollow noreferrer">sci-pop article</a>, <em>Giant Molecules Exist in Two Places at Once in Unprecedented Quantum Experiment</em> (Letzter, Scientific American, 2019), on this as well (which is what I read). From my perspective, having studied some Biochemistry, what is interesting to me is that this result seems to be interpreted as the molecules "being split" along some dimension (time?) and being re-created at two different places. </p> <p>I was hoping someone more well-versed in Physics could shed light on how to interpret these results.</p>
<p>The double slit experiment, regardless of the size of the particles (electrons, neutrons, molecules), does not prove that those particles exist in two places at once, as claimed by the SciAm article. The difficulty of understanding this experiment in classical physics is caused by the use of an unsuitable classical model: rigid-body Newtonian mechanics with interactions only by direct contact (bullets, billiard balls, etc.).</p> <p>The right classical model to be used is a field theory, like classical electromagnetism (CE). In such a theory the trajectory of the particle does depend on the entire distribution of field sources (in CE those would be electrons and nuclei), so opening a slit will influence the particle passing through the other slit, contrary to SciAm's claims. In other words, this experiment does not represent an obvious problem for classical physics. </p>
199