Columns: category (string, 107 classes), title (string), question_link (string), question_body (string), answer_html (string), __index_level_0__ (int64, 0 to 1.58k)
statistical mechanics
what molecule would have molar entropy $R \ln 2$ at $0K$?
https://physics.stackexchange.com/questions/74972/what-molecule-would-have-molar-entropy-r-ln-2-at-0k
<p>I was browsing my friend's old notes and I came across the following problem that I am not sure is correct.</p> <blockquote> <p>Q. Prove that the molar entropy of <strong>CO</strong> at $0K$ would be $R \ln 2$.</p> </blockquote> <p>Here, it is considered that the thermodynamic probability of $N$ molecules of <strong>CO</strong> is $$w = \frac{N!}{\left(N\over 2\right)!\left(N\over 2\right)!} $$ I was wondering what this <strong>CO</strong> molecule might be and why it would have the thermodynamic probability above at $0K$. Can anyone explain that?</p> <p><strong>P.S.</strong> I am not sure if the problem was stated correctly.</p>
<p>CO is carbon monoxide. If you lay down these molecules in a lattice, each can have two orientations, almost equal in energy. Because of this, even at 0 K, carbon monoxide has <a href="http://en.wikipedia.org/wiki/Residual_entropy" rel="nofollow">residual entropy</a> because it's not a perfect crystal.</p> <p>There are $2^N$ microstates available. The number of ways to get the most probable situation - half the molecules with one orientation, half with the other - is given by the factorial formula you state. </p>
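As a numerical aside (my own sketch, not part of the original answer): the entropy per molecule from the stated factorial formula approaches $k \ln 2$ as $N$ grows, so the molar entropy approaches $R \ln 2$. Using exact log-factorials via Python's `math.lgamma`:

```python
import math

# Residual entropy from W = N! / ((N/2)! (N/2)!), S = k ln W.
# Per molecule, (ln W)/N should approach ln 2 as N grows, giving a
# molar entropy of R ln 2 in the thermodynamic limit.
def entropy_per_molecule(n):
    """Return (ln W)/n for n molecules, half in each orientation."""
    ln_w = math.lgamma(n + 1) - 2 * math.lgamma(n // 2 + 1)
    return ln_w / n

for n in (10, 100, 10_000, 1_000_000):
    print(n, entropy_per_molecule(n))
print("ln 2 =", math.log(2))  # the limiting value
```

The subleading Stirling corrections vanish like $(\ln N)/N$, which is why the per-molecule value creeps up toward $\ln 2$ from below.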
900
statistical mechanics
Boltzmann distribution with interaction between particles?
https://physics.stackexchange.com/questions/78524/boltzmann-distribution-with-interaction-between-particles
<p>First of all, I would like to apologize in advance if I make stupid mistakes. I am a mathematician and I am trying to apply the Boltzmann distribution in places where I am not sure it is applicable (albeit I have no choice). </p> <p>The situation is: I have a system which consists of a discrete line of $M$ positions in which $N$ elements are distributed with a separation of at least $D$ positions between them. The state of each element should be its position in the line. Finally (here's the fun part), each position in the line has an associated potential (so putting an element in position $i$ means spending $\epsilon_{i}$ units of energy). The usual approach to this problem, as far as I have seen, is just assigning $p_{i} = e^{-\epsilon_{i}/kT}$, where $p_{i}$ is the probability that there is an element at position $i$. </p> <p>I don't understand this approach and I am trying to derive a new one, but I am stuck when trying to force the particles to be separated. Any insight or reference would be very much appreciated.</p> <p>Edit: If it is needed, we can also say that the particles might have a velocity (i.e. they can oscillate), but they should not be able to pass through each other.</p>
<p>Sure, it's no problem to do this. The thing that has to change is that $i$ should index over all possible configurations of the $N$ elements, and the energy in the Boltzmann distribution has to be the total energy of the system.</p> <p>So if $M=10$, $N=3$ and $D=2$ then, for example, $$ p([1,0,0,1,0,0,1,0,0,0]) = \frac{1}{Z}e^{-\frac{\epsilon_1 + \epsilon_4 + \epsilon_7}{kT}}, $$ but $$ p([1,0,1,0,0,0,1,0,0,0]) = 0 $$ because it's not allowed by the constraint. </p> <p>To calculate the normalising factor (or "partition function") $Z$, you have to sum over all allowed configurations of the system. It isn't immediately obvious (to me) how to do that analytically in this case, but you're the mathematician so I'm sure you can find an elegant way.</p> <p>Incidentally, you should be able to see that if there are no interactions between the $M$ positions then this reduces to the formula you originally quoted.</p>
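A brute-force sketch of this procedure (my own illustration; `eps` is an arbitrary example potential, and positions are 0-based rather than the answer's 1-based lists):

```python
import itertools
import math

# Enumerate all placements of N elements on M sites with more than D
# sites of index difference between neighbours, and build the
# constrained partition function Z by summing Boltzmann weights.
M, N, D = 10, 3, 2
kT = 1.0
eps = [0.1 * i for i in range(M)]  # example site potentials (arbitrary)

def allowed(occ):
    """occ: sorted tuple of occupied sites; neighbours must differ by > D."""
    return all(b - a > D for a, b in zip(occ, occ[1:]))

configs = [c for c in itertools.combinations(range(M), N) if allowed(c)]
Z = sum(math.exp(-sum(eps[i] for i in c) / kT) for c in configs)

def prob(occ):
    """Probability of a configuration; zero if the constraint is violated."""
    occ = tuple(sorted(occ))
    if not allowed(occ):
        return 0.0
    return math.exp(-sum(eps[i] for i in occ) / kT) / Z

print(prob((0, 3, 6)))  # the allowed example [1,0,0,1,0,0,1,0,0,0]
print(prob((0, 2, 6)))  # the forbidden example: probability 0
```

For large $M$ and $N$ the enumeration becomes infeasible, which is exactly where an analytic treatment of $Z$ (or a Monte Carlo sampler) would take over.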
901
statistical mechanics
Logical understanding of the canonical probability distribution (canonical ensemble)
https://physics.stackexchange.com/questions/83325/logical-understanding-of-the-canonical-probability-distribution-canonical-ensem
<p>I am having problems understanding the logic of this distribution:</p> <p>$P(\Psi_{j})=\displaystyle\frac{e^{-E_{j}/kT}}{\displaystyle\sum_{j'}e^{-E_{j'}/kT}}$</p> <p>The book I am studying uses the case of a sample in contact with a reservoir at thermal equilibrium to derive this distribution. I understand the derivation, but I don't understand the logic of the distribution itself. The aspect I'm having trouble with is the fact that the lower the energy of the sample, the higher the probability. What I don't understand is why this happens even though there is an average energy, given by the temperature, which I thought should be more probable than any energy lower than it for a given particle. This doubt implies that I am looking at $P(\Psi)$ as the probability for a given particle, which does not seem to be the case; but if I think of it as the probability for the whole sample, it makes even less sense to me, since the energy should be totally determined by the temperature, so it wouldn't make sense to make a distribution of it if the temperature is considered constant. </p> <p>Thanks in advance</p>
<p>Maybe your intuition about energy and temperature needs to be revisited. Your system can exchange energy with the reservoir at a given temperature. The system+reservoir will iterate through all microstates with equal probability (total energy being fixed), but you can show, using entropy arguments, that the probability of the system being in a state with energy $E$ is given by your first equation. The average energy of your system is then</p> <p>$\langle E \rangle = \sum_i E_i P(E_i) $.</p> <p>That is not the same as the most probable energy, which is $E_0$.</p> <p>When deriving the Boltzmann factor from the reservoir argument, there are corrections to the factor when the reservoir is finite. You can write, for the system in uniquely labelled states $i$ and $j$,</p> <p>$\frac{P(i)}{P(j)} = \frac{\Omega_R(i)}{\Omega_R(j)}$</p> <p>where $\Omega_R(i)$ is the number of microstates of the reservoir when the system is in state $i$. Writing this in terms of entropy gives a more fundamental form</p> <p>$\frac{P(i)}{P(j)} = e^{-\frac{S_R(i) - S_R(j)}{k_B} }$</p> <p>If you label the states of the system by energy, and allow it to be continuous, then you can Taylor expand $S_R(E_i)$ around $E=0$, since by assumption the energy of the reservoir is vastly bigger than that of the system. Changing variables to the energy of the reservoir $U_{res}$, the linear term is $-\frac{\partial S_R}{\partial U_{res}} E_i = -\frac{E_i}{T}$, from which the Boltzmann factor follows.</p>
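To make the distinction concrete, here is a tiny illustration of my own (a three-level system with arbitrary energies, in units where $kT = 1$): the ground state is the single most probable state, yet $\langle E \rangle$ lies strictly above $E_0$.

```python
import math

# Canonical distribution for a toy three-level system (energies in
# units of kT). The ground state has the highest probability, but the
# average energy <E> = sum_i E_i P(E_i) is strictly above E_0 = 0.
E = [0.0, 1.0, 2.0]            # arbitrary example levels
w = [math.exp(-e) for e in E]  # Boltzmann factors
Z = sum(w)                     # partition function
P = [x / Z for x in w]         # normalised probabilities

E_avg = sum(e * p for e, p in zip(E, P))
print(P)      # decreasing with energy: ground state most probable
print(E_avg)  # positive, i.e. above the most probable energy E_0
```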
902
statistical mechanics
Micro-canonical ensemble and classical reality
https://physics.stackexchange.com/questions/37740/micro-canonical-ensemble-and-classical-reality
<p>I seem to find a contradiction between the notion of probability density used by Landau and the notion of the micro-canonical ensemble.</p> <p>To see this, take an isolated classical system whose energy we know experimentally to lie between $E-\Delta$ and $E+\Delta$. So, we take the hypershell corresponding to these energies in phase space and say that at equilibrium, the probability density is constant in the whole shell. Now, we know that the system would be, in reality, at a fixed energy $E'$, and the hypersurface corresponding to this energy would lie in the previous hypershell. Also, as the system is isolated, the representative point of the system would move only on this hypersurface. Now, take a point in the shell which lies outside the surface, and choose a small enough neighborhood of it that doesn't intersect the surface. Because the probability distribution is constant, the probability of finding the system in this neighborhood is some non-zero positive number. But, as the system always remains on the surface, it never visits that neighborhood, and hence the probability of finding it in that neighborhood is zero.</p> <p>Am I doing something wrong?</p>
<p>No, you're not doing anything wrong, this is all correct. As an analogy, imagine I roll a die and hide it under a cup. Since you don't know which side of the die is facing upward, you represent it with a probability distribution, with an equal probability assigned to each of the six spaces. This probability distribution doesn't change over time, in this case for the trivial reason that the die isn't moving.</p> <p>You know that in reality, the die is sitting there with one particular side facing upward, and that it never "visits" any of the other sides. But unless I lift up the cup, you have no choice but to keep on thinking of it as being in a probability distribution, because you don't know which state is the true one.</p> <p>With the microcanonical distribution it's the same. There is indeed one "true" energy $E'$ that doesn't change, and the system cannot visit states with any other value of $E$. But the assumption is that you don't have any way to measure the energy beyond a certain level of accuracy. So, in the analogy, the die remains hidden under the cup and you have to keep representing it with a probability distribution. </p> <p>Although many text books fail to make this clear (because it was widely misunderstood for much of the 20th century), the probability distribution <em>doesn't</em> represent the set of states the system can visit, it just represents experimental uncertainty about which state the system is in. It is this uncertainty that remains invariant in equilibrium.</p>
903
statistical mechanics
Thermal equilibrium and non correlations
https://physics.stackexchange.com/questions/47709/thermal-equilibrium-and-non-correlations
<p>I read in a book on quantum fluctuations and quantum noise that, at thermal equilibrium, the classical canonical variables are uncorrelated, i.e.: $$\langle xp\rangle=\langle x\rangle\langle p\rangle$$ But I am not sure I understand the meaning of <em>at thermal equilibrium</em>; for me it just means $$\langle x^2\rangle=\langle p^2\rangle=\frac{T}{2}$$ in the correct units, with $T$ the temperature. On the other hand, what I can derive easily in the canonical ensemble is: $$\langle xp \rangle = Z^{-1}\int {\cal{DxDp}}\;\text{e}^{-\beta H(x,p)}xp$$ and $$\langle x\rangle\langle p \rangle = Z^{-2}\int {\cal{DxDpDx'Dp'}}\;\text{e}^{-\beta H(x,p)}\text{e}^{-\beta H(x',p')}xp'$$ which are equal if we have the following equality: $$H(x,p) = H_1(x) + H_2(p)$$ i.e. separability of $x$ and $p$ in the Hamiltonian, which seems unrelated to the previous hypothesis of thermal equilibrium (used here by taking the mean over the canonical distribution, of course).</p>
904
statistical mechanics
Examples of systems with energy as an intensive variable
https://physics.stackexchange.com/questions/49789/examples-of-systems-with-energy-as-an-intensive-variable
<p>I need to consider a couple of examples of systems whose energies are intensive variables - not extensive. I've been thinking about this and I am not coming up with anything. My understanding is that extensive variables (at least wrt usual energies) scale with mass or length (system size). It also seems that some 'energies' depend upon the model used, such as how strong the interactions are between neighboring atoms or dipoles, etc., or whether one is considering chemical potential or not, etc.</p> <p>Any good suggestions?</p>
<p>The total internal energy of a system is completely out of the question as an answer, of course. I would even go as far as saying that it is the quintessential and most important extensive variable of a system. Therefore, most things that have to do with 'energies' within thermodynamics will also be extensive variables. I'm not sure of what is being expected from you as an answer, but as a tentative guess from a condensed matter bloke, without prior knowledge about the particular context in which this question was formulated, I would think of 'energy per <em>something</em>' quantities which are characteristic signatures of the thermodynamic state of a physical system.</p> <p>That would be the case of the bond energy per atom in a condensate (i.e., solid or liquid) system or the mean thermal energy per degree of freedom which is $\frac{1}{2}k_BT$ from the equipartition theorem (and an intensive quantity, as temperature is also intensive).</p> <p>Other possible answers that come to mind would be the Fermi energy of the gas of electrons in a neutral solid, or the characteristic gap energies of an insulating, semiconducting or superconducting material, although these are very solid state physics-centered answers.</p>
905
statistical mechanics
Statistical Mechanics
https://physics.stackexchange.com/questions/53381/statistical-mechanic
<p>One can define entropy as $$S=k\log{\omega(E)},$$ where $\omega(E)$ is the number of states with energy equal to $E$; and the canonical partition function for a set of $N$ particles is defined as $$Z_N=\sum_{\phi}e^{-\beta E[\phi]}=e^{-\beta F(\beta,N)},$$ where the sum runs over states $\phi$ and the free energy is defined as $F(\beta,N)=U-TS.$ I learned that the mean values of the internal energy and the entropy would be $$\langle U\rangle=\frac{\partial(\beta F)}{\partial\beta}$$ $$\langle S\rangle=\beta^2\frac{\partial(F)}{\partial\beta}.$$ By definition, the mean value of any physical observable is $$\langle O\rangle=Z_N^{-1}\sum_{\phi}O[\phi]e^{-\beta E[\phi]}.$$ I'm quite sure that there will be a problem of a factor of $k$ if one verifies the definition of $\langle S\rangle$. Am I wrong?</p>
906
statistical mechanics
microcanonical distribution
https://physics.stackexchange.com/questions/58102/microcanonical-distribution
<p>We know that in an isolated system, the density matrix is the microcanonical distribution matrix. That is, the probability for all the states with energy in a certain interval is a constant? But how can I deduce this from the postulate of equal probability? </p>
<p>The assertion that the density matrix for an isolated system is that of the microcanonical ensemble <em>implies</em> the postulate of equal a priori probabilities since, as you indicate, it assigns equal probabilities to each of the energy eigenstates of the system.</p> <p>I would then ask you the following question: On what grounds do you assert that the density matrix of an isolated system is microcanonical?</p> <p>One answer you could give is that the microcanonical density matrix is precisely the one that maximizes the (von-Neumann) entropy of the system.</p> <p>So then the question becomes: Why is the density matrix that which maximizes the entropy?</p> <p>The best answers to this come from understanding the von-Neumann entropy in the context of information theory. More on that <a href="http://en.wikipedia.org/wiki/Maximum_entropy_thermodynamics" rel="nofollow noreferrer">here</a>. You might also find the following related Physics.SE question interesting: </p> <p><a href="https://physics.stackexchange.com/questions/53147/why-is-von-neumann-entropy-maximized-for-an-ensemble-in-thermal-equilibrium">Why is (von Neumann) entropy maximized for an ensemble in thermal equilibrium?</a></p>
907
statistical mechanics
classical quantum particles in grand canonical ensemble
https://physics.stackexchange.com/questions/60256/classical-quantum-particles-in-grand-canonical-ensemble
<p>To derive the Bose-Einstein and Fermi-Dirac distributions, we need to apply the grand canonical ensemble: $Z(z,V,T)=\displaystyle\sum_{N=0}^{\infty}[z^N\sideset{}{'}\sum\limits_{\{n_j\}}e^{-\beta\sum\limits_{j}n_j\epsilon_j}]$. There is a constraint $\sideset{}{'}\sum\limits_{\{n_j\}}$ for quantum particles (bosons and fermions) in the grand canonical ensemble: $\sum\limits_{j}n_j=N$, but why is there no such constraint for classical particles?</p>
<p>For fermions, there is a constraint that each occupation number $n_i$ can only be either 0 or 1 because of the Pauli exclusion principle; no two fermions can occupy the same quantum state, but for bosons, there is no such constraint on the occupation numbers. For classical particles, namely those for which energy levels aren't quantized, there isn't a well-defined notion of occupation numbers.</p>
908
statistical mechanics
Number of particles in a microcanonical ensemble
https://physics.stackexchange.com/questions/62226/number-of-particles-in-a-microcanonical-ensemble
<p>Is it always assumed that, in a microcanonical ensemble, the number of particles is $N \gg 1$?</p> <p>If not, are all the theorems related to the microcanonical description still true even if the number of particles is small?</p>
<p>Numerical simulations are a good example to see with your own eyes that statistical mechanics results can be obtained without an infinite number of molecules in general, and in particular in the microcanonical ensemble. However, one has to be aware of the finite size effects and see how they differ from what you would expect from textbook analytical calculations.</p> <p>For instance, if you ever wondered how a scale can measure the weight of a bottle filled with gas, then the answer is in the kinetic theory of gases and says that the scale will in fact record individual momenta exchanged from the particles that collide elastically with the bottom of the bottle.</p> <p>In principle, you can put very few particles in a simulation and observe what the scale would actually measure. You would see sharp peaks every now and then, but if you average over a very long time you will observe an average number for the weight that is that of the gas particles in your bottle.</p> <p>Increasing the number of particles basically increases the signal/noise ratio, and in the thermodynamic limit you just observe one number even on very short time scales.</p> <p>Also, bear in mind that one of the goals of statistical mechanics is to give a rationale for thermodynamics, and this can only be done if one looks at the thermodynamic limit (to give a strong example, theoretical descriptions of phase transitions require non-analyticities in the free energy at the transition, and these can only occur in the thermodynamic limit).</p>
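The bottle-on-a-scale picture can be caricatured in a few lines (my own sketch; the speed distribution is arbitrary): each elastic bounce delivers impulse $2mv$, successive bounces are separated by the flight time $2v/g$, and the time average of the impulses recovers the weight $mg$.

```python
import random

# One particle bouncing elastically on the floor of a "bottle" under
# gravity. Averaging the momentum kicks over many bounces gives the
# steady weight m*g that the scale reads, whatever the speeds are.
m, g = 1.0, 9.8
rng = random.Random(1)
total_impulse = 0.0
total_time = 0.0
for _ in range(10_000):
    v = rng.uniform(1.0, 5.0)   # arbitrary speed at the floor for this bounce
    total_impulse += 2 * m * v  # momentum transferred to the scale
    total_time += 2 * v / g     # free-flight time until the next bounce
print(total_impulse / total_time)  # -> m*g = 9.8
```

Note that the ratio is exactly $mg$ for any speed distribution, since both sums are proportional to $\sum v$; with few bounces a real force trace would still look like the sharp peaks the answer describes.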
909
statistical mechanics
Is this geometrical 'derivation' of Brownian motion legitimate?
https://physics.stackexchange.com/questions/12297/is-this-geometrical-derivation-of-brownian-motion-legitimate
<p>Here's a simple 'derivation' of the Brownian motion law that after N steps of unit distance 1, the total distance from the origin will be sqrt(N) on average. It's certainly not rigorous, but I'm wondering if people think it's reasonable, or possibly even commonly known.</p> <ol> <li><p>An object takes one step from the origin, so it is at a distance 1: d = 1 for N=1.</p></li> <li><p>On average, the next step will be neither exactly toward nor exactly away from the origin, so you compromise and say it steps along a direction that's perpendicular to the vector connecting the origin to its present location - sort of halfway between walking backwards and walking forwards. By Pythagoras's theorem, the average distance will then be d = sqrt(1^2+1^2) = sqrt(2) for N=2.</p></li> <li><p>Likewise for N=3, stepping in a normal direction gives d = sqrt( sqrt(2)^2 + 1^2 ) = sqrt(3), so in general d = sqrt(N).</p></li> </ol> <p>This seems to work in dimensions 2 or higher.</p>
<p>As a heuristic description, this is exactly right and correctly captures the essence of the subject. To turn it from a heuristic "derivation" into an actual derivation, you just need to make precise the notion that the two vectors are perpendicular on average. The precise meaning of "perpendicular on average" that's useful in this context is that the dot product is zero on average. That is, if $\vec r_n$ is the position vector after $n$ steps, and $\vec s_n$ is the vector representing the $n$th step, then $$ \langle \vec r_n\cdot\vec s_{n+1}\rangle=0. $$ The angle brackets mean an ensemble average -- that is, an average over many trials.</p> <p>This statement is true -- the easiest way to prove it is that the probability distribution for $\vec s_{n+1}$ is symmetric about 0, so positive and negative contributions to the dot product occur equally. And it's sufficient to prove the standard formula. Since $\vec r_{n+1}=\vec r_n+\vec s_{n+1}$, $$ r_{n+1}^2=r_n^2+s_{n+1}^2+2\vec r_n\cdot\vec s_{n+1}. $$ In the ensemble average, the last term vanishes, so $r^2$ increases, on average, by $s^2$ (i.e., by 1 for unit steps) on each step.</p>
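A quick Monte Carlo check of this result (my own sketch; the step count and trial count are arbitrary): unit steps in uniformly random 2D directions should give $\langle r_N^2\rangle \approx N$.

```python
import math
import random

# Ensemble average of r_N^2 for a 2D random walk with unit steps in
# uniformly random directions; the result should be close to n_steps.
def mean_square_distance(n_steps, n_trials, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        x = y = 0.0
        for _ in range(n_steps):
            theta = rng.uniform(0.0, 2.0 * math.pi)
            x += math.cos(theta)
            y += math.sin(theta)
        total += x * x + y * y
    return total / n_trials

print(mean_square_distance(100, 5000))  # close to 100
```

Each trial adds $s^2 = 1$ per step on average, with the cross term vanishing in the ensemble average, exactly as in the derivation above.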
910
statistical mechanics
How can I derive the analog of the susceptibility sum rule for the specific heat?
https://physics.stackexchange.com/questions/434199/how-can-i-derive-the-analog-of-the-susceptibility-sum-rule-for-the-specific-heat
<p>How can I derive the analog of the susceptibility sum rule for the specific heat? Does an infinite correlation length imply an infinite specific heat? <span class="math-container">$$ \chi = \frac{\partial M}{\partial H} = \frac{1}{N}\sum_{i,j} \Gamma(i,j) $$</span></p>
<p>I'll restrict my answer to the nearest neighbour Ising model with <span class="math-container">$N$</span> spins <span class="math-container">$s_i=\pm1$</span>, for which the energy and magnetization are given by <span class="math-container">$$ E=-J\sum_{\langle j,k\rangle} s_j s_k, \qquad M=\sum_j s_j $$</span> and the notation <span class="math-container">$\langle j,k\rangle$</span> denotes nearest-neighbour pairs of spins.</p> <p>Recapping, the susceptibility formula comes from standard linear response theory <span class="math-container">$$ \chi = \frac{\partial M}{\partial H} \propto \langle M^2\rangle - \langle M\rangle^2 $$</span> where <span class="math-container">$\chi$</span> is the susceptibility of the whole system. Inserting the definition of <span class="math-container">$M$</span> <span class="math-container">$$ \chi \propto \sum_j\sum_k \left(\langle s_j s_k\rangle - \langle s_j\rangle\langle s_k\rangle \right) \propto N \sum_k \left(\langle s_0 s_k\rangle - \langle s_0\rangle\langle s_k\rangle \right) $$</span> where we use translational invariance to replace one of the sums by a factor <span class="math-container">$N$</span>; now <span class="math-container">$j$</span> has become an arbitrarily chosen spin <span class="math-container">$0$</span> which we can take as the origin of coordinates. The factor of <span class="math-container">$N$</span> gives the expected extensive behaviour of <span class="math-container">$\chi$</span>, and we can define the quantity inside the sum as the spin-spin correlation function <span class="math-container">$c$</span>, which is expected to depend only on the vector between the spins: <span class="math-container">$c(\mathbf{r}_k-\mathbf{r}_0)=c(\mathbf{r}_k)$</span>. So the susceptibility per spin is <span class="math-container">$$ \frac{\chi}{N} \propto \sum_{\mathbf{r}} c(\mathbf{r}) $$</span> where now we sum over all vectors from a lattice site at the origin to all other lattice sites. 
We can approximate the sum as an integral, and can often assume that <span class="math-container">$c$</span> has a finite range with a correlation length <span class="math-container">$\xi$</span>, so <span class="math-container">$c\sim \exp(-r/\xi)$</span>, and the integral will give a finite result even if we let <span class="math-container">$N\rightarrow\infty$</span>. However, if the correlation length diverges, for instance near the critical point, the integrand will not decay (fast enough) with distance <span class="math-container">$r$</span>, the integral will diverge, and we expect a diverging susceptibility.</p> <p>Formally we can go through a similar procedure for the heat capacity. Now <span class="math-container">$$ C_V = \frac{\partial E}{\partial T} \propto \langle E^2\rangle - \langle E\rangle^2 $$</span> where <span class="math-container">$C_V$</span> is the heat capacity of the whole system. This will be related to a correlation function of a different variable. Define, for each bond <span class="math-container">$b$</span> between a nearest neighbour pair <span class="math-container">$\langle j,k\rangle$</span>, the quantity <span class="math-container">$$ \varepsilon_b = -J s_j s_k $$</span> so that <span class="math-container">$$ E = \sum_b \varepsilon_b $$</span> The derivation follows exactly the same pattern, and we end up with the heat capacity per spin <span class="math-container">$$ \frac{C_V}{N} \propto \sum_b \left(\langle \varepsilon_0 \varepsilon_b\rangle - \langle \varepsilon_0\rangle\langle \varepsilon_b\rangle \right) \propto \sum_{\mathbf{r}} c'(\mathbf{r}) $$</span> We used translational invariance to sum over all the bond vectors that we could use as an origin: that's equal to the number of spins <span class="math-container">$N$</span> multiplied by <span class="math-container">$q/2$</span> where <span class="math-container">$q$</span> is the coordination number of the lattice, and this <span class="math-container">$q/2$</span> 
factor has been absorbed into the proportionality constant. The remaining sum is over all vectors connecting the centres of bonds, to the centre of the arbitrary bond that we have chosen as origin. This again can be treated as an integral. The correlation function <span class="math-container">$c'(\mathbf{r})$</span> is different from <span class="math-container">$c(\mathbf{r})$</span>, but again we expect it to have a characteristic correlation length <span class="math-container">$\xi'$</span>, and if this diverges, we expect to see a divergent <span class="math-container">$C_V$</span>.</p> <p>Naturally, near a critical point, the same phenomenon is giving rise to all these divergences. So we expect to see critical exponents for the correlation length(s), for <span class="math-container">$\chi$</span>, and for <span class="math-container">$C_V$</span>, which are all related to each other. That's the area covered by the scaling hypothesis, which is another story.</p> <p>For a more general physical model, the analysis may not be so direct, but the idea will be the same. Provided the interactions are short range, it should be possible to define a "local" energy, or energy density, which will have a correlation function in space. The magnitude of the total energy fluctuations will tend to diverge if the corresponding correlation length diverges, and hence the heat capacity will diverge. I'm not saying that all second-order phase transitions are accompanied by a diverging heat capacity, but this is what is expected for the Ising universality class on approach to a critical point.</p>
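As a sanity check of the fluctuation formula quoted above (my own sketch, not part of the answer; units with $J = k_B = 1$ and a small, arbitrary ring size), exact enumeration of a periodic 1D Ising ring lets us compare $C_V = (\langle E^2\rangle - \langle E\rangle^2)/T^2$ against a numerical derivative of $\langle E\rangle$:

```python
import itertools
import math

# Periodic 1D Ising ring, enumerated exactly. Checks that
# C_V = (<E^2> - <E>^2) / (k T^2) matches d<E>/dT (with k = 1).
N, J = 8, 1.0

def energy(spins):
    return -J * sum(spins[i] * spins[(i + 1) % N] for i in range(N))

def moments(T):
    """Return (<E>, <E^2>) at temperature T by brute-force enumeration."""
    beta = 1.0 / T
    Z = E1 = E2 = 0.0
    for spins in itertools.product((-1, 1), repeat=N):
        e = energy(spins)
        w = math.exp(-beta * e)
        Z += w
        E1 += e * w
        E2 += e * e * w
    return E1 / Z, E2 / Z

T = 2.0
E_avg, E2_avg = moments(T)
C_fluct = (E2_avg - E_avg ** 2) / T ** 2   # fluctuation formula

h = 1e-4                                   # centred finite difference
C_deriv = (moments(T + h)[0] - moments(T - h)[0]) / (2 * h)
print(C_fluct, C_deriv)  # the two estimates agree
```

The 1D ring has no phase transition, so $C_V$ stays finite here; the point is only that the variance of $E$ and the temperature derivative coincide, which is the identity the correlation-function argument builds on.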
911
statistical mechanics
What is the sign of chemical potential of a noninteracting classical ideal gas obeying MB distribution?
https://physics.stackexchange.com/questions/454415/what-is-the-sign-of-chemical-potential-of-a-noninteracting-classical-ideal-gas-o
<p>The chemical potential of a noninteracting Bose gas can never be positive, while that of a noninteracting Fermi gas can be either positive or negative. What can be said about the chemical potential of a noninteracting classical ideal gas obeying the MB distribution? </p>
<p>The simplest way to compute this is through the grand canonical ensemble. The partition function for a single gas molecule is <span class="math-container">$Z_1 = V/\lambda^3$</span>, where <span class="math-container">$\lambda$</span> is the thermal de Broglie wavelength. Then the grand partition function is <span class="math-container">$$\mathcal{Z} = \sum_N e^{\beta \mu N} \frac{Z_1^N}{N!} = \exp\left( \frac{e^{\beta \mu} V}{\lambda^3}\right).$$</span> The particle number is found by differentiating, <span class="math-container">$$N = \frac{1}{\beta} \frac{\partial \log \mathcal{Z}}{\partial \mu} =\frac{e^{\beta \mu} V}{\lambda^3}.$$</span> Therefore, solving for <span class="math-container">$\mu$</span>, we have <span class="math-container">$$\mu = k_B T \log \frac{\lambda^3 N}{V}.$$</span> The ideal gas only behaves classically if the occupancy of each state is small, so we must have <span class="math-container">$\lambda^3 \ll V/N$</span>. Then the logarithm is negative, so for a classical ideal gas <span class="math-container">$\mu &lt; 0$</span>.</p>
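For a concrete number (my own illustration; the gas and conditions are arbitrary choices, not from the answer): evaluating $\mu = k_B T \ln(\lambda^3 N/V)$ for nitrogen at room temperature and atmospheric pressure gives a large negative value, confirming the classical regime.

```python
import math

# Chemical potential of a classical ideal gas, mu = kT ln(lambda^3 N/V),
# with lambda = h / sqrt(2 pi m kT) the thermal de Broglie wavelength.
h = 6.62607015e-34     # Planck constant, J s
kB = 1.380649e-23      # Boltzmann constant, J/K
m = 28 * 1.66054e-27   # N2 molecular mass, kg
T = 300.0              # temperature, K
P = 101325.0           # pressure, Pa

lam = h / math.sqrt(2 * math.pi * m * kB * T)  # thermal wavelength, m
n = P / (kB * T)                               # number density N/V, 1/m^3
mu = kB * T * math.log(lam ** 3 * n)

print(lam)            # ~2e-11 m
print(mu / (kB * T))  # large and negative: lambda^3 N/V << 1
```

Here $\lambda^3 N/V \sim 10^{-7}$, so the occupancy condition for classical behaviour is comfortably satisfied and $\mu < 0$ as the answer concludes.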
912
statistical mechanics
Lagrange multipliers in Maxwell-Boltzmann statistics
https://physics.stackexchange.com/questions/461145/lagrange-multipliers-in-maxwell-boltzmann-statistics
<p>I'm following Wikipedia's derivation of <a href="https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics#Derivation_from_microcanonical_ensemble" rel="noreferrer">Maxwell-Boltzmann statistics</a>.</p> <p>After applying Lagrange multipliers, we arrive at this expression for energy:</p> <p><span class="math-container">$${\displaystyle E={\frac {\ln W}{\beta }}-{\frac {N}{\beta }}-{\frac {\alpha N}{\beta }}}$$</span></p> <p>with <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> as the constants emerging from the constraints.</p> <p>Next, it is explained that Boltzmann simply identified this as an expression of the fundamental thermodynamic relation:</p> <p><span class="math-container">$${\displaystyle E=TS-PV+\mu N}$$</span></p> <p>and just set the constants <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> equal to <span class="math-container">$-\mu/kT$</span> and <span class="math-container">$1/kT$</span> so that the two expressions are equal.</p> <p>I can understand that setting the constants in this way does make the expressions the same, but why is it physically or mathematically justified? I've been taught the method of Lagrange multipliers, and we always had to solve a system of equations to figure out the constants and then finally solve for the maxima or minima. But here we are simply setting the constants so that we can arrive at a nice expression. Why is it OK to just set the constants this way and conclude that we have arrived at something that represents reality?</p>
<p>Here we are not choosing some constant; we are arriving at the values of <span class="math-container">$\beta$</span> and <span class="math-container">$\alpha$</span>. In the first equation, <span class="math-container">$\ln W$</span> and <span class="math-container">$N$</span> are variables which are arbitrary. Substituting <span class="math-container">$S=k \ln W$</span> and <span class="math-container">$PV=NkT$</span> in the second equation gives <span class="math-container">$$E=kT\ln W-kTN+\mu N$$</span><br> Comparing with the first equation term by term, since <span class="math-container">$\ln W$</span> and <span class="math-container">$N$</span> are arbitrary, gives the values <span class="math-container">$$\beta=\frac{1}{kT}$$</span> <span class="math-container">$$\alpha=\frac{-\mu}{kT}$$</span>.</p>
913
statistical mechanics
Electrons residing in an orbit with energy lower than the ground state energy
https://physics.stackexchange.com/questions/483835/electrons-residing-in-an-orbit-with-energy-lower-than-the-ground-state-energy
<p>Is it possible for an electron to reside in an energy level lower than that of the ground state? What happens to the electrons when an atom is brought down to 0K , do they come closer? What happens to the left of the orbitals ? </p>
<p>As pointed out in a comment by another user in a previous post by you, you cannot have an electron occupy a lower energy state than the ground state. The ground state is the lowest energy state by definition. </p> <p>As for the electron “coming closer” that’s more ambiguous in QM since we’re working with probabilities and not clearly defined positions. But in the ground state of say, the hydrogen atom, the radial probability for the electron is greater closer towards the center (although goes to zero again near the very center), meaning it is more probable to “measure” the electron around there. So in some sense this corresponds to your seemingly classical intuition about the electron coming closer to the center. </p> <p>It’s important to remember though that absolute zero is physically impossible to achieve.</p> <p>I’m not entirely sure what you mean by “to the left of orbitals”, however. </p>
914
statistical mechanics
Modern uses of classical statistical mechanics?
https://physics.stackexchange.com/questions/484734/modern-uses-of-classical-statistical-mechanics
<p>Most of the cases when I see applications of statistical mechanics is when Fermi-Dirac or Bose-Einstein statistic are used in condensed matter or the equilibrium equation of neutron stars.</p> <p>Besides the Poisson-Boltzmann equation for colloids and plasma screening, I would like to know what are the modern developments/applications of classical statistical mechanics.</p>
<p>If by "classical statistical mechanics" one means the equilibrium statistical mechanics, i.e. excluding applications of statistical mechanics to systems out of equilibrium, the last half century or so has witnessed many new developments/applications. It is difficult to make an exhaustive list, but certainly it should contain the following items:</p> <ul> <li>interacting systems: development of exact methods</li> <li>interacting systems: development of numerical simulation algorithms</li> <li>interacting systems: development of perturbation methods</li> <li>interacting systems: recasting of the theory as a Density Functional Theory</li> <li>applications to disordered systems (liquids, structural glasses, spin glasses,...)</li> <li>determination of phase diagrams</li> <li>applications to phase transitions</li> <li>theory and applications to critical phenomena, in particular the Renormalization Group approach.</li> </ul> <p>It depends on the exact definition of classical statistical mechanics to include or not in such a list the study of equilibrium dynamical properties (time correlation functions, transport coefficients, kinetic theory and alike). Personally, I would include them, since they all hinge on the basic principles of equilibrium statistical mechanics.</p>
915
statistical mechanics
Approximation of the total number of accessible microstates
https://physics.stackexchange.com/questions/547933/approximation-of-the-total-number-of-accessible-microstates
<p>So, here is a system having two subsystems <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> where the two subsystems can exchange energy between them, then the total number of accessible microstates of the whole system is given by, <span class="math-container">$$\Omega(E)=\sum_{E_{\alpha}}\Omega_{\alpha}(E_{\alpha})\Omega_{\beta}(E-E_{\alpha})$$</span></p> <p>which approximation did we use to get,<span class="math-container">$$\Omega(E) \approx \Omega_{\alpha}(\tilde E_{\alpha})\Omega_{\beta}(E-\tilde E_{\alpha})$$</span> where, <span class="math-container">$\tilde E_{\alpha}$</span> is the most probable value of <span class="math-container">$E_{\alpha}$</span></p>
<p>The approximation is <span class="math-container">$$ \Omega_\alpha(\tilde{E}_\alpha)\,\Omega_\beta(E-\tilde{E}_\alpha) \gg \sum_{E_\alpha \ne \tilde{E}_\alpha} \Omega_\alpha(E_\alpha)\,\Omega_\beta(E-E_\alpha) $$</span> or in words: the number of microstates of the most probable macrostate (which is also very close to the one having the mean energy) dominates not just each of the other macrostates, but all of them together. It is surprising at first, but when you look into it, it is indeed the case owing to the very large numbers involved. </p>
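A toy numerical check of this dominance (illustrative Python, not from the original answer; an assumed power-law form $\Omega(E)\sim E^N$ and a modest $N$ stand in for real subsystems):

```python
import numpy as np

# Two subsystems with Omega(E) ~ E^N sharing total energy E_total = 1 (toy model)
N = 10_000                                   # illustrative; real systems have ~10^23
E_a = np.linspace(1e-6, 1 - 1e-6, 100_001)   # energy of subsystem alpha

log_terms = N * np.log(E_a) + N * np.log(1 - E_a)   # log of Omega_a(E_a)*Omega_b(E-E_a)

log_max = log_terms.max()                            # log of the largest single term
log_sum = log_max + np.log(np.exp(log_terms - log_max).sum())  # log of the full sum
```

The log of the full sum exceeds the log of the largest term by only a few units (at most the log of the number of terms), which is negligible next to the $O(N)$ size of the entropy itself; that is exactly the sense in which the single most probable macrostate dominates.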
916
statistical mechanics
Probability of particle overcoming an energy barrier
https://physics.stackexchange.com/questions/619723/probability-of-particle-overcoming-an-energy-barrier
<p>I'm reading an article called &quot;An experiment to demonstrate the canonical distribution&quot; (by M. D. Sturge and Song Bac Toha), Department of Physics, Dartmouth College, Hanover, New Hampshire 03755.</p> <p>They talk about the probability of a particle overcoming an energy barrier of height <span class="math-container">$\Delta E$</span>, which they say is proportional to <span class="math-container">$\int_{0}^{\infty}g(\epsilon)e^{-(\epsilon+\Delta E)/kT}d\epsilon$</span></p> <p>I have 2 questions:</p> <p>-Where does this come from? Why would integrating that result in a probability; wouldn't I obtain the total number of particles?</p> <p>-What factor must I include in order for this to be an equality, that is, <span class="math-container">$ P(\Delta E)= factor \int_{0}^{\infty}g(\epsilon)e^{-(\epsilon+\Delta E)/kT}d\epsilon$</span></p>
<p>According to the Boltzmann distribution, the probability of finding the particle at an energy <span class="math-container">$E$</span> is proportional to <span class="math-container">$\exp(-\frac{E}{kT})$</span>. Let the degeneracy at energy <span class="math-container">$E$</span> be the density of states <span class="math-container">$g(E)$</span>, which means that there are <span class="math-container">$g(E) dE$</span> independent levels with their energy between <span class="math-container">$E$</span> and <span class="math-container">$E+dE$</span>; the resulting probability of finding a particle in one of these energy levels is:</p> <p><span class="math-container">$$ p(E)dE \propto g(E) dE e^{-\frac{E}{kT}} $$</span></p> <p>Now, there is an energy valley with a barrier <span class="math-container">$\Delta E$</span>. For the particle to be able to cross the barrier, it must have energy greater than <span class="math-container">$\Delta E$</span>. The total probability for such particles (<span class="math-container">$E &gt; \Delta E$</span>) is:</p> <p><span class="math-container">$$ p(&gt;\Delta E) \propto \int_{\Delta E}^\infty g(\epsilon) d\epsilon e^{-\frac{\epsilon}{kT}} =\int_{0}^\infty g(\epsilon+\Delta E) d\epsilon e^{-\frac{\epsilon + \Delta E}{kT}} $$</span></p> <p>(In your expression, the variable inside the density of states <span class="math-container">$g$</span> is not correct.)</p> <p>To make it an equality:</p> <p><span class="math-container">$$ p(&gt;\Delta E) = \frac{1}{N} \int_{0}^\infty g(\epsilon+\Delta E) d\epsilon e^{-\frac{\epsilon + \Delta E}{kT}} $$</span></p> <p>The normalization constant is:</p> <p><span class="math-container">$$ N = \int_0^\infty g(\epsilon) d\epsilon e^{-\frac{\epsilon}{kT}} $$</span></p>
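A quick numerical sketch of the normalized crossing probability (Python; the flat density of states and the numerical values are toy assumptions, not from the article):

```python
import numpy as np

kT = 1.0
dE = 3.0                                    # barrier height, in units of kT
eps = np.linspace(0.0, 50.0, 200_001)       # cutoff at 50 kT; the tail is negligible
w = eps[1] - eps[0]                         # grid spacing for the Riemann sums

g = np.ones_like(eps)                       # flat density of states (a toy choice)
norm = (g * np.exp(-eps / kT)).sum() * w                    # normalization integral
p_cross = (g * np.exp(-(eps + dE) / kT)).sum() * w / norm   # probability E > dE
```

For a flat $g$ the two integrals have the same shape and the ratio reduces exactly to the Arrhenius factor $e^{-\Delta E/kT}$; a nontrivial $g(\epsilon)$ would modify the prefactor but not the dominant exponential suppression.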
917
statistical mechanics
Show that these definitions are equivalent
https://physics.stackexchange.com/questions/645667/show-that-these-definitions-are-equivalent
<ol> <li><strong>Consider the three definitions of entropy, namely <span class="math-container">\begin{eqnarray} S &amp;\equiv&amp; k\log\Gamma(E), \label{1.1}\\ S &amp;\equiv&amp; k\log\Sigma(E), \label{1.2}\\ S &amp;\equiv&amp; k\log\omega(E), \label{1.3} \end{eqnarray}</span> where <span class="math-container">$k$</span> is the Boltzmann constant, and <span class="math-container">\begin{eqnarray} \Gamma(E) &amp;\equiv&amp; \int_{E&lt;H(p,q)&lt;E+\Delta}d^{3N}\!p\,d^{3N}\!q; \quad \Delta \ll E, \label{1.4}\\ \Sigma(E) &amp;\equiv&amp; \int_{H(p,q)&lt;E}d^{3N}\!p\,d^{3N}\!q, \label{1.5}\\ \omega(E) &amp;\equiv&amp; \frac{\partial \Sigma(E)}{\partial E}, \label{1.6} \end{eqnarray}</span> with <span class="math-container">$H(p,q)$</span> being the Hamiltonian, and <span class="math-container">$(p,q)$</span> the set of canonically conjugate variables. Show that the three definitions are equivalent up to additive constants of order <span class="math-container">$\log N$</span>, where <span class="math-container">$N$</span> is the number of particles.</strong></li> </ol> <p>I have already answered one part of that question.</p> <p>Ans: Using the definition of <span class="math-container">$\Gamma(E)$</span> we get, <span class="math-container">\begin{align} \Gamma(E) &amp;\equiv \int_{E&lt;H(p,q)&lt;E+\Delta}d^{3N}\!p\,d^{3N}\!q \nonumber\\ &amp;= \int_{H(p,q)&lt;E+\Delta}d^{3N}\!p\,d^{3N}\!q-\int_{H(p,q)&lt;E}d^{3N}\!p\,d^{3N}\!q \nonumber\\ &amp;= \Sigma(E+\Delta) - \Sigma(E) \end{align}</span> Now expanding <span class="math-container">$\Sigma(E+\Delta)$</span> in a Taylor series, with <span class="math-container">$\Delta \ll E$</span>, we get, <span class="math-container">\begin{align} \Sigma(E+\Delta) = \Sigma(E)+ \Delta \frac{\partial \Sigma(E)}{\partial E} + ...\label{1.8} \end{align}</span> Therefore, <span class="math-container">\begin{align} \Gamma(E) &amp;= 
\Sigma(E+\Delta) - \Sigma(E) \nonumber\\ &amp;\approx \Sigma(E)+ \Delta \frac{\partial \Sigma(E)}{\partial E} - \Sigma(E) \quad \mbox{[retaining only the 1st-order term]}\nonumber\\ &amp;= \Delta \frac{\partial \Sigma(E)}{\partial E} \nonumber\\ &amp;= \Delta \omega \end{align}</span> Therefore from definition 1 we get, <span class="math-container">\begin{align} S &amp;\equiv k\log\Gamma(E) \nonumber\\ &amp;\equiv k\log(\Delta \omega) \nonumber\\ &amp;\equiv k\log(\Delta) + k\log(\omega) \nonumber\\ &amp;\equiv k\log(\omega) \quad \mbox{[$k\log\Delta$ is an additive constant independent of $N$, so it can be dropped]}\nonumber \end{align}</span> So, definitions 1 and 3 are equivalent. Please mention how to show that definitions 1 and 2 are equivalent.</p>
<p>Let's show that definitions 2 and 3 are equivalent. First, <span class="math-container">$$ \frac{\partial}{\partial E} \log \Sigma = \frac{\omega}{\Sigma}. $$</span> Therefore, <span class="math-container">$$ \log \omega = \log\Sigma + \log \left( \frac{\partial}{\partial E} \log \Sigma \right). $$</span> Since the second term on the RHS is at most of order <span class="math-container">$\log N$</span> (for instance, <span class="math-container">$\Sigma \sim E^{3N/2}$</span> gives <span class="math-container">$\partial \log\Sigma/\partial E = 3N/2E$</span>), while <span class="math-container">$\log\Sigma$</span> itself is of order <span class="math-container">$N$</span>, we can neglect it.</p>
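A numerical illustration that the correction term is negligible compared to $\log\Sigma$ itself (a sketch assuming the ideal-gas-like form $\Sigma(E)\sim E^{3N/2}$; the values of $N$ and $E$ are illustrative):

```python
import math

N = 10**20                                   # number of particles
E = 10.0                                     # energy, arbitrary units

# Sigma(E) ~ E^(3N/2)  =>  d(log Sigma)/dE = 3N/(2E)
log_Sigma = (3 * N / 2) * math.log(E)        # extensive: O(N)
correction = math.log(3 * N / (2 * E))       # log(omega) - log(Sigma): O(log N)
```

The correction is a few tens (of order $\log N \approx 46$ here) against an extensive term of order $10^{20}$, so all three entropy definitions agree to the stated accuracy.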
918
statistical mechanics
Contrasting the microcanonical ensemble with the general approach to thermal equilibrium
https://physics.stackexchange.com/questions/711617/contrasting-the-microcanonical-ensemble-with-the-general-approach-to-thermal-equ
<p>For those wondering precisely what I am referencing throughout, I am contrasting the discussion in Reif Chapter 3.4 (general thermal equilibrium) with Chapter 6.2 (microcanonical ensemble).</p> <p>In the most general approach to thermal equilibrium (let us consider subsystems A and A' in which no external parameters for either change, so that A and A' only interact thermally), one finds the thermal equilibrium by finding respective energies <span class="math-container">$E$</span> and <span class="math-container">$E'$</span> of each (where <span class="math-container">$E+E'=E_o$</span>, the energy of the isolated composite system) such that the temperatures of the two subsystems are equal. For subsystems with relatively many degrees of freedom, it is then a simple matter to show that there is an overwhelming probability that energies are distributed at thermal equilibrium according to the energies found in equating temperatures in the aforementioned steps.</p> <p>Now fix A' as a heat reservoir. From the general formalism of the canonical ensemble, given some system A I can usually fix a temperature <span class="math-container">$T$</span> for A' (in particular, one that is large enough) which is such that many microstates of A have non-negligible probabilities associated with them.</p> <p>The two previous paragraphs seem to be at odds. In particular, I wonder:</p> <ol> <li>If I am at a high enough temperature, then how is it in general that the smaller system A will have an energy (or region of energies) which gives a number of states function <span class="math-container">$\Omega(E)$</span> with &quot;sufficient derivative&quot; <span class="math-container">$\partial \Omega(E)/\partial E$</span> to achieve the fixed temperature of the heat reservoir?</li> <li>Supposing that such a temperature is &quot;achievable&quot; by A, the following still seems to be at odds between the two paragraphs. 
The first paragraph seems to indicate that there will be one state of A which is overwhelmingly probable, and yet the second paragraph seems to suggest that many states of A can be non-negligibly probable. Here I think the resolution is simply that the energies <span class="math-container">$E$</span> of A are small compared to those <span class="math-container">$E'$</span> of A', so &quot;fluctuations&quot; to different A energies are within the tight range dictated by the first paragraph.</li> </ol> <p>Hopefully someone can explain (1) to me and make my comment in (2) (if correct) more precise.</p>
919
statistical mechanics
Sharpness of multiplicity function
https://physics.stackexchange.com/questions/222543/sharpness-of-multiplicity-function
<p>This is quoted from Daniel Schroeder's <em>An introduction to thermal Physics</em>:</p> <blockquote> <p><span class="math-container">$$\Omega= \left(\frac{e}{N}\right)^{2N} \; e^{N\ln (q/2)^2} e^{-N(2x/q)^2}\;=\; \Omega_\text{max} \cdot e^{-N(2x/q)^2}\;. $$</span></p> <p>A function of this form is called <strong>Gaussian</strong>; it has a peak at <span class="math-container">$x=0$</span> and a sharp fall-off on either side. The multiplicity falls off to <span class="math-container">$1/e$</span> of its maximum value when <span class="math-container">$$N\left(\frac{2x}{q}\right)^2= 1\;\; \text{or} \;\; x= \frac{q}{2\sqrt N}\;.$$</span></p> </blockquote> <p><a href="https://i.sstatic.net/T0QbE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/T0QbE.png" alt="enter image description here" /></a></p> <blockquote> <p><em><strong>This is actually a rather large number. But if <span class="math-container">$N= 10^{20},$</span> it's only one part in ten billion of the entire scale of the graph! On the scale used in the figure, where the width of the peak is about <span class="math-container">$1~\text{cm},$</span> the full scale of the graph would have to stretch <span class="math-container">$10^{10}~\text{cm}$</span> - more than twice around the earth. And near the edge of the page, where <span class="math-container">$x$</span> is only ten times larger than <span class="math-container">$q/2\sqrt N$</span>, the multiplicity is less than its maximum value by a factor <span class="math-container">$e^{-100}\approx 10^{-44}\;.$</span></strong></em></p> </blockquote> <p>I've not really understood what he is talking in the bold lines above especially the phrase used here, <em><strong>one part in ten billion of entire scale of the graph.</strong></em></p> <p>Could anyone please explain what he is actually intending to say in those lines?</p>
<p>The concept the paragraph is trying to drive home is that although the absolute size of the Gaussian peak is very large, it's small when compared to the length of the graph. Since $q$ is going to be of comparable size to $N$, the total scale of the Gaussian will be about $N$ while the peak will cover $\sqrt{N}$. </p> <p>So if you were to look at a case where $N=100$, then if the peak were scaled to $1$cm the entire graph of microstates would only span $10$cm. In that case the peak would be a full ten percent of the entire graph. In absolute terms the peak would only cover $10$ units, but in relative terms the peak covers $10\%$ of the graph. </p> <p>But in statistical mechanics we're interested in large numbers like $N=10^{20}$. In that case, when the Gaussian peak is scaled to $1$cm the entire graph would cover $10^{10}$cm. The absolute size of the peak is now much much larger and covers $10^{10}$ units, but the peak now only spans $.00000001\%$ or one part in ten billion of the graph.</p>
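The arithmetic in the quoted passage can be checked directly (a small Python sketch, not part of the original answer; the choice $q = 2N$ is an illustrative assumption):

```python
import math

N = 1e20
q = 2 * N                          # total energy units; q = 2N is an assumption

full_width = q / math.sqrt(N)      # peak spans x in (-q/(2 sqrt N), +q/(2 sqrt N))
relative_width = full_width / q    # fraction of the full scale: 1/sqrt(N) = 1e-10

suppression = math.exp(-100)       # e^{-N(2x/q)^2} at x ten half-widths out
```

This reproduces Schroeder's numbers: the peak occupies about one part in ten billion of the full scale, and ten widths out the multiplicity is down by $e^{-100}\approx 10^{-44}$.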
920
statistical mechanics
How does $\rho(\dot{q_1}\mathrm dt)(\mathrm dq_2, \ldots,\mathrm dp_f )$ represent the no. of systems that would enter the volume in $\mathrm d t\;?$
https://physics.stackexchange.com/questions/237826/how-does-rho-dotq-1-mathrm-dt-mathrm-dq-2-ldots-mathrm-dp-f-represe
<p>I've been following Reif's <em>Fundamentals of Statistical and Thermal Physics</em>; there I came before the derivation of <a href="https://en.wikipedia.org/wiki/Liouville%27s_theorem_(Hamiltonian)" rel="nofollow noreferrer">Liouville's theorem</a>:</p> <p><a href="https://i.sstatic.net/puKHUl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/puKHUl.png" alt="enter image description here"></a></p> <p>There I couldn't understood few things.</p> <p>I could conceive the change in the number of systems in $\mathrm dt$ is given by $$\frac{\partial \rho}{\partial t}\; \mathrm dt\; (\mathrm d q_1,\mathrm dq_2, \ldots,\mathrm d q_f; \mathrm d p_1, \mathrm dp_2,\ldots, \mathrm d p_f)$$ where </p> <ul> <li>$$\rho(q_1,q_2,\ldots, q_f; p_1,p_2,\ldots, p_f ; t)\;\mathrm d q_1,\mathrm dq_2, \ldots,\mathrm d q_f; \mathrm d p_1, \mathrm dp_2,\ldots, \mathrm d p_f = \textrm{no of systems in the ensemble at $t$ in the phase-space volume}\;(\mathrm d q_1,\mathrm dq_2, \ldots,\mathrm d q_f; \mathrm d p_1, \mathrm dp_2,\ldots, \mathrm d p_f)$$</li> </ul> <p>But then, I couldn't understand why the number of systems '<em>entering this volume in time $dt$ through the face $q_1$= constant</em>' is given by the quantity $\rho(\dot{q_1}\mathrm dt, \mathrm dq_2, \ldots,\mathrm dp_f )\;.$ </p> <p>My questions are:</p> <p>$\bullet$ How does $\rho(q_1,q_2,\ldots, q_f; p_1,p_2,\ldots, p_f ; t)((\dot{q_1}\mathrm dt)(\mathrm dq_2, \ldots,\mathrm dp_f ))$ represent the number of systems that would enter the volume in time-interval $\mathrm d t\;?$</p> <p>$\bullet$ How does the evaluation of $\dot q_i$ at $q_1+\mathrm dq_1$ yield $\dot q_i +\dfrac{\partial \dot q_i}{\partial q_1}\mathrm dq_1 \;?$</p>
<p>In the figure the volume element is moving from right to left. So the shaded region on the left multiplied by the "coarse-grained" density at that region is the number of systems entering the volume element. </p> <p>The width of the region on the left is equal to the change in $q_1$ in time $dt$ is equal to $\dot{q_1}dt$. So the area is given by $(\dot{q_1}\mathrm dt)(\mathrm dq_2, \ldots,\mathrm dp_f )$. So the number of particles entering the volume element is given by \begin{equation} \rho\times (\dot{q_1}|_{q_1}\mathrm dt)(\mathrm dq_2, \ldots,\mathrm dp_f ) \end{equation}</p> <p>At the same instant the systems in the shaded region on the right are leaving the volume element. That number is given by $\rho\times(\dot{q_1}|_{q_1+dq_1}\mathrm dt)(\mathrm dq_2, \ldots,\mathrm dp_f )$.</p> <p>Suppose $\dot{q}=f(q)$ some function of $q$. </p> <p>$$\dot{q}|_{q+dq}=f(q+dq)=f(q)+\frac{\partial f}{\partial q}dq=\dot{q}+\frac{\partial \dot{q}}{\partial q}dq\;.$$</p> <p>Putting this in the expression for shaded area (right) and subtracting this from expression for shaded area (left) will give the change in particle number in the volume element. </p>
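The first-order expansion in the last step can be checked numerically (a toy flow field standing in for $\dot q_i(q_1)$; the function chosen is an arbitrary assumption):

```python
import math

def qdot(q):                      # toy phase-space velocity field (assumption)
    return math.sin(q)

q1, dq1 = 0.7, 1e-5
exact = qdot(q1 + dq1)
first_order = qdot(q1) + math.cos(q1) * dq1   # qdot + (d qdot / d q1) dq1

error = abs(exact - first_order)              # residual is O(dq1^2)
```

The residual is of order $dq_1^2$, so the terms kept in Reif's expansion are exactly the ones that survive when the volume element shrinks.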
921
statistical mechanics
Average value of Force in rotating bead using statistical physics
https://physics.stackexchange.com/questions/241561/average-value-of-force-in-rotating-bead-using-statistical-physics
<p>Consider a mass $m$ fixed to the middle point of a string of length $L$ whose extremities are a distance $l$ apart and pulled with a tension $F$. The system is in thermal equilibrium, and one supposes that the only effect of thermal fluctuations is to make the system rotate about the horizontal (dashed) axis. As a result of this rotation, a tension force $F$ arises along the string.</p> <p>My hope is to show that $$&lt;F&gt; = \frac{l}{L^2-l^2}k_{B}T$$</p> <p>First I succeeded in showing that $F = \frac{mv^2}{r\sin\theta}$. Now I have to get the average of $F$. Which approach should I take?</p> <p>The situation is: <a href="https://i.sstatic.net/cYDTZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cYDTZ.png" alt="enter image description here"></a></p>
<p>You only need to take the average of the expression you have found for $F$. Since $r\sin\theta$ is a constant, you can simply take it out of the average. That means $&lt;F&gt; = {1 \over r\sin\theta} &lt;mv^2&gt;$. Write $r\sin\theta$ in terms of $l$ and $L$, put the average of $mv^2$ in its place, and you will be done.</p>
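A Monte Carlo sanity check of this average (a Python sketch, not part of the original answer; it assumes equipartition $\langle mv^2\rangle = k_BT$ for the single rotational degree of freedom, and the geometric factor $l/(L^2-l^2)$ is taken from the result stated in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
kT, m = 1.0, 1.0
L, l = 2.0, 1.0                     # string length and end separation (l < L)

# One rotational degree of freedom: v Gaussian with <m v^2> = kT (equipartition)
v = rng.normal(0.0, np.sqrt(kT / m), size=1_000_000)

# Geometry factor l/(L^2 - l^2) assumed from the stated result
F_mean = np.mean(m * v**2) * l / (L**2 - l**2)
F_theory = l * kT / (L**2 - l**2)
```

The sampled mean of $mv^2$ converges to $k_BT$, so the averaged force lands on the target expression once the constant geometric factor is pulled out of the average, just as the answer says.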
922
statistical mechanics
how to interprete that the random forces in Langevin Equation are assumed to be delta-correlated
https://physics.stackexchange.com/questions/271440/how-to-interprete-that-the-random-forces-in-langevin-equation-are-assumed-to-be
<p>I mean: is there anything more fundamental that yields the result that the random force in the Langevin equation is delta-correlated?<br/> As shown in the textbook excerpt below, its formula (3.4) is given under the assumption that "impacts are independent". However, it is still daunting for me to derive the delta-correlated function from it.<br/> Maybe there are fundamental concepts or derivation steps I should work on, which I would appreciate if you could point out generously. <a href="https://i.sstatic.net/GTMgy.png" rel="noreferrer"><img src="https://i.sstatic.net/GTMgy.png" alt="enter image description here"></a></p>
<p>Delta-correlation is just an approximation. The actual forces that they represent are <em>not</em> truly delta-correlated. However, typical atomic-scale force autocorrelations last ~ 1 picosecond, so it's a pretty good approximation.</p> <p>EDIT. To clarify, imagine dividing time up into tiny slices (~ 1 ps). The stochastic force experienced by the particle at time $t_i$ will be random with a mean of zero (i.e. Eq. 3.3). At the next time slice, $t_{i+1}$, a different stochastic force will be acting also with a mean zero. If the collisions that cause these two forces are independent then their product must have zero mean. This follows from the fact that when two random variables $X$ and $Y$ are independent of each other, the expectation of their product is the product of their means, $$ \langle XY\rangle=\langle X\rangle\langle Y\rangle $$ or, in this instance, $$ \langle F_a(t_i)F_a(t_{j\neq i}) \rangle = \langle F_a(t_i)\rangle\langle F_a(t_{j\neq i})\rangle=0 $$ See the <a href="https://en.wikipedia.org/wiki/Product_distribution#Expectation_of_product_of_random_variables" rel="nofollow">Expectation of product of random variables</a>.</p>
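A quick simulation of the "independent kicks" picture described above (a Python sketch with unit variance; the values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.normal(0.0, 1.0, size=200_000)   # one independent "kick" per time slice

lag0 = np.mean(F * F)                    # equal-time correlation: the variance
lag1 = np.mean(F[:-1] * F[1:])           # product of independent slices: mean ~ 0
```

The equal-time correlation is finite while the correlation between different slices vanishes to within sampling noise; this is the discrete-time version of $\langle F_a(t_i)F_a(t_{j\ne i})\rangle = 0$, which becomes a delta function in the continuum limit.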
923
statistical mechanics
How to understand the map for the statistical mechanics?
https://physics.stackexchange.com/questions/309878/how-to-understand-the-map-for-the-statistical-mechanics
<p>Recently I found <a href="https://ocw.mit.edu/courses/physics/8-044-statistical-physics-i-spring-2013/readings-notes-slides/MIT8_044S13_L1.pdf" rel="nofollow noreferrer">a very interesting map</a> which seems to contain all these elements one will meet in the statistical mechanics.</p> <p>So does anybody want to share their understanding of this map?</p> <p><a href="https://i.sstatic.net/j46Ct.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j46Ct.png" alt="enter image description here" /></a></p>
<p>From reading the PDF issued by MIT OpenCourseWare (the map is its first page), it seems a straightforward outline of the key points of a statistical mechanics and thermodynamics course, stressing the importance of probability (hence the "language" reference) and then showing how the course will be structured.</p> <p>I can't see any more significance than that in it.</p>
924
statistical mechanics
What is the concept of Energy level in Maxwell-Boltzmann statistics?
https://physics.stackexchange.com/questions/587942/what-is-the-concept-of-energy-level-in-maxwell-boltzmann-statistics
<p>In statistical thermodynamics, Maxwell-Boltzmann statistics is considered a pre-quantum statistics. However, in the mathematical treatment in all textbooks, and also in the <a href="https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics" rel="nofollow noreferrer">Wikipedia article</a>, the concept of an 'energy level' (<span class="math-container">$E_i$</span>) is involved. As far as I understand, an energy level implies quantization, which is obviously a quantum idea. How is this possible? What have I failed to understand?</p>
<p>Yes, Maxwell and Boltzmann produced their theories well before quantum ideas were dreamed of, and they were developed by <a href="https://en.wikipedia.org/wiki/Elementary_Principles_in_Statistical_Mechanics" rel="nofollow noreferrer">Gibbs</a> into the form we know today using purely classical ideas. This involves some quite deep assumptions about what macrostates are equally probably in an ensemble. Introducing quantum microstates makes it a <em>lot</em> easier to understand. So today it is taught that way round, and a good thing too.</p>
925
statistical mechanics
Confusion about the Second Law of Thermodynamics and statistical mechanics
https://physics.stackexchange.com/questions/592443/confusion-about-the-second-law-of-thermodynamics-and-statistical-mechanics
<p>Suppose you have a container of volume V containing some gas with energy E and N particles. Let's assume the container to be isolated for now.</p> <p>The microcanonical ensemble tells us that all microstates are equally likely. So a specific state in which all the molecules are at the top is as likely as a specific state in which all the molecules are evenly spread out. We calculate thermodynamic quantities by averaging over the ensemble and assuming this to be equal to the time-average of the system. And we do this because we assume that the system goes through all these states, and therefore over a period of time the two averages will be equal.</p> <p>So, essentially, we are saying that the gas goes through various configurations, i.e. the gas of volume V also goes through a configuration where it occupies a volume V/4 at the top corner. However, when the system is in this particular state, shouldn't the macrostate then be defined as (V/4, N, E)? And does this not violate the second law?</p> <p>Essentially I'm confused about how exactly we define a macrostate, and I think I've jumbled up concepts of entropy and the second law in this process. Could someone explain what exactly the second law is trying to say at the level of statistics? TIA</p>
<p>Your question touches on some subtleties about which people very often get puzzled, and indeed there may not be universal agreement on the best way to describe the situation. I think the main point is that a macrostate should be defined in terms of the things which are contrained---so in your example, <span class="math-container">$V$</span>, <span class="math-container">$E$</span> and <span class="math-container">$N$</span>. Here <span class="math-container">$V$</span> refers to the volume of the chamber where the gas molecules are free to move, not the volume which they happen to take up at any one instant of time. So on those extremely rare occasions where the gas happens to all be on one side of the chamber, just by a thermal fluctuation, without anything constraining it to stay on that side, then we should say that <span class="math-container">$V$</span> has <em>not</em> changed. And, by similar reasoning, the entropy <span class="math-container">$S$</span> has not changed either. This is because the phrase &quot;the entropy of the gas&quot; refers to either the maximal or the average entropy after averaging over the <em>available</em> microstates, and the microstates where the gas fills the volume <span class="math-container">$V$</span> are still available. Notice that thermodynamic entropy <span class="math-container">$S$</span> does not refer to some other quantity, such as the entropy which the gas would have if it were constrained to stay in a smaller volume.</p> <p>The puzzle now concerns what should be said about a gas which has been constrained by a barrier to stay in one side of a chamber, with the other side evacuated, and then the barrier is removed. The question is, does the volume immediately double or does it grow as the gas expands? 
These are questions about a non-equilibrium situation, but it is the very same situation that the gas would be in if, after a very rare thermal fluctuation, all the molecules happened to be in one half of a chamber. So according to what I said above, we should say that the <em>equilibrium</em> volume <span class="math-container">$V$</span> doubles immediately, because it refers to the constraint the gas is under, but this is not necessarily the same quantity as the volume occupied by the molecules of the gas in a dynamic, non-equilibrium situation.</p> <p>Really the answer to all such questions is to define your own quantities as carefully as you can, and then ask of other peoples' work whether they are using the same quantities. In particular, the second law of thermodynamics is not broken by thermal fluctuations, but once you allow for thermal fluctuations it has to be stated more carefully. One way to state it would be to say that attempts to exploit thermal fluctuations (such as Maxwell's daemon) for purposes of converting heat from a single-temperature reservoir into work do not succeed once you take everything into account.</p>
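To make "extremely rare" quantitative: for an ideal gas the probability that all $N$ molecules are found in one half of the chamber at a given instant is $(1/2)^N$ (a small sketch, not part of the original answer):

```python
import math

def log10_prob_all_in_half(N):
    # each molecule sits in the left half independently with probability 1/2
    return -N * math.log10(2)

p100 = log10_prob_all_in_half(100)       # already ~ -30: one in 10^30 snapshots
p1e20 = log10_prob_all_in_half(10**20)   # utterly negligible for macroscopic N
```

Even for a hundred molecules the fluctuation is essentially unobservable, which is why treating $V$ as the constrained volume rather than the instantaneous occupied volume costs nothing in practice.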
926
statistical mechanics
Mean values of position in Van der Waals coupled systems
https://physics.stackexchange.com/questions/593611/mean-values-of-position-in-van-der-waals-coupled-systems
<p>I'm trying to solve a problem regarding two systems interacting through a coupling Hamiltonian: <span class="math-container">$$ H(x,y) = H_1(x) + H_2(y) + kxy $$</span> I am supposed to express the mean values <span class="math-container">$\langle x \rangle$</span> and <span class="math-container">$\langle y \rangle$</span> in terms of their independent moments <span class="math-container">$\langle x^n \rangle _0 = \int dx \rho_1(x) x^n$</span> (and the same for <span class="math-container">$y$</span>), which are calculated on the isolated systems. It is not specified in the text of the exercise, but I'm assuming we are in the canonical ensemble. My idea was to find an expression in terms of the moments by manipulating the integral for the expectation value: <span class="math-container">$$\langle x \rangle = \frac{1}{Z}\int d\Gamma x\cdot\rho$$</span> where I suppose <span class="math-container">$\rho = \frac{Z_1Z_2}{Z} \rho_1\rho_2e^{-\beta kxy}$</span>. So I tried to separate the integral with respect to the two variables and noted that maybe I could use the fact that <span class="math-container">$xe^{-\beta kxy} = -\frac{1}{\beta k}\frac{\partial }{\partial y} e^{-\beta kxy}$</span> to integrate with respect to <span class="math-container">$y$</span> and get to: <span class="math-container">$$\langle x\rangle = \frac{Z_1Z_2}{Z\beta k}\int d\Gamma_1 \rho_1 e^{-\beta kxy}$$</span> But from here I have no idea how to proceed, or how to write this in terms of the independent moments of <span class="math-container">$x$</span> as asked. In fact I am not even sure that what I did so far is correct.</p> <p>EDIT: What I did is surely not correct, as I have integrated over <span class="math-container">$y$</span> while discarding the fact that the density function was in the integral, too. 
So I tried to integrate by parts, but that just gave me a term: <span class="math-container">$$\int d\Gamma_2 \dot{p_y} e^{-\beta kxy}e^{-\beta H_2}$$</span> The other term of the integration by parts I supposed to be <span class="math-container">$0$</span>, as it is: <span class="math-container">$$(e^{-\beta (kxy + H_2)})_{y= \pm \infty}$$</span> So now I'm kind of back where I started.</p>
927
statistical mechanics
Is it possible for a particle to have all the energy of the Isolated System of particles?
https://physics.stackexchange.com/questions/596165/is-it-possible-for-a-particle-to-have-all-the-energy-of-the-isolated-system-of-p
<p>We have read the fundamental postulate of statistical mechanics, which says that:</p> <blockquote> <p>In a state of thermal equilibrium, all the accessible microstates of the system are equally probable.</p> </blockquote> <p>Suppose a system in thermal equilibrium has total energy <span class="math-container">$E$</span>. Now, as every microstate is equally probable, is the state in which all the energy is possessed by one particle just as probable as the other states? If yes, why isn't this seen very often?</p>
<p>Because the macrostate is what you observe. Each macrostate is associated with a set of microstates. The probability of observing a macrostate is the sum of the probabilities of the equally likely microstates. Having all the particles at a single energy level has only one possibility. But having the particles spread over several energy levels gives rise to several microstates that correspond to a single macrostate.</p> <p>For example, if you have 5 particles and you find a way to put them in a single energy level, then you have only one microstate and one macrostate.</p> <p>However, if you find a way to put 3 particles in one energy level and 2 in another, then you have</p> <p><span class="math-container">$$\frac{5!}{3!2!}$$</span></p> <p>different ways of doing this. All these different ways are microstates but macroscopically they are the same. So observing one of these states is much more probable.</p> <p>Note: I am assuming you are talking about distinguishable particles here.</p>
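The counting above can be checked in a couple of lines of Python (not part of the original answer):

```python
from math import comb, factorial

# 3 of 5 distinguishable particles in one level, 2 in another: 5!/(3!2!) arrangements
ways = factorial(5) // (factorial(3) * factorial(2))

# versus the single arrangement with all 5 particles in one level
single = 1
```

The macrostate with a 3-2 split corresponds to ten microstates against one for the all-in-one-level macrostate, which is why the spread-out configuration is the one observed.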
928
statistical mechanics
Apparent contradiction in Feynman&#39;s treatment of the velocity distribution of a gas
https://physics.stackexchange.com/questions/522369/apparent-contradiction-in-feynmans-treatment-of-the-velocity-distribution-of-a
<p>In Feynman's treatment of <a href="https://www.feynmanlectures.caltech.edu/I_40.html" rel="nofollow noreferrer">statistical mechanics</a>, <em>40–4 The distribution of molecular speeds</em>, Feynman found that the number of gas molecules per unit area per second who have an upper velocity component <span class="math-container">$v_{z}$</span> greater than <span class="math-container">$u$</span>, <span class="math-container">$n_{&gt;u}$</span>, is proportional to <span class="math-container">$ \exp(-\frac{mu^2}{2kT})$</span>, where <span class="math-container">$m$</span> is the mass of the molecule. </p> <p>He then introduces a function <span class="math-container">$f(u)$</span>,</p> <blockquote> <p>let <span class="math-container">$f(u)du$</span> be the <strong>fraction</strong> of all the molecules which have velocities between <span class="math-container">$u$</span> and <span class="math-container">$u+du$</span> ... <span class="math-container">\begin{equation} \label{Eq:I:40:5} \int_{-\infty}^{\infty} f(u)\,du = 1. \end{equation}</span></p> </blockquote> <p>Later, he tries to connect <span class="math-container">$f(u)$</span> with <span class="math-container">$ \exp(-\frac{mu^2}{2kT})$</span>, he said,</p> <blockquote> <p>First we ask, what is the number of molecules passing through an area per second with a velocity greater than <span class="math-container">$u$</span>, expressed in terms of <span class="math-container">$f(u)$</span>? At first we might think it is merely the integral of <span class="math-container">$\int_{u}^{\infty} f(u)du$</span>, but it is not, because we want the number that are passing the area per second. The faster ones pass more often, so to speak, than the slower ones, and in order to express how many pass, you have to multiply by the velocity. (We discussed that in the previous chapter when we talked about the number of collisions.) 
In a given time <span class="math-container">$t$</span> the total number which pass through the surface is all of those which have been able to arrive at the surface, and the number which arrive come from a distance <span class="math-container">$ut$</span>. So the number of molecules which arrive is not simply the number which are there, but the <strong>number that are there per unit volume</strong>, multiplied by the distance that they sweep through in racing for the area through which they are supposed to go, and that distance is proportional to <span class="math-container">$u$</span>. Thus we need the <strong>integral of <span class="math-container">$u$</span> times <span class="math-container">$f(u)du$</span></strong>, an infinite integral with a lower limit <span class="math-container">$u$</span>, and this must be the same as we found before, namely <span class="math-container">$ \exp(-\frac{mu^2}{2kT})$</span>, with a proportionality constant which we will get later: <span class="math-container">$$\begin{equation} \label{Eq:I:40:6} \int_u^\infty uf(u)\,du = \text{const}\cdot e^{-mu^2/2kT}. \end{equation}$$</span> </p> </blockquote> <p>So according to the paragraph above <span class="math-container">$constant\times u$</span> represents a distance, and <span class="math-container">$f(u)du$</span> represents a <strong>density</strong> (number of molecules per unit volume), whereas before it was a <strong>fraction</strong> (unitless) of the total number of molecules. Am I missing something?</p> <p>N.B. the <em>const</em> just above is the same as the proportionality constant between <span class="math-container">$n_{&gt;u}$</span> and <span class="math-container">$ \exp(-\frac{mu^2}{2kT})$</span>.</p>
929
statistical mechanics
Why when $-(\frac{\partial p}{\partial V})_T\geq 0$ we can say $-\frac{1}{V}(\frac{\partial V}{\partial p})_T\geq 0$?
https://physics.stackexchange.com/questions/535936/why-when-frac-partial-p-partial-v-t-geq-0-we-can-say-frac1v-fr
<p>Why when <span class="math-container">$-(\frac{\partial p}{\partial V})_T\geq 0$</span> we can say <span class="math-container">$-\frac{1}{V}(\frac{\partial V}{\partial p})_T\geq 0$</span> where V is the volume, p is the mean pressure of the system under consideration and T is the temperature which is kept fixed? Or why <span class="math-container">$\frac{\partial V}{\partial p}$</span> is the reciprocal of <span class="math-container">$\frac{\partial p}{\partial V}$</span>? I think in calculus there is no such theorem or statement, but in Reif's statistical mechanics I find such a statement and I don't know why.</p>
<p>Assume a function <span class="math-container">$f{\left(x\right)}$</span> is invertible, e.g. by assuming that it is monotonic. Let its inverse be called, suggestively, <span class="math-container">$X{\left(F\right)}$</span>. We have</p> <p><span class="math-container">$$X{\left(f{\left(x\right)}\right)} = x,$$</span></p> <p>and now differentiate both sides and use the chain rule to get</p> <p><span class="math-container">$$\left.\frac{dX}{dF}\right|_{F=f{\left(x\right)}} \frac{df}{dx} = 1.$$</span></p> <p>Identifying <span class="math-container">$P{\left(V\right)}$</span> with the inverse function of <span class="math-container">$V{\left(P\right)}$</span> thus allows us to obtain the relation you desire. </p> <p>We don't really need to assume the function is globally invertible. See <a href="https://en.wikipedia.org/wiki/Inverse_function_theorem" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Inverse_function_theorem</a></p>
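The chain-rule argument can be illustrated numerically (a sketch using a hypothetical ideal-gas-like isotherm $P(V) = c/V$, which is not taken from Reif; any monotonic function would do):

```python
# P(V) = c/V and its inverse V(P) = c/P; by the chain-rule argument above,
# (dP/dV) evaluated at V0 times (dV/dP) evaluated at P(V0) must equal 1.
c = 1.0
h = 1e-6  # step for central differences

P = lambda V: c / V      # pressure as a function of volume
Vof = lambda p: c / p    # its inverse: volume as a function of pressure

V0 = 2.0
dPdV = (P(V0 + h) - P(V0 - h)) / (2 * h)
dVdP = (Vof(P(V0) + h) - Vof(P(V0) - h)) / (2 * h)

print(abs(dPdV * dVdP - 1.0) < 1e-6)  # True
```

The same check works for any invertible equation of state, which is why Reif can freely take reciprocals of the two partial derivatives at fixed $T$.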
930
statistical mechanics
QHO in Microcanonical Ensemble: Problem with alternate derivation
https://physics.stackexchange.com/questions/101406/qho-in-microcanonical-ensemble-problem-with-alternate-derivation
<p>I am working through Franz Schwabl's book on Statistical Mechanics, and he has a number of derivations of thermodynamic quantities that are different than those I have seen before. I am also having difficulty finding them repeated elsewhere.</p> <p>In particular, he has a method for calculating <span class="math-container">$\Omega(E)$</span>, the number of states with a given energy <span class="math-container">$E$</span>, of a series of <span class="math-container">$N$</span> independent Quantum Harmonic Oscillators (<span class="math-container">$\mathcal{H} = \sum_{j=1}^N\hbar\omega(n_j+\frac{1}{2})$</span>) that I hadn't seen before. Proceeding from the result</p> <p><span class="math-container">$$\Omega(E) = \mathrm{Tr}\,\delta(\mathcal{H}-E)=\sum_{n_1=0}^{\infty}\cdots\sum_{n_N=0}^{\infty}\delta\left(E - \hbar\omega\sum_{j=1}^N\left(n_j+\frac{1}{2}\right)\right),$$</span></p> <p>my strategy would be combinatoric: the delta-function turns the unrestricted sums over <span class="math-container">$n_j$</span> to a constraint on the total number of quanta. Calculating the number of ways you can partition <span class="math-container">$n=\sum_{j=1}^Nn_j$</span> quanta of energy among <span class="math-container">$N$</span> oscillators gives you <span class="math-container">$\Omega(E)$</span>. 
This is the way we did it in undergraduate stat mech.</p> <p>Schwabl's approach proceeds differently: by taking the Fourier Transform of the delta function, one obtains</p> <p><span class="math-container">$$\Omega(E) = \int \frac{dk}{2\pi}e^{ikE}\prod_{j=1}^N\left(e^{-ik\hbar\omega/2}\sum_{n_j=0}^{\infty}e^{-ik\hbar\omega n_j}\right)=\int\frac{dk}{2\pi}e^{ikE}\left(\frac{e^{-ik\hbar\omega/2}}{1-e^{-ik\hbar\omega}}\right)^N,$$</span></p> <p>where this last step involves <strong>summing a divergent geometric series,</strong> declaring <span class="math-container">$$\sum_{\ell=0}^{\infty}e^{-i\alpha\ell} = \frac{1}{1-e^{-i\alpha}}$$</span> and ignoring the fact that this series doesn't converge in a conventional sense.</p> <p>This simplifies to <span class="math-container">$$\Omega(E) = \int\frac{dk}{2\pi}e^{N(ik(E/N) - \log(2i\sin(k\hbar\omega/2)))}$$</span></p> <p>which is solved using the saddle-point approximation. The maximum of the argument of the exponential occurs at a value</p> <p><span class="math-container">$$k_0 = \frac{1}{\hbar\omega i}\log\frac{\frac{E}{N}+\frac{\hbar\omega}{2}}{\frac{E}{N}-\frac{\hbar\omega}{2}}$$</span></p> <p>Which is clearly imaginary, <strong>despite the fact that in a Fourier Transform <span class="math-container">$k$</span> is supposed to be a real number!</strong></p> <p><em>In spite of all this</em>, if you evaluate the integral using the saddle point approximation at <span class="math-container">$k=k_0$</span>, you get the same form for <span class="math-container">$\Omega(E)$</span> that one derives through the garden-variety combinatorial argument!</p> <p>My question(s) are</p> <blockquote> <p>Why does this work? Specifically:</p> <ul> <li><p>Why does it make sense to write the convergence of a divergent geometric series in the form given here (is this relying on some sense of convergence other than the typical one, and if so, what? 
And what does that imply about convergence in stat mech?), and</p> </li> <li><p>Why can you use the saddle point approximation when the maximum value does not occur in the space over which you are integrating?</p> </li> </ul> </blockquote> <p>Answers to this question might rely on appeals to other situations in which this math occurs and has been rationalized, physically if not mathematically.</p>
<p>I) If we expect $\Omega(E)$ to depend analytically on the variable $\hbar\omega&gt;0$ extended to (parts of) the complex plane, then we may regularize by introducing an $i\epsilon$ prescription, and substitute </p> <p>$$\tag{1} \hbar\omega ~\longrightarrow ~ \hbar\omega (1-i\epsilon). $$</p> <p>The variable </p> <p>$$\tag{2} q~:=~ e^{-i\hbar\omega k}~\longrightarrow ~ e^{-(i+\epsilon)\hbar\omega k} $$</p> <p>in the <a href="http://en.wikipedia.org/wiki/Geometric_series" rel="nofollow">geometric series</a></p> <p>$$\tag{3} \sqrt{q}\sum_{n=0}^{\infty} q^n $$</p> <p>will then have</p> <p>$$\tag{4} |q|~&lt;~1,$$</p> <p>so that the geometric series (3) is convergent. Then all steps in Schwabl's derivation of $\Omega(E)$ are mathematically well-defined. At the end of the calculation of $\Omega(E)$, we may put $\epsilon=0$.</p> <p>II) Concerning a complex (as opposed to real) stationary solution in the <a href="http://en.wikipedia.org/wiki/Method_of_steepest_descent" rel="nofollow">method of steepest descent</a>/ stationary phase method/saddle-point method, this is just part of the method. For a rigorous argument, one would have to consult the proof of the method. Heuristically, it is because when one evaluates the Gaussian integral over 'quantum fluctuations'</p> <p>$$\tag{5} \int_{\mathbb{R}} \! dx~ e^{-\frac{a}{2}x^2+bx} ~=~\sqrt{\frac{2\pi}{a}}\exp\left(\frac{b^2}{2a}\right),$$</p> <p>for two complex constants $a,b\in\mathbb{C}$, one only needs the condition </p> <p>$$\tag{6} {\rm Re}(a)~&gt;~0$$</p> <p>to ensure convergence of the integral (5). There is no need to also assume that the stationary solution $\frac{b}{a}$ is real. Equation (5) follows from the fact that </p> <p>$$\tag{7} \alpha\int_{\mathbb{R}} \! dx~e^{-\frac{1}{2}(\alpha x+\beta)^2} ~=~\int_{\gamma} \! d(\alpha x+\beta)~ e^{-\frac{1}{2}(\alpha x+\beta)^2} ~=~ \int_{\mathbb{R}} \! 
dx~ e^{-\frac{1}{2}x^2}~=~\sqrt{2\pi}$$ </p> <p>for any straight line in the complex plane </p> <p>$$\tag{8} \gamma(x)~=~\alpha x+\beta, \qquad \alpha,\beta~\in~\mathbb{C},\qquad x~\in~\mathbb{R}, $$ </p> <p>with slope </p> <p>$$\tag{9} |\arg(\alpha)|&lt;\frac{\pi}{4},$$ </p> <p>because in that case, it is possible to close the contour along exponentially suppressed arcs.</p>
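The identity that the regularization is protecting can also be checked order by order: the coefficient of $x^n$ in $(1-x)^{-N}$, i.e. in the $N$-fold product of the geometric series, must equal the stars-and-bars count $\binom{n+N-1}{N-1}$ from the combinatorial derivation. A small sketch (with arbitrary example values of $N$ and a truncation order, both my choice):

```python
from math import comb

# Coefficients of (1 + x + x^2 + ...)^N up to order nmax, built by repeated
# polynomial multiplication (discrete convolution) with the geometric series.
N, nmax = 4, 10
coeffs = [1] + [0] * nmax        # the polynomial "1"
geom = [1] * (nmax + 1)          # 1 + x + x^2 + ... truncated at nmax
for _ in range(N):
    coeffs = [sum(coeffs[j] * geom[i - j] for j in range(i + 1))
              for i in range(nmax + 1)]

# Compare with the stars-and-bars count of ways to split n quanta among N oscillators.
print(all(coeffs[n] == comb(n + N - 1, N - 1) for n in range(nmax + 1)))  # True
```

This is the finite, term-by-term content of the formal sum $\sum_\ell e^{-i\alpha\ell} = (1-e^{-i\alpha})^{-1}$: the delta function in $\Omega(E)$ only ever picks out one coefficient of the series, and those coefficients agree with the direct combinatorial count.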
931
statistical mechanics
Calculating $C_v$ of a canonical ensemble
https://physics.stackexchange.com/questions/565081/calculating-c-v-of-a-canonical-ensemble
<p>I am writing code to find the heat capacity <span class="math-container">$C_v$</span> of a canonical <span class="math-container">$NVT$</span> ensemble. We know that, <span class="math-container">$$C_v = \frac{\langle U^2 \rangle - \langle U \rangle ^2}{k_B T^2}$$</span></p> <p>I have written a Metropolis algorithm to see how a standard <span class="math-container">$NVT$</span> system evolves. This is how I plan to calculate <span class="math-container">$\langle U \rangle$</span>: <span class="math-container">$$\langle U \rangle = \frac{\sum_{m} P_m U_m}{\sum_m P_m} \quad \quad \ldots(1)$$</span> where <span class="math-container">$m$</span> is a microstate the system as determined by my Metropolis algorithm, and <span class="math-container">$$P_m = e^{-\frac{U_m}{k_BT}}$$</span></p> <p>Same applies for <span class="math-container">$\langle U^2 \rangle$</span>, except in I replace <span class="math-container">$U_m$</span> with <span class="math-container">$U^2_m$</span> in equation <span class="math-container">$(1)$</span>.</p> <p>Is this the right approach?</p>
<p>The <span class="math-container">$P_m$</span> factors are unnecessary. Why? Because a state <span class="math-container">$i$</span> will be returned <span class="math-container">$\approx Np_i$</span> times in <span class="math-container">$N$</span> samples. Since you have a record of samples (indexed by <span class="math-container">$m$</span> and not to be confused with <span class="math-container">$i$</span>, the state index), the sample record itself has the probability distribution embedded in it. You should just use</p> <p><span class="math-container">$$\langle U \rangle = \frac{1}{N} \sum_m u_m$$</span></p> <p>You can see this quantitatively by realizing <span class="math-container">$p_i$</span> (the probability <span class="math-container">$\exp(-u_i/kT)/Z$</span>) can be expressed as</p> <p><span class="math-container">$$p_i = \frac{1}{N} \sum_m \delta_{u_i,u_m}$$</span></p> <p>and</p> <p><span class="math-container">$$\langle U \rangle = \sum_i u_i p_i = \sum_i u_i \frac{1}{N} \sum_m \delta_{u_i,u_m} = \sum_i \sum_m u_i \frac{1}{N} \delta_{u_i,u_m} = \sum_m \sum_i u_i \frac{1}{N} \delta_{u_i,u_m} = \frac{1}{N} \sum_m u_m$$</span></p> <p>(in the last equality: <span class="math-container">$\sum_i u_i \delta_{u_i,u_m} = u_m$</span> since each sample corresponds to exactly one state).</p> <p>I don't want to check if your eq. (1) is correct or not, but this way is simpler to implement anyway.</p>
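In code, the estimator is just the flat sample mean over the Metropolis record. A minimal sketch (the Gaussian "energies" below are a hypothetical stand-in for a real chain's output; for unit-variance samples with $k_B T = 1$ the fluctuation formula should give $C_v \approx 1$):

```python
import random

# Hypothetical stand-in for a Metropolis record: one energy per chain step.
kB, T = 1.0, 1.0
random.seed(0)
u = [random.gauss(10.0, 1.0) for _ in range(100_000)]

# Plain, unweighted averages: the chain already visits states with
# Boltzmann frequency, so no extra exp(-U/kT) factors are applied.
N = len(u)
U_mean = sum(x for x in u) / N
U2_mean = sum(x * x for x in u) / N

Cv = (U2_mean - U_mean**2) / (kB * T**2)
print(Cv)  # close to 1.0 for unit-variance samples
```

Applying the extra $P_m$ weights of eq. (1) to this record would double-count the Boltzmann factor and bias the estimate.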
932
statistical mechanics
Microcanonical Ensemle
https://physics.stackexchange.com/questions/244657/microcanonical-ensemle
<p>The energy of a particle is given by $E=|p|+|q|$, where $p$ and $q$ are the generalized momentum and coordinate respectively. All the states with $E \leq E_0$ are equally probable, and states with $E &gt; E_0$ are inaccessible. What is the probability density of finding the particle at coordinate $q$ with $q &gt; 0$?</p>
933
statistical mechanics
How can we show that the BBGKY hierarchy is time symmetric?
https://physics.stackexchange.com/questions/249676/how-can-we-show-that-the-bbgky-hierarchy-is-time-symmetric
<p>I am trying to mathematically show that the BBGKY hierarchy for s particles is time symmetric by setting $t\rightarrow -t$. Using the Wikipedia notation for the s-particle equation we have</p> <p>$\frac{\partial f_s}{\partial t} + \sum_{i=1}^s \dot{\mathbf{q}}_i \frac{\partial f_s}{\partial \mathbf{q}_i} + \sum_{i=1}^s \left( - \frac{\partial \Phi_i^{ext}}{\partial \mathbf{q}_i} - \sum_{j=1}^s \frac{\partial \Phi_{ij}}{\partial \mathbf{q}_i} \right) \frac{\partial f_s}{\partial \mathbf{p}_i} = (N-s) \sum_{i=1}^s \frac{\partial}{\partial \mathbf{p}_i} \int \frac{\partial \Phi_{is+1}}{\partial \mathbf{q}_i}\cdot f_{s+1} \,d\mathbf{q}_{s+1} d\mathbf{p}_{s+1}.$</p> <p>But the problem is I don't know how the negation of time in the density $f_s(q_1...q_s,p_1..p_s,t)$ comes out to fit in with the rest of the equation. This is important since I want to track how we lose time symmetry under the assumption of molecular chaos, which sets the two-particle density as the product of two one-particle densities. </p>
934
statistical mechanics
Probability distribution of two particle types system
https://physics.stackexchange.com/questions/108566/probability-distribution-of-two-particle-types-system
<p>Suppose that particles of two different species, A and B, can be chosen with probability $p_A$ and $p_B$, respectively. </p> <p>What would be the probability (and distribution) $p(N_A;N)$ that $N_A$ out of $N$ particles are of type A? </p> <p>I'm trying to apply the Binomial distribution here but am bothered by the fact that it applies for N trials (whereas here we only do 1).</p>
<p>These sorts of problems are easiest to think about if you build up from simpler problems.</p> <p>Probability that $n$ particles are all of type A: $p_A^{n}$.</p> <p>Probability that, with two particles chosen, the first is of type A, and the second of type B: $p_A p_B$.</p> <p>Probability that, with two particles chosen, one is of type A, and the other of type B (any order): $2 p_A p_B$. The $2$ is there because we have to cover the different orderings - it's actually a $2!$.</p> <p>Probability that, with three particles chosen, two are of type A: $\frac{3!}{2!1!} p_A^2 p_B$. The factorial in the denominator is to cover the fact that the first and second A-type particle can be swapped.</p> <p>Re-thinking the factorials, since they're easy to mess up: the product of the probabilities gives us the probability of a single ordering of As and Bs. We need to count the number of orderings with $N_A$ As, and multiply that by the probability. That's $\binom N {N_A}$, or $\frac{N!}{N_A!(N-N_A)!}$.</p> <p>Probability that a particular configuration, with $N_A$ A-type particles, will be chosen: $p_A^{N_A}p_B^{N-N_A}$</p> <p>The general formula should now be pretty clear: $\frac{N!}{N_A!(N-N_A)!}p_A^{N_A}p_B^{N-N_A}$</p>
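The general formula can be sanity-checked in a few lines (a sketch with arbitrary example numbers of my choosing):

```python
from math import comb

# p(N_A; N) = C(N, N_A) * p_A^N_A * p_B^(N - N_A), with p_B = 1 - p_A.
def p(N_A, N, p_A):
    return comb(N, N_A) * p_A**N_A * (1 - p_A)**(N - N_A)

# The three-particle case from the answer: two A's out of three, p_A = 1/2.
print(p(2, 3, 0.5))  # 3!/(2!1!) * 0.5^2 * 0.5 = 0.375

# The distribution is normalized over N_A = 0..N, as any probability must be.
N, p_A = 10, 0.3
print(abs(sum(p(k, N, p_A) for k in range(N + 1)) - 1.0) < 1e-12)  # True
```

The normalization check is exactly the binomial theorem, $(p_A + p_B)^N = 1$.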
935
statistical mechanics
state occupation rate $n_{i}=\frac{1}{e^{\beta (\varepsilon _{i}-\mu )}+{[1/-1/0]}}$ &amp; density matrix $\rho _{m}=\frac{e^{-\frac{E_{m}}{kT}}}{Z(T)}$
https://physics.stackexchange.com/questions/114718/state-occupation-rate-n-i-frac1e-beta-varepsilon-i-mu-1-1-0
<p>Three kinds of distributions. The <strong>states occupation rates</strong>:</p> <h2><strong>F.D.</strong> $n_{i}=\frac{1}{e^{\beta (\varepsilon _{i}-\mu )}+1}$ <strong>B.E.</strong> $n_{i}=\frac{1}{e^{\beta (\varepsilon _{i}-\mu )}-1}$ <strong>Boltzmann</strong> $n_{i}=e^{-\beta (\varepsilon _{i}-\mu )}$</h2> <p>$(i$ labels different energies for a single particle, for the same $i$ there are still $G_{i}$ degeneracy.$)$</p> <p>While, the <strong>density matrix</strong> for <strong>all three kinds of distribution</strong> is $\rho _{m}=\frac{e^{-\frac{E_{m}}{kT}}}{Z(T)}$</p> <p>$(m$ labels different states of the $N$ particle system.$)$</p> <p><strong>It seems that Boltzmann distribution is more close to the density matrix expression. why is that?</strong></p> <p>$1$. If there is only one particle in the system, then</p> <p>$\varepsilon _{i}=E_{i}$</p> <p>$n_{i}=e^{-\beta (\varepsilon _{i}-\mu )}=\rho _{i}=\frac{e^{-\frac{E_{i}}{kT}}}{Z(T)}$</p> <p>$Z(T)=e^{-\mu }$</p> <p>$2$. If there are $N$ particles in the system, then</p> <p>$\sum_{all\, particles} \varepsilon _{i}=E_{n}$</p> <p>$\rho _{n}=\frac{e^{-\frac{E_{n}}{kT}}}{Z}= \frac{e^{-\frac{\sum \varepsilon _{i}n_{i}G_{i}}{kT}}}{Z}=\frac{\prod \left (e^{-\beta \varepsilon_{i} } \right )^{n_{i}G_{i}}}{Z}=\prod \left ( e^{-\beta (\varepsilon _{i}-\mu )} \right )^{n_{i}G_{i}}=\prod_{all\, particles}n_{i}$</p> <p>$Z(T)=e^{-\mu N }$</p> <p>$(i$ labels both different particle and its own energy number.$)$</p> <p>Boltzmann distribution has more obvious analogy with density matrix distribution. </p> <p><strong>My question is: Is the analogy right? Can we still get the result</strong></p> <p>$Z(T)=e^{-\mu N }$</p> <p>$\sum_{all\, particles} \varepsilon _{i}=E_{n}$</p> <p>$\rho _{n}=\prod_{all\, particles}n_{i} $ </p> <p><strong>for F.D. and B.E. distributions?</strong></p>
936
statistical mechanics
ensembles and lagrange multipliers
https://physics.stackexchange.com/questions/137433/ensembles-and-lagrange-multipliers
<p>In the derivation of the Maxwell-Boltzmann distribution, the method of Lagrange multipliers is applied with the constraints</p> <p>$\sum n_i = N$</p> <p>$\sum n_i E_i = E$</p> <p>where $N$ is the total number of particles, and $E$ is the total energy. And we try to find the macrostate with the most microstates; I think the derivation is familiar to most.</p> <p><a href="http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics#Derivation_from_microcanonical_ensemble" rel="nofollow">http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_statistics#Derivation_from_microcanonical_ensemble</a></p> <p>My question is: By fixing the energy $E$, it's actually describing an isolated system without any exchange of energy with the surroundings, i.e. a microcanonical ensemble. If you fix the energy of a system, its trajectory should be on a constant-energy surface in phase space. What does this have to do with the canonical ensemble?</p> <p>If you want to derive the results for the canonical ensemble, shouldn't you be letting the energy vary?</p>
937
statistical mechanics
Exponentially increasing $\Omega(E)$
https://physics.stackexchange.com/questions/137501/exponentially-increasing-omegae
<p>If I choose the number of microstates for energy $E$ to be $\Omega(E) = e^{aE}$ ($a&gt;0$), its temperature is constant: $$ kT = \left( {d\ln \Omega \over dE} \right)^{-1} = 1/a $$ If I choose $\Omega(E) = e^{aE^2}$ ($a&gt;0$), its temperature decreases as the energy goes up: $$ kT = \left( {d\ln \Omega \over dE} \right)^{-1} = {1 \over 2aE} $$ These pathological results seem to imply that something is problematic with this kind of exponentially (or faster) increasing $\Omega(E)$. What prevents such systems from occurring in Nature? Or are such systems known to exist in reality?</p>
<p>In string theory one encounters the so-called <em>Hagedorn behavior</em>, an exponential growth of the density of states. As you point out, in this case the temperature does not vary with energy (which means the system has an infinite heat capacity!)</p> <p>This behavior appears in Little String Theory, e.g. <a href="http://arxiv.org/abs/hep-th/0010169" rel="nofollow">http://arxiv.org/abs/hep-th/0010169</a>.</p>
938
statistical mechanics
The number of states for fermions, bosons, and Boltzman in statistical mechanics
https://physics.stackexchange.com/questions/142723/the-number-of-states-for-fermions-bosons-and-boltzman-in-statistical-mechanics
<p>This is related to Equation 8.58 in Kerson Huang's 2nd edition of Statistical Mechanics. </p> <p>The partition functions for the ideal gases are given as $ Q_N (V,T) =\sum_{\{ n_p \}} g\{n_p \}e^{-\beta E\{n_p \}} $ where $E\{n_p \} =\sum_p \epsilon_p n_p$ and the occupation numbers are subject to the constraint $\sum_p n_p =N$.</p> <p>The textbook says, for a Bose gas and a Boltzmann gas $n_p=0,1,2, \cdots $ and for the Fermi gas $n_p=0,1$. I understand this is due to the Pauli exclusion principle. But what I couldn't understand is Eq. 8.58, which is the following:</p> <p>$g\{n_p\} = 1$ for both Bose and Fermi gases </p> <p>$g\{n_p\} = \frac{1}{N!} \left( \frac{N!}{\prod_p n_p !} \right)$ for the Boltzmann gas</p> <p>Please explain why we have these results. </p>
<p>Bose particles <strong>cannot</strong> be identified as different in a given state, whereas Boltzmann particles can (even though both types can occupy a given energy state with more than one particle). Thus Boltzmann statistics needs to take into account the <strong>permutations</strong> ($n!$) of the $n$ particles in a given state, in contrast to the Bose particles (which are not identified as separate).</p> <p>For example for Bose-Einstein statistics the configurations:</p> <p>(ike, mike) and (mike, ike) are considered the same.</p> <p>Whereas for Maxwell-Boltzmann statistics the configurations:</p> <p>(ike, mike) and (mike, ike) are considered different (each particle "ike" or "mike" can be identified as "ike" or "mike").</p> <p>Wikipedia: <a href="http://en.wikipedia.org/wiki/Identical_particles" rel="nofollow">Identical particles</a></p>
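The distinguishable-vs-indistinguishable counting can be made concrete by brute-force enumeration (a sketch with 2 particles and 3 single-particle states, numbers chosen only for illustration):

```python
from itertools import product

# Maxwell-Boltzmann: the two particles carry labels like "ike" and "mike",
# so every assignment of a state to each labeled particle is its own microstate.
mb = list(product(range(3), repeat=2))
print(len(mb))  # 3^2 = 9

# Bose-Einstein: only the occupation numbers matter, so assignments that
# differ by a permutation of the labels collapse into a single microstate.
be = {tuple(sorted(s)) for s in mb}
print(len(be))  # C(3+2-1, 2) = 6
```

The sorted-tuple trick implements exactly the identification (ike, mike) = (mike, ike) described above.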
939
statistical mechanics
What does Born Green equation signify physically?
https://physics.stackexchange.com/questions/149561/what-does-born-green-equation-signify-physically
<p>What does the Born-Green equation, obtained from the YBG hierarchy for the equilibrium particle densities, signify? I mean, how can you map the equation onto a physical problem? I understood the steps involved in the derivation of the expression, but I am still unsure whether I understand it physically, and I am unable to explain it properly.</p> <p>The book is <a href="http://books.google.co.in/books?id=Uhm87WZBnxEC&amp;pg=PA83&amp;lpg=PA83&amp;dq=born%20green%20equation%20for%20equilibrium%20particle%20density&amp;source=bl&amp;ots=SDkTkgS_w8&amp;sig=Wfw9ii626CX-7jgBdFqGEVp3-10&amp;hl=en&amp;sa=X&amp;ei=bGl6VLLgMJWouQSIz4LgAw&amp;ved=0CCIQ6AEwAQ#v=onepage&amp;q=born%20green%20equation%20for%20equilibrium%20particle%20density&amp;f=false" rel="nofollow">this.</a></p>
<p>I'm not sure if there is much to physically understand in the equation itself, the derivation is where all the physical insight takes place (Kirkwood's superposition approximation). </p> <p>What of the equation, then? Why is it important and why should anyone care? It's because the BBGKY hierarchy, while exact, cannot be solved: It is a relation between the (reduced phase-space) distribution functions $f^{(n)} = \mathcal{F} (f^{(n+1)})$, and somehow this infinite dependence must be brought to an end if we wish to solve for anything. Making assumptions about the matter, Born and Green wrote a closure, $f^{(3)} = \mathcal{G}(f^{(2)})$, and now that $f^{(2)} = \mathcal{F} (f^{(3)}) = \mathcal{F} (\mathcal{G}(f^{(2)}))$, we have an equation for $f^{(2)}$ that we can in principle solve. Using the BBGKY hierarchy on $f^{(2)}$, then, we get all the other distribution functions. </p> <p>So, moving on to the subject matter, the Born-Green equation reads $$-k_BT\nabla_1(\log g(r_{12}) + \beta v(r_{12})) = \rho\int\nabla_1v(r_{13})g(r_{13})(g(r_{23})-1)\mathrm{d}\mathbf{r}_3$$ where I've made the simplifying assumption that the potential $v$ and the radial density distribution function $g$ only depend on the distance of the two particles. So given the potential $v$, we can solve this for the structure of the fluid, <em>i.e.</em> for $g$ (from which all the thermodynamic properties can then be calculated). The latter can in fact be obtained very simply from the structure factor, which in turn is the result of SAXS and similar experimental techniques ($g$ can be directly, and trivially, deduced from molecular simulations).</p> <p>Hill in <em>Statistical Mechanics</em> goes through the motions of solving this analytically using a couple of simplifying approximations. The basic idea is to do the expansion in density $g(r) = e^{-\beta v(r)}(1+\rho g_1(r))$, where $g_1(r)$ is some function (which can be related to the third virial coefficient). 
Substituting this along with the hard sphere potential (say) into the Born-Green equation, we finally get: $$g(r) = \left\{\begin{array}{cc}0, &amp; r \leq a \\ 1 + \frac{4\pi}{3}(\rho a^3)\left(1-\frac{3}{4}\left(\frac{r}{a}\right)+\frac{1}{16}\left(\frac{r}{a}\right)^3\right), &amp; a &lt; r &lt; 2a \\ 1, &amp; 2a \leq r\end{array}\right.$$ <img src="https://i.sstatic.net/SO2ah.png" alt="RDF of hardsphere in YBG"></p> <p>Finally it should be mentioned that Born-Green is not a particularly good approach: the superposition approximation breaks down quite fast (the book by Hill I mentioned earlier goes on about this for several pages). Other, similar, methods work better. </p> <h2><strong>Addendum</strong></h2> <p>I decided to solve the equation without linearizing it (<em>i.e.</em> without the dilute approximation). This requires a numerical method, so I tried what first came to mind, fixed-point iteration, and it seems to have worked. First, I recast the integral into the form (or more accurately, I lifted this form from Hill) $$k_BT \log g(r) = -v(r) + \pi\rho \int_0^\infty v'(s)g(s)\int_{-s}^s (s^2-y^2) \frac{y+r}{r} (g(y+r) - 1) \mathrm{d}y\,\mathrm{d}s$$ I massaged this a tad by throwing in the hard sphere potential. I <a href="http://digitizer.sourceforge.net/" rel="nofollow noreferrer">engauged</a> the data off <a href="http://www.sklogwiki.org/SklogWiki/index.php/Hard_sphere_model" rel="nofollow noreferrer">SklogWiki</a> for densities 0.2 and 0.6, and here are the results: <img src="https://i.sstatic.net/bzRlc.png" alt="BG vs data"></p> <p>The blue line is 0.6 from SklogWiki, the green the B-G prediction. Red is data for 0.2, and cyan the B-G prediction. </p> <p>Finally, the (horrible-looking-but-hopefully-understandable) code I wrote to produce the graphs (I'm not promising it's correct):</p> <pre><code>L = 10.
rho = .2
g = concatenate((zeros(128), ones(128*9)))
dx = 1.*L/(len(g)-1)
r = dx*arange(len(g))
y = arange(-128, 129)*dx
gn = g.copy()

for j in range(300):
    g = .95*g + .05*gn
    for i in range(len(g))[128:]:
        if (i &gt;= len(g)-128):
            kk = i - (len(g)-128) + 1
            grng = concatenate((g[arange(-128, 129-kk) + i], ones(kk)))
        else:
            grng = g[arange(-128, 129) + i]
        gn[i] = exp( -rho*pi*g[128]*sum((1.-y**2) * (y+r[i])/r[i] * (grng-1) * dx) )
</code></pre> <p>I used Python with the roots of SciPy and NumPy imported into the main namespace (or whatever it is that <code>ipython2 --pylab 'qt4'</code> does on my computer).</p>
940
statistical mechanics
Statistical mechanics: What is a &quot;microscopic realization&quot; of a system?
https://physics.stackexchange.com/questions/133720/statistical-mechanics-what-is-a-microscopic-realization-of-a-system
<p>What is a "microscopic realization" of a system?</p> <p>The context is statistical mechanics. The microscopic system consists of many atoms (too many to track individually) with an assigned probability density function <code>f(x,y,z,Vx,Vy,Vz,t)</code>. </p> <p>The macroscopic system consists of the atoms taken together, with macroscopic quantities computed as expectations of microscopic quantities. </p>
<p>Statistical mechanics relies on a probabilistic understanding of the world and as such one needs to define a probability space. In classical statistical mechanics the probability space consists of a domain which is the set of all possible <em>microstates</em> (that is, the position and velocity vectors of all the particles in the system) and a probability measure associated with this space. This probability measure ensures that all microstates incompatible with the constraints on your system (e.g. fixed number of particles, fixed volume, etc.) have probability zero. The set of constraints on your system defines what is called a <em>statistical ensemble</em>.</p> <p>In equilibrium classical statistical mechanics, the probability measure is invariant under the laws of classical mechanics and is therefore time independent. Traditionally, we tend to equate the operational notion of statistical ensembles and the probability measures they correspond to in equilibrium statistical mechanics, and we use essentially four measures/ensembles (microcanonical, canonical, grand canonical and canonical-isobaric).</p> <p>Now, the fact that we use probabilities is in essence no different from our use of probabilities when we cast a die, except that here the die has many, many faces (which correspond to all the microstates one can imagine).</p> <p>Now, in the same way that when you cast a die, a realization would be any number between 1 and 6 (for instance 3), then for a thermodynamic system with, say, fixed $(E,N,V)$, a microscopic realization can be any microstate compatible with these constraints (for instance all the particles in the corner of the box with one particle having all the energy $E$ of the system and the rest of the particles having no motion). You just have to imagine that you have in your hands a god-like device that can tell you the instantaneous positions and velocities of all the particles in your system. 
Each time you perform a measurement with this device, you will be observing a microstate compatible with the statistical ensemble and hence observing a microscopic realization of this ensemble.</p>
941
statistical mechanics
The BBGKY Hierarchy
https://physics.stackexchange.com/questions/133787/the-bbgky-hierarchy
<p>The collision term in the Boltzmann equation can be derived from the BBGKY hierarchy. </p> <p><a href="http://en.wikipedia.org/wiki/BBGKY_hierarchy" rel="nofollow">Wikipedia</a> says:</p> <blockquote> <p>In statistical physics, the BBGKY hierarchy [...] is a set of equations describing the dynamics of a system of a large number of interacting particles. The equation for an s-particle distribution function (probability density function) in the BBGKY hierarchy includes the (s + 1)-particle distribution function thus forming a coupled chain of equations.</p> </blockquote> <p>Does this mean, if I have a system consisting of s particles, that there is an interaction with a particle outside my system? So my system is not closed?</p>
<p>No, this is talking about correlations between s random particles. The s-particle distribution function is a 2*d*s (so 6s in 3 dimensional space) dimensional PDF that statistically describes s particles. For s=1, this is just the normal density in phase space. For s=2, this might show, for example, that more often than not two particles are traveling away from each other (maybe they just collided). A good source for this is Ch. 2 in "The Statistical Physics of Particles" by Mehran Kardar.</p>
942
statistical mechanics
How to derive the Bhatnagar-Gross-Krook collision integral from Boltzmann one?
https://physics.stackexchange.com/questions/154371/how-to-derive-the-bhatnagar-gross-krook-collision-integral-from-boltzmann-one
<p>Let's have the Boltzmann collision integral: $$ I_{coll} =\int d \sigma\, d^{3}\mathbf p_{1}\,(ff_{1} - f'f'_{1})|\mathbf v_{rel}|.\tag{1}\label{1} $$ How to transform $\eqref{1}$ into the BGK collision integral, $$ I_{coll} = \frac{1}{\tau}(f - f_{0})?\tag{2}\label{2} $$ Here the collision rate is $\frac{1}{\tau} = \frac{|v_{rel}|}{l} \sim v_{rel}N\sigma$. In particular, I don't understand how to "linearize" $\eqref{1}$ to get $\eqref{2}$.</p>
943
statistical mechanics
Number of states of a simple system
https://physics.stackexchange.com/questions/166369/number-of-states-of-a-simple-system
<p>I am working on a problem in which there are two energy states $E_{1}&lt;E_{2}$, and three different (i.e. distinguishable) particles. </p> <p>I cannot decide if the order of the particles matters. If it doesn't, then there are 8 states. If order does matter, there are 24. My problem is not knowing the logic necessary to make the decision. </p> <p>How many states does this system have? Is it ambiguous, given the information here, or is it definitely 8 or definitely 24?</p>
<p>First you have to specify what ensemble you are working in. In the case of the microcanonical ensemble (total energy fixed), the number of microstates is determined by the number of different ways the total energy can equal $E$. Since only two energies are possible, let $n_1$ particles be in state $E_1$ and $n_2$ in state $E_2$, with the constraints $n_1+n_2=3$ and $n_1 E_1+n_2 E_2=E$. The total number of microstates is then $\frac{3!}{n_1!\,n_2!}$.</p>
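The counting can be checked by brute force. A minimal sketch (with hypothetical energy values $E_1 = 1$, $E_2 = 2$): a microstate is simply an assignment of a level to each labeled particle, so there are $2^3 = 8$ of them, and grouping them by occupation numbers $(n_1, n_2)$ reproduces the multiplicities $3!/(n_1!\,n_2!)$.

```python
from itertools import product
from math import factorial

E1, E2 = 1, 2  # hypothetical energy values for the two levels

# A microstate is an assignment of a level to each labeled particle:
# (level of particle A, level of particle B, level of particle C)
states = list(product([E1, E2], repeat=3))
print(len(states))  # 2^3 = 8 -- there is nothing further to "order"

# Group states by occupation numbers (n1, n2) and compare with 3!/(n1! n2!)
for n1 in range(4):
    n2 = 3 - n1
    count = sum(1 for s in states if s.count(E1) == n1)
    assert count == factorial(3) // (factorial(n1) * factorial(n2))
    print((n1, n2), count)  # multiplicities 1, 3, 3, 1
```

A factor of 24 would arise only if one additionally permuted the three particles within a fixed assignment, which does not create a new physical configuration.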
944
statistical mechanics
how will the distribution of the no. of particles be in a system ,(N,V,E) if N tends to infinity?
https://physics.stackexchange.com/questions/171499/how-will-the-distribution-of-the-no-of-particles-be-in-a-system-n-v-e-if-n-t
<p>The MB distribution is followed if there are $N$ non-interacting and distinguishable particles. But if $N$ tends to infinity, why does the number of microstates reduce? Is there any peak in the graph?</p>
945
statistical mechanics
Statistical Mechanics deals with the same systems that Thermodynamics does?
https://physics.stackexchange.com/questions/174627/statistical-mechanics-deals-with-the-same-systems-that-thermodynamics-does
<p>Thermodynamics deals with "equilibrium states of macroscopic matter"; that is, considering macroscopic systems, there are states which can be characterized fully by a small number of measured degrees of freedom, and for such states we are not able, through macroscopic measurements, to see that the molecules and atoms are not really in equilibrium states. Those are the equilibrium states, and Thermodynamics deals with those states of macroscopic systems.</p> <p>Now I'm starting to study Statistical Mechanics and I wonder if the situation is the same. This question arises because the book talks about the "thermodynamic limit", which is attained when we let $E,V,N\to \infty$ with finite $u = E/N$ and $v = V/N$.</p> <p>This led me to think that Statistical Mechanics can deal with more systems than the ones thermodynamics is concerned with. So does Statistical Mechanics just drop the "macroscopic matter" part and deal with equilibrium states of general systems, or does it deal with the same systems considered in thermodynamics but from another viewpoint?</p>
<p>Indeed, statistical mechanics in principle deals with completely general systems.</p> <p><a href="https://en.wikisource.org/wiki/Elementary_Principles_in_Statistical_Mechanics" rel="nofollow">From the man himself who coined the term "statistical mechanics"</a>:</p> <blockquote> <p>The laws of thermodynamics, as empirically determined, express the approximate and probable behavior of systems of a great number of particles, or, more precisely, they express the laws of mechanics for such systems as they appear to beings who have not the fineness of perception to enable them to appreciate quantities of the order of magnitude of those which relate to single particles, and who cannot repeat their experiments often enough to obtain any but the most probable results. The laws of statistical mechanics apply to conservative systems of any number of degrees of freedom, and are exact.</p> </blockquote> <p>Moreover, stat mech applies to any states that involve uncertainty, not only equilibrium states. That said, however, the most popular applications of stat mech, and easiest to calculate, tend to involve thermodynamic systems, and so many textbooks will focus on that subcase.</p>
946
statistical mechanics
Strange vector matrix operation (in &quot;A Modern Course in Statistical Physics&quot; by Reichl)
https://physics.stackexchange.com/questions/182773/strange-vector-matrix-operation-in-a-modern-course-in-statistical-physics-by
<p>I am reading "A Modern Course in Statistical Mechanics" by Linda E. Reichl, where I encountered this notation:</p> <p>$$\Delta S = \bar g : \vec \alpha \vec \alpha$$</p> <p>Here $\bar g$ is $$ g_{i,j}=-{ \partial^2 S \over \partial A_i \partial A_j}\bigg|_{A_i = A_i^0, A_j = A_j^0 } $$ a matrix of second derivatives of the entropy $S$ with respect to the thermodynamic variables $A_i$, evaluated at their equilibrium values $A_i^0$. And the vectors $ \vec \alpha $ are defined as deviations of the thermodynamic variables from their equilibrium: $\alpha_i= A_i-A_i^0$.</p> <p>From the equation in question, $$ -\left({ \partial \Delta S \over \partial \alpha}\right)= \bar g \alpha$$ follows immediately. Maybe that helps someone?</p> <p>I have not yet found an explanation of $\Delta S = \bar g : \vec \alpha \vec \alpha$ anywhere, and I would be really glad if anyone could explain this. If you need more information, I am of course willing to provide it. </p>
<p>I think what they could mean is that $\vec{\alpha}\vec{\alpha}$ is a second rank tensor that is contracted with $\overline{g}$. I saw this notation being used in the context of electrodynamics before. It is used to get a simple notation for multi-dimensional Taylor series. So we get</p> <p>$$\Delta S=\overline{g}:\vec{\alpha}\vec{\alpha} \equiv \sum_{ij} -\frac{1}{2!}g_{ij} \alpha_i \alpha_j$$</p> <p>The minus takes into account that $g_{ij}$ contains the negative derivative, in contrast to the ordinary Taylor expansion. From this it would follow that</p> <p>$$\frac{\partial\Delta S}{\partial\vec{\alpha}} =(\partial_{\alpha_i}\Delta S)\ \vec{e}_i = -\sum_{kl} \frac{1}{2!}g_{kl} \ \partial_{\alpha_i}(\alpha_k \alpha_l)\ \vec{e}_i =-\sum_{kl}\frac{1}{2!}g_{kl} (\delta_{ik} \alpha_{l} + \delta_{il} \alpha_k)\ \vec{e}_i $$</p> <p>And therefore</p> <p>$$\frac{\partial\Delta S}{\partial\vec{\alpha}} = -\sum_{k} \frac{1}{2!}(g_{ik} \alpha_{k} + g_{ki} \alpha_k)\ \vec{e}_i = -\frac{1}{2!}(\overline{g}\vec{\alpha}+\overline{g}^T\vec{\alpha}) = -\overline{g}\vec{\alpha} $$</p> <p>which is the desired result. The last step follows from the symmetry of $\overline{g}$.</p> <p>I finally consulted my notes on electrodynamics, where I found the multivariate Taylor expansion of a scalar field $s(\vec{x})$ around $\vec{x}_0$ written as</p> <p>$$s(\vec{x}) = \sum_{n=0}^\infty \frac{1}{n!} (\vec{x}-\vec{x}_0)^n \vdots \nabla^n s(\vec{x}_0)$$</p> <p>where</p> <p>$$\nabla^n s(\vec{x}_0) \equiv (\partial_{i_1} \ldots \partial_{i_n}s(\vec{x}_0))e^{(i_1)}\ldots e^{(i_n)}$$</p> <p>and</p> <p>$$(\vec{x})^n\equiv x^{i_1}\ldots x^{i_n} e_{(i_1)}\ldots e_{(i_n)}$$</p> <p>Note that here co- and contravariant components are distinguished and the summation convention has been used, so that $\vdots$ represents the $n$-fold contraction of the two tensors.</p>
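The sign bookkeeping above can be sanity-checked numerically. A sketch (using a random symmetric matrix as a stand-in for $\overline{g}$, which is an assumption of the example, not part of the answer): define $\Delta S = -\frac{1}{2}\sum_{ij} g_{ij}\alpha_i\alpha_j$ and verify by finite differences that $-\partial \Delta S/\partial \vec{\alpha} = \overline{g}\vec{\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
g = A + A.T                      # random symmetric stand-in for the matrix g_ij
alpha = rng.normal(size=4)       # deviations from equilibrium

def delta_S(a):
    # Delta S = -(1/2!) sum_ij g_ij a_i a_j  (the double contraction g : a a)
    return -0.5 * a @ g @ a

# Central finite-difference gradient of Delta S with respect to alpha
eps = 1e-6
grad = np.array([
    (delta_S(alpha + eps * np.eye(4)[i]) - delta_S(alpha - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])

# -d(Delta S)/d(alpha) should equal g @ alpha, as derived above
assert np.allclose(-grad, g @ alpha, atol=1e-5)
print("gradient identity verified")
```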
947
statistical mechanics
Bending moment and Shear force
https://physics.stackexchange.com/questions/187147/bending-moment-and-shear-force
<p>Do the bending moment and shear force of a beam depend on its cross-sectional dimensions? Since none of the diagrams I have drawn so far involve any cross-section details, I think they do not depend on them and have no influence on the shear force and bending moment diagrams.</p>
<blockquote> <p>Do bending moment and shear force of a beam depend on it's cross sectional dimentions??</p> </blockquote> <p>I never underestimate the power of experiment; indeed, physics and all of science is rooted deeply in experiment. Is it easier for me to snap a foot-long twig or the one-foot-high stump of a big tree that sticks out of the ground after someone has chopped it off, and what's different?</p> <p>Yes, of course, the cross-sectional dimensions matter. The beam's resistance to bending is proportional to the second moment of area, which you can find at <a href="http://en.wikipedia.org/wiki/Second_moment_of_area" rel="nofollow">http://en.wikipedia.org/wiki/Second_moment_of_area</a></p>
948
statistical mechanics
How Statistical Physics?
https://physics.stackexchange.com/questions/187970/how-statistical-physics
<p>It's a common fact that in physics we use statistics (or maybe probabilities) to describe the behaviour of a system. It was from the statistical analysis of a system that quantum statistics arose, and then the theory of quantum mechanics began.</p> <p>How is it possible to make such descriptions and to construct theories fully capable of predicting and describing the (probabilistic) behaviour of macroscopic systems, quantum or classical? Or, to be more specific:</p> <p>What are the attributes that hold for a system with many particles that allow us to study their whole behaviour as a system, with statistical physics?</p>
<p>We can use statistics by being willing to ask different questions.</p> <p>No individual particle has a pressure or a temperature. But we can ask about subsystems with particular pressures or temperatures.</p> <p>So we group collections into subsystems that are large enough and regular enough to have collective properties like pressure and temperature.</p> <p>Then we can use statistical methods to find out how constraints on the whole subsystem (such as volume or total energy) can affect the collective properties like temperature or pressure.</p> <p>This is a bit like cheating. We decide to only focus on things like temperature and pressure that only exist for large subsystems and then we see what kinds of constraints on the large subsystem such as total energy can affect it. And the real answer is that the total energy alone does not determine exactly what happens but we can set it up so that for a large enough system it usually acts in a certain way. So then we focus only on the usual behaviour.</p> <p>Like all science we study it because it works and is worthwhile. Besides, it's not like we'd actually be aware of the exact state of a subsystem if the subsystem had a truly huge number of components anyway.</p>
949
statistical mechanics
Boltzmann distribution for angles?
https://physics.stackexchange.com/questions/189522/boltzmann-distribution-for-angles
<p>Consider a system whose sole degree of freedom is an angle $\theta$ that goes from $0$ to $2\pi$. Let $E(\theta)$ be its energy function. Obviously, $E(\theta)$ is $2\pi$-periodic. What's the general form for the Boltzmann distribution for $\theta$? Is it just: $P(\theta)\propto e^{-E(\theta)/kT}$? Or is there some issue stemming from the fact that $\theta$ is an angle?</p>
<p>There's no issue with the energy having an angular dependence. This is similar to the case of a spin in a magnetic field, in which the energy is</p> <p>$$E = -\mathbf{\mu \cdot B}$$</p> <p>or </p> <p>$$E (\theta) = -\mu B \cos(\theta)$$</p> <p>This poses no problem. As you say, the Boltzmann factor is $e^{\mu B \cos(\theta)/kT}$, and the partition function is found by integrating the Boltzmann factor w.r.t. $\theta$ from $0$ to $2 \pi$. </p> <p>Others are correct in suggesting that in cases of a continuous energy spectrum, you're most often integrating the probability density over a range of interest to find what the probability is that, say, the spin is aligned with the field to within 0.1 radians. But this is common in statistical physics, and there's nothing special about the case of an angular dependence.</p>
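As a concrete check of the spin-in-field example (with an arbitrary example value for the dimensionless ratio $x = \mu B/kT$), numerical quadrature over $\theta$ reproduces the known closed forms $Z = 2\pi I_0(x)$ and $\langle\cos\theta\rangle = I_1(x)/I_0(x)$; the modified Bessel functions are computed here from their power series to keep the sketch dependency-free.

```python
import numpy as np
from math import factorial, pi

def bessel_I(nu, x, terms=30):
    # modified Bessel function I_nu(x) from its power series (integer nu)
    return sum((x / 2) ** (2 * k + nu) / (factorial(k) * factorial(k + nu))
               for k in range(terms))

x = 1.5                                # example value of mu*B/kT
n = 4096
theta = 2 * pi * np.arange(n) / n
w = np.exp(x * np.cos(theta))          # Boltzmann factor e^{mu B cos(theta)/kT}

Z = w.sum() * (2 * pi / n)             # partition function by quadrature
assert np.isclose(Z, 2 * pi * bessel_I(0, x))   # Z = 2*pi*I_0(x)

mean_cos = (np.cos(theta) * w).sum() * (2 * pi / n) / Z
assert np.isclose(mean_cos, bessel_I(1, x) / bessel_I(0, x))
print(Z, mean_cos)
```

The same quadrature works for any $2\pi$-periodic $E(\theta)$, which is the point of the answer: the angular variable introduces no special difficulty.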
950
statistical mechanics
The different in wear test when using Aluminum and Steel disc in pin on disc apparatus
https://physics.stackexchange.com/questions/203401/the-different-in-wear-test-when-using-aluminum-and-steel-disc-in-pin-on-disc-app
<p>In a wear test with a pin-on-disc apparatus, I found that the mass loss of the pin when using an aluminum disc is higher than when using a steel disc under the same conditions (pressure, velocity and contact time). Can anyone explain this behaviour to me and give me the reason?</p>
<p>Sometimes, when you have a soft material like aluminum, and a brittle material like asbestos, in a pin/disk configuration, particles of asbestos break off and become embedded in the aluminum. And now you have created a very abrasive disk (a bit like diamond particles in phosphor bronze), and you will wear the pin much more rapidly while apparently protecting the aluminum.</p> <p>See for example <a href="http://www.researchgate.net/publication/248296418_An_energetic_approach_to_abrasive_wear_of_a_martensitic_stainless_steel" rel="nofollow">Pamuk et al</a></p> <p>From the introduction:</p> <blockquote> <p>Abrasive wear is the most common type of wear that causes failure of machine elements. Examinations of abraded surfaces revealed presence of embeded particles and grooves elongated along the sliding direction. This indicates that, there are two sequential stages of an abrasion process. In the first stage, asperities on the hard surface and/or hard abrasive grains penetrate into the soft material surface and then in the second stage, they grind the surface in the sliding direction. </p> </blockquote>
951
statistical mechanics
Energy transfer in form of work or heat?
https://physics.stackexchange.com/questions/205394/energy-transfer-in-form-of-work-or-heat
<p>Suppose a system A which is a vessel of water with two electrodes, connected by a resistor, placed in the water. </p> <p>If you apply voltage to the electrodes, energy is transferred from the battery (not included in system A) to system A.</p> <p>I read in a book that the form of energy transferred is work, and not heat. But basically what happens is that the resistor heats up and transfers heat to the water, am I wrong? </p>
<p>Sadly, as @Brionius comments, you must have misunderstood the book.</p> <p>The 1st law of thermodynamics, which sets up the energy balance, says that:</p> <p>$$\Delta U=Q-W$$</p> <p>$U$ is internal energy, so $\Delta U$ covers the change in the total <em>contained</em> energy in the system. Both <strong>work</strong> $W$ and <strong>heat</strong> $Q$, as you said yourself, are methods of <em>adding</em> energy to the system.</p> <p>In your case heat is added to the system, so the internal energy rises, and $\Delta U = Q$. There is no work done, $W=0$, if nothing moves (no expansion, motion, rotation, etc).</p>
952
statistical mechanics
equilibrium state of a system in statistical mechanics
https://physics.stackexchange.com/questions/211202/equilibrium-state-of-a-system-in-statistical-mechanics
<p>Why do we consider the macrostate with the maximum number of microstates (complexions) to be the equilibrium state of a system in statistical physics?</p>
<p>In statistical mechanics all microstates are considered to be equally likely. This means that the most likely macrostate is the one that contains the most microstates. This macrostate is considered to be the equilibrium state because once the system is there, it is unlikely to move away from that macrostate, as the vast majority of microstates are contained in that macrostate.</p>
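A toy model (my own illustration, not from the answer) makes "unlikely to move away" quantitative: for $N$ particles that can each sit in the left or right half of a box, the multiplicity of the macrostate "$n$ on the left" is $\binom{N}{n}$, which peaks sharply at $n = N/2$.

```python
from math import comb

N = 1000  # particles, each independently in the left or right half

# Multiplicity of the macrostate "n particles on the left"
omega = [comb(N, n) for n in range(N + 1)]

# The most likely macrostate is the evenly split one...
assert max(range(N + 1), key=lambda n: omega[n]) == N // 2

# ...and it dwarfs even a mild imbalance: compare 500/500 with 400/600
ratio = omega[N // 2] / omega[2 * N // 5]
print(ratio)  # hundreds of millions, already for N = 1000
assert ratio > 1e8
```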
953
statistical mechanics
Difference between macroscopic variable, macroscopic observable, parameter and generalized force in Thermodynamics
https://physics.stackexchange.com/questions/228668/difference-between-macroscopic-variable-macroscopic-observable-parameter-and-g
<p>When I read books about statistical physics, names like "macroscopic variable / observable", "parameter of the macroscopic state" and "generalized force" are often used, and I want to know what the difference is, and whether there are definitions for that. Also, I want to know of what type the common quantities V, p, N, T, S, U, $\mu$ ... are.</p> <p>In many cases, the books I read (German literature) begin by describing a Hamiltonian system, described by the Hamilton function $H(\Gamma)$ on the phase space, which has so many dimensions that even numerical calculations are not possible, and thus one searches for another way of describing the system, by giving distributions and calculating mean values. </p> <p>In the books that I worked with, it is stated that the Hamilton function can additionally depend on external parameters, which affect the energy of the microstates (one microstate is one point in the phase space of the system). </p> <p>Question 1: Is N such a parameter? I can't imagine how the Hamilton function of a classical system can depend on N in a numerical way, where N denotes the number of particles in the system. The same goes for V: how can the Hamilton function depend on one number V? I could imagine that the Hamilton function contains some external potential whose shape could depend on some variables. </p> <p>Next, observables: I read about observables that are mean values of functions on the phase space: for every function $O(\Gamma)$ I can calculate and measure $&lt;O&gt; = \int d\Gamma \rho(\Gamma) O(\Gamma)$.</p> <p>Question 2: For systems that don't have a sharp volume, like in the pressure ensemble, the volume is given as a mean value, dependent on the distribution of the microstates of the system. But I can't imagine something like a "volume function" on the phase space. How do I measure that? Is the volume of a microstate the volume of the smallest region that contains every particle of the system? 
If so, what shape does it have? Is it a square? I could think of arbitrary possibilities to assign a volume to a given set of coordinates of particles, and that is my problem. </p> <p>Next, the concept of generalized forces was introduced to me as the mean value of the derivatives of the Hamiltonian with respect to the external parameters, so for every external parameter $X_i$ there is a corresponding force $F_i = \int d \Gamma \rho (\Gamma) \frac{\partial H}{\partial X_i}$. </p> <p>Question 3: Is $\mu$ a generalized force, like the ones described above? I can't imagine how I should differentiate H with respect to the number of particles, because, as stated in question 1, I don't know how H depends on N in a differentiable way. </p> <p>Further questions:</p> <p>Can I calculate a mean value of the system for every parameter of the system? Which of those quantities is entropy? It's not a mean value, but it also isn't a parameter in the Hamilton function, as opposed to the energy U, which is the mean value of the Hamilton function. </p> <p>Last question (still important): How do I decide which of those macroscopic quantities are natural variables of the system, and which of them are thermodynamic potentials? Is there any convention?</p>
<p>The general principle is that macroscopic variables and macrostates are not "real" from the microscopic, Hamiltonian perspective. They're things that we, human beings on the scale of $10^{23}$ atoms, make up based on what we can observe. </p> <p>For example, let's take pressure. Given a microstate $\Gamma$, you can't calculate the pressure, because such a thing doesn't exist. When we measure a pressure, we are really measuring the time average of a force over a (microscopically) long time. The microstate knows nothing about this.</p> <p>Said another way, the ergodic hypothesis says a time average is an ensemble average, so a pressure reading <em>is</em> an ensemble average. Therefore, it makes no sense to talk about the pressure of a single member of an ensemble.</p> <p>Similarly, entropy is macroscopically defined. There is no way to calculate "the entropy of a microstate" because entropy is the amount a macroscopic observer doesn't know about an ensemble of microstates.</p> <p>You might be confused because all of the above discussion falls apart if you consider systems of only a few particles. And that makes sense, because these macroscopic, thermodynamic variables are not defined for such systems. Thermodynamics <em>is</em> the $N \to \infty$ limit. To apply it, you use real-life, $N &gt; 10^{23}$ intuition.</p>
954
statistical mechanics
How is $ \left(1-\frac{p^2}{2mE}\right)^{3N/2-2} =\; \exp\left(-\frac{3N}{2}\frac{p^2}{2mE}\right)\;?$
https://physics.stackexchange.com/questions/227654/how-is-left1-fracp22me-right3n-2-2-exp-left-frac3n2-fra
<p>How is $$ \left(1-\frac{p^2}{2mE}\right)^{3N/2-2} = \exp\left(-\frac{3N}{2}\frac{p^2}{2mE}\right)$$ (Kardar, Statistical Physics of Particles, page 107)</p> <p>in the large $E$ limit? Here $N$ is the number of particles, of the order of $10^{23}$, and $E$ is the total energy. I roughly guess that it should be $\exp(-\frac{p^2}{2m})$ since both $N$ and $E$ can be treated as infinitely large.</p> <hr> <p>Update: a hint to the solution is provided in the comments.</p>
<p>Adding a comment needs 50 reputation, and I have only 46 now, so I write my opinion here. I have read the textbook; the original formula is $$p(\vec{p_1})=(1-{{\vec{p_1}^2}\over {2mE}})^{3N/2-2}\cdots\cdots$$</p> <p>So $\vec{p_1}$ is the momentum of only one particle in the ensemble. Considering that the system has very large $N$, this is only a tiny proportion of the total $E$, which makes the ${{\vec{p_1}^2}\over {2mE}}$ term approach 0.</p> <p>Then with the above comments of the other guys, you can get the result. Here I think $3N/2$ makes no difference compared with $3N/2-2$ because $N$ is large.</p>
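A quick numerical check (a sketch in arbitrary units, assuming a fixed energy per particle so that $E$ grows with $N$) confirms that the two sides agree in the large-$N$ limit, with the relative error shrinking like $1/N$:

```python
from math import exp

a = 0.8   # p^2/(2m) for the singled-out particle, arbitrary units
e = 1.0   # energy per particle, so the total energy E = N*e grows with N

for N in (10**2, 10**4, 10**6):
    E = N * e
    lhs = (1 - a / E) ** (1.5 * N - 2)   # (1 - p^2/(2mE))^(3N/2 - 2)
    rhs = exp(-1.5 * N * a / E)          # exp(-(3N/2) p^2/(2mE))
    print(N, lhs, rhs, abs(lhs - rhs) / rhs)  # relative error ~ 1/N
```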
955
statistical mechanics
Partition function of primon bosonic gas
https://physics.stackexchange.com/questions/229111/partition-function-of-primon-bosonic-gas
<p>Can we interpret the <strong>Euler product formula</strong> " $\sum\frac{1}{n^s} = \prod_{p\;\mathrm{prime}} \frac{1}{1-p^{-s}} $ " in a stat. physical sense, as a product of single-particle system <em>partition functions</em>, considering them <em>statistically independent</em> ?</p>
<p>Umm... OK, well let's see what happens.</p> <p>Let's let $s = \beta\varepsilon$, where $\varepsilon$ is some fixed energy and $$ Z\left(\frac{s}{\varepsilon}\right) = \zeta(s). $$</p> <p>To get some kind of idea of what kind of system $Z$ describes, we need to find the energy levels of the system, and to do that we need to express $Z$ in the form $\sum_{i} e^{-\beta E_i}$. In general there will not be a unique way to do this, but Euler's formula gives us a couple of obvious ways to try.</p> <p>The left-hand side of Euler's formula gives us \begin{align} Z(\beta) &amp; = \sum_n n^{-\varepsilon \beta}\\ &amp; = \sum_n e^{-\beta \varepsilon \ln n}. \end{align}</p> <p>So we have some system with logarithmically spaced energy levels.</p> <p>The right-hand side we are looking to interpret as a collection of independent, weakly interacting, distinguishable systems with partition functions $$ Z_p\left(\frac{s}{\varepsilon}\right) = \frac{1}{1-p^{-s}}. $$</p> <p>Again the systems described by $Z_p$ will not in general be unique, but there is an obvious expansion into a geometric series \begin{align} Z_p(\beta) &amp;= \sum_n p^{-\beta\varepsilon n}\\ &amp; = \sum_n e^{-\beta\varepsilon n \ln p}. \end{align} This does at least have a simple interpretation; it is the partition function of a harmonic oscillator with $\hbar\omega_p = \varepsilon \ln p$.</p> <p>Euler's formula tells us that $$ Z(\beta) = \prod_{p\;\mathrm{prime}}Z_p(\beta), $$ so we would expect the system with logarithmically spaced energy levels to have the same macroscopic properties as an infinite collection of harmonic oscillators with frequencies in ratios of the logarithms of the primes. 
Indeed, \begin{align} e^{-s\ln n} &amp;= e^{-s\ln p_1^{a_1}p_2^{a_2}\dots}\\ &amp;=e^{-sa_1\ln p_1}e^{-sa_2\ln p_2}\dots \end{align} This is precisely the form of the terms obtained by multiplying out the $Z_p$s, which shows that the two systems have in fact got the same energy levels. (This is essentially a rewriting of the proof of Euler's formula on the wiki page.)</p> <p>Now what does this tell us? That's a good question. I cannot think of a naturally occurring system with logarithmically spaced energy levels, nor can I think where you would find a collection of oscillators with frequencies in ratios of $\ln p$, so there doesn't seem to be much physical insight here that I can see. There may also be other ways to expand $Z$ which give different energy levels, which may be more interesting. Somebody else may know something I don't.</p>
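Euler's formula itself is easy to verify numerically. A sketch for $s = 2$ (where $\zeta(2) = \pi^2/6$ is known in closed form): the truncated product of the single-oscillator partition functions $Z_p = (1 - p^{-s})^{-1}$ over primes up to $10^5$ already matches to within about $10^{-5}$.

```python
from math import pi

def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

s = 2.0
Z = 1.0
for p in primes_upto(10 ** 5):
    Z *= 1.0 / (1.0 - p ** -s)   # product of single-oscillator Z_p

# Euler: the product over primes equals zeta(2) = pi^2/6
print(Z, pi ** 2 / 6)
assert abs(Z - pi ** 2 / 6) < 1e-4
```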
956
statistical mechanics
Does fluctuation really occur in equilibrium as its microstates are allowed to occur by Fundamental Postulate in equilibrium?
https://physics.stackexchange.com/questions/234396/does-fluctuation-really-occur-in-equilibrium-as-its-microstates-are-allowed-to-o
<p>The Fundamental Postulate says:</p> <blockquote> <p>In <em>equilibrium</em>, all accessible microstates are equally likely.</p> </blockquote> <p>Accessible means having the same energy (right?).</p> <p>Let a container be taken full of gas, having number of particles $N$, volume $V$ and energy $E$; the system is isolated.</p> <p>At equilibrium, the system would be in the macrostate with the maximum multiplicity, that is, the largest number of microstates; that would correspond to the gas totally dispersed over the whole volume $V$.</p> <p>However, consider the case when the gas gets confined to the left half of the container, that is, to volume $V/2$; this macrostate would have a much smaller multiplicity than the former one, that is $$\Omega(N,V,E)\gt \Omega\left(N,\frac{V}{2}, E\right) $$</p> <p>However, as the microstates corresponding to $(N,V/2,E)$ have the same energy $E$, all the microstates of $(N,V,E)$ and $(N,V/2,E)$ are equally likely <strong>in equilibrium</strong>. </p> <p>But, since the microstates corresponding to $(N,V/2,E)$ don't represent the maximum-entropy macrostate, all the microstates $\Omega\left(N,\frac{V}{2}, E\right)$ must be called a <strong>fluctuation</strong>.</p> <p>But <strong>fluctuation</strong>, in <em>equilibrium</em>?</p> <p>At first, I couldn't believe this; how can fluctuation happen in <em>equilibrium</em>?</p> <p>But according to the Fundamental Postulate, all microstates having the same energy are equally likely in <em>equilibrium.</em> This means the microstates of the <strong>fluctuation</strong> can be exhibited by the system in <em>equilibrium</em>, as they are equally likely along with the other microstates having the same energy $E$.</p> <p>How can this happen? 
How can <strong>fluctuation</strong> microstates occur in <em>equilibrium</em>, in spite of having the same energy $E$?</p> <p>I thought I was mistaken; only the microstates corresponding to $(N,V,E)$ can occur in <em>equilibrium</em>. But also, I can't deny the fact that the microstates having the same energy $E$ are equally likely.</p> <p>Also, as written <a href="http://www.jamia-physics.net/lecnotes/statmech/lec01.pdf" rel="nofollow">here</a>:</p> <blockquote> <p>We will first consider an isolated system, typically a gas enclosed in a box, which is thermally insulated. So, any time evolution of the system will be subject to the constraint that the total energy remains constant. Left for a long time, it is believed to be in equilibrium. We further assume that given an isolated system in <em>equilibrium</em>, it is found with equal probability in each of its accessible microstates. This is the postulate of equal a priori probability. ... Now each macrostate comprises numerous microstates. For example, all the gas confined to only one half of the box is a macrostate. There is a huge number of ways this can happen, by various arrangements of particles and their momenta. The gas uniformly occupying the whole volume of the box is another macrostate. And again, there are a huge lot of microstates associated with this macrostate. <strong>Now each microstate is equally probable, but we never actually see a gas occupying only one half of its container.</strong> Why does that happen? 
It happens because the number of microstates associated with the gas occupying the whole volume is overwhelmingly large compared to the number of microstates associated with the gas occupying only one half of the box.</p> </blockquote> <p>Notice the words <strong>'equilibrium'</strong> and <strong>'each'</strong>, and the follow-up question <strong><em>'Why does that happen?'</em></strong>; the author clearly means that at <em>equilibrium</em>, since all the microstates are equally likely, not only can the macrostate having the gas uniformly spread over the entire volume appear; the macrostate corresponding to the gas confined to the left half of the container can also appear in <em>equilibrium</em>; it is only that the former is exhibited most of the time.</p> <p>So, as the Fundamental Postulate permits all the microstates having the same energy $E$ to occur equally likely in <em>equilibrium</em>, it is inferred that the <strong>fluctuation</strong> occurs in <em>equilibrium</em>, since the microstates $\Omega(N,V/2,E)$ are equally likely in <em>equilibrium</em> :(</p> <p>So, my question is:</p> <ul> <li>Does <strong>fluctuation</strong> really occur in <em>equilibrium</em>, as its microstates are permitted to occur equally likely in the system at <em>equilibrium</em>? Or is that wrong? </li> </ul> <p>What am I actually missing in utilising the Fundamental Postulate and <strong>fluctuation</strong>? Can anyone please help me clear up my confusion?</p>
<blockquote> <p>At equilibrium, the system would be in that macrostate which would have the maximum multiplicity or the largest number of microstates; that would correspond to gas totally dispersed over the whole volume $V\;.$</p> </blockquote> <p>This is wrong, and based on a misunderstanding of terminology. You have microstates, and macrostates. A macrostate assigns a probability to each microstate. Some macrostates are in thermal equilibrium. Those states assign zero to many microstates, and they assign the same non-zero number to all the other microstates. That's it. That fundamental terminology needs to be understood.</p> <blockquote> <p>However, as the microstates corresponding to $(N,V/2,E)$ have the same energy $E\;_,$ all the microstates of $(N,V,E)$ and $(N,V/2,E)$ are equally likely <strong>in equilibrium</strong>. </p> </blockquote> <p>If only the volume $V/2$ were available, then you would assign the probability $1/\Omega\left(N,\frac{V}{2}, E\right)$ to the microstates of energy $E$ where they are all on the left (and zero to all the other microstates). If the full volume $V$ were available, then you would assign the probability $1/\Omega\left(N,V, E\right)$ to <em>all</em> the microstates of energy $E$ (and zero to all the other microstates).</p> <p>In the latter case, since all of the microstates of energy $E$ are available (including the ones where they are all on the left) you might get one of those where they are all on the left. But the probability of getting one of them is $\Omega\left(N,\frac{V}{2}, E\right)/\Omega\left(N,V, E\right)\ll 1$ because there are $\Omega\left(N,\frac{V}{2}, E\right)$ many of them and each has a probability of $1/\Omega\left(N,V, E\right).$</p> <blockquote> <p>But, since, the microstate corresponding to $N,V/2,E$ </p> </blockquote> <p>That's not meaningful. There were $\Omega\left(N,\frac{V}{2}, E\right)$ many states of energy $E$ where they are all on the left. 
You haven't specified which one you are talking about.</p> <blockquote> <p>$\Omega\left(N,\frac{V}{2}, E\right)$ must be called <strong>fluctuation</strong>.</p> </blockquote> <p>No. $\Omega\left(N,\frac{V}{2}, E\right)$ is a number; it's like 5 or 10, except much larger.</p> <blockquote> <p>Does <strong>fluctuation</strong> occur really in <em>equilibrium</em> as its microstates are permitted to occur equally likely in the system at <em>equilibrium</em>? Or is it wrong? </p> </blockquote> <p>When you are in the larger macrostate you have a non-zero (but super tiny) probability of getting a microstate where they are all on the left side. Just like if you flip four coins, the outcomes $HHHH,$ $HHHT,$ $HHTH,$ $HHTT,$ $HTHH,$ $HTHT,$ $HTTH,$ $HTTT,$ $THHH,$ $THHT,$ $THTH,$ $THTT,$ $TTHH,$ $TTHT,$ $TTTH,$ $TTTT$ are all equally likely, but there is only a $1/16$ chance of getting a state like $HHHH$, and if you had a thousand coins then there would only be a $2^{-1000}$ chance of getting all heads. That's pretty small. If you had $10^{24}$ coins, you realistically wouldn't see them all come out heads: the probability is nonzero, but $2^{-10^{24}}$ is quite small.</p> <blockquote> <p>I've read <strong>To each microstate there corresponds exactly one macrostate.</strong></p> </blockquote> <p>That only holds sometimes. It definitely holds when two macrostates have different total energy or different numbers of particles. But the passage you quoted is literally talking about a case where the all-on-the-left configuration belongs to these two different macrostates. Accessibility isn't determined solely by energy; it is determined by whether a microstate is consistent with the macroscopic state variables such as energy, volume, pressure, temperature.</p> <blockquote> <p>Have you seen the article I quoted in my question above? 
It says <code>all the gas confined to only one half of the box, is a macrostate</code> and then says <code>The gas uniformly occupying the whole volume of the box, is another macrostate. And again, there are a huge lot of microstates associated with this macrostate. Now each microstate is equally probable, but we never actually see a gas occupying only one half of its container.</code> - they are two different macrostates, aren't they?</p> </blockquote> <p>Let's say you had a big box of volume $V$ and you placed a thin barrier down the middle and filled the left side (of volume $V/2$) with gas of total energy $E$. Then there would be a macrostate describing that, and there would be a probability of a given microstate (out of many many possible). Now if you very quickly moved the barrier out of the way, so fast that no gas was touching it while you moved it, then you could argue that the microstate changed or it didn't (it happened so fast no particle had time to move). And you could argue that the macrostate changed or it didn't. And in both cases the argument would be purely semantic. You know zero particles moved.</p> <p>Now if the first macrostate was in thermal equilibrium then each of the many microstates was equally likely. And whichever one it was in at that moment, that microstate is one of the vastly many more microstates available to the volume $V$ system. But it <strong>is</strong> one of them. And there are way way way more microstates in the volume $V$ system. So if you tried to look again later and hoped to find it all on the left side again, the chance would be $\Omega\left(N,\frac{V}{2}, E\right)/\Omega\left(N,V, E\right)\ll 100\%.$ It is a nonzero chance. But you aren't going to see it.</p> <p>Make sure you understand that example and the math, then reread your passage again. Whether something is "the same" microstate or "the same" macrostate doesn't affect what the probabilities are. It is what it is. 
When someone says the particles can be anywhere in the large box, they could be anywhere; they could even be positioned so all are on the left side. But it's so <em>so</em> <strong>so</strong> unlikely when you have $10^{24}$ particles. So the chance is small.</p> <blockquote> <p>Wouldn't the occurrence of microstates corresponding to $E,V/2, N$ be <strong>fluctuation</strong> from the macrostate having the greatest multiplicity?</p> </blockquote> <p>If you describe your macrostate by just two numbers, $V$ (the size of the box) and $E$ (the total energy), then a configuration with all your particles on the left, each with a particular velocity, is the same collection of positions and velocities for the two macrostates. But for the macrostate with more volume available such a microstate is highly improbable in equilibrium, since so few states are like that and all the states are equally likely.</p> <p>Now, you could if you wanted describe your macrostate with more numbers. You could divide your box into regions and have a density of particles in each region and an average energy per particle in that region. And then you could ask which macrostates of that are equilibrium ones. And for that, it would not be equilibrium to have totally different densities and average energies in different parts of a box. And you could reach such a state through a rare transition. But that is a different question (though it might address what you want to know).</p>
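<p>The coin arithmetic above is easy to check numerically. A minimal sketch (mine, not part of the original exchange), treating each particle's left/right placement as a fair coin, so that the positional part of $\Omega\left(N,\frac{V}{2},E\right)/\Omega\left(N,V,E\right)$ is $2^{-N}$:</p>

```python
import math

# Each particle independently lands left or right with probability 1/2,
# so P(all N on the left) = (1/2)**N -- the positional part of
# Omega(N, V/2, E) / Omega(N, V, E).
def log10_p_all_left(n):
    return -n * math.log10(2)

assert abs(0.5 ** 4 - 1 / 16) < 1e-15   # the four-coin example above
for n in (4, 1000, 10 ** 24):
    print(f"N = {n}: P(all left) = 10^({log10_p_all_left(n):.3g})")
```

<p>For $N = 4$ this reproduces the $1/16$ of the coin example; for $N = 10^{24}$ the exponent is of order $-10^{23}$, which is why the fluctuation is never actually observed.</p>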
957
statistical mechanics
Negative amount of particles in a grand canonical ensemble
https://physics.stackexchange.com/questions/210152/negative-amount-of-particles-in-a-grand-canonical-ensemble
<p>Knowing that, in the $\mu$-canonical (or micro-canonical) and canonical ensembles, the number of particles is held constant and usually reflects the actual number of particles, in which case $N_{MC} \simeq N_{actual}$, and also $N_C \simeq N_{actual}$. Or otherwise </p> <p>$\text{sgn}(N_{MC}) = \text{sgn}(N_{actual})$</p> <p>$\text{sgn}(N_C) = \text{sgn}(N_{actual})$</p> <p>where $\text{sgn}(x)$ is the sign of x. But here, $N_{GC}=-\left(\frac{\partial\Omega}{\partial\mu} \right)_{VT}$, defined as a derivative and may not reflect the actual number of particles, and $N_{GC}$ can even be negative, given its definition.</p> <p>For this reason, I would tend to believe that a system where </p> <p>$\text{sgn}(N_{GC}) \neq \text{sgn}(N_{actual})$</p> <p>could exist, that is, $N_{GC}&lt;0$ while $N_{actual}\geq 0$, and hence negative numbers of particles are a physical reality, albeit only in a grand canonical ensemble. Or would even a grand canonical ensemble necessarily imply</p> <p>$\text{sgn}(N_{GC}) = \text{sgn}(N_{actual})$</p> <p>i.e., $N_{GC}&lt;0$ also imply $N_{actual}&lt;0$?</p>
<p>No, this cannot happen, simply because in general $N_G \ge 0$, which can easily be proven in a quantum-statistical setting.</p> <p>$N_G$ is defined as $N_G = \frac 1 {Z_G} \mathrm{Tr}(\rho_G N)$; now consider the following expansion of the trace in terms of a complete set of states $\left|n\alpha\right&gt;$ which are eigenstates of the number operator with the eigenvalue $n$: \begin{align*} N_G &amp;= \frac 1 {Z_G} \mathrm{Tr}(\rho_G N) = \frac 1 {Z_G} \sum_{n\alpha, m\beta} \left&lt;n\alpha \right| \rho_G \left|m\beta\right&gt;\left&lt;m\beta\right| N \left|n\alpha\right&gt; \\ &amp;= \frac 1 {Z_G} \sum_{n\alpha, m\beta} \left&lt;n\alpha \right| \rho_G \left|m\beta\right&gt;n\delta_{nm}\delta_{\alpha\beta} \\ &amp;= \frac 1 {Z_G} \sum_{n\alpha} n \left&lt;n\alpha\right| \rho_G \left| n\alpha \right&gt; \ge 0. \end{align*} In the last step the inequality follows trivially, as a density matrix is a positive operator ($\left&lt;a\right|\rho\left|a\right&gt; \ge 0$ for all $\left|a\right&gt;$). If it were not a positive operator, it would not represent a mixed state, as it would then assign negative probability to some pure state (which is in contradiction to the axioms of probability).</p> <p>Also note that $N_\text{actual}$ is not a valid concept for a system in grand canonical equilibrium: in equilibrium there is no actual particle number, since the grand canonical equilibrium is a mixed state. So again, the concept $N_\text{actual}$ cannot be well defined and is only confusing, not helpful, especially as we usually use the grand canonical ensemble for large systems where we fix $\mu$ by requiring a fixed particle number/density (which will never be negative, as that has no physical meaning).</p>
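<p>A concrete illustration of the positivity argument (my sketch, not from the original answer): for a single fermionic mode the trace runs over just $n = 0, 1$, and the positivity of the Boltzmann weights forces $N_G \in [0,1]$ for any sign of $\mu$:</p>

```python
import math

def n_grand(beta, eps, mu):
    """Grand-canonical <N> for one fermionic mode: states n = 0, 1."""
    weights = [math.exp(-beta * n * (eps - mu)) for n in (0, 1)]
    Z = sum(weights)                     # grand partition function
    return sum(n * w for n, w in zip((0, 1), weights)) / Z

# <N> stays in [0, 1] whatever the sign of the chemical potential:
for mu in (-5.0, 0.0, 5.0):
    avg = n_grand(beta=1.0, eps=1.0, mu=mu)
    assert 0.0 <= avg <= 1.0
    print(f"mu = {mu:+.1f}: <N> = {avg:.4f}")
```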
958
statistical mechanics
Derivation of Fermi-Dirac Distribution
https://physics.stackexchange.com/questions/240623/derivation-of-fermi-dirac-distribution
<p>How can I derive the Fermi-Dirac distribution function using simple mathematics? I am now tired of looking for the derivation on the net. So please help me to understand how electrons are actually distributed among the various energy levels.</p>
<p>Okay, so do you understand the derivation of the thermodynamics of an ideal gas and what the partition function is? This derivation can be done through the principle of maximization of entropy and Lagrange multipliers. Do you know how to derive the partition function of an ideal gas with a fixed number of particles (canonical ensemble)? Make sure you understand this one and similar derivations for different gases. Try the relativistic ideal gas as an exercise. Make sure you understand what the partition function is and, in general, the basic principles of statistical physics, and get a feeling for what it is all about.</p> <p>Then the next step would be the derivation of the grand canonical ensemble, and understanding what is meant by occupation numbers, chemical potential and so on and so forth (I found this part to be the least intuitive). If you understand that, you just need some understanding of QM and exchange symmetries. Now apply your understanding of QM and its oddities to the derivation of the partition function for a gas of fermions (what does the Pauli exclusion principle imply about the occupation numbers and the derivation of the partition function?).</p> <p>This is how the course was taught to me at my university and I feel that is the best way to understand it. To be honest, I am a dumdum when it comes to statistics and combinatorics, but learning it in that order I managed to obtain a very decent understanding of the topic myself. I don't know what your level of understanding is. If your level is basic, don't expect to be able to jump in and understand it straight away; that is not an easy topic. If your level of knowledge is good but you are still struggling, maybe take a look at the grand canonical ensemble in more detail; I personally don't find its derivation very intuitive and it keeps giving me headaches. If you are struggling with any of the steps, be sure to leave a comment and I will be glad to explain it in more detail.</p>
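<p>To make the final step concrete, here is a minimal sketch of my own (with $k_B = 1$): for a single level, the Pauli exclusion principle restricts the occupation to $n \in \{0, 1\}$, and averaging $n$ over the two grand-canonical weights reproduces the Fermi-Dirac form directly:</p>

```python
import math

def occupancy_direct(beta, eps, mu):
    # Grand partition function of one level with n in {0, 1} (Pauli exclusion)
    Z = 1.0 + math.exp(-beta * (eps - mu))
    return math.exp(-beta * (eps - mu)) / Z

def fermi_dirac(beta, eps, mu):
    return 1.0 / (math.exp(beta * (eps - mu)) + 1.0)

for eps in (0.0, 0.5, 1.0, 2.0):
    a, b = occupancy_direct(1.0, eps, 1.0), fermi_dirac(1.0, eps, 1.0)
    assert abs(a - b) < 1e-12       # the two forms are algebraically equal
    print(f"eps = {eps}: <n> = {a:.4f}")
```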
959
statistical mechanics
Langevin Paramagnetism for dipoles rotating in 2D
https://physics.stackexchange.com/questions/677796/langevin-paramagnetism-for-dipoles-rotating-in-2d
<p>The energy for dipoles in a magnetic field can be described by <span class="math-container">$$H = - \mathbf{m} \cdot \mathbf{B}.$$</span> What I did for an exercise was integrate the partition function for the case where I allow rotation in three dimensions (reducing it to the integral <span class="math-container">$Z_1 = \int_0^{2 \pi} d \varphi \int_0^{\pi} d \theta \sin(\theta) e^{\beta m B \cos(\theta)}$</span> and taking the <span class="math-container">$N$</span>-th power) but I do not know how to set up the integral only allowing 2D rotations. If I take polar coordinates, my length element doesn't contain <span class="math-container">$\sin(\theta)$</span> anymore so the integral is hard to solve. Any ideas how to set up the integral for the partition function for the 2D case? Thanks in advance!</p>
960
statistical mechanics
Question regarding expectation value of energy and gibbs factor
https://physics.stackexchange.com/questions/679293/question-regarding-expectation-value-of-energy-and-gibbs-factor
<p>In lecture we introduced the average energy for a single particle as <span class="math-container">$$E = \langle \varepsilon \rangle = \frac{\int d \varepsilon g(\varepsilon) e^{- \beta \varepsilon} \varepsilon}{Z_1}$$</span> where <span class="math-container">$Z_1 = \int d \varepsilon e^{- \beta \varepsilon}$</span>. In the case of <span class="math-container">$g(\varepsilon) = 1$</span> the above calculation can be simplified and you would get <span class="math-container">$$E = k_B T$$</span> as the result of the integral. When you consider <span class="math-container">$N$</span> indistinguishable particles, I would have no problem assuming that <span class="math-container">$$E = N \langle \varepsilon \rangle = Nk_BT$$</span> since this still makes sense to me. However if I consider the above expression for the expectation value now integrated for <span class="math-container">$N$</span> particles meaning that if they are not interacting <span class="math-container">$Z = Z_1^N$</span> and <span class="math-container">$$E = \langle \varepsilon \rangle = \frac{\int d \varepsilon g(\varepsilon) e^{- \beta \varepsilon} \varepsilon}{Z_1^N}$$</span> where <span class="math-container">$\varepsilon = \sum_{i=1}^N \varepsilon_i$</span> and <span class="math-container">$d \varepsilon = \prod_{i=1}^{N} d \varepsilon_i$</span> then I get a strange result which is not <span class="math-container">$Nk_BT$</span>, even though in my head it's still the expectation value for the energy for <span class="math-container">$N$</span> particles. So my first question would be: What did I do wrong in the second calculation? My second question overall is also: How does the definition of the expectation value here account for indistinguishable particles? 
Normally you would introduce a Gibbs factor in the way that <span class="math-container">$\tilde{Z} = \frac{Z}{N!}$</span> which would change the result for the second calculation while the first calculation isn't affected because we are only looking at single particle partition function/average energy. I would be grateful for someone telling me where my thinking went wrong.</p>
<p>To start with your second question</p> <blockquote> <p>How does the definition of the expectation value here account for indistinguishable particles?</p> </blockquote> <p>It doesn't. What you have calculated here is not the expected energy of <span class="math-container">$N$</span> indistinguishable particles; it is the expectation of the energy for <span class="math-container">$N$</span> distinguishable particles, which happen to have the same available energies and degeneracies. For truly indistinguishable particles you could not factor the degeneracy <span class="math-container">$g$</span>, since if any two particles had the same energy you would get a reduced degeneracy to avoid over-counting configurations with the particles exchanged. This is what results in the factor of <span class="math-container">$\frac{1}{N!}$</span> when calculating the combined partition function.</p> <p>For the first part</p> <blockquote> <p>What did I do wrong in the second calculation?</p> </blockquote> <p>I don't know what happened in your calculation. You have not written down exactly what you did. 
For the <span class="math-container">$N$</span> identical-but-distinguishable particles discussed above, with <span class="math-container">$g(\varepsilon_1,\varepsilon_2,\dots\varepsilon_N) = 1$</span> (as a side note, <span class="math-container">$g$</span> should have units of <span class="math-container">$[E]^{-N}$</span>, but we will ignore this and assume we have somehow normalized our energy to get a dimensionless <span class="math-container">$\varepsilon$</span>), we can obtain <span class="math-container">$E = Nk_BT$</span> as follows <span class="math-container">\begin{align} E &amp;= \frac{1}{Z_1^N} \int \left(\prod_i d\varepsilon_i\right) \sum_j \varepsilon_j e^{-\beta \sum_k \varepsilon_k}\\ &amp;= \frac{1}{Z_1^N} \sum_j\int \left(\prod_{i} d\varepsilon_i\right) \varepsilon_j e^{-\beta \sum_k \varepsilon_k}\\ &amp;= \frac{1}{Z_1^N} \sum_j\int d\varepsilon_j \varepsilon_j e^{-\beta \varepsilon_j}\;\int\left(\prod_{i\ne j} d\varepsilon_i \right)e^{-\beta \sum_{k\ne j}\varepsilon_k}\\ &amp;= \frac{1}{Z_1^{N-1}} \sum_j \langle \varepsilon_j \rangle Z_1^{N-1}\\ &amp;= N k_B T \end{align}</span></p>
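<p>The result can also be checked by sampling: with <span class="math-container">$g = 1$</span> each particle's energy is exponentially distributed with mean <span class="math-container">$k_B T$</span>, so the total should average to <span class="math-container">$N k_B T$</span>. A quick Monte Carlo sketch (my own, units <span class="math-container">$k_B = 1$</span>):</p>

```python
import random

random.seed(0)
N, kT, trials = 10, 2.0, 200_000

# With g(eps) = 1 each particle's Boltzmann-weighted energy is exponentially
# distributed with mean k_B*T, so the total should average to N * k_B * T.
total = sum(sum(random.expovariate(1.0 / kT) for _ in range(N))
            for _ in range(trials))
E = total / trials
print(f"<E> = {E:.3f}  (target {N * kT})")
assert abs(E - N * kT) < 0.1
```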
961
statistical mechanics
MCE nr. of microstates &amp; proof that entropy is extensive (ideal gas in a box)
https://physics.stackexchange.com/questions/682081/mce-nr-of-microstates-proof-that-entropy-is-extensive-ideal-gas-in-a-box
<p>I am trying to find the nr. of microstates, for a gas in a box of dimensions <span class="math-container">$L_1,L_2,L_3$</span>, inside a hypersphere in phase space. We have a gas of N particles in 3D, so the phase space is 6N-dimensional. Because we assumed that the gas is ideal, we do not have potential energy in the Hamiltonian, only kinetic energy:</p> <p><span class="math-container">$$H = \Sigma_1^N \frac {(\vec p_i)^2} {2m}=\Sigma_1^{3N} \frac {(p_i)^2} {2m}$$</span></p> <p>The volume of an <span class="math-container">$n$</span>-dimensional sphere with radius <span class="math-container">$R$</span> is:</p> <p><span class="math-container">$$V= \frac {\pi^{\frac n 2} R^n}{\Gamma(\frac n 2 +1)}$$</span> In our case in phase space, <span class="math-container">$R=E$</span>, where <span class="math-container">$E$</span> is the energy of the states on the hyper-surface of the hyper-sphere (in phase space) and <span class="math-container">$n=3N$</span>. Then I get:</p> <p><span class="math-container">$$V(R=E)=\frac 1 {N!} \frac 1 {2^{3N}} \frac {(E \pi)^{\frac {3N} 2}} {(\frac {3N} 2)!}$$</span></p> <p>Then I need to find the volume of the microstate. I initially consider a single particle in the box, for which we would have:</p> <p><span class="math-container">$$E= \frac {\vec p^2} {2m}= \frac {\hbar^2 \pi^2}{2m} [(\frac {n_1}{L_1})^2 + (\frac {n_2}{L_2})^2 + (\frac {n_3}{L_3})^2] \longrightarrow \vec p= (p_x,p_y,p_z)^T= \hbar \pi(\frac {n_1}{L_1},\frac {n_2}{L_2},\frac {n_3}{L_3})$$</span> where <span class="math-container">$n_1,n_2,n_3$</span> are the quantum numbers. Then the volume of the microstate of 1 particle is: <span class="math-container">$\Delta p_x \Delta p_y \Delta p_z$</span></p> <p><span class="math-container">$$V_{microstate}=\Delta p_x \Delta p_y \Delta p_z = (\hbar \pi)^3\frac {\Delta n_1 \Delta n_2 \Delta n_3}{L_1L_2L_3}= (\hbar \pi)^3\frac 1 {V_{box}} $$</span> For N particles you'd then have:</p> <p><span class="math-container">$$V=(\hbar \pi)^{3N}\frac 1 {V_{box}^N}$$</span></p> <p>Then in order to find the nr. 
of microstates inside the volume in phase space which corresponds to that of a hypersphere with radius <span class="math-container">$R=E$</span>:</p> <p><span class="math-container">$$\Omega(E)= \frac {V_{sphere}} {V_{microstate}}= V^N (\frac {E}{4 \pi \hbar^2})^{(\frac {3N}{2})}\frac 1 {N!}\frac 1 {(\frac {3N} 2)!}$$</span></p> <p>But the actual formula is:</p> <p><span class="math-container">$$\Omega(E)= \frac {V_{sphere}} {V_{microstate}}= V^N ( \frac {mE}{2 \pi \hbar^2})^{(\frac {3N}{2})}\frac 1 {N!}\frac 1 {(\frac {3N} 2)!}$$</span></p> <p>This implies that I need a <span class="math-container">$2m$</span> factor multiplying the energy <span class="math-container">$E$</span>. And I think it comes from the definition of the momentum: I need to include the <span class="math-container">$\sqrt{2m}$</span>. But that makes no sense, since that's not how the momentum is defined. Therefore I have 2 questions:</p> <ol> <li>Where does the mass <span class="math-container">$m$</span> come from, i.e. why is my result different?</li> <li>Why do we say that if we do not include <span class="math-container">$N!$</span> then the entropy is not an extensive quantity? Is there a way to show this?</li> </ol>
<ol> <li><p><span class="math-container">$R$</span> is not <span class="math-container">$E$</span>, since you are thinking about the <span class="math-container">$3N$</span>-dimensional hyper-space of momenta, and <span class="math-container">$\sum_i \mathbf{p}_i^2=2mE$</span>, so the radius is really <span class="math-container">$\sqrt{2mE}$</span>, which supplies the missing <span class="math-container">$\sqrt{2m}$</span>'s.</p> </li> <li><p>If you compute the entropy <span class="math-container">$S(E, V, N)$</span> by <span class="math-container">$S=k_B\ln \Omega$</span> without the <span class="math-container">$1/N!$</span> factor, you will see that the entropy is not an extensive function of the three extensive variables. Namely, <span class="math-container">$S(\lambda E, \lambda V, \lambda N)\neq \lambda S(E, V,N)$</span>.</p> </li> </ol>
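<p>Point 2 is easy to verify numerically with exact log-factorials (<code>math.lgamma</code>). A sketch of mine, with the constants <span class="math-container">$m/(2\pi\hbar^2)$</span> set to 1: doubling <span class="math-container">$(E, V, N)$</span> doubles <span class="math-container">$\ln \Omega$</span> only when the <span class="math-container">$1/N!$</span> is kept.</p>

```python
from math import lgamma, log

def log_omega(E, V, N, with_gibbs):
    """ln Omega for the ideal gas, with m/(2*pi*hbar^2) set to 1."""
    s = N * log(V) + 1.5 * N * log(E) - lgamma(1.5 * N + 1)
    if with_gibbs:                      # the 1/N! of indistinguishability
        s -= lgamma(N + 1)
    return s

E, V, N = 1e7, 1e7, 1e6                 # energy, volume, particle number
for with_gibbs in (True, False):
    ratio = log_omega(2*E, 2*V, 2*N, with_gibbs) / (2 * log_omega(E, V, N, with_gibbs))
    print(f"1/N! included = {with_gibbs}: S(2E,2V,2N)/2S(E,V,N) = {ratio:.4f}")
```

<p>With the <span class="math-container">$1/N!$</span> the ratio is 1 to high accuracy; without it, a non-extensive <span class="math-container">$N\ln N$</span> term survives and the ratio visibly exceeds 1.</p>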
962
statistical mechanics
In statistical mechanics, is the partition function a mathematical cumulative distribution function as in probability mathematics?
https://physics.stackexchange.com/questions/685153/in-statistical-mechanics-is-the-partition-function-a-mathematical-cumulative-di
<p>In statistical mechanics, is the partition function a mathematical cumulative distribution function as in probability mathematics?</p> <p>Why is the partition function the fundamental object of statistical mechanics?</p> <p>Why isn't it the probability <em>density</em> function?</p>
963
statistical mechanics
Microstates and Combinations
https://physics.stackexchange.com/questions/693650/microstates-and-combinations
<p>There are two boxes (which I will call 1 and 2) that are initially thermally isolated and have a sliding door in between them. We can write the probability of configuration <span class="math-container">$A$</span> in box 1 as,</p> <p><span class="math-container">$$P_1(A)=\frac{1}{\Omega_1}$$</span></p> <p>Similarly, configuration B in box 2 is,</p> <p><span class="math-container">$$P_2(B)=\frac{1}{\Omega_2}$$</span></p> <p>The probability of these configurations occurring simultaneously when the barrier is removed between the two boxes is,</p> <p><span class="math-container">$$P_1(A)P_2(B)=P_0(A +B)$$</span></p> <p>Similarly,</p> <p><span class="math-container">$$\Omega_1(A)\Omega_2(B)=\Omega_0(A+B) \tag{1}$$</span></p> <p>This is a pretty basic result that's used in <span class="math-container">$NVE$</span> ensembles to derive a series of interesting results.</p> <p><em>FYI: From here I am sketching out an idea so forgive me for not having a reference for anything below this line.</em></p> <p>If we define the configurations or microstates, <span class="math-container">$\Omega$</span>, as the number of combinations of indistinguishable particles, <span class="math-container">$n$</span>, in <span class="math-container">$r$</span> discrete spatial locations (ignoring momentum space for the time being), we would get:</p> <p><span class="math-container">$$\Omega_1=\frac{n_1!}{r_1!(n_1-r_1)!} \tag{2}$$</span></p> <p>Similarly, <span class="math-container">$$\Omega_2=\frac{n_2!}{r_2!(n_2-r_2)!} \tag{3}$$</span></p> <p>To simplify the math for each box I will let the number of particles be <span class="math-container">$n$</span> and the number of spatial locations be <span class="math-container">$r$</span>, then,</p> <p><span class="math-container">$$\Omega_1 = \Omega_2 = \frac{n!}{r!(n-r)!} \tag{4}$$</span></p> <p>Then by <span class="math-container">$(1)$</span>,</p> <p><span class="math-container">$$\Omega_0 = \Omega_1 \Omega_2 = \left(\frac{n!}{r!(n-r)!}\right)^2 
\tag{5}$$</span></p> <p>But as I have defined it, when the wall is removed between the two boxes we will get <span class="math-container">$n_0=2n$</span> and <span class="math-container">$r_0=2r$</span>. Therefore,</p> <p><span class="math-container">$$\Omega_0 = \frac{(2n)!}{(2r)!(2n-2r)!} \tag{6}$$</span></p> <p>Is there any way to show that <span class="math-container">$(5)$</span> and <span class="math-container">$(6)$</span> are equal? Maybe with Stirling's approximation?</p> <p>Edit: According to Desmos, they are not equal expressions at low values. Is there something wrong with how I am thinking about this?</p>
<p>I believe it does actually work out using Stirling's approximation (it's a lot of annoying algebra):</p> <p><span class="math-container">$$\ln a!\approx a\ln a - a$$</span></p> <p>First expression:</p> <p><span class="math-container">$$\ln \left( \frac{n!}{r!(n-r)!} \right)^2 = 2(\ln n! - \ln r! - \ln(n-r)!)$$</span> <span class="math-container">$$\approx 2(n\ln n - n - r\ln r + r - (n-r)\ln(n-r) + (n-r)) = 2(n\ln n-r \ln r -n\ln (n-r) + r\ln(n-r))$$</span></p> <p>Second expression:</p> <p><span class="math-container">$$\ln \left( \frac{(2n)!}{(2r)!\,(2n-2r)!} \right)=\ln(2n)!-\ln(2r)!-\ln(2n-2r)!$$</span></p> <p><span class="math-container">$$ \approx 2n\ln2n - 2n -2r\ln2r + 2r -(2n-2r)\ln2(n-r) + (2n-2r)$$</span></p> <p><span class="math-container">$$ = 2[n\ln2+n\ln n-r\ln2-r\ln r-n\ln2(n-r)+r\ln2(n-r)]$$</span></p> <p><span class="math-container">$$=2[n\ln2+n\ln n-r\ln2-r\ln r-n\ln2-n\ln(n-r)+r\ln2+r\ln(n-r)]$$</span></p> <p>which, after cancelling terms, shows they are equal to leading order in Stirling's approximation; the dropped logarithmic correction terms account for the small discrepancy Desmos shows at low values.</p> <p>Edit: missed the last part of Stirling's approximation but it doesn't change the result</p>
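<p>A numerical cross-check of mine, using exact log-factorials, shows what those dropped logarithmic Stirling terms do: the two expressions are unequal at small <span class="math-container">$n$</span> (as the Desmos plot in the question found), but the ratio of their logarithms tends to 1 as <span class="math-container">$n$</span> grows:</p>

```python
from math import lgamma

def log_binom(a, b):
    """Exact ln of the binomial coefficient C(a, b) via lgamma."""
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

for n in (10, 100, 10_000, 1_000_000):
    r = n // 2
    sq = 2 * log_binom(n, r)        # ln of expression (5)
    big = log_binom(2 * n, 2 * r)   # ln of expression (6)
    print(f"n = {n:>7}: ratio ln(6)/ln(5) = {big / sq:.6f}")
```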
964
statistical mechanics
bridging the connection from the Helmholtz free energy in classical thermo to stat mech
https://physics.stackexchange.com/questions/182774/bridging-the-connection-from-the-helmholtz-free-energy-in-classical-thermo-to-st
<p>The Helmholtz free energy from classical thermo is defined as $$F = U - TS$$</p> <p>Taking the differential and with some algebraic manipulation, we arrive at</p> <p>$$dF = -p\,dV - S\,dT$$</p> <p>Observe that:</p> <p>$$p = -\left(\frac{\partial F}{\partial V}\right)_T$$ and $$S = -\left(\frac{\partial F}{\partial T}\right)_V$$</p> <p>I am perfectly comfortable with the derivation up till here.</p> <p>In my text, it is mentioned that $$F = -k_B T \ln Z$$ but it is not at all obvious where this expression comes from. Would someone be so kind as to clear things up?</p>
<p>There are multiple ways to justify this identification.</p> <ul> <li>The first one is to recognize that in thermodynamics the Helmholtz free energy is the right thermodynamic potential to figure out the spontaneous evolution of your thermodynamic system (by virtue of the second principle of thermodynamics) at fixed (N,V,T). In statistical mechanics, this is expressed by the canonical ensemble and the corresponding statistics associated with it. If you had a mesoscale observable $X$ and wanted the probability that $X$ has value $x$, you would compute </li> </ul> <p>\begin{equation} p(x|N,V,T) = \frac{\sum_{\{ i; \:X(i) = x\}} e^{-\beta E_i}}{e^{-\beta F(N,V,T)}} \end{equation} </p> <p>Now, the sum in the numerator can be thought of as a partial partition function (with, for instance, an additional potential that biases all the micro states $i$ so that $X(i) =x$). We can then define some kind of partial free energy $F(x|N,V,T)$ such that</p> <p>\begin{equation} p(x|N,V,T) = \frac{e^{-\beta F(x|N,V,T)}}{e^{-\beta F(N,V,T)}} \end{equation} We then see that the most probable value of the meso-observable $X$ to be observed is the one that minimizes the "partial" free energy at fixed (N,V,T); thus the logarithm of the partition function plays a role equivalent to that of the Helmholtz free energy in usual thermodynamics, with the second principle here replaced by "most probable value".</p> <ul> <li>The second one is more direct and more formal. It consists in noticing that by definition</li> </ul> <p>\begin{equation} \langle E \rangle = \sum_i E_i \frac{e^{-\beta E_i}}{Z(N,V,T)} \end{equation} and then writing the tautology $E_i = -k_B T \ln e^{-\beta E_i} = -k_B T \ln \left( e^{-\beta E_i} \frac{Z(N,V,T)}{Z(N,V,T)} \right) = -k_B T \ln p(i|N,V,T) -k_B T \ln Z(N,V,T)$. 
Upon substituting this into the above formula we get:</p> <p>\begin{eqnarray} \langle E \rangle &amp;=&amp; -k_B T \sum_i p(i|N,V,T) [\ln p(i|N,V,T)+ \ln Z(N,V,T)] \end{eqnarray}</p> <p>Now, defining the entropy as the Shannon entropy of the distribution, i.e. </p> <p>\begin{equation} S(N,V,T) = -k_B \sum_i p(i|N,V,T) \ln p(i|N,V,T) \end{equation}</p> <p>we get </p> <p>\begin{equation} -k_B T \ln Z(N,V,T) = \langle E \rangle - TS(N,V,T) \end{equation}</p> <p>By comparison with the definition of the Helmholtz free energy, it comes naturally that $F = -k_BT \ln Z$.</p> <ul> <li>The third and last one for this answer consists in using the role of generating function of the cumulants played by the logarithm of the partition function (which I discussed in some detail <a href="https://physics.stackexchange.com/questions/174150/the-unreasonable-effectiveness-of-the-partition-function/174180#174180">here</a>) to retrieve partial derivatives which bear a thermodynamical significance (like internal energy, chemical potential and so forth) and then again make a strong parallel with the Helmholtz free energy.</li> </ul>
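<p>The second route can be verified line by line on any small spectrum. A sketch of mine with $k_B = 1$ and a few arbitrary levels, checking that $\langle E \rangle - TS$ computed from the canonical probabilities coincides with $-k_B T \ln Z$:</p>

```python
from math import exp, log

def free_energies(levels, beta):
    """Return (-kT ln Z, <E> - T*S) for a discrete spectrum, k_B = 1."""
    weights = [exp(-beta * e) for e in levels]
    Z = sum(weights)
    p = [w / Z for w in weights]
    E_avg = sum(pi * e for pi, e in zip(p, levels))
    S = -sum(pi * log(pi) for pi in p)          # Shannon entropy / k_B
    return -log(Z) / beta, E_avg - S / beta

F_stat, F_thermo = free_energies([0.0, 1.0, 2.5], beta=0.7)
print(F_stat, F_thermo)
assert abs(F_stat - F_thermo) < 1e-12           # the identity holds exactly
```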
965
statistical mechanics
Occupation number of a bosonic gas for high temperatures
https://physics.stackexchange.com/questions/694716/occupation-number-of-a-bosonic-gas-for-high-temperatures
<p>If we consider the average occupancy for a Bose gas, we know:</p> <p><span class="math-container">$$\langle n \rangle_B=\frac{1}{e^{\beta(\epsilon-\mu)} -1}$$</span></p> <p>I also know that for high temperatures this reduces to the occupancy for a classical Boltzmann gas:</p> <p><span class="math-container">$$\langle n \rangle_{MB}= e^{-\beta(\epsilon - \mu)}$$</span></p> <p>I have a question here. We say the temperature increases, and for bosonic gases the chemical potential goes to <span class="math-container">$-\infty$</span>. Therefore in the expression <span class="math-container">$\frac{\epsilon -\mu}{KT}$</span>, both the temperature and the chemical potential increase in magnitude. So why do we assume that the chemical potential increases so much more than the temperature, hence why we get the Boltzmann expression in the end? After all, we assumed that <span class="math-container">$e^{\beta(\epsilon - \mu)}\gg 1$</span>.</p>
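<p>One way to see what happens (a numerical sketch of mine, for the standard case of an ideal Bose gas at <em>fixed density</em>, units <span class="math-container">$h = m = k_B = 1$</span>): <span class="math-container">$\mu$</span> and <span class="math-container">$T$</span> are not independent, because the fugacity <span class="math-container">$z = e^{\beta\mu}$</span> must solve <span class="math-container">$n\lambda_T^3 = g_{3/2}(z)$</span>; since <span class="math-container">$\lambda_T \propto T^{-1/2}$</span>, the solution has <span class="math-container">$z \to 0$</span>, i.e. <span class="math-container">$\beta(\epsilon-\mu) \to \infty$</span>, at high <span class="math-container">$T$</span>, even though both <span class="math-container">$T$</span> and <span class="math-container">$|\mu|$</span> grow. The <span class="math-container">$g_{3/2}$</span> series is truncated here:</p>

```python
from math import log, pi

def g32(z, terms=200):
    """Polylog g_{3/2}(z) = sum z^k / k^{3/2}, truncated."""
    return sum(z**k / k**1.5 for k in range(1, terms + 1))

def fugacity(n, T):
    """Solve n * lambda_T^3 = g_{3/2}(z) for z by bisection (h = m = k_B = 1)."""
    target = n * (2.0 * pi * T) ** -1.5        # n * lambda_T^3
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g32(mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

for T in (0.2, 1.0, 10.0, 100.0):
    z = fugacity(1.0, T)
    print(f"T = {T:>5}: z = e^(beta*mu) = {z:.2e}, beta*mu = {log(z):7.2f}")
```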
966
statistical mechanics
Can susceptibility be infinite even for small systems?
https://physics.stackexchange.com/questions/694727/can-susceptibility-be-infinite-even-for-small-systems
<p>Susceptibility can be expressed in terms of the Gibbs free energy as:</p> <p><span class="math-container">$$\chi^{-1}= \frac{\partial^2g}{\partial m^2}$$</span></p> <p>Where <span class="math-container">$g$</span> is the intensive Gibbs free energy. So if the second derivative of <span class="math-container">$g$</span> with respect to <span class="math-container">$m$</span> is zero, then even with finite <span class="math-container">$N$</span> (<span class="math-container">$N$</span> being the size of the system) we have a divergence of the susceptibility. Is that possible?</p>
<p>I don't think so. Typically, the nonanalyticity of the free energy comes from the thermodynamic limit. In finite systems, the free energy is (almost) always a finite sum of analytic functions, so there is no possibility of divergence of derivatives.</p> <p>The divergence of the susceptibility is typically linked to a macroscopic change of the order parameter <span class="math-container">$m$</span> as a result of an infinitesimally small source <span class="math-container">$h$</span>. In a finite system, the response to a source would always be finite because the system can only have finite magnetisation.</p> <p>Also, I think it should be <span class="math-container">$\frac{\partial^2 g}{\partial h^2}$</span>?</p>
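<p>A minimal concrete case (my sketch, <span class="math-container">$k_B = 1$</span>): for <span class="math-container">$N$</span> independent spins in a field <span class="math-container">$h$</span>, the per-spin free energy <span class="math-container">$g(h) = -T\ln(2\cosh(\beta h))$</span> is analytic, and the susceptibility <span class="math-container">$\chi = \partial m/\partial h = \beta\,\mathrm{sech}^2(\beta h)$</span> is bounded by <span class="math-container">$\beta$</span>, so no divergence appears at any finite temperature without a thermodynamic limit:</p>

```python
from math import cosh, tanh

beta = 2.0                      # inverse temperature, k_B = 1

def m(h):                       # magnetisation per spin of free spins
    return tanh(beta * h)

dh = 1e-5
for h in (0.0, 0.5, 1.0):
    chi = (m(h + dh) - m(h - dh)) / (2 * dh)    # numerical dm/dh
    chi_exact = beta / cosh(beta * h) ** 2      # beta * sech(beta*h)^2
    assert abs(chi - chi_exact) < 1e-6
    print(f"h = {h}: chi = {chi:.6f}  (bound: beta = {beta})")
```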
967
statistical mechanics
Boltzmann distribution: derivation from canonical distribution
https://physics.stackexchange.com/questions/106288/boltzmann-distribution-derivation-from-canonical-distribution
<p>I'm trying to understand the Maxwell-Boltzmann distribution, and in particular its derivation from the Boltzmann distribution for energy. I have successfully created an incorrect derivation, but I'm not sure what's wrong with it :). Any guidance would be much appreciated!</p> <p>I believe that the probability density function for the energy, $E$, of a random particle in a system at equilibrium is meant to look like this:</p> <p>$$f_E(x) = Ae^{-\frac{x}{kT}}$$</p> <p>To simplify things I'll just consider the particular case:</p> <p>$$f_E(x) = e^{-x}$$</p> <p>The cumulative distribution function is then:</p> <p>$$Pr(E\leq x) = F_E(x) = 1-e^{-x}$$ Now suppose all the particles in our system are of mass $1$, so, ignoring potential energy, the random variable governing the speed of a given particle is $V = \sqrt{2E}$. Now we want to find the distribution of the speeds. We can do this as so:</p> <p>$$F_V(v)=Pr(V\leq v) = Pr(\sqrt{2E}\leq v) = Pr\left(E\leq \frac{v^2}{2}\right) = F_E\left(\frac{v^2}{2}\right)$$</p> <p>and so</p> <p>$$F_V(v)=1-e^{-\frac{v^2}{2}}$$</p> <p>and finally</p> <p>$$f_V(v)= ve^{-\frac{v^2}{2}}.$$</p> <p>This is <em>almost</em> right; however, the Maxwell-Boltzmann distribution would predict a pdf with $v^2$ rather than $v$, something like:</p> <p>$$f_V(v)= \sqrt{\frac{2}{\pi}}v^2e^{-\frac{v^2}{2}}.$$</p> <p>Can anyone point out the mistake in my logic? Thanks!</p>
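<p>For what it's worth, a quick Monte Carlo check I ran (with $m = k_B T = 1$) makes the discrepancy visible: drawing each Cartesian velocity component as an independent standard Gaussian gives speeds whose mean is $2\sqrt{2/\pi} \approx 1.596$, matching $v^2 e^{-v^2/2}$, not the $\sqrt{\pi/2} \approx 1.253$ implied by $f_V(v) = v\,e^{-v^2/2}$:</p>

```python
import math, random

random.seed(1)
# Speed of a particle with three independent unit-Gaussian velocity components
samples = [math.sqrt(sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3)))
           for _ in range(200_000)]

# Maxwell prediction: <v> = 2*sqrt(2/pi); the Rayleigh form v*exp(-v^2/2)
# would instead give <v> = sqrt(pi/2).
mean_v = sum(samples) / len(samples)
print(f"empirical <v> = {mean_v:.3f}, Maxwell = {2 * math.sqrt(2 / math.pi):.3f}")
```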
968
statistical mechanics
Phase-space diagrams of microstates-infinite-dimensional?
https://physics.stackexchange.com/questions/695242/phase-space-diagrams-of-microstates-infinite-dimensional
<p>I've just begun learning Statistical Mechanics and my question concerns my professor's statement and I quote:</p> <blockquote> <p>Consider a gas of <span class="math-container">$N$</span> atoms which can have position <span class="math-container">$q_i$</span> and momentum <span class="math-container">$p_i$</span>. It is in a volume <span class="math-container">$V$</span> and has energy <span class="math-container">$E$</span>. To get an exact picture of the state a system is in, we have to define all the positions <span class="math-container">$q_i$</span> and momenta <span class="math-container">$p_i$</span> for all the particles in the system <span class="math-container">$i = 1, 2, 3, ..., N$</span>. This is an infinite-dimensional space because the variables qi and pi can vary continuously.</p> </blockquote> <p>Why is it infinite dimensional? I would think it is 6-dimensional because all we need to describe a particle would be the position coordinates, and the momenta along the <span class="math-container">$x,y,z$</span> directions. I don't grasp this statement at all. Please help.</p>
<p>In a lecture, careless mistakes sometimes happen. I would advise everybody to check the textbook, and if that doesn't provide an answer, ask the lecturer directly.</p> <p>In the present case, it is clearly false that the space of microstates for a finite system is infinite-dimensional. However, it is not 6-dimensional either. It is <span class="math-container">$6N$</span>-dimensional.</p>
969
statistical mechanics
How do we know there are no local maxima in the number of microstates with respect to energy?
https://physics.stackexchange.com/questions/705610/how-do-we-know-there-are-no-local-maxima-in-the-number-of-microstates-with-respe
<p>In my textbook, the definition of temperature begins by determining the maximum number of microstates for two systems in thermal equilibrium of energies <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span> and microstates <span class="math-container">$\Omega_1(E_1)$</span> and <span class="math-container">$\Omega_2(E_2)$</span> with respect to a change in <span class="math-container">$E_1$</span>. It does this by differentiation: <span class="math-container">$\frac{d}{dE} (\Omega_1(E_1) \Omega_2(E_2)) = 0$</span>. This apparently assumes that there are no local minima or maxima which this could be identifying. Does this derive from the Ergodic principle?</p>
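For a standard example the single-peak assumption can at least be checked numerically (an editor's sketch, not from the question; the two Einstein solids and the parameter values are illustrative assumptions): with $\Omega(q,N)=\binom{q+N-1}{q}$, the product $\Omega_1(q_1)\,\Omega_2(q-q_1)$ is strictly log-concave in $q_1$ and has exactly one local maximum.

```python
from math import comb

def omega(q, N):
    """Microstates of an Einstein solid: N oscillators sharing q energy quanta."""
    return comb(q + N - 1, q)

N1, N2, q_total = 50, 50, 100
products = [omega(q1, N1) * omega(q_total - q1, N2)
            for q1 in range(q_total + 1)]

# Count strict local maxima of the sequence over the energy splits
peaks = [i for i in range(1, q_total)
         if products[i - 1] < products[i] and products[i] > products[i + 1]]
print(peaks)  # a single peak, at the equal-sharing split
```

This does not prove the general claim, but it shows the typical situation: the multiplicity product is unimodal, so the stationarity condition picks out the unique global maximum.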
970
statistical mechanics
What is the particle density for a Gaussian distribution in position and velocity?
https://physics.stackexchange.com/questions/711853/what-is-the-particle-density-for-a-gaussian-distribution-in-position-and-velocit
<p>Consider that the position and momentum of my particles have a Gaussian distribution. If I now calculate the number of particles in <span class="math-container">$dx$</span> with momentum <span class="math-container">$dp$</span> then which one of the following would it be:</p> <p><span class="math-container">$$d^2N=Ce^{-\frac{p^2}{2}}dxdp$$</span></p> <p>or <span class="math-container">$$d^2N=Ce^{-\frac{x^2}{2}}e^{-\frac{p^2}{2}}dxdp$$</span></p> <p>where in both cases <span class="math-container">$C$</span> is fixed by: <span class="math-container">$$\int d^2N=N $$</span>(N=total number of particles)</p> <p>I think it's the second one because in <span class="math-container">$dx$</span> number of particles are <span class="math-container">$e^{-\frac{x^2}{2}}dx$</span> and in <span class="math-container">$dp$</span> the number of particles are <span class="math-container">$e^{-\frac{p^2}{2}}dp$</span> so that the number of particles for both should be the product but my advisor wrote the first one so I am not sure how my reasoning is wrong?</p>
<p>These are two different distributions, which one is correct depends on the system of particles you are trying to describe.</p> <p>In the first one, the particles are uniformly distributed in space, with a Gaussian momentum distribution of zero mean and standard deviation <span class="math-container">$1$</span>. In the second one, the particles are Gaussian distributed in both position and momentum, i.e. the particles are concentrated around <span class="math-container">$x = 0$</span>.</p> <p>The first one has the issue that it is not normalizable: the integral is infinite so there is no non-zero <span class="math-container">$C$</span> that will give you the correct integral. But if you require the distribution to be normalized/normalizable, there are workarounds such as assuming the spatial distribution is uniform only in a finite region <span class="math-container">$|x| &lt; L/2$</span>, and zero elsewhere.</p>
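For the second (normalizable) distribution, the constant follows from $\int\!\!\int e^{-x^2/2}e^{-p^2/2}\,dx\,dp = 2\pi$, giving $C = N/2\pi$. A quick numerical confirmation (my own sketch; the midpoint-rule grid and box size are arbitrary choices):

```python
import math

# Midpoint-rule estimate of the double integral of exp(-x^2/2) exp(-p^2/2)
# over a box large enough that the Gaussian tails are negligible.
L, n = 16.0, 400
h = L / n
total = 0.0
for i in range(n):
    x = -L / 2 + (i + 0.5) * h
    for j in range(n):
        p = -L / 2 + (j + 0.5) * h
        total += math.exp(-x * x / 2) * math.exp(-p * p / 2) * h * h

print(total, 2 * math.pi)  # so C = N / (2*pi) for the second distribution
```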
971
statistical mechanics
Boltzmann distribution - why does distinguishability increase likelihood?
https://physics.stackexchange.com/questions/712644/boltzmann-distribution-why-does-distinguishability-increase-likelihood
<p>I am looking through derivations of the Boltzmann distribution. The method I've seen uses an argument that involves counting distinguishable microstates of a system with fixed energy, and then assuming that these distinguishable microstates are equally likely to occur.</p> <p>A first assumption is that from an experimental perspective, it is not possible to distinguish certain configurations. This seems reasonable. However, the derivations I've seen never explicitly say whether the (possibly indistinguishable) rearrangements of particles are still physically meaningful/realizable or not. Are they? Here's why I think this is important to know.</p> <p>By example, consider the derivation of Eisberg &amp; Resnick, Appendix C. Assume a four particle system of total energy <span class="math-container">$3\Delta E$</span>, with energy divisions <span class="math-container">$\{0, 1\Delta E, 2\Delta E, ... \}$</span>. Let's just consider two of the possible valid macrostates to avoid getting bogged down.</p> <ol> <li><p>One particle at energy <span class="math-container">$3\Delta E$</span>, three particles at <span class="math-container">$0$</span> energy. In principle there are <span class="math-container">$4!$</span> rearrangements, but <span class="math-container">$3!$</span> are irrelevant due to indistinguishability, giving only <span class="math-container">$4$</span> distinguishable microstates.</p> </li> <li><p>One particle at energy <span class="math-container">$2\Delta E$</span>, one particle at energy <span class="math-container">$\Delta E$</span> and two particles at energy <span class="math-container">$0$</span>. There are again <span class="math-container">$4!$</span> rearrangements in principle, but <span class="math-container">$2!$</span> are irrelevant due to indistinguishability. 
This leaves <span class="math-container">$12$</span> microstates.</p> </li> </ol> <p>In both macrostates 1 and 2, there are <span class="math-container">$4!$</span> possible orderings, but they are not all distinguishable. However, for a moment, let's suppose that each of these orderings, despite not being distinguishable to an experimenter, <em>does</em> correspond to a valid and physically realizable configuration. If all <span class="math-container">$4!$</span> rearrangements of a macrostate <em>are</em> physically realizable, would it not be more reasonable to assume that (a) &quot;each possible rearrangement (distinguishable or not) is equally likely,&quot; <strong>not</strong> that (b) &quot;each <em>distinguishable</em> rearrangement is equally likely?&quot;</p> <p>To see the difference in practice, suppose I have a lab notebook, and every <span class="math-container">$T$</span> seconds I observe this system to find it in one of the two configurations above, i.e. macrostate 1 or 2. Also assume that I write down the distinguishable microstate that I observe. That means that I write one of 1.1-1.4 for macrostate 1, and for macrostate 2, I write 2.1-2.12. Suppose I do this for a long time.</p> <p>Under the assumption (a) above, my entries tend toward an equal number of 1's and 2's, but as for the microstates, I would have them in differing frequencies. This also seems to agree with a statement the book makes: &quot;all possible divisions of the energy of the system occur with the same probability.&quot; It is tempting to interpret this as saying that all macrostates are equally likely (and thus distinguishable microstates should not be).</p> <p>Under the assumption (b), in contrast, my entries would have 1.1-1.4 and 2.1-2.12 occurring in equal amounts -- all distinguishable microstates equally likely.
Overall, macrostate 2 would be happening much more often than macrostate 1, and this is obviously reflected in the standard derivation.</p> <p>Have I deeply confused myself? How do I justify the assumption (b) without drawing a strange relationship between distinguishability and likelihood?</p> <p>thanks.</p>
<p>I think that there are some basic concepts to be reviewed in your question.</p> <p>First, a macrostate is defined by macroscopic variables; in your example of the four particles, fixing the macrostate amounts to fixing the total energy of the system. A microstate is defined by a configuration compatible with a macrostate. So the usual question to ask is: how many microstates are compatible with total energy <span class="math-container">$3\Delta E$</span>? Your points (1.) and (2.) are instances of such microstates.</p> <p>Second, regarding the phrase &quot;but 3! irrelevant due to indistinguishability, giving only 4 distinguishable microstates&quot;: are the particles indistinguishable or distinguishable?</p> <p>This might seem irrelevant, but I think it would help to answer your doubts if you rephrased it in the usual framework of statistical physics.</p> <p><strong>Example:</strong> Say that the total energy is <span class="math-container">$\Delta E$</span> and there are 4 particles. The energy of these particles is quantized, so particle <span class="math-container">$i$</span> can only have energy values equal to <span class="math-container">$\epsilon_i=n\Delta E$</span> with <span class="math-container">$n=0,1,2,3,\dots$</span>. Then, we can count the number of microstates with distinguishable and indistinguishable statistics.</p> <p><em>Indistinguishable:</em> There is only one possible microstate. One particle with energy <span class="math-container">$\Delta E$</span>, the rest with energy 0.</p> <p><em>Distinguishable:</em> There are four possible microstates. These are: (<span class="math-container">$\epsilon_1$</span>,<span class="math-container">$\epsilon_2$</span>,<span class="math-container">$\epsilon_3$</span>,<span class="math-container">$\epsilon_4$</span>)=(<span class="math-container">$\Delta E$</span>,0,0,0); (0,<span class="math-container">$\Delta E$</span>,0,0); (0,0,<span class="math-container">$\Delta E$</span>,0) and (0,0,0,<span class="math-container">$\Delta E$</span>). In this case, you can count the number of microstates as the number of ways of distributing 1 energy quantum among 4 distinguishable particles (see <a href="https://en.wikipedia.org/wiki/Stars_and_bars_(combinatorics)" rel="nofollow noreferrer">Wikipedia: combinatorics</a>). That is, <span class="math-container">$\Omega(E=\Delta E)=\binom{1+4-1}{1}=4$</span>. This result has nothing to do with indistinguishability.</p>
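The counting for the question's total energy $3\Delta E$ is easy to check by brute force (my own sketch): enumerate all ordered energy assignments of 4 distinguishable particles and group them by macrostate.

```python
from collections import Counter
from itertools import product

# Distinguishable particles: a microstate is an ordered tuple (e1, e2, e3, e4)
# of energy quanta with e1 + e2 + e3 + e4 = 3.
microstates = [s for s in product(range(4), repeat=4) if sum(s) == 3]

# Group microstates by macrostate, i.e. by the multiset of energies.
macro = Counter(tuple(sorted(s)) for s in microstates)
print(len(microstates), dict(macro))
```

This reproduces the question's numbers: macrostate $\{3,0,0,0\}$ has 4 microstates, $\{2,1,0,0\}$ has 12, and the remaining macrostate $\{1,1,1,0\}$ has 4, for $\binom{3+4-1}{3}=20$ in total.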
972
statistical mechanics
Confusion about fundamental assumption of statistical mechanics
https://physics.stackexchange.com/questions/713939/confusion-about-fundamental-assumption-of-statistical-mechanics
<p>I am confused about the fundamental assumption of statistical mechanics. It says that, over a long time scale, all microstates are equally accessible. I get it so far.</p> <p>But among the microstates there are arrangements such that no particles (thinking of an Einstein solid) have energy (Daniel Schroeder). How is that physically possible? And how does it make sense with the fundamental assumption of statistical mechanics?</p> <p>BR</p>
973
statistical mechanics
Fermions and bosons weakly degenerate gases
https://physics.stackexchange.com/questions/729892/fermions-and-bosons-weakly-degenerate-gases
<p>I've tried to derive the pressure, <span class="math-container">$P$</span>, for a weakly degenerate gas of fermions (and analogously for bosons). The strange thing is that the expression I calculated is correct except for the sign of a term. I checked the calculations and I don't see any error, so the error might be conceptual. Can you explain to me why it is conceptually wrong to calculate the pressure as follows?</p> <p>For weakly degenerate gases we can write <span class="math-container">$\Omega=\Omega^{MB}(1 \mp \frac {e^{\beta \mu}} {4 \sqrt 2})$</span>, where the upper sign is for fermions and the lower one for bosons. MB stands for Maxwell-Boltzmann. This equation is derived in my book, so I'm pretty confident that it is correct in this context.</p> <p>Now, <span class="math-container">$\Omega=-PV$</span>, and since for a system described by the Maxwell-Boltzmann grand potential we have <span class="math-container">$PV=k_B NT$</span>, we can write <span class="math-container">$\Omega^{MB}=-k_B NT$</span>. Substituting in the first one: <span class="math-container">$\Omega=-k_B NT(1\mp \frac {e^{\beta \mu}} {4 \sqrt 2})$</span>.</p> <p>Then, <span class="math-container">$\mu=k_B T \ln(N\Lambda^3/V)$</span>, so, substituting in the previous one we get <span class="math-container">$\Omega=-k_B NT(1 \mp \frac{N \Lambda^3}{4 \sqrt2\, V})$</span>.</p> <p>Finally, since <span class="math-container">$\Omega=-PV$</span> we have <span class="math-container">$P=\frac {k_B NT}{V}(1 \mp \frac {N \Lambda^3}{4 \sqrt 2\, V})$</span>.</p> <p>Notice that the correct expression should be <span class="math-container">$P=\frac {k_B NT}{V}(1 \pm \frac {N \Lambda^3}{4 \sqrt 2\, V})$</span></p>
<p>Your <span class="math-container">$\mu$</span> expression is correct in the classical limit only. There is a correction term which is the same order as the correction you have kept. That is, you should substitute <span class="math-container">$\Omega^{MB}(T,V,\mu) = -\frac{VT}{\Lambda^3} e^{\beta \mu}$</span> to get <span class="math-container">$\Omega = -\frac{VT}{\Lambda^3} e^{\beta\mu}\left [1 \mp 2^{-5/2} e^{\beta \mu} \right ]$</span>.</p> <p>Then calculate <span class="math-container">$N$</span> from the negative <span class="math-container">$\mu$</span> derivative. Solving for <span class="math-container">$e^{\beta \mu}$</span> as a function of <span class="math-container">$n$</span> gives <span class="math-container">$e^{\beta \mu} = \Lambda^3 n \left [ 1 \pm 2^{-3/2} n\Lambda^3 +...\right ]$</span>. Substituting into <span class="math-container">$P(T,\mu)$</span> and keeping all terms at the same order will give the standard result.</p>
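Spelling out the substitution described in the answer (my own write-up of the intermediate steps, in the answer's units with $k_B = 1$, $z \equiv e^{\beta\mu}$ and $n \equiv N/V$):

```latex
\begin{align}
\Omega(T,V,\mu) &= -\frac{VT}{\Lambda^3}\, z\left[1 \mp 2^{-5/2} z\right],\\
N = -\frac{\partial \Omega}{\partial \mu}
  &= \frac{V}{\Lambda^3}\left[z \mp 2^{-3/2} z^2\right]
  \;\Longrightarrow\;
  z = n\Lambda^3\left[1 \pm 2^{-3/2}\, n\Lambda^3 + \cdots\right],\\
P = -\frac{\Omega}{V}
  &= \frac{T}{\Lambda^3}\left[z \mp 2^{-5/2} z^2\right]
   = nT\left[1 \pm \left(2^{-3/2} - 2^{-5/2}\right) n\Lambda^3\right]
   = nT\left[1 \pm \frac{n\Lambda^3}{4\sqrt{2}}\right].
\end{align}
```

The sign flips relative to the naive substitution because the correction to $\mu(n)$ enters with the opposite sign and is twice as large as the correction kept in $\Omega$.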
974
statistical mechanics
What does the chromatic polynomial have to do with the Potts model?
https://physics.stackexchange.com/questions/482/what-does-the-chromatic-polynomial-have-to-do-with-the-potts-model
<p><a href="http://en.wikipedia.org/wiki/Potts_model">Wikipedia</a> writes:</p> <blockquote> <p>In statistical mechanics, the Potts model, a generalization of the Ising model, is a model of interacting spins on a crystalline lattice.</p> </blockquote> <p>From combinatorics conferences and seminars, I know that the Potts model has something to do with the <a href="http://en.wikipedia.org/wiki/Chromatic_polynomial">chromatic polynomial</a>, but it's not clear to me where it arises (and for which graph it's the chromatic polynomial of).</p> <blockquote> <p>Question: What does the chromatic polynomial have to do with the Potts model?</p> </blockquote>
<p>The relationship between the chromatic polynomial and the Potts model is a special case of the relationship between the Tutte polynomial and the random cluster model of Fortuin and Kastelyn. There's a very tiny bit about this in <a href="http://en.wikipedia.org/wiki/Tutte_polynomial#Definitions">the wikipedia page on the Tutte polynomial</a>, but there's <a href="http://books.google.com/books?id=SbZKSZ-1qrwC&amp;pg=PA342&amp;lpg=PA342&amp;dq=statistical+mechanics+tutte&amp;source=bl&amp;ots=NzBEf6i4u_&amp;sig=p3ou8AaZYDM6PeVCu1defABrECA&amp;hl=en&amp;ei=yNnaTMTxEobGlQf4mMG9CQ&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=3&amp;ved=0CCoQ6AEwAg#v=onepage&amp;q=statistical%20mechanics%20tutte&amp;f=false">a section about this</a> in Bollobàs's "Modern Graph Theory" and plenty more articles that pop up under a google search for "Tutte statistical mechanics" including "<a href="http://arxiv.org/abs/0804.2468">A little statistical mechanics for the graph theorist</a>" which I've always wanted to finish reading...</p> <p>By the way, it seems there's a little bit of confusion in your question about what the Potts model "is". It doesn't quite make sense to ask "for which graph is the Potts model the chromatic polynomial". The Potts model is a probabilistic model (roughly, a measure on random vertex "colorings", i.e. assign a color (often called a spin value) to each vertex) and it may be defined on any graph, where the parameters like temperature or applied field change the measure on the set of vertex colorings of the graph, e.g. at low temperatures, colorings with the same color on neighboring vertices are preferred. As typically studied in physics, the Potts model (and other such models) are put on periodic graphs (often called "lattices" in physics), such as the square lattice graph, triangular lattice graph, etc. 
The chromatic and Tutte polynomials show up when you compute the "partition function" associated to the model, which is a quantity like a generating function... this paragraph is a bit rough, so I'd refer to the other references. </p> <p>Let me know if you have any questions. I could talk about this stuff all day.</p>
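To make the connection concrete (my own sketch, not from the answer): at zero temperature, the antiferromagnetic $q$-state Potts partition function gives weight 1 to configurations in which every pair of adjacent spins differs and weight 0 otherwise, so $Z$ counts proper $q$-colorings, i.e. it equals the chromatic polynomial evaluated at $q$. A brute-force check on the triangle graph, where $P(K_3, q) = q(q-1)(q-2)$:

```python
from itertools import product

def potts_zero_T_Z(edges, n_vertices, q):
    """Zero-temperature antiferromagnetic Potts partition function:
    each spin configuration contributes the product over edges of
    [s_i != s_j], so Z counts the proper q-colorings of the graph."""
    Z = 0
    for spins in product(range(q), repeat=n_vertices):
        if all(spins[i] != spins[j] for i, j in edges):
            Z += 1
    return Z

triangle = [(0, 1), (1, 2), (0, 2)]
for q in range(1, 6):
    print(q, potts_zero_T_Z(triangle, 3, q), q * (q - 1) * (q - 2))
```

At finite temperature, keeping the edge weights as general variables instead of 0/1 is what produces the Tutte polynomial / random cluster picture mentioned above.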
975
statistical mechanics
Applicability of Baxter&#39;s method for IRF models
https://physics.stackexchange.com/questions/6899/applicability-of-baxters-method-for-irf-models
<p>In an interaction-round-a-face model of $n^2$ particles in a lattice, a weight $W(a,b,c,d)$ is assigned to each face in the lattice based on the spins $a,b,c,d$ (listed say from the bottom-left corner in counter-clockwise fashion) of the particles at the corners of the face. Based on this, a partition function is formed $Z=\sum\prod W(a,b,c,d)$, where the sum is over all possible spins of all particles and the product is over all the faces in the lattice. The well-known text <em>Exactly Solved Models in Statistical Mechanics</em> by Baxter shows how to obtain exact solutions for $Z$ in a couple of cases, like the six-vertex model and the eight-vertex model, using "commuting transfer matrices." In reading Baxter's book, I am uncertain of the applicability of this method. For instance, I am working on a particular application where particles in a square lattice can have either up spin (+1) or down spin (-1) and the weight of any face $W(a,b,c,d)=1$ unless all four spins around the face alternate, in which case $W(1,-1,1,-1)=W(-1,1,-1,1)=4$. (The boundary condition here is that all the spins on the boundary must be up.)</p> <p>Does anyone know if the commuting-transfer-matrix method (or any other method) should yield an exact solution for this particular weighting of the spins? More generally, should the commuting-transfer-matrix method yield exact solutions for <em>any</em> weighting of the spins (even in the two-spin case)?</p>
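Not an answer on commuting transfer matrices, but for orientation, here is a brute-force evaluation of $Z$ for the stated weights on a tiny lattice (an editor's sketch; the lattice size and indexing conventions are illustrative assumptions):

```python
from itertools import product

n = 4  # 4x4 lattice of spins; all boundary spins are fixed to +1
interior = [(1, 1), (1, 2), (2, 1), (2, 2)]

def weight(a, b, c, d):
    """Face weight from the question: 4 if the four corner spins
    alternate around the face, otherwise 1."""
    if (a, b, c, d) in [(1, -1, 1, -1), (-1, 1, -1, 1)]:
        return 4
    return 1

Z = 0
for values in product([1, -1], repeat=len(interior)):
    spin = {site: 1 for site in product(range(n), repeat=2)}  # boundary +1
    spin.update(dict(zip(interior, values)))
    w = 1
    for i in range(n - 1):
        for j in range(n - 1):
            # corners listed counter-clockwise from the bottom-left
            w *= weight(spin[(i, j)], spin[(i + 1, j)],
                        spin[(i + 1, j + 1)], spin[(i, j + 1)])
    Z += w
print(Z)
```

For this $4\times 4$ lattice with an all-up boundary, only the central face can carry four alternating spins, so $Z = 14\cdot 1 + 2\cdot 4 = 22$; any exact-solution method would of course be aimed at the large-lattice limit rather than such small cases.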
976
statistical mechanics
Is there any physics behind flocking?
https://physics.stackexchange.com/questions/10578/is-there-any-physics-behind-flocking
<p>There are many articles published in physics journals about <a href="http://en.wikipedia.org/wiki/Flocking_%28behavior%29" rel="nofollow">flocking</a>. Is there a physical reason for these phenomena, or is it just that physics methods are being used to study collective motion?</p> <p>It seems there is no true mechanics in models of flocking; there are no Hamiltonians defined and so on, only rules of motion, like an alignment rule and the constant speed of <a href="http://en.wikipedia.org/wiki/Self_propelled_particles" rel="nofollow">self-propelled particles</a>. Yet physicists debate the validity of the <a href="http://en.wikipedia.org/wiki/Mermin%E2%80%93Wagner_theorem" rel="nofollow">Mermin-Wagner theorem</a> in simple flocking models. Isn't this fundamentally problematic because of the absence of real dynamics in these systems?</p>
<p>For your first question: Yes, there is a physical reason for this phenomenon as is (or should be) for every observable phenomenon.</p> <p>For your second part: Depends on who is debating the validity or applicability of that theorem. Can you please add a reference, then perhaps someone may be able to answer it.</p> <p>For your third question: Yes there is a lot of &quot;real&quot; dynamics involved here. In fact, because a lot of objects are interacting with each other, it is highly involved as well. Different models to explain/simulate flocking behavior use almost the same physical phenomena in their calculations --gravitational pull from earth, fluid mechanics, wing flapping. What they debate is usually one or more of the following:</p> <ul> <li>What is a bird's comfort level of independence? Meaning, how far will a bird on the edge will go away from the flock before it senses danger. This has nothing* to do with physics and has everything to do with such factors as the predators around, the evolutionary history of the birds, etc.</li> <li>How close do birds like flying to each other? At what distance they sense an impending collision? Again, nothing to do with physics really.</li> <li>How perfectly do birds prefer to align to the group? Is there a single leader or multiple?</li> <li>Does a change in the wind's direction change the leader's course?</li> <li>etc...</li> </ul> <p>One can of course abstract out the biology and social behaviour out of this model and empirically define a force and construct laws (similar to Newton's laws of gravity, Maxwell's equations etc.) for that force. Perhaps then that person may be able to reuse some of the results that physicists have derived for other problems (<strong>note</strong>: this depends a lot on how your force laws are defined). 
Basic physical laws that apply universally shall still be applicable.</p> <h3>Footnotes</h3> <p>*You can say everything is physics deep deep down but that is not the point here obviously.</p>
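The "rules of motion" the question refers to can be made concrete with a minimal Vicsek-style alignment model (an editor's sketch, not from the answer; all parameter values are arbitrary, and for brevity the neighbour search ignores the periodic wrap-around):

```python
import cmath
import math
import random

random.seed(1)
# box size, particle count, interaction radius, speed, noise amplitude, steps
L, N, R, v0, eta, steps = 10.0, 60, 1.0, 0.3, 0.2, 50

pos = [(random.uniform(0, L), random.uniform(0, L)) for _ in range(N)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]

for _ in range(steps):
    new_theta = []
    for xi, yi in pos:
        # average heading of neighbours within radius R (including self)
        s = sum(cmath.exp(1j * theta[j]) for j, (xj, yj) in enumerate(pos)
                if (xi - xj) ** 2 + (yi - yj) ** 2 < R * R)
        new_theta.append(cmath.phase(s) + random.uniform(-eta, eta))
    theta = new_theta
    # every particle moves at the same constant speed v0 (periodic box)
    pos = [((x + v0 * math.cos(t)) % L, (y + v0 * math.sin(t)) % L)
           for (x, y), t in zip(pos, theta)]

# global order parameter: 1 means perfect alignment, ~0 means disorder
phi = abs(sum(cmath.exp(1j * t) for t in theta)) / N
print(round(phi, 3))
```

There is indeed no Hamiltonian here, only an update rule; the physics-style questions are about how the order parameter $\phi$ behaves as the noise amplitude and density are varied.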
977
statistical mechanics
Can somebody provide some sort of crash course on random walk and its problems at the level of a beginning undergraduate student in physics?
https://physics.stackexchange.com/questions/12733/can-somebody-provide-some-sort-of-crash-course-on-random-walk-and-its-problems-a
<p>I really need some very simple discussions of random walk (probability). Couldn't get anything from class, more so from Reif. Thanks!</p>
<p>Random walk is intimately connected with diffusion, the heat equation, the Laplacian, harmonic functions, quadratic forms and the Gaussian distribution. Here's a sketch of the relationship in the discrete case (the continuous case is conceptually similar but requires much more familiarity with probability theory). I'll try to be as concise as possible since there is too much going on.</p> <p>Consider some connected subset $S$ of a square lattice and suppose there is a random process such that in each time step the particle can jump to a neighboring site (with the probability of jumps being uniform for simplicity). This can be considered as an operator (more precisely a probability kernel) $P$ acting on probability measures on $S$ (the so-called transition operator in the theory of Markov processes). As this is essentially linear algebra we obviously want to find eigenstates of this operator. So, if $f$ is a stationary distribution w.r.t. this walk, it must be the case that $$\sum_{j \in n(i)} {1 \over 4} f(j) = f(i)$$ where $n(i)$ is the set of neighbors of the site $i$. In other words, such a function must be (discretely) harmonic, $\Delta f = 0$, where $\Delta$ is a discretized Laplacian. Since we now have eigenstates we can also consider the evolution of the distribution in time in a macroscopic description, and this will bring us to the heat equation $\partial_t f = k \Delta f$, $k$ being some coefficient of diffusion or conduction or whatever it is we are trying to model.</p> <p>Now, there is another nice connection between this random process and a certain difference equation, the <a href="http://en.wikipedia.org/wiki/Feynman%2dKac_formula" rel="nofollow">Feynman-Kac formula</a> (here shown in the continuous version -- stochastic process and PDE). This formula tells us that a solution to this equation can be written as a sum over all paths of the random walk.
Incidentally, the equation we are interested in is given by a quadratic potential between nearest neighbors (this in turn leads to Gaussian measures, since the Hamiltonian $H$ is quadratic and the equilibrium measure is given by the usual Boltzmann formula $p \sim \exp(-\beta H)$). The condition on being a solution is then that the function is harmonic.</p> <p>As for the problems and topics, one is interested in things such as recurrence (whether the random walk ever returns to the origin with positive probability), exit times, etc. If the RW is recurrent one can study the average time of recurrence. Note that these questions are not that hard on the square lattice with uniform probability but become harder on arbitrary graphs with arbitrary probability assignments. We can again study stationary measures, evolution of macroscopic observables, Gaussian distributions, etc. (and essentially these become tools that can tell us something about the properties of graphs).</p> <p>On the square lattice in 1D and 2D the random walk is recurrent (&quot;the drunkard finds his way home&quot;) but in 3D it isn't. This is connected to the fact that the Coulomb potential (recall the relationship with harmonics!) in 2D is logarithmic while in 3D it has a power law. Another very close relation is that there are no equilibrium Gibbs states for a massless free field in dimensions less than 3.</p>
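A numerical companion to the diffusion picture above (my own sketch; walk length and sample size are arbitrary): for the simple walk on the square lattice the mean squared displacement grows linearly in time, $\langle |X_t|^2\rangle = t$, which is the microscopic counterpart of the heat equation.

```python
import random

# Check the diffusive scaling <|X_t|^2> = t for the simple 2D random walk.
random.seed(0)
steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
T, walkers = 100, 20000

total = 0.0
for _ in range(walkers):
    x = y = 0
    for _ in range(T):
        dx, dy = random.choice(steps)
        x += dx
        y += dy
    total += x * x + y * y

msd = total / walkers
print(msd)  # close to T
```

The identity $\langle |X_T|^2\rangle = T$ is exact here (unit steps, independent, mean zero); the simulation only illustrates it with sampling noise.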
978
statistical mechanics
Cross-field diffusion from Smoluchowski approximation
https://physics.stackexchange.com/questions/13804/cross-field-diffusion-from-smoluchowski-approximation
<p>I'm reading <em>An Introduction to Stochastic Processes in Physics</em> by Don S Lemons. Problem 10.2 leads to a pair of equations:</p> <p>$dV_x = -\gamma V_xdt+V_y\Omega dt-V_y\sqrt{2\gamma dt}N_t(0,1)$</p> <p>$dV_y = -\gamma V_ydt-V_x\Omega dt+V_x\sqrt{2\gamma dt}N_t(0,1)$</p> <p>($\gamma$, $\Omega$ constants, $X$, $Y$ position, $V_x$, $V_y$ velocity. The two noises are identical.)</p> <p>It then asks for the application of the Smoluchowski approximation to these equations. As the book is very hand-wavey in justifying the use of this approximation I find it tricky to reason with. I only know how to use the search-and-replace rules the book gives, eg. set $dV_x=0=dV_y$ and replace $V_xdt$ and $V_ydt$ with $dX$ and $dY$. So what do I do with the terms like $V_y\sqrt{2\gamma dt}N_t(0,1)$ for the Smoluchowksi approximation? Presumably all references to $V_x$ and $V_y$ need to be eliminated.</p>
<h3>Original wrong stuff</h3> <p>Well, the OP asked for a solution, but I still can't figure out how this is physics. Add the two equations with an i to get the equation for $V=V_x + i V_y$ (dividing by dt--- I hate finance/mathematics conventions, and calling the white noise $\eta$)</p> <p>${dV\over dt} = (-\gamma+i\Omega)V + iV\eta$</p> <p>So that the complex logarithm of V is executing a biased Brownian motion:</p> <p>${d\over dt} \log(V) = (-\gamma + i \Omega) + i\eta$</p> <p>and</p> <p>$V(t) = \exp( - (\gamma + i \Omega)t + iB(t))$</p> <p>where $B$ is a standard Brownian motion. The V solution goes to zero for sure, even without the i multiplying the Brownian term, because B only goes as square root of t (assuming positive nonzero damping), and the total distance travelled is given by a Laplace transform of the exponential of a Brownian motion, and I didn't bother to evaluate its statistics because this is clearly not physics.</p> <ul> <li>The x and y $\eta$ noise can't be the same in physics. I didn't realize they were the same in the problem as stated, because this is so strange.</li> <li>In physical Brownian diffusion, in the limit of long times, V acquires a distribution with a scale, the Maxwell Boltzmann distribution, and in this case V can't acquire dimension because the equation is scale invariant (the OP says "dimensionally consistent"). There is a thermal velocity scale of $V_T=\sqrt{2kT/m}$ in real Brownian motion which allows the equation to be dimensionally consistent without V multiplying the noise, but with $V_T$. It is this thermal velocity that sets the scale for the average velocity at times long compared to the velocity relaxation time.</li> </ul> <p>The Smoluchowski approximation requires that V have a steady state distribution and a relaxation time to that steady state distribution, which doesn't happen in this problem. Here the velocity goes away.
That's clearly a property of the scaling invariance of V, and the overall decay of V, so it won't be fixed even if the x and y noise are different. This is not thermal physics, and I can't see any way to do a Smoluchowski approximation, so how to do the approximation is moot.</p> <p>It is possible that the book had a thermal scale constant V multiplying the noise, and there is some error in transcription of the problem, but I don't want to start guessing.</p> <h3>Later: Correct stuff</h3> <p>Thanks for linking to the book, it made everything clear. The key words are "energy conserving process". This is designed to model a particle moving through a constant magnetic field with an additional random magnetic field on top, so that it goes around with a randomly fluctuating curvature, but the particle never changes speed.</p> <p>Disregard the previous stuff, it is totally wrong, because it is using a different convention for the time derivative of stochastic quantities than is used throughout the book (I should have seen this from the form of the problem, but I didn't). In deriving the solution I used the fact that</p> <p>${1\over V} {dV\over dt} = {d\over dt} \log(V)$</p> <p>which is only true if the time derivative is Stratonovich (centered difference), which is a common convention in physics. The book is using an Ito convention for time derivatives (forward-difference), which is the most common convention in mathematics and in finance. The Ito time derivative does <em>not</em> obey the uncorrected chain rule, but the correction is easy to work out:</p> <p>${1\over V(t+\epsilon)} {dV\over dt} -{ 1\over V(t)} {dV\over dt} = -{1\over V^2} {(V(t+\epsilon) - V(t))^2\over \epsilon} $</p> <p>and the last term, the infinitesimal square fluctuation in V at time t, is entirely determined by the square of the coefficient of the noise term. 
In quantum mechanics language,</p> <p>$[{1\over V}, {dV\over dt} ] = (i\sqrt{2\gamma})^2 = -2\gamma$</p> <p>In order to go from Ito derivative, which is one order of operators, to Stratonovich derivative which is the average of the two orders (like the Jordan product of operators in quantum mechanics), you have to add half the commutator, and this gives</p> <p>${d\over dt} \log(V) - \gamma = -\gamma + i\Omega + i\sqrt{2\gamma}\eta$</p> <p>Which exactly cancels the damping term, so that the result is completely physical--- the velocity magnitude doesn't decay at all. This is not a surprise. The coefficient which I confused for a physical damping is not a damping at all, but just the amount of magnitude change of the velocity vector an infinitesimal time into the future caused by the random direction-changing fluctuations.</p> <p>$V = V_0 \exp(i\Omega t + i\sqrt{2\gamma}B(t))$</p> <p>This means that the angle of the velocity V is executing a Brownian motion with drift, and keeps the same magnitude at all times. The equation at long times does become a Brownian motion in x, and the Smoluchowski approximation is valid for long times.</p> <h3>The Smoluchowski limit</h3> <p>The central difficulty you have is with the Smoluchowski limit, so I'll give a short explanation. In ordinary Brownian motion, the equation of motion is:</p> <p>$m{d^2x\over dt^2} + \gamma{dx\over dt} + \sqrt{2D}\eta = 0$</p> <p>(Ito and Stratonovich are the same, there are no products of non-commuting quantities). If you average this equation over long time window, the velocity averages to the total distance over the total time, while the second derivative averages to zero (because the velocity is varying in a bounded thermal range). 
This means that the first derivative average is equal to the noise thermal average, which means that the long-time averaged displacement of the solution to the above second order stochastic equation is equal to the long-time averaged solution of the first order equation below, which drops the second derivative term:</p> <p>$\gamma {dx\over dt} + \sqrt{2D}\eta =0$</p> <p>Or, more succinctly at the cost of butchering history, in Brownian motion, Newton equals Aristotle. So that $D$ is the diffusion constant of the long-time Brownian motion (D is also determined from the condition that the v diffusion process has as a steady state the Maxwell Boltzmann distribution, and this gives the Einstein relation). Dropping the second derivative term in the equation is equivalent to the formal rules that the book gives. There is no more content in those formal rules than the above averaging procedure, which is better because its both clearer and more rigorous.</p> <h3>The actual problem</h3> <p>In this problem, the equation in question is (in second order Ito form--- all derivatives are forward differences)</p> <p>${d^2 X\over dt^2} - \gamma {dX\over dt} + i\Omega{dX\over dt} - i\sqrt{2\gamma}{dX\over dt} \eta = 0$</p> <p>The Smoluchowski approximation averages this equation over a long interval, which gets rid of the second derivative term (for the same reason as before, bounded velocity fluctuations), and produces an equation which is first order:</p> <p>$(-\gamma + i\Omega)\Delta X = i\sqrt{2\gamma}\int_0^T {dX\over dt}\eta$</p> <p>But unlike standard thermal drift, this is no simplification, because one needs to know the average of the product of the velocity and the noise in order to get a closed equation. 
This is the difficulty you were having.</p> <p>There is a minor annoyance in this way of looking at things: it seems like there are too many equations, in that the real and imaginary parts of the equation give two separate constraints, but this isn't true, because the right hand side is almost a perfect integral. By putting in the explicit solution for the velocity,</p> <p>$\int_0^T i\sqrt{2\gamma} {dX\over dt}\eta(t) dt = \int_0^T i\sqrt{2\gamma}V_0\eta e^{i\Omega t + i\sqrt{2\gamma} B(t)} dt$</p> <p>And considering that $\eta = {dB\over dt}$ by definition, this would be a perfect integral, up to the $\Omega$ term, in the Stratonovich convention, where the chain rule works. But this is Ito, so you do the half-commutator trick etc, etc, I'll leave out the details, but the end result gives just one consistent equation for $\Delta X$:</p> <p>$\Delta X = V_0 \int_0^T \exp(i\Omega t + i\sqrt{2\gamma}B(t)) dt$</p> <p>which tells us nothing new, because this is just the integral of the velocity from the explicit solution. So there is no way to evaluate the Smoluchowski limit without knowing something about the integral of cosines and sines of a Brownian motion. This is the central problem that the book is posing.</p> <h3>The integral of cosines and sines of a Brownian motion</h3> <p>To get the Smoluchowski limit, all you need to evaluate is the average square distance traveled after time $T$ (from here on I rescale $B$ so that $\sqrt{2\gamma}B \to B$, restoring $\gamma$ at the end by dimensional analysis):</p> <p>$\langle |\Delta X|^2 \rangle = |V_0|^2\langle \int_0^T \int_0^T e^{i\Omega (t - t&#39;) + i (B(t)-B(t&#39;))} dt dt&#39; \rangle = |V_0|^2\int_0^T\int_0^T e^{i\Omega(t-t&#39;)} \langle e^{i(B(t)-B(t&#39;))}\rangle\, dt\, dt&#39;$</p> <p>So the irreducible quantity to solve this problem is:</p> <p>$\langle e^{i(B(t) - B(t&#39;))} \rangle = G(t-t&#39;)$</p> <p>$G(t)$ is, by translation invariance, the expected value</p> <p>$G(t) = \langle e^{iB(t)} \rangle $</p> <p>This can be evaluated by Feynman diagrams: expand the exponential in powers, and note that $B(t)$ is Gaussian with width $t$, so all its moments are known.
The odd moments are zero, and the even moments are given by products of odd numbers (pairing combinations):</p> <p>$\langle B(t)^{2n} \rangle = 1 \cdot 3 \cdot 5 \cdots (2n-1)\, t^n$</p> <p>So the power series for the exponential only has even terms</p> <p>$G(t) = \sum_{n=0}^\infty {(-1)^n \langle B(t)^{2n}\rangle\over (2n)!} = e^{-{t\over 2}}$</p> <p>And this means, since the increments are stationary, that</p> <p>$\langle e^{i(B(t)-B(t&#39;))}\rangle = e^{-{1\over 2} |t-t&#39;|}$</p> <p>This can be found by path integrals too, it is a quadratic path integral with a linear source, and the result is the exponential of 1/2 the Green's function of a 1d Laplacian between the two sources, and this Green's function is the absolute value function.</p> <p>The mean square distance traveled after time $T$ is the double integral</p> <p>$\langle|\Delta X|^2\rangle = |V_0|^2\int_0^T \int_0^T G(t-t&#39;) e^{i\Omega(t-t&#39;)}\, dt\, dt&#39;$</p> <p>which, by thinking about it in 45-degree rotated coordinates, is, up to the unimportant boundaries of the rectangle, equal to:</p> <p>$\langle|\Delta X|^2\rangle = T|V_0|^2 \tilde{G}(\Omega)$</p> <p>So the diffusion constant is the Fourier transform of $G(t) = \exp(-{|t|\over 2})$. This gives the average square distance, and the diffusion constant is the coefficient of $T$ in this expression:</p> <p>$\tilde{G}(\Omega) = \int_{-\infty}^{\infty} e^{-{|t|\over 2}}\, e^{i\Omega t}\, dt = {1\over {1\over 4} + \Omega^2}$</p> <p>You can restore the $\gamma$ into the formula by dimensional analysis, I'll do so later if necessary.</p>
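The identity $G(t) = \langle e^{iB(t)}\rangle = e^{-t/2}$ is just the characteristic function of a Gaussian of variance $t$ evaluated at $1$, and it is easy to check numerically. A throwaway sketch of my own (the grid half-width and resolution are arbitrary choices):

```python
import math

# Numerical check that G(t) = <exp(i B(t))> = exp(-t/2) for a Brownian
# motion with <B(t)^2> = t.  Since B(t) ~ N(0, t) and the density is
# even, the imaginary part cancels and G(t) reduces to the real integral
#   G(t) = integral of cos(x) * exp(-x^2/(2t)) / sqrt(2*pi*t) dx.
def G(t, half_width=30.0, n=200001):
    dx = 2.0 * half_width / (n - 1)
    total = 0.0
    for k in range(n):
        x = -half_width + k * dx
        total += math.cos(x) * math.exp(-x * x / (2.0 * t)) * dx
    return total / math.sqrt(2.0 * math.pi * t)

for t in (0.5, 1.0, 2.0, 4.0):
    print(t, G(t), math.exp(-t / 2.0))
```

The two printed columns agree to the quadrature accuracy, which is all the Feynman-diagram resummation above is claiming.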
979
statistical mechanics
How &quot;to take&quot; this integral?
https://physics.stackexchange.com/questions/28529/how-to-take-this-integral
<p>When I learned the anharmonic model of a crystal, I read that by considering anharmonic oscillations and the Boltzmann distribution for the "atoms" of the crystal, we can get the dependence of the distance between the "atoms" on temperature as</p> <p>$$ \langle r \rangle = r_{0} + \alpha T. $$</p> <p>As I understood the words below, it's like</p> <p>$$ \langle r \rangle = \frac{\int \limits_{0}^{\infty}re^{-\frac{U}{kT}}dr}{\int \limits_{0}^{\infty}e^{-\frac{U}{kT}}dr} \approx \frac{\int \limits_{0}^{\infty}re^{-\frac{U_{0} + a(r - r_{0})^{2} + b(r - r_{0})^{3} }{kT}}dr}{\int \limits_{0}^{\infty}e^{-\frac{U_{0} + a(r - r_{0})^{2} + b(r - r_{0})^{3} }{kT}}dr} = |x = r - r_{0}| = \frac{\int \limits_{-r_{0}}^{\infty}(x + r_{0})e^{-\frac{ax^{2} + bx^{3}}{kT}}dx}{\int \limits_{-r_{0}}^{\infty}e^{-\frac{ax^{2} + bx^{3}}{kT}}dx}, $$</p> <p>and then $$ \langle r \rangle \approx r_{0} + \frac{\int \limits_{-r_{0}}^{\infty}xe^{-\frac{ax^{2} + bx^{3}}{kT}}dx}{\int \limits_{-r_{0}}^{\infty}e^{-\frac{ax^{2} + bx^{3}}{kT}}dx}. $$</p> <p>What can I do on the next step?</p>
<p>You expand the top and bottom integrals in a power series in $b$, using $e^{-bx^3} \approx 1 - bx^3$, and keep the lowest order term in $b$ (the integration range can be extended to the whole real line, since the integrand is negligible far below $x=0$):</p> <p>For the top integral (the $b$-independent piece vanishes by symmetry):</p> <p>$$ \int x e^{-ax^2} (1 - bx^3)\,dx = -b\int x^4 e^{-ax^2}\,dx = -b {d^2\over da^2} \int e^{-ax^2}\,dx = -b{d^2\over da^2} \sqrt{\pi\over a} = -\sqrt{\pi\over a} {3b\over 4a^2} $$</p> <p>For the bottom integral:</p> <p>$$ \int e^{-ax^2} (1 - b x^3)\,dx = \sqrt{\pi \over a} $$</p> <p>So the quotient is</p> <p>$$ -{3b\over 4a^2}$$</p> <p>The coefficients $a$ and $b$ absorb the $kT$ on the bottom in your expression, so replace $a$ by $a\over kT$ and $b$ by $b\over kT$. The above becomes:</p> <p>$$ -kT\,{3b\over 4a^2} $$</p> <p>and it is linearly proportional to $T$, as you expected (for a softening cubic term, $b&lt;0$, the shift is positive: thermal expansion).</p>
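With the convention $U = ax^2 + bx^3$ and the expansion $e^{-bx^3/kT} \approx 1 - bx^3/kT$, the first-order shift is $\langle x\rangle \approx -3bkT/(4a^2)$, which is positive (thermal expansion) when $b$ is negative. A quick quadrature sketch of my own to confirm this ($a$, $b$, $kT$ below are arbitrary test values):

```python
import math

# Check <x> ~ -3*b*kT/(4*a^2) for the anharmonic Boltzmann weight
# exp(-(a*x^2 + b*x^3)/kT), by comparing brute-force quadrature with
# the first-order-in-b formula.  a, b, kT are arbitrary test values.
def mean_x(a, b, kT, half_width=5.0, n=400001):
    dx = 2.0 * half_width / (n - 1)
    num = den = 0.0
    for k in range(n):
        x = -half_width + k * dx
        w = math.exp(-(a * x * x + b * x ** 3) / kT)
        num += x * w * dx
        den += w * dx
    return num / den

a, b, kT = 1.0, -0.01, 0.1
print(mean_x(a, b, kT), -3.0 * b * kT / (4.0 * a * a))
```

The two numbers agree to well within the neglected higher orders in $b$, and the shift indeed scales linearly with $kT$.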
980
statistical mechanics
Spin 3/2 Statistical Mechanics Problem
https://physics.stackexchange.com/questions/43625/spin-3-2-statistical-mechanics-problem
<p>I am trying to solve a problem from the book 'Introductory Statistical Mechanics' (Bowley, Sanchez). The question reads:</p> <p>Calculate the free energy of a system of N particles, each with spin 3/2 with one particle per site, given that the levels associated with the four spin states have energies e, 2e, -e, -2e....</p> <p>What I want to know is: how do I use the fact that each particle has spin 3/2? Does this add some kind of degeneracy I need to take into account?</p>
<p>If there is no term in the Hamiltonian lifting the degeneracy of the spin, then you'll have a degeneracy equal to $$g_s=(2s+1)$$ where $s$ is the spin ($s=\frac32$, so $g_s=4$ in your case).</p> <p>Each state corresponds to a projection of the spin along some axis (usually the $z$ axis): $$s_z = \frac32,\frac12,-\frac12,-\frac32$$ In your problem the four energies $e, 2e, -e, -2e$ already lift this degeneracy, so the spin $3/2$ simply tells you that there are exactly four states per site, one for each value of $s_z$; no extra degeneracy factor is needed.</p>
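Given the four energies in the problem, the single-site partition function and the free energy follow immediately; a minimal sketch of my own (units with $k_B = 1$, temperature values arbitrary):

```python
import math

# Free energy per site for one spin-3/2 particle per site with the four
# level energies e, 2e, -e, -2e from the problem:
#   Z1 = exp(-beta*e) + exp(-2*beta*e) + exp(beta*e) + exp(2*beta*e)
#      = 2*cosh(beta*e) + 2*cosh(2*beta*e),   F = -N*kT*ln(Z1).
def free_energy_per_site(e, T):
    beta = 1.0 / T
    Z1 = 2.0 * math.cosh(beta * e) + 2.0 * math.cosh(2.0 * beta * e)
    return -T * math.log(Z1)

# Limits: high T gives -T*ln(4) (pure entropy of the four states);
# low T gives -2e (the lowest level, s_z with energy -2e, dominates).
for T in (0.01, 1.0, 1000.0):
    print(T, free_energy_per_site(1.0, T))
```

The two limiting behaviours are a useful sanity check on any closed-form answer to the book problem.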
981
statistical mechanics
Why do we need different ensembles in statistical mechanics?
https://physics.stackexchange.com/questions/54758/why-do-we-need-different-ensembles-in-statistical-mechanics
<p>Why do we study these different ensembles (microcanonical, canonical, grand canonical)? Are they used for studying different physical systems or scenarios? (E.g., in some systems you can only treat it as microcanonical, and in other cases you can only apply the canonical ensemble.) Do they give the same results in the thermodynamic limit? </p>
<p>If somebody tells you what the entropy is as a function of energy, volume, and number of particles, you have all the information you need (for a standard plain vanilla system). It is not <em>necessary</em> to define any other ensemble, but it is <em>convenient</em>. If your system for instance is in contact with a big other system ("reservoir") with which it can exchange energy, then you can either describe <em>system plus reservoir</em> microcanonically, or you describe <em>only your system</em> canonically. The latter is clearly more convenient, since you need not bother about the internal "workings" of the reservoir. For the purpose of your problem the entire reservoir is perfectly well characterized by a single number: its temperature.</p> <p>The mathematical machinery of Legendre transforms provides a neat way to change from a thermodynamic potential (such as the entropy) to other potentials in which derivatives of the original thermodynamic potential become the new variables, and this transformation is being done without losing information. So, at the end of the day, this is just mathematical convenience: represent the necessary thermodynamic information in ways that are easier to handle in a given situation characterized by a particular set of constraints.</p>
982
statistical mechanics
Partition function for multidimensional scaling energy
https://physics.stackexchange.com/questions/55525/partition-function-for-multidimensional-scaling-energy
<p>Let $D_{ij}$ be a random matrix with i.i.d. positive coefficients. One can take, for instance, $D_{ij}$ uniformly distributed in [0,1]. We consider the following energy function $H(x)$ defined for $x=(x_i)_1^n$, with each $x_i\in \mathbb{R}^k$, where $n&gt;k$ are two positive integers:</p> <p>$$H(x) = \sum_{i,j} \left(\|x_i-x_j\|^2 - D_{ij}\right)^2$$</p> <p>I would like to find the expectation of $$H^*:=\inf_{x \in (\mathbb{R^k})^n} H(x)$$</p> <p>To this end, I was thinking of using a statistical mechanics approach and estimating the partition function associated with $H$. I know the definition but I don't know how to work it out... Can anyone help?</p> <p>Thanks</p>
983
statistical mechanics
Why the chemical potential of massless boson is zero?
https://physics.stackexchange.com/questions/60499/why-the-chemical-potential-of-massless-boson-is-zero
<p>In Bose-Einstein condensation, the chemical potential is less than the ground state energy of the system ($\mu&lt;\epsilon_g$). But why does a massless boson such as the photon have zero chemical potential ($\mu=0$)?</p>
<p>The chemical potential is a complementary variable to $N$, the number of particles (of a certain kind), and they get combined in the same sense as $-\beta,H$ and similar pairs. The chemical potential "punishes" too high or too low number of particles in grand canonical and similar distributions such as $$\exp(-\beta(H-\mu N))$$ In the derivation of similar terms in the exponential in the distribution, it's important that all the extensive quantities such as $H, N$ are conserved. You may view $\beta,\beta\mu$ and similar coefficients as Lagrange multipliers that impose the conservation of $H,N$ etc.</p> <p>The distribution is maximizing the number of microscopic rearrangements given the fixed specified values of the conserved quantities such as $H,N$ etc.</p> <p>However, for massless bosons, there doesn't exist any sense or approximation in which the number $N$ of these particles would be conserved. So the states with higher or lower numbers $N$ can't be punished by any $\exp(\beta\mu N)$ factor. It always takes "zero work" to change the number of these massless bosons by one. For example, it's trivial to create a photon; in fact, an accelerating charge is emitting an infinite number of photons (a source of infrared divergences in quantum field theories). The number $N$ of particles like photons isn't even finite so it's clear that the coefficient multiplying it has to be zero for the product to be well-defined.</p>
984
statistical mechanics
What is the minimum non-integer dimension for which the XY model shows a phase transition? (if well-defined)
https://physics.stackexchange.com/questions/64552/what-is-the-minimum-non-integer-dimension-for-which-the-xy-model-shows-a-phase-t
<p>I know that the <a href="http://en.wikipedia.org/wiki/Classical_XY_model" rel="nofollow">XY statistical model</a> for $d=2$ doesn't show a regular phase transition, while the $3d$ one does, and I was wondering what the behaviour is for $2&lt; d &lt; 3$.</p> <p>If it is simpler, one could consider another model in its universality class, as is done for the $2+\epsilon$ expansion.</p> <p>The simplest hypothesis is that there exists a $d_{min}$ between 2 and 3 such that for dimensions greater than $d_{min}$ there is a phase transition.</p> <p>Assuming this number has a meaning for every statistical model, I would like to ask if it is universal with respect to the symmetries of the order parameter.</p> <p>Going further, what is the behaviour of the critical temperature around $d_{min}$, and is that universal?</p>
985
statistical mechanics
Why is velocity normally distributed in a gas, but not energy?
https://physics.stackexchange.com/questions/74660/why-is-velocity-normally-distributed-in-a-gas-but-not-energy
<p>If one looks at a cubic box of gaseous atoms all initially flying in the same direction at the same speed (but flying at an angle to the walls, so as not to reflect up-and-down against the box walls forever), they will collide with the walls and each other, their previously uniform velocities becoming messed up randomly until they are distributed according to a Maxwell-Boltzmann distribution, which is a normal distribution. Looking at the set-up as a random-generator, this can be considered an application of the central limit theorem, which says that the mean of a large number of random variables will end up being normally distributed.</p> <p>My question: Why doesn't the same reasoning hold for energy? Aren't the energies of the atoms in such a box also random variables? However, the energies follows the Boltzmann distribution, which isn't a normal distribution.</p> <p>Of course, the relationship between speed and energy prohibits both energy and speed being distributed normally, but doesn't this finding contradict the central limit theorem?</p>
<p>The more natural relationship between the two distributions is the opposite one. The Boltzmann distribution $$\exp(-E/kT)$$ is the more general one (connected with the microscopic definition of the temperature $T$ in any system in physics) and one may simply substitute the kinetic energy $mv^2/2$ for $E$ to get the Maxwell part of the distribution.</p> <p>The non-normal distribution of the energy doesn't contradict the central limit theorem because the energy of a gas molecule after $N$ collisions isn't a simple sum of $N$ contributions to the energy. Instead, a molecule is likely to lose lots of energy in a collision if its initial energy before the collision (e.g. one accumulated from the previous collisions) was high to start with. So the previous history matters which makes the evolution of the energy non-Markovian or non-linear, if you wish.</p> <p>The central limit theorem only talks about the distribution of a quantity that is a sum (or average) of many terms with a fixed distribution but this isn't the case for energy after many collisions. On the contrary, it is the case for the momentum (in non-relativistic physics, assuming elastic collisions).</p> <p>It would be a clear inconsistency if e.g. kinetic energy were predicted to be normally distributed. In particular, the kinetic energy can't be negative but the normal distribution is nonzero for arbitrarily large positive and negative values of the variable.</p>
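To make the contrast concrete, here is a small Monte-Carlo sketch of my own (units with $m = k_B T = 1$; sample size and seed are arbitrary choices). Normally distributed velocity components produce a kinetic energy that is Gamma-distributed, with mean $\tfrac32 kT$ and a large positive skewness, clearly not normal:

```python
import math
import random

# Draw Maxwellian (normal) velocity components and look at the kinetic
# energy E = |v|^2 / 2 in units m = kT = 1.  E follows a Gamma
# distribution with shape 3/2: mean 3/2, skewness 2/sqrt(3/2) ~ 1.63,
# and E >= 0 always, so it cannot be normally distributed.
random.seed(1)
n = 200000
energies = [
    0.5 * sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3))
    for _ in range(n)
]
mean = sum(energies) / n
var = sum((e - mean) ** 2 for e in energies) / n
skew = sum((e - mean) ** 3 for e in energies) / n / var ** 1.5
print(mean, skew)
```

The sample mean sits near $\tfrac32 kT$ while the skewness is far from the zero a normal distribution would require, which is the non-contradiction with the central limit theorem in numbers.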
986
statistical mechanics
Practical difference between canonical and grand canonical ensembles
https://physics.stackexchange.com/questions/80050/practical-difference-between-canonical-and-grand-canonical-ensembles
<p>I'm currently doing some calculations which require evaluating various standard thermal expectation values in the canonical ensemble (both bosons and fermions). Now, in order to make my theoretical machinations easier, I am actually using the grand canonical ensemble, where the chemical potential acts as a Lagrange multiplier enforcing the constraint $\langle \hat{N} \rangle = N$, where $N/V$ is the fixed density of the physical system. The justification for this is that the relative fluctuations in $\langle \hat{N} \rangle$ should vanish in the thermodynamic limit, in which case I expect fixing the average number to be physically equivalent to fixing the number once and for all. (Also, this approach seems to be adopted by a several presumably trustworthy references, see for example <a href="http://archive.org/details/CondensedMatterFieldTheory" rel="nofollow">Simons &amp; Altland</a> Section 6.3.) This intuition seems reasonable, but I wonder if matters may be more subtle than this argument implies.</p> <blockquote> <p>Do thermal averages in the thermodynamic limit of the grand canonical and canonical ensembles coincide?</p> </blockquote> <p>I'm hoping for either a more rigorous justification supporting this procedure, or examples where it can go horribly wrong. Pointers to appropriate references would also be much appreciated.</p>
<p>As you know, the thermodynamic limit requires the system to grow to infinite size while keeping the same density, which lets you neglect surface effects. It also requires the lack of long-range interactions so that distant parts can act independently. So, you need to neglect gravitational interactions, allow the system to be charge neutral, etc.</p> <p>Even in the thermodynamic limit, the different ensembles can behave differently near phase transitions. For example, if you park the grand canonical ensemble at a liquid-gas boundary, then the system is free to be filled with liquid or gas or a mixture. So, you get a giant and non-negligible fluctuation in the system's energy and particle number. By comparison the phase transition is extended in the canonical ensemble, with a range over which you have a liquid-gas mixture. Another example is the boson condensate: once the condensate has occurred, the total particle number in the grand canonical ensemble has a geometric distribution (!) and so there are giant fluctuations.</p> <p>There are probably some other issues but I can't think of them off the top of my head. Anyway, just avoid long-range interactions, extreme conditions, and critical phenomena, and your calculations should be fine.</p>
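The geometric distribution mentioned for the condensate occupation makes the "giant fluctuations" easy to quantify; a minimal sketch of my own (the fugacity values are arbitrary) shows the relative fluctuation tending to $1$ instead of vanishing as $\langle N\rangle$ grows:

```python
# Relative particle-number fluctuation for a geometrically distributed
# occupation P(N) = (1 - z) * z**N (ideal-boson ground state at
# fugacity z).  Unlike the usual 1/sqrt(N) thermodynamic-limit scaling,
# sigma_N / <N> = 1/sqrt(z) tends to 1, not 0, as <N> diverges.
def geometric_stats(z):
    mean = z / (1.0 - z)
    var = z / (1.0 - z) ** 2
    return mean, var ** 0.5 / mean

for z in (0.9, 0.99, 0.999):
    mean, rel = geometric_stats(z)
    print(z, mean, rel)
```

This is exactly the sense in which the grand canonical ensemble misbehaves for the condensate even though $\langle N\rangle$ is huge.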
987
statistical mechanics
Gap exponents and homogeneous functions
https://physics.stackexchange.com/questions/89637/gap-exponents-and-homogeneous-functions
<p>Looking at <a href="http://www.tcm.phy.cam.ac.uk/~bds10/phase/scaling.pdf" rel="nofollow">this paper, on page 1</a>, how is the first limit obtained? That is, if I have some homogeneous function $g_f(h/t^{\Delta})$, how does setting the gap exponent $\Delta$ to $3/2$ ensure that $$\lim_{x \to 0} g_f(x) = -1/u?$$</p>
<p>Setting $\Delta=3/2$ is useful only to ensure the correct behavior of the second limit.</p> <p>The first limit is given by the condition $f(t,0)\propto t^2$. Because $f(t,h)=t^2 g_f(h/t^\Delta)$, you directly get that $g_f(x)\to {\rm const}$ for $x\to 0$.</p> <p>For the sake of completeness, let's do the other case and show that we must have $\Delta=3/2$. You know that $f(0,h)\propto h^{4/3}$. Let's assume that $\lim_{x\to \infty} g_f(x)\propto x^a$, which gives $$\lim_{t\to0}f(t,h)\propto t^2 \Big(\frac{h}{t^\Delta}\Big)^a.$$ In order to get the expected result $f(0,h)\propto h^{4/3}$, we see that $a=4/3$ and $2-a \Delta=0$, which gives $\Delta=3/2$.</p> <p>(Another way to show the first limit is to assume $\lim_{x\to0}g(x)\propto x^b$ and show that $b=0$.)</p>
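The exponent matching at the end can be done with exact rational arithmetic; a throwaway sketch of my own (not from the linked paper):

```python
from fractions import Fraction

# From f(t, h) = t^2 * g(h / t^Delta) with g(x) ~ x^a at large x:
#   f(0, h) ~ h^a * t^(2 - a*Delta),
# so matching the known behaviour f(0, h) ~ h^(4/3) forces a = 4/3
# and the t-exponent to vanish: 2 - a*Delta = 0.
a = Fraction(4, 3)
Delta = Fraction(2) / a
print(Delta)   # 3/2
```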
988
statistical mechanics
What are the key properties of and differences between classical and quantum statistical mechanics?
https://physics.stackexchange.com/questions/89850/what-are-the-key-properties-of-and-differences-between-classical-and-quantum-sta
<p>I'm studying different ensembles and different statistics (<a href="http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution" rel="nofollow">M-B</a>, <a href="http://en.wikipedia.org/wiki/Bose%E2%80%93Einstein_statistics" rel="nofollow">B-E</a>, <a href="http://en.wikipedia.org/wiki/Fermi%E2%80%93Dirac_statistics" rel="nofollow">F-D</a>), and I have some ambiguities about which of these models are applicable to quantum systems and which are usable for classical systems or both. (For example, the applicability of grand canonical ensemble to classical systems, M-B statistics to quantum mechanical systems, etc.)</p> <p>So I arrived at the question of what are the key properties of classical and quantum statistical mechanical systems, and what are their differences? </p>
<p>Let's recall the basics of classical and quantum mechanics for non-statistical systems.</p> <p>In classical Hamiltonian mechanics, one models the non-statistical state of a system as a point in phase space $\mathcal P$. If the configuration space (space of <em>spatial</em> positions) of the system is $N$-dimensional, then the phase space is $2N$-dimensional, because the state of the system is described both by its position and its momentum. The time evolution of the system is governed by the Hamiltonian $H$, a real-valued function defined on phase space, in terms of which the dynamical equations of the system, Hamilton's equations, are written.</p> <p>In quantum mechanics, one models a non-statistical state of a system as a point (vector) in a Hilbert space $\mathcal H$, a complex vector space that can be infinite-dimensional. The time evolution of the state of the system is governed by the Hamiltonian $\hat H$, a self-adjoint operator on $\mathcal H$, in terms of which the dynamical equation of the system, the Schrödinger equation, is written.</p> <p>When one moves to statistical mechanics, the state of a classical mechanical system is no longer modeled as a point in phase space, but rather as a probability density $\rho$ on phase space. This probability density encodes the fact that one is ignorant about the exact states (positions and momenta) of individual particles in the system, and can be thought of in terms of ensembles of identically prepared systems, and one becomes primarily concerned with computing statistical quantities like ensemble averages of given observables for a given phase density.
The phase density will take different forms based on the ensemble (namely, based on how the macroscopic state of the system is prepared), and for a given ensemble, one can define an object called the partition function which allows one to compute, for example, the ensemble average of any observable for a system in that ensemble.</p> <p>For example, the partition function for $N$ identical particles in the canonical ensemble in classical mechanics will be the following phase space integral: \begin{align} Z(\beta) = \frac{1}{N!h^{3N}}\int d^{3N}pd^{3N}q\, e^{-\beta H(p,q)} \end{align} where $\beta = 1/kT$.</p> <p>In quantum statistical mechanics, the state of the system is again no longer modeled in the same way (as a vector in Hilbert space), but rather as a non-negative self-adjoint operator $\hat \rho$ of unit trace called the density operator (or density matrix). As in the classical case, one uses this operator to determine statistical quantities such as ensemble averages of observables. In particular, one can again compute the partition function as in the classical case, but the expression will be different. Concretely, it is the Hilbert space trace of the (unnormalized) Boltzmann operator, of which the canonical density operator $\hat\rho = e^{-\beta \hat H}/Z$ is the normalized version; \begin{align} Z = \mathrm{tr}\, e^{-\beta \hat H} = \sum_i \langle i|e^{-\beta \hat H}|i\rangle \end{align} where $\{|i\rangle\}$ is a basis for the Hilbert space. </p> <p>In particular, for $N$ identical particles, one needs to be careful to distinguish between the computation one does with fermions, and that performed for bosons. For $N$ identical fermions, the Hilbert space of the system will be restricted to the antisymmetric subspace of the $N$-particle Hilbert space. On the other hand, if the system consists of identical bosons, then the Hilbert space of the system is restricted to be the symmetric subspace of the $N$-particle Hilbert space. It follows that, for example, when one determines the partition function for such systems, one needs to trace over the appropriate Hilbert space.
Since the antisymmetric and symmetric Hilbert spaces don't generally coincide, the partition functions for fermionic systems will generally be completely different than those of bosonic systems.</p>
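As a small illustration of the quantum trace formula (my own sketch, not part of the answer above): for a single harmonic-oscillator spectrum $E_n = (n+\tfrac12)\hbar\omega$, the truncated trace sum converges to the closed form $e^{-b/2}/(1-e^{-b})$ with $b = \beta\hbar\omega$:

```python
import math

# Z = tr exp(-beta*H) evaluated as a truncated sum over the spectrum
# E_n = (n + 1/2) * hbar * omega, compared with the geometric-series
# closed form exp(-b/2) / (1 - exp(-b)), b = beta*hbar*omega.
# The values of b below are arbitrary.
def Z_trace(b, n_max=300):
    return sum(math.exp(-b * (n + 0.5)) for n in range(n_max))

def Z_closed(b):
    return math.exp(-b / 2.0) / (1.0 - math.exp(-b))

for b in (0.5, 1.0, 3.0):
    print(b, Z_trace(b), Z_closed(b))
```

For identical particles one would instead sum over the symmetrized or antisymmetrized many-body spectrum, which is where the Bose/Fermi distinction enters.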
989
statistical mechanics
Where does the Maxwell-Boltzmann distribution come from?
https://physics.stackexchange.com/questions/91708/where-does-the-maxwell-boltzmann-distribution-come-from
<p>I understand that <a href="http://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann_distribution" rel="nofollow">Maxwell-Boltzmann distributions</a> arise for distributions of weakly interacting particles at equilibrium. But I'd like to know if there's a deeper reason behind why they are specifically Maxwellian. </p> <p>I apologize if the question is poorly formed, it just popped into my head and I thought I'd ask about it.</p>
<p>Maxwell derived it from simple assumptions about collisions of the molecules. If the interaction is weak (decreases fast enough with distance), use of the Boltzmann-Gibbs probability distribution of states of the molecule</p> <p>$$ \rho(\mathbf r,\mathbf p) = \frac{e^{-\frac{E(\mathbf r,\mathbf p)}{k_B T}}}{Z}, $$ where $$ Z = \int e^{-\frac{E(\mathbf r,\mathbf p)}{k_B T}}\,d^3\mathbf r\,d^3\mathbf p, $$</p> <p>should be valid and from this one may directly derive the M-B probability distribution $f(v)$ for speeds.</p>
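Concretely, substituting the kinetic energy into the Boltzmann weight and including the $4\pi v^2$ phase-space factor gives the speed distribution; a quick quadrature sketch of my own (units $m = k_B T = 1$, grid choices arbitrary) confirms it is normalized with $\langle v^2\rangle = 3kT/m$:

```python
import math

# Maxwell speed distribution in units m = kT = 1:
#   f(v) = 4*pi*v^2 * exp(-v^2/2) / (2*pi)^(3/2).
# Check by direct quadrature that it integrates to 1 and that
# <v^2> = 3 kT / m = 3 (equipartition: kT/2 per component).
def mb_moments(v_max=12.0, n=200001):
    dv = v_max / (n - 1)
    norm = mean_v2 = 0.0
    for k in range(n):
        v = k * dv
        f = 4.0 * math.pi * v * v * math.exp(-0.5 * v * v) / (2.0 * math.pi) ** 1.5
        norm += f * dv
        mean_v2 += v * v * f * dv
    return norm, mean_v2

norm, mean_v2 = mb_moments()
print(norm, mean_v2)
```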
990
statistical mechanics
&#39;Fermi-Dirac&#39;-like occupation probability at high temperature
https://physics.stackexchange.com/questions/93075/fermi-dirac-like-occupation-probability-at-high-temperature
<p>Consider an ensemble of $N\to\infty$ free particles, each of which can assume energy states $E_i\in\{0,E\}$. Using the canonical ensemble one can compute the occupation probability for a single of those particles to be in the excited state $E_i=E$ (or equivalently the expectation value for what fraction of all particles is in the excited state). The result is:</p> <p>$$n_T(E)=\frac{1}{e^{\frac{1}{k_B T}E}+1}$$</p> <p>Now, if we check this expression in the limit $T\to 0$, we properly obtain $n_0(E)=0$, telling us that at low temperatures almost no particles will be in the excited energy state. But then, in the opposite limit $T\to\infty$ we get $n_\infty(E)=1/2$, so apparently at infinite temperature there will be equally many particles in the ground and the excited state! I kind of feel like all the particles should go into the excited state for $T\to\infty$, so that this goes against intuition. But maybe I am wrong? What should I expect to happen for $T\to\infty$?</p>
<p>Perhaps the following (which basically involves investigating what happens for a general system with a discrete energy spectrum) will help.</p> <p>The canonical partition function for a quantum system with discrete spectrum $\{E_n\}$ is \begin{align} Z = \sum_n e^{- E_n/(kT)} \end{align} and the population fraction of the systems in the ensemble with energy $E_n$ is given by \begin{align} p_n = \frac{e^{-E_n/(kT)}}{Z}. \end{align} Now, consider two energy levels $E_n$ and $E_m$; the ratio of their population fractions is \begin{align} \frac{p_n}{p_m} = e^{-(E_n - E_m)/(kT)} \end{align} Now here's the key point. As long as the difference $E_n-E_m$ is finite (which of course it will be for any two energies in the spectrum), the $T\to\infty$ limit of this expression always gives $1$! The difference in the energies gets "washed out" by the largeness of $T$. This tells us that at high temperature, any two levels in the spectrum will have an equal likelihood of being populated!</p> <p>In particular, if the system has a finite-dimensional Hilbert space, say of dimension $N$, then the probabilities must add to $1$ and must all be equal in the high-temperature limit; \begin{align} p_1+p_2+\cdots+p_N = 1, \qquad \text{$p_n = p_m$ for all $m,n\in\{1,\dots,N\}$} \end{align} which gives $p_n = 1/N$ for all $n\in\{1,\dots, N\}$.</p> <p>Now, to complete your intuition, you simply need to understand "why" the partition function looks the way it does, but that's another story altogether for which, admittedly, my own intuition isn't the greatest.</p>
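For the two-level system in the original question, the same limit can be tabulated directly; a minimal sketch of my own (units with $k_B = E = 1$):

```python
import math

# Excited-state fraction n_T(E) = 1 / (exp(E/(k_B*T)) + 1) for the
# two-level system in units k_B = E = 1.  It rises monotonically from 0
# toward the high-temperature plateau of 1/2 and never exceeds it:
# equal populations, not full inversion, is the T -> infinity limit.
def excited_fraction(T):
    return 1.0 / (math.exp(1.0 / T) + 1.0)

for T in (0.1, 1.0, 10.0, 1000.0):
    print(T, excited_fraction(T))
```

Populations above $1/2$ would require a negative temperature, which is exactly what an inverted two-level system corresponds to.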
991
statistical mechanics
Classical regime for Fermi-Dirac and Bose-Einstein gases
https://physics.stackexchange.com/questions/93638/classical-regime-for-fermi-dirac-and-bose-einstein-gases
<p>I'm studying statistical mechanics, in particular the classical regime for Fermi–Dirac and Bose–Einstein gases. The time-averaged occupation numbers in FD/BE statistics are: $$ \langle n_\epsilon\rangle_{FB} = \frac{1}{e^{(\epsilon-\mu)\beta}\pm1} $$ For Boltzmann statistics: $$ \langle n_\epsilon \rangle_B = e^{(\mu-\epsilon)\beta} $$ How can one work out a nice condition for the classical regime, in which $ \langle n_\epsilon\rangle_{FB} \rightarrow \langle n_\epsilon\rangle_B $?</p> <p>An obvious option is $e^{\frac{(\epsilon-\mu)}{kT}}\gg1$. However, I don't really like it, since it implies convergence at low temperature. Moreover, I'm expecting an $\epsilon$-free asymptotic expression in terms of temperature and density.</p> <hr> <p>@Adam: I've read your comment again and things are much clearer now :)! Here's what I've got:</p> <p>I'll assume $ \beta|\mu|\gg1 $ and $\mu&lt;0 $, or $z \rightarrow 0 $.</p> <p>In terms of $z$:</p> <p>$$ \langle n_\epsilon\rangle_{FB} = \frac{1}{\frac{e^{\epsilon\beta}}{z}\pm1} \, \,\underrightarrow{z\rightarrow0} \, \,\langle n_\epsilon\rangle_{B}$$</p> <p>Since $z=\lambda^3_t \rho$, I can say FD/BE gases behave like a classical one when the particle's thermal wavelength is small compared to the typical interparticle distance. Almost the "low density, high temperature" condition I was looking for.</p> <p>At low temperature, Boltzmann statistics loses physical meaning (for example, it's easy to recover the classical Sackur–Tetrode entropy from its thermodynamics). Approximating in this scenario, although it may look mathematically legitimate, is conceptually wrong. Quantum statistics have to be handled carefully on their own.</p> <p>Am I doing it right :)?</p> <p>Sorry for the poor English. Thank you so much.</p>
992
statistical mechanics
Boltzmann–Gibbs-distribution as resulting from a limiting density of states?
https://physics.stackexchange.com/questions/95174/boltzmann-gibbs-distribution-as-resulting-from-a-limiting-density-of-states
<p>I'm interested in the relation between the probability distribution $p_i$ over states of a system on the one side and the density of states $\rho(\eta)$ of its environment. (Meaning, $\int_{\eta_a}^{\eta_b} \rho(\eta) ~ \mathrm{d} \eta$ is the number of environment states with energies in the interval $[\eta_a, \eta_b]$.)</p> <p>If the whole (system + environment) is energetically closed ("isolated") with a total energy $E = e + \eta$, but system and environment are in thermal equilibrium (i.e. the whole is described by the microcanonical ensemble), then it holds $$ p_i = \frac{ \rho(E - e_i) }{ \sum_i \rho(E - e_i) }. $$</p> <p>This means, the probability distribution over states of the system is determined by<br> a) something that only characterizes the energetic <em>structure</em> of the system, the $e_i$s<br> b) something that only characterizes the energetic <em>structure</em> of the environment, $\rho(\eta)$, and<br> c) the total energy $E$.</p> <p>This relation holds generally, for arbitrarily small or large systems and/or environments. Please note that we have not yet taken any limits!</p> <p>If we now consider the thermodynamic limit, i.e. an environment composed of an infinite number of subsystems, the probability distribution $p_i$ over states of the system becomes the Boltzmann–Gibbs-distribution (aka canonical ensemble) $$ p_i = \frac{ \exp(- \beta ~ e_i) }{ \sum_i \exp(- \beta ~ e_i) }, $$ where the sum in the denominator is called the partition function. Using the first relation above, this distribution could now be interpreted as corresponding to a <em>limiting density of states of the environment</em> of the form $$ \rho(\eta) \propto \exp( \beta \eta ) $$ which characterizes the "infinite environment". However, the expression refers to the parameter $\beta$ of the Boltzmann–Gibbs distribution, which represents the temperature and depends on the total energy $E$ (per subsystem). 
Whereas in the finite case $E$ only serves to connect $\rho(\eta)$ and $p_i$, it here defines $\rho(\eta)$ itself.</p> <p>To me this suggests that it does not make sense to characterize an infinite environment by a density of states — but maybe there's some way around this? Or is there a mistake in the derivation somewhere else?</p>
<p>When one proves that a small part of a greater system is described by a canonical ensemble, even though the greater system is described by a microcanonical ensemble, the key point is that the density of states of the greater system has the exponential form you mention, <em>over a certain interval of energy</em>.</p> <p>Specifically, what is important is that $\log \rho(\eta) = {\rm const} + \beta \eta$ over the range $\eta = \langle\eta \rangle \pm \Delta e$, where $\Delta e$ is the energy fluctuation of the small system, and $\langle \eta \rangle = E - \langle e \rangle$ is the expected energy. The value of $\rho(\eta)$ is not really important for energies that are very far outside this range, since those energies never occur.</p> <p>In practice the form of $\rho(\eta)$ is usually something like $\log\rho(\eta) \approx {\rm const} + N \log \eta$ for some very large $N$ (such a form is found for gases, in particular). This log can be Taylor-expanded to yield the needed form for the canonical ensemble. For a large environment (one that is essentially in the thermodynamic limit and much larger than the attached system), the first-order expansion is very accurate over the relevant energy range.</p>
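<p>The claim in the last paragraph is easy to check numerically. A minimal sketch (the toy numbers and the two-level system are my own choices, not from the answer): take a gas-like environment with $\log\rho(\eta) = N\log\eta$, compute the exact finite-environment probabilities $p_i \propto \rho(E - e_i)$, and compare them to the canonical form with $\beta = \mathrm{d}(\log\rho)/\mathrm{d}\eta$ evaluated at $\eta = E$:</p>

```python
import math

# Toy model (illustrative numbers, not from the answer): a gas-like
# environment with log rho(eta) = N log eta, and a two-level system e in {0, 1}.
N = 10_000            # environment "size"
E = 2.0 * N           # total energy
levels = [0.0, 1.0]

# Exact finite-environment probabilities: p_i proportional to rho(E - e_i)
log_w = [N * math.log(E - e) for e in levels]
m = max(log_w)
w = [math.exp(x - m) for x in log_w]
p_exact = [x / sum(w) for x in w]

# Canonical approximation with beta = d(log rho)/d(eta) at eta = E
beta = N / E
z = [math.exp(-beta * e) for e in levels]
p_canon = [x / sum(z) for x in z]

for pe, pc in zip(p_exact, p_canon):
    print(f"exact {pe:.6f}   canonical {pc:.6f}")
```

<p>With $N = 10^4$ the two distributions agree to a few parts in $10^6$; shrinking $N$ makes the higher-order Taylor terms visible, which is exactly the finite-environment correction the answer describes.</p>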
993
statistical mechanics
The Maxwell and the Boltzmann distributions
https://physics.stackexchange.com/questions/103453/the-maxwell-and-the-boltzmann-distributions
<p>I am trying to understand where the Boltzmann distribution comes from. I recently learned some interesting things of which my interpretation follows below. Did I interpret correctly? If so, is this all there is to it, or is this only part of the story?</p> <p>When sampling coordinates in a high-dimensional Euclidean space as independent and identically distributed Gaussians, you are essentially uniformly sampling on a sphere in that space. Maxwell and Boltzmann were well aware of this.</p> <p>The equiprobability principle loosely asserts that configurations at the same energy occur with equal probability.</p> <p>In an $N$ particle system in 3-space at constant total kinetic energy $E_\text{kin}$, it can be derived that the equiprobability principle is consistent in the thermodynamic limit $N\to\infty$ with each velocity component of each constituent particle being independently distributed with probability distribution </p> <p>$$\pi(v_i^\alpha) \propto e^{-\frac12{m(v_i^\alpha)^2\over k_BT}}= e^{-{E_\text{kin, i}^\alpha\over k_BT}}$$</p> <p>where $i$ indexes the constituent particles and $\alpha$ the coordinates; namely, this leads to a distribution of $r = \sqrt{\sum\|v_i\|^2}$ of the form</p> <p>$$\pi(r) \propto r^{3N-1}e^{-\frac12 m{r^2\over k_BT}} = r^{3N-1}e^{-{E_\text{kin}\over k_BT}}$$</p> <p>($3N$ is the total number of velocity degrees of freedom). 
For large $N$, this distribution peaks sharply around $r = \sqrt{(3N - 1){k_BT\over m}} \approx \sqrt{3N{k_BT\over m}}$ and since the average kinetic energy per particle is $E_\text{kin,i} = {3\over2}k_BT$, this is equal to $r\approx\sqrt{{2\over m}E_\text{kin}}$, at which value indeed the kinetic energy is $E_\text{kin}$.</p> <p>Boltzmann postulated that this still holds when the total energy is non-constant, and not only for components of individual particles, but for entire subsystems, so that the probability of any configuration of energy $E$ becomes proportional to </p> <p>$$\pi(E) \propto e^{-{E\over k_BT}}$$</p>
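<p>The sharp concentration of $\pi(r)$ described above can be verified by direct sampling. A small sketch (the choice of units $m = k_BT = 1$ is mine, for illustration): draw the $3N$ velocity components as independent Gaussians and check that $r$ concentrates near $\sqrt{3N}$:</p>

```python
import math
import random

random.seed(0)

# Units with m = kB*T = 1 (an illustrative assumption): each of the 3N
# velocity components is an independent unit-variance Gaussian, and
# r = sqrt(sum v^2) should concentrate near sqrt(3N).
N = 200               # particles, so 3N = 600 components
trials = 2000

rs = []
for _ in range(trials):
    r2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(3 * N))
    rs.append(math.sqrt(r2))

mean_r = sum(rs) / trials
predicted = math.sqrt(3 * N)   # ~ sqrt(3N kB T / m) in these units
print(mean_r, predicted)
```

<p>The relative width of the sampled $r$ values shrinks like $1/\sqrt{3N}$, which is the "essentially uniform on a sphere" picture from the question.</p>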
994
statistical mechanics
Statistical mechanics of a coin toss
https://physics.stackexchange.com/questions/108156/statistical-mechanics-of-a-coin-toss
<p>I'd like to ask some questions about flipping two coins related to statistical mechanics, e.g. microcanonical distribution, phase space distribution function etc... after I rephrase the coin flipping problem into the language of statistical mechanics.</p> <p>In probability theory, given the following problem</p> <ul> <li>Random experiment: Toss two coins</li> <li>Example of an Outcome: $10 = (Heads, Tails)$</li> <li>Sample space: $S = \{11,10,01,00\}$, $|S| = 4$</li> <li>Examples of Events: 2 Heads $= 2H = \{11\}$, $|2H| = 1$, $1H = \{10,01\}$, $|1H| = 2$, $0H = \{00\}$, $|0H| = 1$ </li> </ul> <p>I'd like to translate this to statistical mechanics as much as possible, in which:</p> <ul> <li>A microstate is an element of the sample space (right?), e.g. $10$ or $01$.</li> <li>A macrostate is an event (a subset of the sample space, right?), e.g. $1H = \{10,01\}$.</li> <li>The statistical weight (statistical probability) of a macrostate is the cardinality of the event, e.g. $|1H| = 2$.</li> <li>The equilibrium distribution is the most likely macrostate, i.e. the macrostate with the highest statistical weight, i.e. the event with the highest cardinality, e.g. $1H = \{10,01\}$ since $|1H| = 2$.</li> </ul> <p>We can find the Maxwell-Boltzmann distribution function $n_i$ for this system by extremizing </p> <p>$$w(n) = \text{number of microstates with } n \text{ heads} = \tfrac{2!}{n!(2-n)!}= \tfrac{2!}{n_1!n_2!}$$</p> <p>with respect to $n_i$ given the constraint equation $n_1 + n_2 = 2$, showing $n_1 = e^0 = 1$ maximizes $w$; $w(1) = |1H| = |\{10,01\}| = 2$ is the maximum, thus the entropy $S = \ln(w) = \ln(2!)$ is at its largest and the system is most disordered.
</p> <ol> <li>What is the microcanonical distribution of a random experiment in which you toss 2 coins?</li> <li>What is the canonical distribution for this experiment?</li> <li>Can I choose both, or is there an example of when I should use one or the other, related to this example?</li> <li>How do I find its phase space distribution function $\rho = \rho(p,q)$?</li> <li>Are these stupid questions? If they are, why? Is there a way to make it so that I can derive the phase space distribution function? As far as I can see I think I'm supposed to add another Lagrange multiplier as a way to incorporate the energy, then after expressing $n$ in terms of the $\varepsilon _i$'s I can take a derivative of $w$ with respect to $E$ (so long as I replace $\varepsilon _1 = (E - n_2 \varepsilon _2)/n_1$ and $\varepsilon _2 = (E - n_1 \varepsilon _1)/n_2$ in the Maxwell-Boltzmann distribution that I'm plugging back in to $w$, but that seems crazy and seems to make no sense since if $E$ varies then shouldn't $\varepsilon _1$ and $\varepsilon _2$ also vary? This idea assumes they stay fixed...) to get $\frac{dw}{dE} = \int \delta [H - E]dpdq$, but it seems logically flawed as I've described and even if it worked how would you use it to determine $\rho$ inside the integral? I'm not even sure if this setup allows for the micro or just canonical distribution anyway, hence the question...</li> </ol> <p>Thanks!</p>
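<p>The multiplicity count in the question can be reproduced directly. A small sketch (generalizing from 2 coins to $N$ coins is my own addition, for illustration): the statistical weight of the macrostate with $n$ heads is the binomial coefficient, and the most likely macrostate maximizes it:</p>

```python
import math

# The question's w(n) is the binomial multiplicity; generalized here from
# 2 coins to arbitrary N (the generalization is illustrative, not from the post).
def multiplicity(N, n_heads):
    # number of head/tail sequences (microstates) with exactly n_heads heads
    return math.comb(N, n_heads)

N = 2
weights = {n: multiplicity(N, n) for n in range(N + 1)}
most_likely = max(weights, key=weights.get)
entropy = math.log(weights[most_likely])       # S = ln w for the top macrostate

print(weights)                  # {0: 1, 1: 2, 2: 1}
print(most_likely, entropy)     # macrostate 1H maximizes w; S = ln 2
```

<p>For $N = 2$ this reproduces the question's numbers: $w(1) = 2$ is the maximum and $S = \ln 2$.</p>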
995
statistical mechanics
What is an intuitive explanation for the fact that the Maxwell-Boltzmann distribution of energies is independent of mass?
https://physics.stackexchange.com/questions/119739/what-is-an-intuitive-explantion-for-the-fact-that-the-maxwell-boltzmann-distribu
<p>If you take the Maxwell-Boltzmann distribution of velocities (which depends on the mass) and substitute $v=\sqrt{\frac{2E}{m}}$ you get the distribution for the energies, which turns out to be independent of mass. What physical reality does this reflect? Why is the velocity distribution mass dependent, whereas the energy distribution is independent?</p>
<p>The simplest way to think about this is that, when considering the Maxwell-Boltzmann distribution of velocities, the sole energy scale in the system (as is usually the case for systems in thermal equilibrium) is set by the temperature as $k_BT$. So the probabilities ($p(E)\,\mathrm{d}E$), being dimensionless, can only be a function of the scaled energy $E/k_BT$, which rules out any mass term in the expression (the particle mass only sets a momentum scale, not an energy scale, in the system). </p>
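<p>A quick numerical illustration of this point (the setup and numbers are my own, not from the answer): sampling Maxwell-Boltzmann velocities for two very different masses at the same temperature gives velocity distributions of very different widths, yet the same mean kinetic energy $\tfrac{3}{2}k_BT$:</p>

```python
import math
import random

random.seed(1)

kT = 1.0          # illustrative units with kB*T = 1
trials = 20000

def mean_kinetic_energy(m):
    # Each velocity component is Gaussian with variance kB*T/m, so the
    # velocity distribution depends on m while the energy distribution does not.
    sigma = math.sqrt(kT / m)
    total = 0.0
    for _ in range(trials):
        vx, vy, vz = (random.gauss(0.0, sigma) for _ in range(3))
        total += 0.5 * m * (vx * vx + vy * vy + vz * vz)
    return total / trials

e_light = mean_kinetic_energy(m=1.0)
e_heavy = mean_kinetic_energy(m=100.0)
print(e_light, e_heavy)   # both close to 3/2 kT, independent of mass
```

<p>The heavier particles move a hundred times "tighter" in velocity space, but the mass cancels exactly in $E = \tfrac12 m v^2$ when $v \sim \sqrt{k_BT/m}$.</p>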
996
statistical mechanics
Derivation of Landau diamagnetism
https://physics.stackexchange.com/questions/122081/derivation-of-landau-diamagnetism
<p>In deriving the magnetic susceptibility of free electrons, we need to calculate</p> <p>$$\chi = \left( \frac{\partial M}{\partial H} \right)_N = - \left( \frac{\partial^2 F}{\partial H^2} \right)_N.$$</p> <p>Here, $F = E- TS $ is the free energy. It should be emphasized that the particle number $N$ is held fixed. </p> <p>However, in the statistical mechanics book by Landau, he calculated $ - \left( \frac{\partial^2 \Omega}{\partial H^2} \right)_\mu $, where $\Omega = F - \mu N $ is the grand potential. See his equation 59.11 on page 174. The reason seems to be that it is much easier to calculate $\Omega$ as a function of $T, H, \mu$ than $F$ as a function of $T, H, N$. </p> <p>But, is the calculated quantity the one we really want?</p>
<p>$$-\left(\frac{\partial^2\Omega}{\partial H^2}\right)_\mu=\left(\frac{\partial M}{\partial H}\right)_\mu=\left(\frac{\partial M}{\partial H}\right)_N+\left(\frac{\partial M}{\partial N}\right)_H\left(\frac{\partial N}{\partial H}\right)_\mu$$</p> <p>In principle, there is a difference between these quantities, at least in ferromagnets (otherwise the magnetization does not depend on $N$, nor $N$ on $H$). I further suspect that when there is a difference, you typically want the derivative for an open system (constant chemical potential) rather than a closed one (constant particle number).</p>
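<p>The identity above follows from standard grand-potential thermodynamics; a short sketch filling in the intermediate step (my own, using only the textbook differential of $\Omega$):</p>

```latex
% Differential of the grand potential:
d\Omega = -S\,dT - M\,dH - N\,d\mu
\quad\Longrightarrow\quad
M = -\left(\frac{\partial \Omega}{\partial H}\right)_{T,\mu}.
% Differentiate once more at fixed \mu, then regard M = M(H, N(H,\mu))
% and apply the chain rule:
-\left(\frac{\partial^2 \Omega}{\partial H^2}\right)_\mu
= \left(\frac{\partial M}{\partial H}\right)_\mu
= \left(\frac{\partial M}{\partial H}\right)_N
+ \left(\frac{\partial M}{\partial N}\right)_H
  \left(\frac{\partial N}{\partial H}\right)_\mu.
```

<p>The second term is what separates the open-system susceptibility from the closed-system one; it vanishes whenever $M$ does not depend on $N$ or $N$ does not respond to $H$.</p>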
997
statistical mechanics
Statistical mechanics: Meaning of &quot;accessible&quot; in &quot;accessible microstates&quot;
https://physics.stackexchange.com/questions/133608/statistical-mechanics-meaning-of-accessible-in-accessible-microstates
<p>What does "accessibility" mean in statistical mechanics?</p> <p>Is it an equivalent concept to accessibility in mathematical control theory?</p> <p>I'll provide an example: When two systems A and B interact on a subspace of their respective spaces, i.e. where they overlap in space, does accessibility of states of B from A mean the part where they overlap? </p>
<p>When people say "accessible microstates" it means "microstates consistent with a set of <em>constraints</em> or <em>conditions</em> which you are supposed to keep in the back of your mind".</p> <p>The most common example of a constraint is a fixed amount of energy. A closed physical system with exactly one constraint, fixed total energy, is called the <a href="http://en.wikipedia.org/wiki/Microcanonical_ensemble" rel="nofollow">microcanonical ensemble</a>.</p> <p>Another type of constraint/condition could be contact with a large thermal reservoir $R$ at temperature $T$. Such a system $S$ in contact with $R$ is modeled by the <a href="http://en.wikipedia.org/wiki/Canonical_ensemble" rel="nofollow">canonical ensemble</a>.</p> <p>So, "accessible" just means "consistent with the constraints and conditions at hand".</p>
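<p>A concrete toy example of "consistent with the constraints" (my own construction, not from the answer): take three two-level "spins" with site energies 0 or 1 and fix the total energy, as in a tiny microcanonical ensemble. The accessible microstates are exactly those satisfying the energy constraint:</p>

```python
from itertools import product

# Toy microcanonical ensemble: three two-level spins, each contributing
# energy 0 or 1, constrained to a fixed total energy E.
E = 1
microstates = list(product((0, 1), repeat=3))          # all 2^3 = 8 microstates
accessible = [s for s in microstates if sum(s) == E]   # those meeting the constraint

print(len(microstates), len(accessible))   # 8 total, 3 accessible
print(accessible)                          # [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
```

<p>In the microcanonical ensemble each of those 3 accessible microstates would be assigned equal probability; the other 5 are simply inconsistent with the constraint and get probability zero.</p>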
998
statistical mechanics
Physical meaning of coefficient of variation
https://physics.stackexchange.com/questions/139866/physical-meaning-of-coefficient-of-variation
<p>While doing a course in statistical physics I came across a term called coefficient of variation. Now according to <a href="http://en.wikipedia.org/wiki/Coefficient_of_variation" rel="nofollow">Wikipedia</a>, coefficient of variation</p> <blockquote> <p>shows the extent of variability in relation to mean of the population</p> </blockquote> <p>However I don't see the connection to physics in this. For example considering nuclear decay, what is the physical meaning of coefficient of variation in this process?</p>
<p>Suppose you have some radioactive material with a half life $\tau_{1/2}$. What that term "half life" means is that the amount of material $m(t)$ you have left after a time $t$ is</p> <p>$$m(t) = m(0)\, 2^{-t/\tau_{1/2}} = m(0) \exp[-(\ln 2)\, t / \tau_{1/2}] . \qquad (*)$$</p> <p>However, the material is made up of discrete atoms and each one decays in a random way. Therefore, it's not 100% guaranteed that after a time $\tau_{1/2}$ there is <em>exactly</em> half as much material left. It could be a bit more or a bit less. In other words, on any particular trial equation $(*)$ will not necessarily be satisfied. Equation $(*)$ tells you the <em>average</em> amount of material left over. It means basically this:</p> <blockquote> <p>Get a large number $N$ of independent lumps of material with initial mass $m_i(0)$. Wait a time $\tau_{1/2}$. The resulting $m_i(\tau_{1/2})$ values are <em>random</em> but with a probability distribution whose mean $\mu$ is given by $(*)$.</p> </blockquote> <p>Since the remaining amount of material is given by a probability <em>distribution</em> you can ask for more than just its mean value. In particular, you could ask for the entire distribution, i.e. the probability of finding any particular remaining amount of material after a time $t$. This might be denoted $P(m|m(0),t)$, i.e. "The probability of having an amount <em>m</em> left over, given that I started with an amount $m(0)$ and am now looking a time $t$ later".</p> <p>Anyway, one useful property of a probability distribution is its width $\sigma$, also called the "standard deviation". The coefficient of variation is defined as $c \equiv \sigma/\mu$. This just answers the question "how wide is my distribution as compared to its mean?" It's a useful quantity because it allows you to easily relate the variability of the process to the mean behavior. In other words, it tells you how predictable the process really is, i.e. 
how closely it sticks to its average behavior.</p> <h1>Example: random walk</h1> <p>On each step I either move one space to the right or stand still, with 1/2 probability of each. You may have learned that the probability distribution for my position after $N$ steps is a binomial distribution. The mean $\mu$ is simply $\mu = N/2$ because I have a 1/2 chance of progressing on each step. You can see that, in a sense, my position becomes more uncertain as time goes on because my $\sigma$ increases with $N$ (here $\sigma = \sqrt{N}/2$). However, the variability of my position compared to my mean position is actually going down because $\sigma/\mu \propto 1/\sqrt{N}$. Intuitively this is saying that while the variability is going up, it matters less as compared to the scale of the problem. It's like saying that a 1 mile uncertainty in the distance from you to the sun is actually a lot less important than a 1 centimeter uncertainty in the distance between two atoms.</p> <p>I brought up the random walk example because it comes up all over experimental physics. If you integrate a signal longer you actually get <em>more noise</em> ($\propto \sqrt{t}$), but you get more signal <em>faster</em> ($\propto t$) than you get more noise, so the signal-to-noise <em>ratio</em> ($\propto \sqrt{t}$) gets better.</p>
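<p>The $\sigma/\mu \propto 1/\sqrt{N}$ scaling for the walk can be written down directly, since the position after $N$ fair "move or stay" steps is Binomial$(N, 1/2)$. A small sketch of my own:</p>

```python
import math

# Position after N fair "move or stay" steps is Binomial(N, 1/2):
# mu = N/2, sigma = sqrt(N)/2, so c = sigma/mu = 1/sqrt(N).
def coefficient_of_variation(N, p=0.5):
    mu = N * p
    sigma = math.sqrt(N * p * (1 - p))
    return sigma / mu

for N in (10, 1000, 100000):
    print(N, coefficient_of_variation(N))   # falls off like 1/sqrt(N)
```

<p>So the absolute spread $\sigma$ grows with $N$, while the spread <em>relative to the mean</em> shrinks, which is the answer's point about predictability.</p>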
999