category | title | question_link | question_body | answer_html | __index_level_0__
|---|---|---|---|---|---|
differential equations
|
Why differential equations?
|
https://physics.stackexchange.com/questions/349226/why-differential-equations
|
<p>Natural phenomena (e.g. heat flow) and systems (e.g. electrical circuits) are usually described using differential equations. Why is that?</p>
<p>Also, usually people use "<em>constant</em> coefficients <em>linear</em> differential equations" of low order (one or two, rarely three). Is this use (constant coefficients, linear, low order) justified by the adequacy with the modeled phenomena or just by model simplification?</p>
<p>There is also this seemingly equivalent version used when dealing with discrete systems (i.e. the independent variable is discrete) called "difference equation" instead of "differential equation" such that:</p>
<ul>
<li><p>Constant coefficients <strong>differential</strong> equation:</p>
<p><span class="math-container">$$\sum_{k=0}^N a_k \frac{d^ky(t)}{dt^k}= \sum_{k=0}^Mb_k \frac{d^kx(t)}{dt^k}$$</span></p>
</li>
<li><p>Constant coefficients <strong>difference</strong> equation:</p>
</li>
</ul>
<p><span class="math-container">$$\sum_{k=0}^N a_k y[n-k]= \sum_{k=0}^Mb_k x[n-k]$$</span></p>
<p>I can't see how the difference is equivalent to the derivative. I know that this might not be a physics question but any insights would be appreciated.</p>
|
<p>Given that time and space are believed to be continuous one would expect that the equations governing changes in time and space would reflect this continuity.</p>
<p>In other words, we can make sense of the concept of two points arbitrarily close in space or two moments arbitrarily close in time, something that difference equations do not capture.</p>
<p>The precise form of the differential equation depends on the physical phenomena and is not restricted to equations with constant coefficients.</p>
<p>(As an aside: the great mathematician Henri Poincaré was troubled by the quantized nature of some quantities (energy in particular), and asked whether this would imply rewriting the laws of physics in the form of difference equations.)</p>
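<p>The relation between the two forms can also be seen numerically (a sketch, not part of the original answer): discretizing the simple ODE $\dot{y} = -\lambda y$ with a forward difference gives the constant-coefficient difference equation $y[n] = (1-\lambda\Delta t)\,y[n-1]$, whose solution approaches the continuous one as $\Delta t \to 0$.</p>

```python
import math

def euler_decay(lam, dt, t_end, y0=1.0):
    """Iterate y[n] = (1 - lam*dt) * y[n-1], the forward-difference
    version of dy/dt = -lam*y, and return y at t_end."""
    y = y0
    for _ in range(int(round(t_end / dt))):
        y *= (1.0 - lam * dt)
    return y

exact = math.exp(-1.0)                 # solution of dy/dt = -y at t = 1
coarse = euler_decay(1.0, 0.1, 1.0)    # crude difference equation
fine = euler_decay(1.0, 0.001, 1.0)    # finer step: much closer to the ODE
print(abs(coarse - exact), abs(fine - exact))
```

<p>Shrinking the step makes the difference-equation solution converge to the differential-equation solution, which is exactly the sense in which the two descriptions are equivalent in the continuum limit.</p>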
| 1,000
|
differential equations
|
Differential Equations for Physicists
|
https://physics.stackexchange.com/questions/455312/differential-equations-for-physicists
|
<p>I find differential equations in physics to be quite challenging so I'm looking for a book to help me master them.</p>
<p>I'm familiar with solving ordinary differential equations via separation of variables but haven't really gone much further than that.</p>
<p>I was thinking about buying this: <a href="https://www.waterstones.com/book/differential-equations-for-dummies/steven-holzner//9780470178140?awc=3787_1547914453_50402e12ab1b834f04a3a61a1372e9b2&utm_source=259955&utm_medium=affiliate&utm_campaign=Genie+Shopping" rel="nofollow noreferrer">https://www.waterstones.com/book/differential-equations-for-dummies/steven-holzner//9780470178140?awc=3787_1547914453_50402e12ab1b834f04a3a61a1372e9b2&utm_source=259955&utm_medium=affiliate&utm_campaign=Genie+Shopping</a></p>
<p>However I'm open to recommendations on books that are <em>specifically</em> targeted to physics, or will help me in general to solve any differential equation.</p>
|
<p>Are you teaching yourself? Any university course in mathematics and physics for physicists will suffice to cope with your difficulties. However, and I can say this from my professional experience, the learning never ends. So be ready to learn from different sources, points of view, etc.</p>
| 1,001
|
differential equations
|
Accuracy of differential equations
|
https://physics.stackexchange.com/questions/178916/accuracy-of-differential-equations
|
<p>We use differential equations to model the world around us. For example, the logistic differential equation
$$\frac{dP}{dt} = rP\left(1-\frac PK\right)$$</p>
<p>is used to model population. However, it doesn't take into account things like climate, natural disasters, competition among other species, etc. Equations modelling forces (Newton's second law) don't really take into account <em>every</em> force acting on an object (i.e. electrical charge force between surrounding particles).</p>
<p>My question is: <strong>How accurate are differential equations really, and to what accuracy can we predict future circumstances and events from them?</strong></p>
|
<blockquote>
<p>How accurate are differential equations really, and to what accuracy can we predict future circumstances and events from them?</p>
</blockquote>
<p>Why does it matter that they are "differential equations"? Differential equations are just one type of model. The real question is how accurate theoretical models are.
The answer is that the ones you learn about in high school / college are those that have been found to be "good enough" for at least some practical applications in the past. They are unlikely to represent the state of the art, but they are a useful starting point.</p>
| 1,002
|
differential equations
|
Decouple differential equations
|
https://physics.stackexchange.com/questions/374895/decouple-differential-equations
|
<p>I have a system of two Second Order differential equations
$$r^2\ddot{r}−r^3(\dot{\varphi}^2+ω^2)=−GM$$
$$r \ddot{\varphi}+2 \dot{r}(\dot{\varphi}+\omega)=0 $$
using the conserved quantity $(\dot{\varphi}+\omega)r^2$, call it <em>Ω</em>.<br>
I have shown that it is indeed a conserved quantity, as its time-derivative is $r$ times the second equation and therefore zero. However, I don't know how this is supposed to help me decouple the two equations.
I would be very thankful for hints.</p>
|
<p>Since $h:=r^2(\dot{\varphi}+\omega)$ is conserved, $\frac{d}{dt}=(\frac{h}{r^2}-\omega)\frac{d}{d\varphi}$ and $\dot{r}=-h\frac{du}{d\varphi}$ with $u:=\frac{1}{r}+\frac{\omega r}{h}$ so $\ddot{r}=-h(\frac{h}{r^2}-\omega)\frac{d^2 u}{d\varphi^2}$. You'll want to rewrite your equations of motion in the coordinate system $(u,\,\varphi)$.</p>
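<p>The chain-rule identities above can be checked symbolically (a quick verification sketch, regarding $r$ as a function of $\varphi$ along the orbit; not part of the original answer):</p>

```python
import sympy as sp

t = sp.symbols('t')
h, omega = sp.symbols('h omega', positive=True)
phi = sp.Function('phi')(t)
r = sp.Function('r')(phi)          # r along the orbit, as a function of phi

# Conservation of h = r**2*(phidot + omega) gives phidot = h/r**2 - omega.
phidot = h / r**2 - omega

# rdot via the chain rule: dr/dt = (dr/dphi) * dphi/dt
rdot = sp.diff(r, phi) * phidot

# The claimed substitution: u = 1/r + omega*r/h, with rdot = -h * du/dphi
u = 1 / r + omega * r / h
claimed = -h * sp.diff(u, phi)

print(sp.simplify(rdot - claimed))   # 0, so the identity holds
```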
| 1,003
|
differential equations
|
Applications of delay differential equations
|
https://physics.stackexchange.com/questions/27143/applications-of-delay-differential-equations
|
<p>Being interested in the mathematical theory, I was wondering if there are up-to-date, nontrivial models/theories where delay differential equations play a role (PDEs, or more generally functional differential equations).</p>
<p>It is clear that</p>
<ul>
<li>in biological (population) models usually pregnancy introduces a delay term, or</li>
<li>in disease transition in networks the latent period introduces a delay, or</li>
<li>in engineering in feedback problems signal processing introduces the time delay.</li>
</ul>
<p>I would like to see a list of answers where each answer contains one reference/example.</p>
|
<p>In my corner of things what comes to mind is a recent paper by Atiyah and Moore <a href="https://arxiv.org/abs/1009.3176" rel="nofollow noreferrer">A Shifted View of Fundamental Physics</a>.</p>
| 1,004
|
differential equations
|
Recurrence differential equations
|
https://physics.stackexchange.com/questions/180701/recurrence-differential-equations
|
<p>We all know recurrence equations, e.g. the Fibonacci relation</p>
<p>$$F_{n+1} = F_{n} + F_{n-1}$$</p>
<p>In order to find a general expression for any $n$, we can use the <em>generating function</em> method </p>
<p>$$G(x) = \sum\limits_{n=0}^{\infty}F_{n}x^{n}$$</p>
<p>or one of its variations (<a href="http://en.wikipedia.org/wiki/Generating_function" rel="nofollow">Wikipedia</a>). However, in quantum mechanics we try to solve the Schrödinger equation </p>
<p>$$i\hbar \partial_{t}\left|\psi\right\rangle = \hat{H}\left|\psi\right\rangle$$
by means of expansion of the quantum state in orthonormal basis. For definiteness, lets consider double-mode bosonic system with fixed number of particles $N$. We can expand any quantum state in a Fock state basis</p>
<p>$$\left|\psi\right\rangle = \sum\limits_{k=0}^{N}c_{k}(t)\left|k,N-k\right.\rangle$$<br>
where </p>
<p>$$\left|k,N-k\right.\rangle = \frac{(\hat{a}^{\dagger})^{k}(\hat{b}^{\dagger})^{N-k}}{\sqrt{k!(N-k)!}} \left| 0,0\right\rangle$$
A double-mode bosonic system is equivalent to spin system with $s = N/2$, once we use Schwinger representation </p>
<p>$$\hat{S}_{x} = \frac{1}{2}(\hat{a}^{\dagger}\hat{b} + \hat{b}^{\dagger}\hat{a}), \ \ \hat{S}_{y} = \frac{1}{2i}(\hat{a}^{\dagger}\hat{b} - \hat{b}^{\dagger}\hat{a}),\ \ \hat{S}_{z} = \frac{1}{2}(\hat{a}^{\dagger}\hat{a} - \hat{b}^{\dagger}\hat{b}).$$</p>
<p>Consider the following hamiltonian </p>
<p>$$\hat{H} = \hat{S}_{x}^{2}.$$
If we use the state expansion we get the recurrence differential equation</p>
<p>$$\dot{c}_{k}(t) = A_{k}c_{k-2}(t) + B_{k}c_{k}(t) + A_{k+2}c_{k+2}(t),\\[3mm]
A_{k} = \sqrt{k(k-1)(N-k+2)(N-k+1)},\\[3mm]
B_{k} = k(N-k+1) + (k+1)(N-k).$$</p>
<p>Is there a general scheme for solving these kinds of recurrence differential equations? Generating functions or something else? Maybe there is a whole branch of mathematics that deals with this kind of problem?</p>
|
<p>I don't think you need any complicated apparatus. You can solve this problem simply by writing </p>
<p>$$
\dot{c}_k(t) = M_{kj} c_j(t).
$$</p>
<p>Now because $M$ is symmetric and real you can find a transformation $\tilde{c}_k$ of the $c_k$ that diagonalizes $M$ and leads to trivial differential equations for the transformed functions.</p>
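<p>The diagonalization route can be sketched numerically (an illustration with a small assumed symmetric matrix standing in for $M$, not the specific $A_k$, $B_k$ of the question):</p>

```python
import numpy as np

# A small real symmetric example matrix standing in for M (assumed for illustration).
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])
c0 = np.array([1.0, 0.0, 0.0])   # initial coefficients c_k(0)

# Diagonalize: M = V diag(w) V^T with V orthogonal (valid since M is symmetric).
w, V = np.linalg.eigh(M)

def c(t):
    """Solve c_dot = M c by evolving each eigenmode independently as exp(w_i t)."""
    return V @ (np.exp(w * t) * (V.T @ c0))

# Cross-check against a direct small-step Euler integration of c_dot = M c.
dt, ct = 1e-5, c0.copy()
for _ in range(int(0.5 / dt)):
    ct = ct + dt * (M @ ct)
print(np.max(np.abs(c(0.5) - ct)))  # small: the two solutions agree
```

<p>In the transformed variables $\tilde{c} = V^{\mathsf T} c$, each component obeys the trivial equation $\dot{\tilde{c}}_i = w_i \tilde{c}_i$, which is the point of the answer.</p>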
| 1,005
|
differential equations
|
Physics and Linear Differential Equations
|
https://physics.stackexchange.com/questions/95795/physics-and-linear-differential-equations
|
<p>Why, in physics, are most physical systems modelled by linear differential equations?</p>
|
<p>I think your qualification of "most" systems needs some clarification because really almost all of the classical universe is described by second-order, nonlinear partial differential equations. Fluids/liquids/gases and solids are described by the same set of second-order, nonlinear PDE's.</p>
<p>Linear equations, both linear PDE's and linear ODE's, show up often because they are a simplified approach to describe something in a tractable way. Nonlinear equations are very difficult to analyze or solve; linear ones are not nearly as hard. Nonlinear equations are difficult to approximate numerically; linear ones are not nearly as hard. So whenever possible, simplifications to things will be made to reduce them to linear equations provided the simplifications are justified and don't ignore fundamental physics (ie. give terrible answers). </p>
<p>Additionally, linear equations allow superposition of solutions. In fluids, for instance, the flow around a cylinder is very difficult to solve using the full Navier-Stokes equations. But if you reduce it to the linear potential flow equations, the solution is just the sum of a uniform flow solution and a doublet solution (or an irrotational vortex solution if the cylinder is spinning). So you can begin to build complicated solutions to complicated geometries and problems iff the equations are linear. </p>
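<p>The cylinder superposition can be sketched numerically (an illustration with assumed values for the free-stream speed and radius, not part of the original answer): the complex potential $w(z) = U(z + R^2/z)$ is the sum of a uniform flow $Uz$ and a doublet $UR^2/z$, and the resulting velocity has no component normal to the cylinder surface.</p>

```python
import math, cmath

U, R = 1.0, 1.0   # free-stream speed and cylinder radius (assumed values)

def velocity(z):
    """Velocity components (vx, vy) from w(z) = U*(z + R**2/z):
    vx - i*vy = dw/dz = U*(1 - R**2/z**2)."""
    v = (U * (1.0 - R**2 / z**2)).conjugate()
    return v.real, v.imag

# On the cylinder surface |z| = R the normal (radial) velocity must vanish:
for k in range(12):
    theta = 2 * math.pi * k / 12
    z = R * cmath.exp(1j * theta)
    vx, vy = velocity(z)
    v_radial = vx * math.cos(theta) + vy * math.sin(theta)
    assert abs(v_radial) < 1e-12   # boundary condition satisfied exactly

# Far from the cylinder the doublet dies off and only the uniform flow remains.
print(velocity(1000 + 0j))
```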
<p>As to why we use differential equations at all -- because we choose to. Many times we can use integral equations or integro-differential equations, all for the same physical problem. The choice depends on your methods of analysis and what you are trying to capture from the equations, but all are valid options.</p>
| 1,006
|
differential equations
|
Phase-amplitude stochastic differential equations
|
https://physics.stackexchange.com/questions/757179/phase-amplitude-stochastic-differential-equations
|
<p>In the book of <span class="math-container">$\textit{The Quantum World of Ultra-Cold Atoms and Light: Book 1 Foundations of Quantum Optics}$</span> by Peter Zoller and Crispin Gardiner on page 75, they derive the phase-amplitude stochastic differential equation for a thermalized oscillator.</p>
<p>From a complex Ornstein-Uhlenbeck process of the form
<span class="math-container">\begin{equation}
d\alpha=-(i\omega+\gamma/2)\alpha dt +\sqrt{\gamma n_{th}} dW_{t}
\end{equation}</span>
where <span class="math-container">$dW_t$</span> is a complex Wiener increment, they define two new variables such that <span class="math-container">$\mu+i\phi=\log \alpha$</span>. Then, by defining <span class="math-container">$a=e^\mu$</span>, they derive two real stochastic differential equations</p>
<p><span class="math-container">\begin{equation}
d a= \left(-\gamma/2 a + \frac{\gamma n_{th}}{4a}\right)dt + \sqrt{\frac{\gamma n_{th}}{2}} dW_{a}(t)
\end{equation}</span>
<span class="math-container">\begin{equation}
d \phi= -i\omega dt +\sqrt{\frac{\gamma n_{th}}{2}} \frac{dW_{\phi}(t)}{a}
\end{equation}</span></p>
<p>If we are on resonance, we can set <span class="math-container">$\omega=0$</span> and forget about the phase differential equation. The original complex-valued stochastic equation can be formally integrated to</p>
<p><span class="math-container">\begin{equation}
\alpha(t)=\alpha(0)e^{-\gamma/2 t} + \sqrt{\gamma n_{th}} \int_0^t e^{-\gamma/2(t-s)} dW(s)
\end{equation}</span></p>
<p>and from it, one can compute its mean and its covariance, <span class="math-container">$\overline{\alpha(t_1)\alpha(t_2)}$</span>.</p>
<p>My question is how to do it with the amplitude stochastic differential equation. I have been struggling with how to formally integrate the equation (mostly because of the $1/a$ term) and then find its covariance. Is there some way to relate the complex-valued covariance <span class="math-container">$\overline{\alpha(t_1)\alpha(t_2)}$</span> to the new variable <span class="math-container">$\overline{a(t_1)a(t_2)}$</span>?</p>
|
<p>Using the fact that:
<span class="math-container">$$
a=|\alpha|
$$</span>
you can calculate:
<span class="math-container">$$
\langle a(t_1)a(t_2)\rangle =\langle |\alpha(t_1)\alpha(t_2)|\rangle
$$</span>
You can calculate this in very special cases, but there is no simple general formula. Perhaps what is more relevant is to look at large separation times to extract the correlation time.</p>
<p>By the way, correlations are most relevant when the process is Gaussian, but here the amplitude is not a Gaussian process, so perhaps it's not the best quantity to compute.</p>
<p>Furthermore, physically, it is rather the square of the amplitude that is of interest <span class="math-container">$a^2=|\alpha|^2$</span> which is related to the number operator/energy. In this case the calculation of the square amplitude correlations is easier.</p>
<p>Hope this helps.</p>
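<p>A numerical route (a sketch with assumed parameter values, not from the original answer) is to simulate the complex Ornstein-Uhlenbeck process directly with an Euler-Maruyama scheme and estimate the amplitude correlation from sample paths, which sidesteps the awkward $1/a$ term entirely:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, n_th, omega = 1.0, 2.0, 0.0   # assumed parameters (on resonance)
dt, n_steps, n_paths = 1e-3, 4000, 2000

# Euler-Maruyama for d(alpha) = -(i*omega + gamma/2)*alpha dt + sqrt(gamma*n_th) dW,
# with a complex Wiener increment normalized so that E|dW|^2 = dt.
alpha = np.zeros(n_paths, dtype=complex)   # start all paths at alpha(0) = 0
a_t1 = None
for step in range(1, n_steps + 1):
    dW = (rng.normal(size=n_paths) + 1j * rng.normal(size=n_paths)) * np.sqrt(dt / 2)
    alpha += -(1j * omega + gamma / 2) * alpha * dt + np.sqrt(gamma * n_th) * dW
    if step == n_steps // 2:
        a_t1 = np.abs(alpha)               # amplitude a = |alpha| at t1
a_t2 = np.abs(alpha)                       # amplitude at t2 = 2*t1

# Sample estimate of the amplitude correlation <a(t1) a(t2)>
corr = np.mean(a_t1 * a_t2)
print(corr)
```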
| 1,007
|
differential equations
|
Monodromy matrix and differential equations
|
https://physics.stackexchange.com/questions/238521/monodromy-matrix-and-differential-equations
|
<p>What is the significance of the monodromy matrix in the context of differential equations? I have seen some papers (<a href="http://arxiv.org/abs/1303.6955" rel="nofollow">1</a>, <a href="http://arxiv.org/abs/1403.6829" rel="nofollow">2</a>, <a href="http://arxiv.org/abs/1510.06685" rel="nofollow">3</a>, etc.) in CFT which use the monodromy method to compute conformal blocks at large central charge. Can someone discuss (or give some basic references for) what this monodromy actually is, especially in the context of CFT? </p>
|
<p>I'm going to explain how the monodromy approach to the computation of the semiclassical conformal block arises.</p>
<p>The goal is to compute the conformal block corresponding to the exchange of operator $\mathcal{O}$ in the four-point function
$$
\langle V_1(z_1)V_2(z_2)V_3(z_3)V_4(z_4)\rangle,\qquad (1)
$$
in $V_1V_2-V_3V_4$ channel.
Here note that since we are talking about something determined completely by the conformal algebra, we can actually consider only the holomorphic problem. So here $V_i$ are the formal operators characterized by conformal weight $h_i$. </p>
<p>Now, some reparametrization of the problem is convenient,
$$
h_i=\frac{1}{b^2}\delta_i,\quad h_\nu=\frac{1}{b^2}\delta_\nu, \quad \delta_i=\frac{1-\lambda_i^2}{4},\quad \delta_\nu=\frac{1-\nu^2}{4},\\
c=1+6(b+b^{-1})^2,
$$
where $h_\nu$ is the dimension of exchanged $\mathcal{O}$.</p>
<p>We now consider the 5-point function
$$
\langle V_{(1,2)}(z)V_1(z_1)V_2(z_2)V_3(z_3)V_4(z_4)\rangle,
$$
where $V_{(1,2)}$ is a degenerate Virasoro field (I'm using the notation of Di Francesco, I believe). Of course, this field need not exist in the theory, we are just formally considering it here, since it reflects the properties of the conformal algebra. This correlator satisfies a differential equation due to degeneracy of $V_{(1,2)}$,
$$
\left[\frac{1}{b^2}\frac{\partial^2}{\partial z^2}+\sum_{i=1}^4\left(\frac{h_i}{(z-z_i)^2}+\frac{1}{z-z_i}\frac{\partial}{\partial z_i}\right)\right]\langle V_{(1,2)}(z)V_1(z_1)V_2(z_2)V_3(z_3)V_4(z_4)\rangle=0,
$$
and any conformal block of this correlator satisfies the same equation (because this equation is a property which follows from conformal algebra alone; you can check this explicitly by inserting projection operators in the correlator, and using the fact that they commute with all Virasoro generators).</p>
<p>We then consider a specific conformal block, the one given by the picture where you fuse $V_1$ and $V_2$ to obtain $\mathcal{O}$, then $\mathcal{O}$ fuses with $V_{(1,2)}$ to become some $\mathcal{O}'$ and then $\mathcal{O}'$ becomes $V_3$ and $V_4$. You should think of this as the conformal block for $(1)$ where the intermediate $\mathcal{O}$ interacts with $V_{(1,2)}$.</p>
<p>We consider the limit of large central charge $c$, corresponding to $b\to 0$, and simultaneously we take $h_i$,$h_\nu$ to be large, keeping $\delta_i,\delta_\nu$ fixed. The physical assumption is that since the scaling dimension of $V_{(1,2)}$ remains finite in this limit, the 5-point conformal block is given by the formula
$$
\langle V_{(1,2)}(z)V_1(z_1)V_2(z_2)V_3(z_3)V_4(z_4)\rangle_{CB}=\psi(z|z_i)e^{\frac{1}{b^2}f_{\nu,\delta_i}(z_i)},
$$
where $\psi(z|z_i)$ is the "wavefunction" of the light $V_{(1,2)}$ in the background of the semiclassical 4-point conformal block $e^{\frac{1}{b^2}f_{\nu,\delta_i}(z_i)}$. I am not sure if there is a proof of this statement in the literature. What we do know is that it can be tested and it has been tested. </p>
<p>Plugging this ansatz into the differential equation for the 5-point conformal block, we find a wave equation
$$
\frac{\partial^2\psi}{\partial z^2}+\psi\sum_{i=1}^4\left(\frac{\delta_i}{(z-z_i)^2}+\frac{c_i}{z-z_i}\right)=0,
$$
where
$$
c_i=\frac{\partial f_{\nu,\delta_i}}{\partial z_i}
$$
are the so-called accessory parameters. (There are four of them, but only one is independent. This is because the semiclassical 4-point conformal block is conformally covariant, and this induces relations between $c_i$. E.g. $\sum_{i=1}^4 c_i=0$ because of the translational invariance of $f_{\nu,\delta_i}$).</p>
<p>Now we are in position to understand where does the monodromy problem come from. You can see that we obtained a second-order ODE for $\psi$, which has two linearly independent solutions, whereas $\psi$ should determine, it seems, just one 5-point conformal block. So what is the interpretation of the two solutions? As it turns out, there are actually two conformal blocks we are determining. Indeed, when we were specifying the 5-point conformal block, the operator $\mathcal{O}$ turned into an operator $\mathcal{O}'$ after the interaction with $V_{(1,2)}$. What is this operator? Well, since $V_{(1,2)}$ is a degenerate field, it can only have a non-zero three point function $\langle \mathcal{O}V_{(1,2)}\mathcal{O}'\rangle$ if the scaling dimension of $\mathcal{O}'$ is $h'=h_\nu\pm \frac{\nu}{2}$. </p>
<p>So we are in fact computing two 5-point blocks, and that is why there are two solutions for $\psi$. Now, how can we identify the two specific linear combinations corresponding to the blocks we are looking for? In computing the 5-point block, we replace $V_1(z_1)V_2(z_2)$ with $\mathcal{O}(z_1)$ and its descendants, and then, for $|z-z_1|>|z_1-z_2|$, we replace $\mathcal{O}(z_1)V_{(1,2)}(z)$ by $\mathcal{O}'(z_1)$. Thinking about the scaling dimensions, one can find that the 5-point conformal block must have an expansion for $|z_2-z_1|<|z-z_1|<|z_3-z_1|,|z_4-z_1|$ of the form
$$
\sum_n a_n (z-z_1)^{h_{\mathcal{O}'}-h_{\mathcal{O}}-h_{(1,2)}+n}.
$$
It then follows that the monodromy of the 5-point block when $z$ goes counterclockwise around $z_1$ and $z_2$ (we need to go around $z_2$ since this expansion only works for $|z_2-z_1|<|z-z_1|$) is
$$
\exp\left\{2\pi i\left(h_{\mathcal{O}'}-h_{\mathcal{O}}-h_{(1,2)}\right)\right\}=-\exp(\pm i\pi\nu),
$$
where the $\pm$ corresponds to the choice of conformal block as in $h'=h_\nu\pm \frac{\nu}{2}$. </p>
<p>We thus see that the basis of solutions corresponding to the two possible 5-point conformal blocks diagonalizes the monodromy around the points $z_1$ and $z_2$, and the monodromies have to be very specific. In this basis, the monodromy matrix for the contour $\gamma_{12}$ going around $z_1,z_2$ counterclockwise is then
$$
M(\gamma_{12})=\begin{pmatrix}
-e^{i\pi\nu} & 0\\
0 & -e^{-i\pi\nu}
\end{pmatrix}.
$$
A basis-invariant way to characterize this monodromy is to say
$$
\mathrm{tr} M(\gamma_{12})=-2\cos\pi\nu,
$$
and this uniquely determines the eigenvalues (because the monodromy matrix has to be unimodular in this case, but I can't remember why).</p>
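<p>Near a single singular point the statement about the eigenvalues can be checked directly (a minimal sketch, not from the original answer): for the model equation $\psi'' + \delta/z^2\,\psi = 0$ with $\delta=(1-\nu^2)/4$, the Frobenius exponents are $s_\pm=(1\pm\nu)/2$, so continuing $z \to e^{2\pi i}z$ multiplies the two solutions $z^{s_\pm}$ by $-e^{\pm i\pi\nu}$, with unit determinant and trace $-2\cos\pi\nu$.</p>

```python
import cmath

nu = 0.3                                  # assumed exchanged-operator parameter
delta = (1 - nu**2) / 4

# Frobenius exponents: roots of the indicial equation s*(s-1) + delta = 0
disc = cmath.sqrt(1 - 4 * delta)          # equals nu
s_plus, s_minus = (1 + disc) / 2, (1 - disc) / 2

# Monodromy eigenvalues for z -> exp(2*pi*i)*z acting on the solutions z**s
m_plus = cmath.exp(2j * cmath.pi * s_plus)
m_minus = cmath.exp(2j * cmath.pi * s_minus)

assert abs(m_plus - (-cmath.exp(1j * cmath.pi * nu))) < 1e-12    # -e^{+i*pi*nu}
assert abs(m_minus - (-cmath.exp(-1j * cmath.pi * nu))) < 1e-12  # -e^{-i*pi*nu}
assert abs(m_plus * m_minus - 1) < 1e-12                         # det M = 1
assert abs(m_plus + m_minus - (-2 * cmath.cos(cmath.pi * nu))) < 1e-12  # tr M
```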
<p>Now, it's a good time to step back and recap. Instead of computing the semiclassical 4-point conformal block directly, we considered how it enters the calculation of a 5-point conformal block with a degenerate field. We found that the 5-point block is then determined by a second order ODE. The coefficients in this ODE are determined by the semiclassical 4-point conformal block. Consistency then requires this ODE to have a specific monodromy property; this constrains the coefficients of the ODE and thus the semiclassical 4-point conformal block.</p>
| 1,008
|
differential equations
|
Simulating Interactions in QFT without differential equations
|
https://physics.stackexchange.com/questions/661999/simulating-interactions-in-qft-without-differential-equations
|
<p>As I understand it, in QFT interactions are generally modeled as arising from the exchange of virtual particles. If I were to simulate a classical analog, I would model two spheres, A and B, each of which can only change velocity by emitting or absorbing an exchange sphere C. I would use a random number generator to decide whether or not A might at some point emit C and in what direction C might be emitted, then calculate whether C and B would intercept, and only model C as actually being emitted if it was to intercept B, and then model the velocities of A and B changing as they exchange C. This type of modeling wouldn't require using a differential equation to model the motion of A and B.</p>
<p>I understand that the quantum case of A, B, and C would be different from the classical case as even if A and B don't interact their equations of motion would not be the same as equations of motion for classical bodies moving with constant velocity, and C would be a virtual particle. Still I'm wondering if it would be possible to create a simulation for an interaction in QFT that doesn't use differential equations or at least wouldn't require knowing what differential equations would be involved. If so what would be the methods for modeling an interaction without using differential equations in QFT?</p>
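<p>The classical toy model described above could be sketched like this (a hypothetical 1D illustration; the masses, rates, and emission rule are all invented for the sake of the sketch, not an established method):</p>

```python
import random

random.seed(1)

# 1D toy model: spheres A and B exchange momentum via a carrier sphere C.
m_A = m_B = 1.0
v_A, v_B = 1.0, -1.0           # initial velocities
x_A, x_B = 0.0, 10.0           # initial positions
p_C = 0.1                      # momentum carried by each exchange sphere C
dt, emit_prob = 0.01, 0.05

for _ in range(1000):
    # Randomly decide whether A emits C, and in which direction.
    if random.random() < emit_prob:
        direction = random.choice([-1, 1])
        # Only count the emission if C, travelling that way, would
        # actually intercept B (here: B lies in that direction from A).
        if direction == (1 if x_B > x_A else -1):
            v_A -= direction * p_C / m_A   # recoil of A on emission
            v_B += direction * p_C / m_B   # kick to B on absorption
    x_A += v_A * dt
    x_B += v_B * dt

# Total momentum is conserved by construction of the exchange rule.
print(m_A * v_A + m_B * v_B)   # ~0 (the initial total)
```

<p>No differential equation appears anywhere: the dynamics is just a stochastic update rule applied step by step, which is the spirit of the question.</p>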
|
<p>There's no QFT without Green functions and much more, but something like your question is addressed by Mattuck's "drunken man propagator", followed by "the classical quasi particle propagator", in the introductory chapters of his book "A Guide To Feynman Diagrams In The Many-Body Problem".</p>
| 1,009
|
differential equations
|
Exact differential equations and holonomic constraints
|
https://physics.stackexchange.com/questions/370835/exact-differential-equations-and-holonomic-constraints
|
<p>I understand that if a constraint equation given in differential form is exact, that means it is also holonomic, since I can find a solution. But there are other types of differential equations, like separable and linear ones, for which I can also find an equation in the form of a holonomic constraint. Why is being exact the condition, rather than just having a solution of the type $f(x,y) = 0$?</p>
|
<ol>
<li><p>Here I would like to mention the notion of a <a href="https://www.google.com/search?as_epq=semi+holonomic+constraint" rel="nofollow noreferrer">semi-holonomic constraint</a>
$$ \sum_{j=1}^n a_j(q,t)~ \mathrm{d}q^j + a_t(q,t)~\mathrm{d}t~=~0, \tag{1'}$$
which puts an
<a href="https://en.wikipedia.org/wiki/Inexact_differential" rel="nofollow noreferrer">inexact differential</a> equal to zero.</p></li>
<li><p>It is possible to incorporate semi-holonomic constraints into Lagrange equations, cf. e.g. my Phys.SE answer <a href="https://physics.stackexchange.com/a/283289/2451">here</a>.</p></li>
<li><p>Note that eq. (1') does <em>not</em> mean that we demand that the $n+1$ co-vector components
$$ a_t(q,t)~= a_1(q,t)~= \ldots ~=~a_n(q,t)~=~0 \qquad\qquad (\longleftarrow \text{Wrong!} )
\tag{2'}$$
of a co-vector (1') should be zero. Perhaps this potential misunderstanding (2') is secretly the core of OP's question? </p></li>
<li><p>Rather eq. (1') means that
$$ \sum_{j=1}^n a_j(q,t)~ \dot{q}^j + a_t(q,t)~=~0. \tag{3'}$$</p></li>
<li><p>In particular, a <a href="https://en.wikipedia.org/wiki/Holonomic_constraints" rel="nofollow noreferrer">holonomic constraint</a>
$$f(q,t)~=~0 \tag{0}$$
can be put on the above semi-holonomic form
$$ \mathrm{d}f~=~\sum_{j=1}^n \frac{\partial f}{\partial q^j}~ \mathrm{d}q^j + \frac{\partial f}{\partial t}~\mathrm{d}t~=~0 .\tag{1}$$
Explicitly, eq. (1) means that the total time derivative
$$ \frac{df}{dt}~\equiv~ \sum_{j=1}^n \frac{\partial f}{\partial q^j}~ \dot{q}^j + \frac{\partial f}{\partial t}~=~0\tag{3}$$
is zero.</p></li>
</ol>
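<p>The exactness test behind point 5 can be illustrated concretely (a sketch, not part of the original answer): a one-form $a\,dx + b\,dy$ is exact precisely when $\partial a/\partial y = \partial b/\partial x$, in which case a potential $f$ with $df = a\,dx + b\,dy$ exists and the constraint integrates to the holonomic form $f = \text{const}$.</p>

```python
import sympy as sp

x, y = sp.symbols('x y')

def is_exact(a, b):
    """Check the integrability condition for the one-form a*dx + b*dy."""
    return sp.simplify(sp.diff(a, y) - sp.diff(b, x)) == 0

# y*dx + x*dy is exact: it is d(x*y), so the constraint is holonomic (x*y = const).
print(is_exact(y, x))        # True

# y*dx - x*dy is not exact as written...
print(is_exact(y, -x))       # False

# ...but dividing by y**2 (an integrating factor) turns it into d(x/y):
# a constraint admitting an integrating factor is secretly holonomic.
print(is_exact(1/y, -x/y**2))   # True
```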
<p>References:</p>
<ol>
<li>H. Goldstein, <em>Classical Mechanics,</em> Section 2.4.</li>
</ol>
| 1,010
|
differential equations
|
Using RC circuits to solve differential equations
|
https://physics.stackexchange.com/questions/319285/using-rc-circuits-to-solve-differential-equations
|
<p>As I was thinking about RC circuits it dawned upon me that under the correct configurations one could very efficiently solve differential equations by programming them into an RC circuit (the applications of this would be something like a very very fast hardware implementation of machine learning). </p>
<p>Suppose you have a set of linear first-order coupled differential equations $\dot{x} = f(x_1, x_2, x_3,\ldots,x_n)$, and you set $n$ output capacitors such that after a time $t$, $q_i(t) \propto g(x_i(t))$ for some function $g$.</p>
<p>Under this paradigm, if you want to make a statement like $\dot{x}_i + \dot{x}_j$ somewhere in the circuit then you'd essentially add the currents $i_i + i_j$.</p>
<p>Is there a trivial way of doing this? I wouldn't know where to start trying to implement something like this.</p>
<p>I would be happy if someone could simply implement schematically using nothing but resistors, capacitors, diodes, and batteries the differential equation $$\dot{x} = -x$$ in a circuit, which would lead to harmonic output. </p>
|
<p>Consider this circuit:</p>
<p><a href="https://i.sstatic.net/RgbOF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RgbOF.png" alt="enter image description here"></a></p>
<p>If the capacitor is initially charged, the system is governed by these equations:</p>
<p>$$\frac{{\rm d}v}{{\rm d}t} = \frac{-i(t)}{C}$$
$$ i(t) = \frac{v(t)}{R}$$</p>
<p>where $v(t)$ is the voltage difference from the upper node to the lower node.</p>
<p>Thus,</p>
<p>$$\frac{{\rm d}v}{{\rm d}t} = \frac{-v(t)}{RC}.$$</p>
<p>But this will not lead to oscillation. For oscillation you actually want </p>
<p>$$\ddot{x} = -x$$</p>
<p>To get this behavior in an electrical circuit, you'll need to add inductors or some kind of active device (like a transistor or amplifier) to your bag of parts.</p>
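<p>A quick numerical check of the RC behavior (a sketch with assumed component values, not part of the original answer): forward-Euler integration of $\frac{{\rm d}v}{{\rm d}t} = -v/RC$ reproduces a pure exponential decay that never changes sign, confirming that R and C alone cannot oscillate.</p>

```python
import math

R, C = 1_000.0, 1e-6        # assumed values: 1 kilo-ohm, 1 microfarad -> tau = 1 ms
tau = R * C
dt, v = tau / 1000, 5.0     # assumed initial capacitor voltage: 5 V

vs = [v]
for _ in range(3000):       # simulate three time constants
    v += dt * (-v / (R * C))
    vs.append(v)

# The decay tracks v(0)*exp(-t/tau) and the voltage never changes sign.
expected = 5.0 * math.exp(-3.0)
print(abs(vs[-1] - expected), all(val > 0 for val in vs))
```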
| 1,011
|
differential equations
|
Solution to differential equation
|
https://physics.stackexchange.com/questions/643519/solution-to-differential-equation
|
<p>If I have a differential equation of the form <span class="math-container">$$\frac {d^2y}{dt^2}=-\alpha^2y$$</span></p>
<p>Assuming the roots of the characteristic equation are complex, the solution to the differential equation is: <span class="math-container">$$y=C_1e^{j\alpha t}+ C_2e^{-j\alpha t}$$</span> and after that we take only the real part of the solution.
Why do we take only the real part of the solution?</p>
<p>I'm solving the wave equation and this confusion stems from the solution of the Helmholtz wave equation.</p>
|
<p>There's two ways to look at it. The first one is as you said: we take the real part of the general solution. We can verify that if <span class="math-container">$\alpha^2$</span> is real, the real and imaginary parts of the general solution also happen to solve the differential equation on their own. So the real part is a solution to the differential equation.</p>
<p>Personally, I dislike this way of looking at it. It feels unmotivated, and it leaves open the question of whether we actually found the most general real solution. That's why I like to look at it a different way: instead of taking the real part of the general complex solution, we take all the real solutions among the general complex solutions. Since every real solution is also a complex solution, this guarantees that we actually get the general real solution. This leads us to the question:</p>
<p>For which <span class="math-container">$C_1, C_2$</span> is <span class="math-container">$y(t)=C_1\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t}$</span> real?</p>
<p>To answer the question, we first write our coefficients in polar coordinates to get <span class="math-container">$C_1=R_1\mathrm e^{\mathrm i\varphi_1}$</span> and <span class="math-container">$C_2=R_2\mathrm e^{\mathrm i\varphi_2}$</span>. This gives us</p>
<p><span class="math-container">$$\begin{align}y(t)&=R_1\mathrm e^{\mathrm i(\alpha t+\varphi_1)}+R_2\mathrm e^{\mathrm i(\alpha t+\varphi_2)}\\
&=R_1[\cos(\alpha t+\varphi_1)+\mathrm i\sin(\alpha t+\varphi_1)]+R_2[\cos(\alpha t+\varphi_2)+\mathrm i\sin(\alpha t+\varphi_2)]\\
&=R_1\cos(\alpha t+\varphi_1)+R_2\cos(\alpha t+\varphi_2)~ +~ \mathrm i[R_1\sin(\alpha t+\varphi_1)+R_2\sin(\alpha t+\varphi_2)].
\end{align}$$</span></p>
<p>This is real if and only if</p>
<p><span class="math-container">$$R_1\sin(\alpha t+\varphi_1)+R_2\sin(\alpha t+\varphi_2)=0,$$</span></p>
<p>which is the case if and only if <span class="math-container">$\varphi_2=-\varphi_1$</span> and <span class="math-container">$R_1=R_2$</span>. In other words, <span class="math-container">$C_2=\overline C_1$</span>. But then we have</p>
<p><span class="math-container">$$\begin{align}y(t)&=C_1\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t}\\
&=\frac{1}{2}\left(C_1\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t}\right)~+~\frac{1}{2}\left(C_1\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t}\right)\\
&=\frac{1}{2}\left(C_1\mathrm e^{\mathrm i\alpha t}+\overline C_1\mathrm e^{-\mathrm i\alpha t}\right)~+~\frac{1}{2}\left(\overline C_2\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t}\right)\\
&=\frac{1}{2}\left(C_1\mathrm e^{\mathrm i\alpha t}+\overline{C_1\mathrm e^{\mathrm i\alpha t}}\right)~+~\frac{1}{2}\left(\overline{C_2\mathrm e^{-\mathrm i\alpha t}}+C_2\mathrm e^{-\mathrm i\alpha t}\right)\\
&=\frac{1}{2}\cdot 2\operatorname{Re}(C_1\mathrm e^{\mathrm i\alpha t})~+~\frac{1}{2}\cdot 2\operatorname{Re}(C_2\mathrm e^{-\mathrm i\alpha t})\\
&=\operatorname{Re}(C_1\mathrm e^{\mathrm i\alpha t})+\operatorname{Re}(C_2\mathrm e^{-\mathrm i\alpha t})\\
&=\operatorname{Re}(C_1\mathrm e^{\mathrm i\alpha t}+C_2\mathrm e^{-\mathrm i\alpha t})
\end{align}$$</span></p>
<p>So we can find the general real solution by taking the real part (or twice the real part, but the factor of 2 doesn't matter) of the complex solution.</p>
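As a quick numerical sanity check of this conclusion (my own sketch; the values of <span class="math-container">$\alpha$</span> and <span class="math-container">$C_1$</span> are chosen arbitrarily and are not from the post), setting <span class="math-container">$C_2=\overline C_1$</span> does give a purely real <span class="math-container">$y(t)$</span> equal to <span class="math-container">$2\operatorname{Re}(C_1\mathrm e^{\mathrm i\alpha t})$</span>:

```python
import cmath

# Illustrative values only (not from the original post).
alpha = 2.0
C1 = 1.5 * cmath.exp(1j * 0.7)   # R1 = 1.5, phi1 = 0.7
C2 = C1.conjugate()              # the condition derived above: C2 = conj(C1)

for t in [0.0, 0.3, 1.1, 2.5]:
    y = C1 * cmath.exp(1j * alpha * t) + C2 * cmath.exp(-1j * alpha * t)
    # y should be purely real, and equal twice the real part of C1*e^{i*alpha*t}
    assert abs(y.imag) < 1e-12
    assert abs(y.real - 2 * (C1 * cmath.exp(1j * alpha * t)).real) < 1e-12
```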
| 1,012
|
differential equations
|
Radioactive decay differential equations
|
https://physics.stackexchange.com/questions/550021/radioactive-decay-differential-equations
|
<p>I am trying to form a differential equation between two different isotopes, Uranium-238 and Thorium-234.
The rate of decay of an isotope is proportional to the amount present. So that:
<span class="math-container">$$
\frac{dx}{dt} = -kx
$$</span>
Where x is the amount of Uranium-238 and k is the constant of proportionality. Also,
<span class="math-container">$$
\frac{dy}{dt} = -cx
$$</span>
Where y is the amount of Thorium-234 and c is the constant of proportionality.</p>
<p>I have also been told that:
<span class="math-container">$$
\frac{dy}{dt} + cx = kxe^{-kt}
$$</span></p>
<p>I am trying to find the general solution to the differential equation above. I have found the integrating factor which is: </p>
<p><span class="math-container">$$
Ae^{ct}
$$</span>
And I have multiplied all the terms by the integrating factor to get:</p>
<p><span class="math-container">$$
Ae^{ct}\frac{dy}{dt} + Ae^{ct}cx = kxe^{-kt}Ae^{ct}
$$</span></p>
<p>Now using the product rule backwards we get:
<span class="math-container">$$
\frac{dAe^{ct}y}{dt} = kxe^{-kt}Ae^{ct}
$$</span></p>
<p>Then I integrated to get:
<span class="math-container">$$
Ae^{ct}y = \int({kxe^{-kt}Ae^{ct}})dt
$$</span>
<span class="math-container">$$
Ae^{ct}y = \frac{kxe^{-kt}Ae^{ct}}{c-k}
$$</span></p>
<p>Thus,
<span class="math-container">$$
y=\frac{kxe^{-kt}Ae^{ct}}{(c-k)(Ae^{ct})}
$$</span></p>
<p>however, it seems that I have done something wrong since the answer is:</p>
<p><span class="math-container">$$
y = Ae^{ct}+\frac{kxe^{-kt}}{k-c}
$$</span></p>
<p>Could you tell me where I may have gone wrong?</p>
|
<p>The decay rate of the mother isotope <span class="math-container">$x$</span> depends only on the amount of <span class="math-container">$x$</span>, so that:</p>
<p><span class="math-container">$$\frac{\text{d}x(t)}{\text{d}t}=-kx(t)$$</span>
This solves easily:
<span class="math-container">$$\ln x=-kt +C$$</span>
Initial condition: <span class="math-container">$t=0, x=x_0$</span>
<span class="math-container">$$\ln x_0=C$$</span>
<span class="math-container">$$\ln\Big(\frac{x}{x_0}\Big)=-kt$$</span>
<span class="math-container">$$\boxed{x(t)=x_0e^{-kt}}$$</span>
The rate of formation of the daughter isotope <span class="math-container">$y$</span> equals the decay rate of <span class="math-container">$x$</span> to <span class="math-container">$y$</span>, minus the decay of <span class="math-container">$y$</span>:
<span class="math-container">$$\Big(\frac{\text{d}y}{\text{d}t}\Big)_{\text{Total}} = \Big(\frac{\text{d}y}{\text{d}t}\Big)_{\text{x decay}} - \Big(\frac{\text{d}y}{\text{d}t}\Big)_{\text{y decay}}$$</span>
Or:
<span class="math-container">$$\frac{\text{d}y}{\text{d}t}=-\frac{\text{d}x}{\text{d}t}-cy$$</span>
<span class="math-container">$$\frac{\text{d}y}{\text{d}t}=kx_0e^{-kt}-cy$$</span>
<span class="math-container">$$y'+cy=kx_0e^{-kt}$$</span>
This is a linear, 1st order, inhomogeneous DE, which can be integrated with an integrating factor:</p>
<p><span class="math-container">$$I=e^{\int c\,\text{d}t}=e^{ct}$$</span></p>
<p>I'll leave that to you.</p>
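Carrying the integrating-factor step through (this completion is my own, not part of the original answer): with the initial condition <span class="math-container">$y(0)=0$</span> one gets <span class="math-container">$y(t)=\frac{kx_0}{c-k}\left(e^{-kt}-e^{-ct}\right)$</span>. A pure-Python 4th-order Runge-Kutta integration (constants chosen freely for illustration) cross-checks this closed form:

```python
# Sketch: verify y(t) = k*x0/(c-k) * (exp(-k*t) - exp(-c*t)), y(0) = 0,
# against an RK4 integration of  y' = k*x0*exp(-k*t) - c*y.
import math

k, c, x0 = 0.5, 1.3, 100.0   # illustrative constants, chosen freely

def f(t, y):
    return k * x0 * math.exp(-k * t) - c * y

def y_exact(t):
    return k * x0 / (c - k) * (math.exp(-k * t) - math.exp(-c * t))

t, y, h = 0.0, 0.0, 1e-3
for _ in range(5000):                     # integrate to t = 5
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

assert abs(y - y_exact(t)) < 1e-8
```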
<hr>
<p>Your third equation:
<span class="math-container">$$\frac{dy}{dt} + cx = kxe^{-kt}$$</span>
makes little sense. It's a differential equation in <strong>three variables</strong>: <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$t$</span>.</p>
| 1,013
|
differential equations
|
From differentials to differential equations
|
https://physics.stackexchange.com/questions/52547/from-differentials-to-differential-equations
|
<p>Suppose I have a function of time $t$ and position $(x,y)$ such that
\begin{equation} p_t \,dt = p \,dy - p_x (1-x) \,dx + p_y \,dy\end{equation}
where the subscript denotes a differentiation. In this case, I am able to derive a (partial) differential equation from this form.</p>
<p>I'd love to have your help to address the case in which, for example, $dy$ appears also with higher orders. Something like:
\begin{equation} p_t \,dt = p \,dy - p_x (1-x) \,dx + (dy)^2 (p-(1-y)p_y). \end{equation}
or simpler (the key point is the presence of $(dy)^2$). I expect that in this case the pde will be second order...</p>
<p>Any idea?</p>
<p>P.S. I posted yesterday a similar question on the math.stackexchange but maybe it is more a physics-like question :)</p>
|
<p>The $(dy)^2$ term is totally negligible; it's as if it were not there. If you had second differentials everywhere, then yes, it would lead to a 2nd-order differential equation.</p>
| 1,014
|
differential equations
|
Shadow method of solving differential equations
|
https://physics.stackexchange.com/questions/595945/shadow-method-of-solving-differential-equations
|
<p>While reading this answer by Rishab Navneet <a href="https://physics.stackexchange.com/a/595835/236734">here</a>, it is shown how we can visualize the harmonic oscillator as the shadow of a body moving in a circle onto a line. How was it found that the plane curve is a circle? More generally, is there a way to go from differential equations to see the plane curve whose projection onto an axis is associated with the solution of it?</p>
<p>For example, consider the simple pendulum, the differential equation modeling is given as:</p>
<p><span class="math-container">$$ \frac{d^2 \theta}{dt^2} = \frac{-g}{L} \sin \theta$$</span></p>
<p>I want to ask if there is any general way to find a plane curve such that when we project the point on the curve onto the <span class="math-container">$\theta$</span> axis, we see how the angle of pendulum <span class="math-container">$\theta$</span> is evolving with time (like a dot moving on the axis).</p>
<p>It's easy to find that for the first-order Taylor approximation it is a circle, but what about higher-order Taylor approximations of the motion; how do we find the curve for those? That is, consider the Taylor expansion of the sine series truncated at the nth term:</p>
<p><span class="math-container">$$P_n (\theta)= \sum_{k=0}^n \frac{ \theta^{2k+1}}{(2k+1)!} (-1)^k$$</span></p>
<p>Now, how do I find a plane curve whose projection onto the <span class="math-container">$\theta$</span> axis shows <span class="math-container">$\theta$</span> evolves with time for the below differential equation given some 'n'?</p>
<p><span class="math-container">$$ \frac{d^2 \theta}{dt^2} = - \frac{g}{L} P_n $$</span></p>
|
<p>We have backtraced the curve whose projection represents SHM. If you solve the given differential equation
<span class="math-container">$$ \frac{d^2 \theta}{dt^2} = \frac{-g}{L} \sin \theta$$</span>
you will get a function, and you can try to backtrace a curve from it in the same way, if possible.
For SHM we first found the function and then related it to motion on a circle.
You can do the same here, but it will become very complex.</p>
| 1,015
|
differential equations
|
Solving the differential equations for self-induction
|
https://physics.stackexchange.com/questions/356130/solving-the-differential-equations-for-self-induction
|
<p>In my physics class we learned the equations for self-induction. Our teacher also showed us the differential equations and gave us the solutions. Because we didn't have differential equations in our Maths class, he only told us that, if we were interested in how to solve these equations, we should look up separation of variables.</p>
<p>Because I am interested I looked it up and I am now able to solve the equation for switching the circuit on. But the equation for switching off is inhomogeneous and I can't seem to find the solution:</p>
<p>$$\frac{dI}{dt}+\frac RL * I = \frac{U_0}{L}, I(0s)=0A$$
He told us that the solution is:
$$I(t)=\frac {U_0}R *(1-e^{-\frac RL*t})$$
But I don't know how this works. I would really appreciate help showing the complete way to solve this.</p>
<p>EDIT: $\frac{U_0}R$ instead of $\frac{U_0}L$</p>
|
<p>The first step is $\frac{dI}{dt}=\frac{U_0}{L}-\frac{R}{L}I$, so we have $\frac{dI}{1-\frac{R}{U_0}I}=\frac{U_0}{L}dt$.</p>
<p>From here it is just integration; I hope you can continue now.</p>
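As a quick numerical check (my own sketch; the component values are arbitrary), the stated solution <span class="math-container">$I(t)=\frac{U_0}{R}(1-e^{-Rt/L})$</span> indeed satisfies both the ODE and the initial condition:

```python
# Verify that I(t) = U0/R * (1 - exp(-R*t/L)) satisfies
#   dI/dt + (R/L)*I = U0/L   and   I(0) = 0,
# using a central finite difference for the derivative.
import math

U0, R, L = 12.0, 4.0, 0.5    # illustrative circuit values

def I(t):
    return U0 / R * (1.0 - math.exp(-R * t / L))

assert I(0.0) == 0.0
for t in [0.01, 0.1, 1.0]:
    h = 1e-6
    dIdt = (I(t + h) - I(t - h)) / (2 * h)        # central difference
    assert abs(dIdt + (R / L) * I(t) - U0 / L) < 1e-4
```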
| 1,016
|
differential equations
|
Significance of exact solutions to differential equations
|
https://physics.stackexchange.com/questions/513007/significance-of-exact-solutions-to-differential-equations
|
<p>What is the importance of finding new exact solutions to partial differential equations? I kindly need someone to convince me, since my PhD will be on that. </p>
|
<p>If you have an analytical/exact (in contrast to some say "discretized" or similar numerical) solution, interpretation of the coefficients in this solution is much more straightforward than tackling a generic numerical solution. These coefficients of the exact solution can also be fitted to experiments to gain an understanding in that context.</p>
<p>An analytical solution is typically also more tractable than a more generic numerical one (fewer parameters), and this allows for building more complex models on top of it (which quickly becomes unfeasible if one has to build a model out of many already complicated numerical solutions). Example: the exact quantum mechanical solution of the harmonic oscillator, where the exact eigenvalues (with mathematically simple form) of the modes can be employed to model what happens if you have many modes to occupy (e.g. phonons in a solid). This would be much more complicated if one did not have the understanding of the simple analytical structure of the spectrum.</p>
<p>Exactly solvable models had and have importance in gaining deeper understanding in a field at a very general (i.e. deep) level.</p>
| 1,017
|
differential equations
|
Differential equations of a fluid-mechanical system
|
https://physics.stackexchange.com/questions/294572/differential-equations-of-a-fluid-mechanical-system
|
<p>I'm trying to simulate a real system, in order to do so I have modelled a physical system (fluid-mechanical) that behaves similarly with some simplifications. The physical model in question is as follows:</p>
<p><a href="https://i.sstatic.net/CP1Sf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CP1Sf.png" alt="Physical system"></a></p>
<p>The system consists of a piston with an orifice, enclosed inside a chamber which the piston divides into two chambers. Also, there are two pressure relief valves that regulate the force (pressure) exerted in each chamber; these valves are modelled with a spring and a damper, as the picture shows.</p>
<p>I want to obtain the differential equation of the system that allows me to know the position of the piston as function of time.</p>
<p><strong>Question:</strong> how many differential equations will I need to simulate the model? I am not sure if I have to write one equation that includes the effects of the orifice, pressure relief valves, etc., or an X number of equations separately. In the latter case, how many (X) equations do I need?</p>
| 1,018
|
|
differential equations
|
Differential equations ball with air resistance
|
https://physics.stackexchange.com/questions/317293/differential-equations-ball-with-air-resistance
|
<p>I'm trying to find the equation for a ball thrown from the ground with an initial velocity. Are these differential equations correct? I solved these and set the integrating constant to $v_0\cos(\theta)$ for $v_x$ and $v_0\sin(\theta)$ for $v_y$ and integrated again to get the function of position. Is that the correct approach?</p>
<p>$\dot{v_x}=\frac{k}{2m}v_x^2$ and $\dot{v_y}=\frac{k}{2m}v_y^2-g$ where $k:=
C_d\rho A$</p>
|
<p>I assume that you are taking the drag force as having magnitude $k|{\bf v}|^2$? In that case you need to resolve the components of the drag vector, which points backwards along ${\bf v}$, and get
$$
m\dot v_x= -kv_x\sqrt{v_x^2+v_y^2},
$$
$$
m\dot v_y= -kv_y\sqrt{v_x^2+v_y^2}-mg.
$$
These are essentially impossible to solve analytically in general, but for a near horizontal trajectory (a rifle bullet say) we can use the Siacci approximation, in which we ignore the effect of $v_y$ under the square root.</p>
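Since the general case has no closed form, a numerical sketch is the practical route. Below is a minimal forward-Euler integration of the equations after dividing by <span class="math-container">$m$</span> (i.e. <span class="math-container">$\dot v_x=-(k/m)v_x s$</span>, <span class="math-container">$\dot v_y=-(k/m)v_y s-g$</span> with <span class="math-container">$s=\sqrt{v_x^2+v_y^2}$</span>); all constants are illustrative choices of mine, not from the post:

```python
# Integrate a projectile with quadratic drag until it returns to the ground.
import math

m, k, g = 0.145, 1e-3, 9.81          # baseball-like mass, small drag constant
vx, vy = 30.0 * math.cos(0.7), 30.0 * math.sin(0.7)   # launch at 30 m/s, 0.7 rad
x = y = t = 0.0
dt = 1e-4
while y >= 0.0:                      # stop once the ball lands
    s = math.hypot(vx, vy)
    ax = -(k / m) * vx * s
    ay = -(k / m) * vy * s - g
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx + ax * dt, vy + ay * dt
    t += dt

print(f"range ~ {x:.1f} m, flight time ~ {t:.2f} s")
```

With drag present, both the range and flight time come out below their vacuum values (<span class="math-container">$v_0^2\sin 2\theta/g$</span> and <span class="math-container">$2v_0\sin\theta/g$</span>), as expected.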
| 1,019
|
differential equations
|
Proving Kepler's 1st Law without differential equations
|
https://physics.stackexchange.com/questions/86435/proving-keplers-1st-law-without-differential-equations
|
<p>Is there a way to show that the motion of Earth around the Sun is elliptical (<a href="https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion" rel="nofollow noreferrer">Kepler's 1st law</a>) from Newton's laws without resorting to the use of differential equations of motion?</p>
|
<p>Newton's original proof was in fact based on geometry (he hadn't invented calculus yet). Richard Feynman devised his own, simpler geometric proof for one of his famous lectures. You can find it in <em>Feynman's Lost Lecture</em>, by Goodstein & Goodstein, and in this article: <a href="https://tlakoba.w3.uvm.edu/AppliedUGMath/auxpaper_planets_HallHigson.pdf" rel="nofollow noreferrer">Paths of the Planets</a> from Hall & Higson. But since it's so much fun, I'll describe it here as well.</p>
<p>Let's start with a lesser-known way to construct an ellipse, the so-called <em>circle construction</em>. Draw a circle with centre <span class="math-container">$O$</span>, and fix a point <span class="math-container">$A$</span> inside the circle. Pick a point <span class="math-container">$B$</span> on the circle, and draw the perpendicular bisector of <span class="math-container">$\overline{AB}$</span> (blue line). It intersects <span class="math-container">$\overline{OB}$</span> in a point <span class="math-container">$P$</span>, and as <span class="math-container">$B$</span> moves around the circle, these intersection points form an ellipse. Also, the blue biscector lines are tangent lines to the ellipse, and <span class="math-container">$O$</span> and <span class="math-container">$A$</span> are the foci.</p>
<p><img src="https://i.sstatic.net/seuJC.jpg" alt="enter image description here" /></p>
<p>Why is it an ellipse? Because <span class="math-container">$\overline{AP}$</span> has the same length as <span class="math-container">$\overline{BP}$</span>, so that the sum of the lengths of <span class="math-container">$\overline{AP}$</span> and <span class="math-container">$\overline{OP}$</span> is constant, i.e. the radius of the circle. In other words, we get the classic tack-and-string definition of an ellipse. It is also straightforward to see that the angles <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are equal. Since <span class="math-container">$a$</span> and <span class="math-container">$c$</span> are also equal, this means that <span class="math-container">$b$</span> and <span class="math-container">$c$</span> are equal, so that the blue line is indeed a tangent line.</p>
<p><img src="https://i.sstatic.net/7a5Te.jpg" alt="enter image description here" /></p>
<p>The geometric proof of Kepler's Second Law (planets sweep out equal areas in equal times) from Newton's first two laws is straightforward and can be found in the Hall & Higson article. Now, if a planet traverses an angle <span class="math-container">$\Delta\theta$</span> in a small time interval <span class="math-container">$\Delta t$</span>, it sweeps out an area
<span class="math-container">$$
\text{area}\approx \frac{1}{2}\Delta\theta\, r^2.
$$</span>
<img src="https://i.sstatic.net/SzhqA.jpg" alt="enter image description here" /></p>
<p>At this point, Feynman's argument deviates from Newton's: while Newton breaks up the orbit into equal-time pieces, Feynman considers equal-<em>angle</em> pieces. In other words, Feynman breaks up the orbit in subsequent pieces with areas
<span class="math-container">$$
\text{area}\approx \text{constant}\cdot r^2.
$$</span>
Newton's inverse-square law (which can be derived from Kepler's Third Law) states that the acceleration of a planet is proportional to the inverse square of its distance <span class="math-container">$r$</span>:
<span class="math-container">$$
\left\|\frac{\Delta\boldsymbol{v}}{\Delta t}\right\| = \frac{\text{constant}}{r^2}.
$$</span>
Eliminating <span class="math-container">$r^2$</span>, we get
<span class="math-container">$$
\left\|\Delta\boldsymbol{v}\right\| \approx \text{constant}\cdot\frac{\Delta t}{\text{area swept out in $\Delta t$}}.
$$</span>
But Kepler's Second Law states that the area swept out in <span class="math-container">$\Delta t$</span> is a constant multiple of <span class="math-container">$\Delta t$</span>. Therefore,
<span class="math-container">$$
\left\|\Delta\boldsymbol{v}\right\| \approx \text{constant},
$$</span>
that is, intervals of constant <span class="math-container">$\Delta\theta$</span> also have a constant change in velocity. We can use this fact to construct a so-called <em>velocity diagram</em>. Break up the orbit into equal-angle pieces, draw the velocity vectors, and translate these vectors to the same point.</p>
<p><img src="https://i.sstatic.net/k6gaD.jpg" alt="enter image description here" /></p>
<p>Since <span class="math-container">$\left\|\Delta\boldsymbol{v}\right\|$</span> is constant, the resulting figure is a polygon with <span class="math-container">$\dfrac{360^\circ}{\Delta\theta\,}$</span> sides. The smaller the angles, the more it approaches a circle.</p>
<p><img src="https://i.sstatic.net/j6Y0c.jpg" alt="enter image description here" /></p>
<p>Now, let's draw the velocity diagram of an orbiting planet. If <span class="math-container">$l$</span> is the tangent line to the orbit at point <span class="math-container">$P$</span> (parallel to the velocity vector in <span class="math-container">$P$</span>), then <span class="math-container">$l'$</span> in the corresponding velocity diagram is also parallel to <span class="math-container">$l$</span>. Also note that <span class="math-container">$\theta$</span> in both diagrams is the same.</p>
<p><img src="https://i.sstatic.net/48Ts7.jpg" alt="enter image description here" />
<img src="https://i.sstatic.net/So3PR.jpg" alt="enter image description here" /></p>
<p>Rotate the velocity diagram clockwise by <span class="math-container">$90^\circ$</span>, so that <span class="math-container">$l'$</span> becomes perpendicular to <span class="math-container">$l$</span>. Construct the perpendicular bisector <span class="math-container">$p$</span> to the line <span class="math-container">$\overline{AB}$</span>, and the intersection <span class="math-container">$P'$</span> with <span class="math-container">$\overline{OB}$</span>. It turns out that we are in the exact same situation as the circle construction for the ellipse: as <span class="math-container">$B$</span> moves on the velocity diagram, the points <span class="math-container">$P'$</span> form an ellipse.</p>
<p><img src="https://i.sstatic.net/uNhN2.jpg" alt="enter image description here" /></p>
<p>The lines <span class="math-container">$p$</span> are the tangent lines to the ellipse. However, these lines are also parallel to the lines <span class="math-container">$l$</span>, which are the tangent lines to the planet's orbit. Because of the <em>tangent principle</em>, if two curves have the same tangent lines at every point, then those curves are the same. In other words, the lines <span class="math-container">$l$</span> are also the tangent lines of an ellipse. This proves that the orbit of a planet is indeed an ellipse.</p>
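The circle construction at the heart of this proof is easy to verify numerically. This is my own sketch (the circle radius and focus location are arbitrary): with <span class="math-container">$O$</span> at the origin, the intersection <span class="math-container">$P$</span> of the perpendicular bisector of <span class="math-container">$\overline{AB}$</span> with <span class="math-container">$\overline{OB}$</span> always satisfies <span class="math-container">$|OP|+|AP|=R$</span>, the tack-and-string property of an ellipse:

```python
import math

R = 2.0
A = (0.8, 0.3)                      # a fixed point inside the circle (a focus)

for i in range(12):
    th = 2 * math.pi * i / 12
    B = (R * math.cos(th), R * math.sin(th))
    # P = s*B lies on OB and solves |P - A| = |P - B| (perpendicular bisector)
    s = (R * R - (A[0] ** 2 + A[1] ** 2)) / (2 * (R * R - (B[0] * A[0] + B[1] * A[1])))
    P = (s * B[0], s * B[1])
    OP = math.hypot(*P)
    AP = math.hypot(P[0] - A[0], P[1] - A[1])
    assert abs(OP + AP - R) < 1e-9   # tack-and-string: constant sum of distances
```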
| 1,020
|
differential equations
|
Does the SUVAT equations of motion (Kinematics) come from some differential equation?
|
https://physics.stackexchange.com/questions/606669/does-the-suvat-equations-of-motion-kinematics-come-from-some-differential-equa
|
<p>Wikipedia says about the equations of motion that;</p>
<blockquote>
<p>"If the dynamics of a system is known, the equations are the solutions for the differential equations describing the motion of the dynamics."</p>
</blockquote>
<p>And</p>
<blockquote>
<p>"A differential equation of motion, usually identified as some physical law and applying definitions of physical quantities, is used to set up an equation for the problem. Solving the differential equation will lead to a general solution with arbitrary constants, the arbitrariness corresponding to a family of solutions. A particular solution can be obtained by setting the initial values, which fixes the values of the constants."</p>
</blockquote>
<p>So, does the SUVAT equations of motion come from some differential equation?
Can someone please show me the derivation of them from some differential equation?
The derivation will really help.
I don't know much about differential equations though.
Just to get some intuition.</p>
<p><a href="https://i.sstatic.net/StzbU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/StzbU.png" alt="enter image description here" /></a></p>
|
<p>Yes they do; the SUVAT equations are directly derived from the simple relations between acceleration, velocity and position:
<span class="math-container">\begin{equation}
a=\frac{dv}{dt}\\
v=\frac{ds}{dt}.
\end{equation}</span><br />
Starting from the first equation, we find
<span class="math-container">\begin{equation}
\int^v_udv=\int^t_0adt,
\end{equation}</span>
where we assume the acceleration to be constant over time. This yields the result:
<span class="math-container">\begin{equation}
v-u=at \implies v=u+at,
\end{equation}</span>
which is the first equation from the list. Integrating it again with respect to time:
<span class="math-container">\begin{equation}
\int^s_0ds=\int^t_0(u+at)\ dt\implies s=ut+\frac{1}{2}at^2,
\end{equation}</span>
giving the third equation from the list. Subbing <span class="math-container">$u=v-at$</span> from the first equation into this last one, we can get the fourth one:
<span class="math-container">\begin{equation}
s=(v-at)t+\frac{1}{2}at^2 \implies s=vt-\frac{1}{2}at^2.
\end{equation}</span>
Now we can calculate the second one from the list by squaring the first one:
<span class="math-container">\begin{equation}
v^2=(u+at)^2=u^2+2uat+(at)^2=u^2+2a(ut+\frac{1}{2}at^2),
\end{equation}</span>
where the bit inside the brackets is nothing but the equation we just derived (third from the SUVAT list):
<span class="math-container">\begin{equation}
v^2=u^2+2as.
\end{equation}</span>
Finally, to get the last one, we just need to add up the third and the fourth equations together, giving:
<span class="math-container">\begin{equation}
s+s=ut+vt+\frac{1}{2}at^2-\frac{1}{2}at^2\\
2s=(u+v)t\\
s=\frac{1}{2}(u+v)t.
\end{equation}</span>
And that is how you get all the SUVAT equations from differential equations. At the end of the day, you are just expanding the standard definitions of acceleration and velocity, making them easier to use once you know the initial conditions of your system. I hope you find this helpful :)</p>
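A small consistency check of the five relations (my own sketch; the initial speed, acceleration, and time are arbitrary values, and I use the standard constant-acceleration conventions <span class="math-container">$v=u+at$</span>, <span class="math-container">$s=ut+\frac{1}{2}at^2$</span>):

```python
# Cross-check that the SUVAT relations agree with each other
# for one set of illustrative values.
u, a, t = 3.0, 1.8, 2.5           # initial speed, acceleration, time

v = u + a * t                     # 1st equation
s = u * t + 0.5 * a * t**2        # 3rd equation

assert abs(v**2 - (u**2 + 2 * a * s)) < 1e-12          # 2nd equation
assert abs(s - (v * t - 0.5 * a * t**2)) < 1e-12       # 4th equation
assert abs(s - 0.5 * (u + v) * t) < 1e-12              # 5th equation
```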
| 1,021
|
differential equations
|
Gravity Differential Equations
|
https://physics.stackexchange.com/questions/710179/gravity-differential-equations
|
<p>I was just messing around with Newton's Law of gravitation, when I had the idea of converting Newton's Law into differential form (more or less like Maxwell's equations).</p>
<p>I did the following:</p>
<h5>#1 Divergence of the field:</h5>
<p><span class="math-container">$$
\iint_C {\mathbf g \cdot d\mathbf S} = -4\pi G M \rightarrow \
\iint_C {\mathbf g \cdot d\mathbf S} = \iiint_C {-4\pi G \rho \; dV} \\
\iiint_C {\nabla \cdot \mathbf g \; dV} = \iiint_C {-4\pi G \rho \; dV} \\
$$</span>
<span class="math-container">$$
\boxed{
\begin{array}{rcl}
\nabla \cdot \mathbf g = -4\pi G \rho
\end{array}
}
$$</span></p>
<h5>#2 Curl of the field:</h5>
<p><span class="math-container">$$
\mathbf g = -\nabla \phi \\
\nabla \times \mathbf g = \nabla \times (-\nabla \phi) = \mathbf 0
$$</span>
<span class="math-container">$$
\boxed{
\begin{array}{rcl}
\nabla \times \mathbf g = \mathbf 0
\end{array}
}
$$</span></p>
<p>Until now everything is fine. Then I wondered whether it is possible to write the equations in terms of another field, like the velocity field <span class="math-container">$\mathbf v$</span>, but I'm stuck.</p>
<p>It is well known that the velocity on an orbit obeys the following:
<span class="math-container">$$
v = \sqrt{\frac{GM}{r}} \\
v \propto \frac{1}{\sqrt{r}}
$$</span></p>
<p>Is it possible to express how this velocity field must behave so that it incorporates everything we know about Newtonian gravity, using only vectors and vector calculus? In other words, is it possible to formulate gravity from its velocity field, using vectors? For example:
<span class="math-container">$$
\mathbf g = \frac{d \mathbf v}{dt} \\
\nabla \times \mathbf v = \gamma_0\mathbf L
$$</span></p>
<p>Where <span class="math-container">$\mathbf L$</span>, is the angular momentum, and then, if you rearange the equations, you could get the velocity which an object will have at a certain height, or find the escape velocity of a planet (maybe this is too idyllic).
Also, would be possible to create a set of equations in which this Newtonian gravity has solutions of a wave equation (in a similar way that of Maxwell's Equations), resembling to the gravitational waves or GR?</p>
|
<p>Newtonian gravity does not have any forces that are not radially inward or outward, so if you are in pure Newtonian gravity, there is no generalization of magnetism.</p>
<p>Now, if you've seen a <a href="https://physics.stackexchange.com/questions/64703/how-special-relativity-causes-magnetism">fancy derivation of magnetism using only electricity and special relativity</a>, you would expect that something similar would be true in general relativity, and it does show up, in the form of the <a href="https://en.wikipedia.org/wiki/Lense%E2%80%93Thirring_precession" rel="nofollow noreferrer">Lense–Thirring effect</a>, and it works very similarly to magnetism.</p>
<p>You do, of course, also get gravitational waves, but figuring out exactly what the waves are, the directions of the force, etc., differs in the details from electromagnetic waves, because gravitational radiation has spin 2 and couples to the quadrupole moment of the matter distribution, whereas EM radiation has spin 1 and couples to the dipole moment of the charge distribution. So, ultimately, the equations you get aren't just perfect copies of the Maxwell equations, the way that Gauss's Law and Newton's Law are.</p>
| 1,022
|
differential equations
|
Building realistic simulation using differential equations
|
https://physics.stackexchange.com/questions/519857/building-realistic-simulation-using-differential-equations
|
<p>I am building simulation using differential equations to model the motion of a damped vertical spring-mass system. I wish to use this simulation to extract data. For example, I am trying to find the effect of mass on the damping. </p>
<p>The problem I am facing is every time I run the model, I receive the same numbers. Thus, I cannot run multiple trials. This makes sense because the model is purely iterative. How do I add a bit of randomness to the model to make it more realistic and usable for experimentation?</p>
|
<p>The answer to the question as asked is <strong>“you add randomness to the system parameters and initial conditions”</strong>. Some search terms are “uncertainty quantification” and “design of experiments”.</p>
<p>But ... those techniques are used for models where there isn’t a detailed analytic understanding of the end-to-end behavior of the system. They would be wasteful overkill for the system in front of you.</p>
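A minimal sketch of the first suggestion (my own, with arbitrary nominal values and noise levels): draw the system parameters and initial condition from narrow Gaussians on each trial, then integrate the damped oscillator <span class="math-container">$\ddot x + (c/m)\dot x + (k/m)x = 0$</span> with semi-implicit Euler, so repeated runs no longer return identical numbers:

```python
import random


def run_trial(seed):
    rng = random.Random(seed)
    m = rng.gauss(1.0, 0.02)         # nominal mass 1.0 kg, 2% spread
    c = rng.gauss(0.3, 0.01)         # damping coefficient
    k = rng.gauss(10.0, 0.1)         # spring constant
    x, v = rng.gauss(0.1, 0.005), 0.0   # jittered initial displacement
    dt = 1e-3
    for _ in range(5000):            # integrate 5 seconds
        v += (-(c / m) * v - (k / m) * x) * dt   # semi-implicit Euler
        x += v * dt
    return x


results = [run_trial(seed) for seed in range(5)]
print(results)                        # every "trial" now differs slightly
```

Seeding each trial keeps the runs reproducible while still distinct, which is useful when you later want to repeat a specific experiment.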
| 1,023
|
differential equations
|
Looking for a good book on Differential Equations
|
https://physics.stackexchange.com/questions/571726/looking-for-a-good-book-on-differential-equations
|
<p>I know many of you are tired of book recommendation posts and questions. But I am self learning Theoretical Physics, and I am having a hard time choosing a book to learn differential equations (ODEs). I really want a good understanding of differential equations; I have been told that ODEs and PDEs are the language of physics.</p>
<p>Anyways, if you could be so kind to give me some good recommendations, I would truly appreciate it. I thank you if you read my question and hope you have a wonderful rest of the day.</p>
<p>PS: I'm new to this site, so this is essentially my first post. I was checking the h bar chatroom. I hope to one day gain the knowledge that you guys have. Truly mesmerizing community. I hope to grow in this environment.</p>
|
<p>Welcome here!</p>
<p>If you are just starting you may want to use a very didactic textbook:</p>
<p>Zill. - Differential Equations with Boundary Value Problems (look for the solution's manual, it helps if you are an autodidact)</p>
<p>Then a very good one to get into the maths of physics I'd recommend this one:</p>
<p>Arfken - Mathematical Methods for Physicists: A Comprehensive Guide</p>
<p>You may download them from Library Genesis (a bit illegal)</p>
| 1,024
|
differential equations
|
Differential Equations for Block Diagram of Satellite Attitude Control System
|
https://physics.stackexchange.com/questions/122219/differential-equations-for-block-diagram-of-satellite-attitude-control-system
|
<p><img src="https://i.sstatic.net/zuU5y.png" alt="Text Book Cut Out"></p>
<p>I am trying to understand the procedure to set up differential equations from a block diagram. The enclosed example is about the attitude control of a satellite. The ultimate goal is to find a state-space system representation of the model. Transfer functions are the intermediate step in this process; I understand how they are set up. I encounter problems as soon as differential equations must be determined. For example, $\dot{x}_{1}$ is stated to be $0.01K(\theta_{c}-\theta)$, which seems to only account for the lower block of the controller component. Also for $\dot{x}_{2}$ and $\dot{x}_{3}$, the $0.01$ disappears from the equations, which I don't understand. It would be very much appreciated if someone could tell me how I should approach these differential equations.</p>
|
<p>I think I've understood the basic tricks that the book used, which are what are tripping you up.</p>
<p>The main difficulty I see you struggling with is the arbitrariness of the variable selection. <em>Could</em> you write the state-space system differently? Yes. There are many ways you could write it. The particular selection the book used is due to specific preferences dictated by consistency of the material they're teaching.</p>
<p>So let me explain how this answers your questions:</p>
<blockquote>
<p>For example, x˙1 is stated to be 0.01K(θc−θ), which seems to only account for the lower block of the controller component. </p>
</blockquote>
<p>Yes. But this is a true statement. How? Because x1 isn't what you think it is. Here, I've tried to re-label the system with where I think they're selecting the variables. I colored my marking in red.</p>
<p><img src="https://i.sstatic.net/gbPdD.png" alt="marked"></p>
<p>You can see here that x1 is referring to a <em>particular</em> output. This is why it's not the summation output. It's just not that variable. You could introduce a new variable which is the output of the summation. They just don't have need for such a variable.</p>
<p>Then there are some more wonky elements of this. For instance, since the last block is 2nd order, there are 2 independent variables introduced, one of which doesn't have a "location" on the diagram at all. You would have to split the block into 2 blocks with a line in-between in order to label x3 on the diagram.</p>
<p>I hope that helps.</p>
| 1,025
|
differential equations
|
Research problems in application of Lie groups to differential equations
|
https://physics.stackexchange.com/questions/100800/research-problems-in-application-of-lie-groups-to-differential-equations
|
<p>Are there any open problems in physics involving Lie groups and differential equations suitable for a PhD thesis?</p>
<p>Some applications are say, Noether's theorem in classical or quantum field theory. But I am not sure if those topics lead to any research problems. </p>
<p>So any idea about prospective research problems in application of Lie groups to differential equations?</p>
|
<p>I do not believe that there are any.<br>
You can check Stephanie Singer's book on Lie groups as applied to symmetries of differential equations, and also her book on mechanics, and look at the unfortunately old-fashioned review
<a href="http://people.ucsc.edu/~rmont/papers/Symm_in_Mech_Review.PDF" rel="nofollow">http://people.ucsc.edu/~rmont/papers/Symm_in_Mech_Review.PDF</a></p>
<p>There you will see that although there is some research activity, it is what a physicist would consider "pure math", e.g., the possibility of collisions in the three-body problem. And this activity, also in symplectic geometry and Lorentzian manifolds, takes place in the mathematics community; it is not done by, nor of much interest to, physicists. And Prof. Singer herself is now doing statistics... just like me: she left Lie groups to do stats, as I did too.</p>
<p>Statistical Mechanics is the future of Physics.</p>
<p>Now Lie Groups do play an important role in Statistical Mechanics, see Mackey's wonderful review article in the Bulletin of the American Mathematical Society, and Volume 4 of Gelfand and Naimark's Les Distributions: applications de l'analyse harmonique (I am sure there is an English translation, too). And some of that activity is indeed Physics, although what is grouped around Ergodic Theory and particularly interested Mackey was purely mathematical.</p>
| 1,026
|
differential equations
|
Solving differential equations without approximations?
|
https://physics.stackexchange.com/questions/133974/solving-differential-equations-without-approximations
|
<p>In physics, many problems start from a mathematical relationship describing the physical phenomenon at hand, and then, on many occasions, one keeps only the first-order terms so as to get a nice, solvable differential equation. Higher-order terms may be considered later, but as far as I know one rarely goes to third order or above.</p>
<p>My question is: are there any extensive analyses of how physical problems would behave if we no longer made any approximations, keeping <em>all</em> orders intact?</p>
|
<p>This is a quite general question. Whether or not one should use an approximation depends on several things. Disclaimer: my answer is not restricted to differential equations and contains examples from perturbation theory, but the general idea still applies. </p>
<p>The most important question is whether an approximation makes sense from a physical point of view. One might lose important information when cutting off a Taylor series at low order. For example, there are cases in perturbative quantum field theory where it is important to calculate Feynman diagrams up to several loops, and other problems where going only to tree level fully suffices to capture the desired physical effect. Of course, issues like convergence of the series can also play a role (to clarify: Feynman diagrams correspond to terms in a specific Taylor series; tree level is the lowest order, while higher orders are loops). </p>
<p>Another question concerns practical limitations: is it even possible to solve the problem exactly? If yes, is the exact solution difficult to acquire, and is it necessary? When is it necessary to use an approximation? </p>
<p>There is also the other end of the spectrum: is an approximation even possible? An example would be quantum chromodynamics at low energies: due to asymptotic freedom, there is no small expansion parameter, and perturbation theory is doomed to fail. </p>
<p>To summarize: the answer to your question depends on many factors, and the guiding principle should be physical intuition. What you should always keep in mind is the question "Does what I am doing make sense?".</p>
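As a concrete illustration of the trade-off (my own sketch, not part of the original answer): for the plane pendulum, keeping all orders means solving $\ddot\theta = -(g/L)\sin\theta$, while the usual truncation keeps only the first-order term $\sin\theta\approx\theta$. A quick numerical comparison, with $g/L=1$ and two arbitrary amplitudes, shows the truncation is excellent at small amplitude and fails badly at large amplitude:

```python
import math

def rk4_step(f, t, y, dt):
    # one classical Runge-Kutta step for the system y' = f(t, y)
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2*ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt,   [yi + dt*ki   for yi, ki in zip(y, k3)])
    return [yi + dt/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

g_over_L = 1.0
full   = lambda t, y: [y[1], -g_over_L*math.sin(y[0])]   # all orders kept
linear = lambda t, y: [y[1], -g_over_L*y[0]]             # sin(x) ~ x, first order only

def theta_at(f, theta0, t_end, dt=1e-3):
    # angle at t_end, starting from rest at theta0
    t, y = 0.0, [theta0, 0.0]
    while t < t_end - dt/2:
        y = rk4_step(f, t, y, dt)
        t += dt
    return y[0]

small_gap = abs(theta_at(full, 0.01, 10.0) - theta_at(linear, 0.01, 10.0))
large_gap = abs(theta_at(full, 2.00, 10.0) - theta_at(linear, 2.00, 10.0))
print(small_gap, large_gap)  # the truncation error grows with amplitude
```

The point is not that the truncated model is wrong, but that its validity is an amplitude-dependent physical question, which is exactly the "does this make sense?" criterion above.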
| 1,027
|
differential equations
|
How to make sense of quantum fields differential equations?
|
https://physics.stackexchange.com/questions/320114/how-to-make-sense-of-quantum-fields-differential-equations
|
<p>A quantum field is an operator valued function, that is, a function $\varphi(x)$ defined on spacetime which assigns operators on a Hilbert space to each event $x$. In a more rigorous approach a quantum field could be defined as an operator valued distribution on spacetime.</p>
<p>Anyway, it is quite common that these quantum fields obey differential equations, like the Klein-Gordon equation $$(\Box +m^2)\varphi=0$$ and Dirac's equation $$(i\gamma^\mu \partial_\mu - m)\psi=0.$$</p>
<p>In that sense we need to understand what the derivative of a quantum field is, and this seems a little complicated.</p>
<p>Of course one can say: "a quantum field takes values in a Hilbert space, so you can use the Fréchet derivative", but it is not even clear what Hilbert space the quantum field takes values in. Also, as is clear from Quantum Mechanics, most of the operators we deal with in QM are unbounded and hence discontinuous. I believe this has a great impact on how we should deal with things like derivatives.</p>
<p>So, in order for quantum fields to satisfy differential equations, what is the correct way to define and understand the derivative of a quantum field? How can we make sense of quantum field differential equations?</p>
|
<p>The fields of a QFT are not functions of the spatial coordinates $\boldsymbol x\in\mathbb R^n$, but operator-valued distributions (borrowing Wightman's terminology). The notion of the fields being functions of time ("sharp-time fields") can be kept in general, but their dependence on $\boldsymbol x$ is "too singular", so that they become distributions; fields need be smeared out in space.</p>
<p>Therefore we should write $\phi[f]$ instead of $\phi(\boldsymbol x)$, in the same way we should write $\delta[f]$ for the Dirac delta. In this sense, the commutation relations
$$
[\phi(\boldsymbol x),\pi(\boldsymbol y)]=\delta(\boldsymbol x-\boldsymbol y)
$$
should be written as
$$
[\phi(f),\pi(g)]=(f,g)
$$
for a certain scalar product $(\cdot,\cdot)$ on your space of test functions (that depends on the spin of $\phi$).</p>
<p>Similarly, the field equations
$$
\dot\pi(\boldsymbol x)-\Delta \phi(\boldsymbol x)+m^2\phi(\boldsymbol x)=0
$$
should actually be written as
$$
\dot\pi[f]-\phi[\Delta f]+m^2\phi[f]=0
$$</p>
<p>More generally, the naïve field equations
$$
\mathscr D\phi(x)=0
$$
are nothing but a short-hand notation for
$$
\phi[\mathscr Df]=0
$$
for all $f$ in the domain of $\phi$. Therefore, the derivatives acting on fields are to be understood in the sense of <em>distributional derivatives</em> (if $T$ is a distribution, then we define $T'[f]\equiv -T[f']$, etc.).</p>
<p>For free theories, the whole framework of operator-valued distributions is perfectly well understood, and one may work with all mathematical rigour one may wish. For interacting theories though, we are far from a mathematically sound theory.</p>
<p>For more details see for example <a href="https://link.springer.com/chapter/10.1007%2F978-3-642-73104-4_11" rel="noreferrer">On Relativistic Irreducible Quantum Fields Fulfilling CCR</a> or <a href="https://arxiv.org/abs/1602.00662" rel="noreferrer">Haag's theorem in renormalised quantum field theories</a>. Also, anything by Wightman (e.g., <a href="http://rads.stackoverflow.com/amzn/click/0691070628" rel="noreferrer">PCT, Spin and Statistics, and All That</a>).</p>
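A minimal numerical illustration of the rule $T'[f]\equiv -T[f']$ (my own sketch, not from the answer above): the distributional derivative of the Heaviside step, evaluated this way on a Gaussian test function, reproduces the Dirac delta:

```python
import math

# Smooth, rapidly decaying test function and its derivative
f  = lambda x: math.exp(-x*x)
fp = lambda x: -2*x*math.exp(-x*x)

def integrate(g, a, b, n=200000):
    # plain midpoint rule
    h = (b - a)/n
    return sum(g(a + (i + 0.5)*h) for i in range(n)) * h

# The Heaviside step acts on test functions as H[f] = integral of f over (0, inf)
H = lambda g: integrate(g, 0.0, 20.0)

# Distributional derivative: H'[f] := -H[f'].  Since -f(inf) + f(0) = f(0),
# H' acts exactly like the Dirac delta.
Hprime_f = -H(fp)
delta_f  = f(0.0)
print(Hprime_f, delta_f)   # both ≈ 1.0
```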
| 1,028
|
differential equations
|
Higher-order derivatives than second-order differential equations
|
https://physics.stackexchange.com/questions/679352/higher-order-derivatives-than-second-order-differential-equations
|
<p>From <a href="https://doi.org/10.1063/1.2155755" rel="nofollow noreferrer">https://doi.org/10.1063/1.2155755</a></p>
<blockquote>
<p>he limited himself to second-order differential equations.</p>
</blockquote>
<blockquote>
<p>Our experience in elementary-particle physics has taught us that any term in the field equations of physics that is allowed by fundamental principles is likely to be there in the equations</p>
</blockquote>
<p>I guess the author means this from the effective field theory point of view: effective actions include non-renormalizable terms, which can lead to higher derivatives. I would like to see an example that goes beyond second-order differential equations.</p>
<p>Let me start from <span class="math-container">$\phi^4$</span>. The effective Lagrangian is, e.g., Peskin & Schroeder eq. (12.23)
<span class="math-container">$$
\int d^d x \mathcal{L}_{\mathrm{eff}} = \int d^d x' \left[ \frac{1}{2} \left( \partial'_{\mu} \phi' \right)^2 + \frac{1}{2} m'^2 \phi'^2 + \frac{1}{4} \left( \lambda' \phi'^4 + C \left( \partial'_{\mu} \phi' \right)^4 + D' \phi'^6 +\cdots \right) \right] \tag{1}
$$</span></p>
<p>I suppose
<span class="math-container">$$
\mathcal{L}_{\mathrm{eff}} = \frac{1}{2} \left( \partial'_{\mu} \phi' \right)^2 + \frac{1}{2} m'^2 \phi'^2 + \frac{1}{4} \left( \lambda' \phi'^4 + C \left( \partial'_{\mu} \phi' \right)^4 + D' \phi'^6 +\cdots \right) \tag{2}
$$</span></p>
<p>Try to cook a classical equation of motion. From the Euler-Lagrangian equation,
<span class="math-container">$$
\frac{ \partial \mathcal{L} }{ \partial \phi} - \partial_{\mu} \frac{ \partial \mathcal{L} }{ \partial \left( \partial_{\mu} \phi \right) } = 0\tag{3}
$$</span>
plug in the effective lagrangian, we should get some extra terms than the Klein-Gordon equation
<span class="math-container">$$
\square \phi' - m^2 \phi' + C \partial'_{\mu} \left[ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right] +\cdots = 0.\tag{4}
$$</span></p>
<p>So far the extra term with prefactor <span class="math-container">$C$</span> still looks like a second-order differential equation, as one first-order derivative outside the square bracket, <span class="math-container">$\partial'_{\mu}$</span>, acting on one first-order derivative term <span class="math-container">$\left( \partial'^{\mu} \phi' \right) $</span> times the other first-order derivative term (a first-order derivative times itself) <span class="math-container">$\left( \partial'_{\mu} \phi' \right)^2$</span>, i.e., <span class="math-container">$(fg)' = f'g + fg'$</span>. If I further organize the inside square bracket part of the extra term by <span class="math-container">$f' g = (fg)' - f g' $</span>,</p>
<p><span class="math-container">$$
C \partial'_{\mu} \left[ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right] \\
\equiv C \partial'_{\mu} \left\{ \left( \partial'^{\mu} \phi' \right) \left( \partial'_{\mu} \phi' \right)^2 \right\} \\
= C \partial'_{\mu} \left\{ \partial'^{\mu} \left[ \phi' \left( \partial'_{\mu} \phi' \right)^2 \right] - \phi' \partial'^{\mu}\left[ \left( \partial'_{\mu} \phi' \right)^2 \right] \right\} \\
= C \underline{\partial'_{\mu}} \left\{ \partial'^{\mu} \left[ \phi' \left( \partial'_{\mu} \phi' \right)^2 \right] - 2 \phi' \left[ \left( \partial'^{\mu} \phi' \right) \left( \underline{ \partial'^{\mu} \partial'_{\mu}} \phi' \right) \right] \right\}.\tag{5}
$$</span></p>
<p>It seems I get a third-order differential equation from the underline part of the above equation. Is my reasoning right?</p>
<p>I think I did not impose any quantization in getting the equation of motion (except for the effective action coming from path integrals), since I think the view in the Physics Today essay is not really about quantization. Or am I not even wrong?</p>
<p>Or should the order of a differential equation be counted by the total number of derivatives appearing across its terms, rather than by the highest-order differentiation acting on a single term?</p>
|
<ol>
<li><p>OP is right that if the Lagrangian density remains of 1st order, then the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Lagrange_equation" rel="nofollow noreferrer">Euler-Lagrange (EL) equations</a> will only be of 2nd order. See also e.g. <a href="https://physics.stackexchange.com/q/18588/2451">this</a> & <a href="https://physics.stackexchange.com/q/4102/2451">this</a> related Phys.SE posts.</p>
</li>
<li><p>However, the <a href="https://www.google.com/search?as_q=wilsonian+effective+action" rel="nofollow noreferrer">Wilsonian effective action</a><br />
<span class="math-container">$$\begin{align}
\exp&\left\{ -\frac{1}{\hbar}W_c[J^H,\phi_L] \right\}\cr
~:=~~~&\int \! {\cal D}\frac{\phi_H}{\sqrt{\hbar}}~\exp\left\{ \frac{1}{\hbar} \left(-S[\phi_L+\phi_H]+J^H_k \phi_H^k\right)\right\} \end{align}$$</span>
is defined by integrating out heavy/high modes <span class="math-container">$\phi^k_H$</span> and leaving the light/low modes <span class="math-container">$\phi^k_L$</span>. Here <span class="math-container">$J^H_k$</span> denotes sources for the heavy modes. The (possibly <strong>non-local</strong>!) Wilsonian effective action <span class="math-container">$W_c[J^H,\phi_L]$</span> is the generating functional of connected <span class="math-container">$\phi_H$</span> Feynman diagrams in a background <span class="math-container">$J^H,\phi_L$</span>.</p>
</li>
<li><p>Nevertheless, the heavy propagators are exponentially suppressed, so the <strong>non-locality</strong> is mild, and can be taking into account by a <strong>Taylor expansion</strong>, cf. e.g. my Phys.SE answer <a href="https://physics.stackexchange.com/a/695184/2451">here</a>.</p>
</li>
<li><p>The upshot is that, in the Wilsonian <a href="https://en.wikipedia.org/wiki/Renormalization_group" rel="nofollow noreferrer">renormalization group</a> flow, the Wilsonian Lagrangian density will in principle contain all possible terms that are not excluded by symmetry, e.g.
<span class="math-container">$$ \ldots
+ \ldots
+\frac{E}{2} (\partial_{\mu}\partial_{\nu}\phi)(\partial^{\mu}\partial^{\nu}\phi)
+ \frac{F}{2} (\partial_{\mu}\phi)(\partial^{\mu}\partial^{\nu}\phi)(\partial_{\nu}\phi)
+ \ldots ,$$</span>
i.e., the Lagrangian density becomes of <strong>higher order</strong>.</p>
</li>
<li><p>For <strong>higher-order</strong> Lagrangian theories,
the EL equations (3) become
<span class="math-container">$$ 0~\approx~\frac{\delta S}{\delta \phi}
~=~\frac{\partial {\cal L}}{\partial \phi}
-\sum_{\mu} \frac{d}{dx^{\mu}} \frac{\partial {\cal L}}{\partial (\partial_{\mu}\phi)} + \sum_{\mu\leq \nu} \frac{d}{dx^{\mu}} \frac{d}{dx^{\nu}} \frac{\partial {\cal L}}{\partial (\partial_{\mu}\partial_{\nu}\phi)} - \ldots. $$</span>
Here the <span class="math-container">$\approx$</span> symbol means equality modulo eoms, and the ellipsis <span class="math-container">$\ldots$</span> denotes possible higher-derivative terms.</p>
</li>
<li><p>In general, if the Lagrangian density is of <span class="math-container">$n$</span>'th order, then the EL equations will be of <span class="math-container">$2n$</span>'th order.</p>
</li>
</ol>
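For instance (my own worked example, with symmetrization factors glossed over), keeping only the $E$-term of point 4 in the Lagrangian density, the formula of point 5 yields a fourth-order field equation, consistent with point 6 for $n=2$:

```latex
% Second-order Lagrangian density: keep only the E-term
\mathcal{L} \;=\; \frac{E}{2}\,(\partial_{\mu}\partial_{\nu}\phi)(\partial^{\mu}\partial^{\nu}\phi),
\qquad
\frac{\partial\mathcal{L}}{\partial(\partial_{\mu}\partial_{\nu}\phi)} \;=\; E\,\partial^{\mu}\partial^{\nu}\phi .
% The higher-order EL equation then keeps only the double-derivative term:
0 \;\approx\; \frac{d}{dx^{\mu}}\frac{d}{dx^{\nu}}
   \left(E\,\partial^{\mu}\partial^{\nu}\phi\right)
  \;=\; E\,\Box^{2}\phi ,
```

i.e. a fourth-order equation, which is $2n$ for an $n=2$ Lagrangian density.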
| 1,029
|
differential equations
|
On different methods of solving differential equations
|
https://physics.stackexchange.com/questions/778559/on-different-methods-of-solving-differential-equations
|
<p>I've studied the basic concepts of partial differential equations, and one question comes to my mind: what is the purpose of the different methods of solving differential equations? For example, if you start with:
<span class="math-container">$$
\frac{1}{c^2}\frac{\partial^2 \Psi}{\partial t^2} - \frac{\partial^2 \Psi}{\partial x^2} = 0
$$</span>
and assume a solution of the type:
<span class="math-container">$$
\Psi(x, t) = X(x)T(t)
$$</span>
you will arrive at something like:
<span class="math-container">$$
\Psi_n(x, t) = \left( A \sin(k_n x) + B\cos(k_n x) \right)e^{ick_nt}
$$</span>
where the set of all eigenvectors <span class="math-container">$\Psi_n(x, t)$</span> forms a basis of a Hilbert space. From what I understand, by using this method you find eigenfunctions (or eigenvectors) that are the stationary solutions to the equation, i.e. solutions in equilibrium (in a similar way to finding the diagonal basis of a matrix, which is the simplest form of a linear map you can find).</p>
<p>But now imagine you want dynamical solutions, solutions which are not stationary. You would use something like the Fourier series (or Fourier-Bessel series, etc.), given initial conditions, or a Fourier transform. Also, in this case, for the wave equation, performing the change of variables:
<span class="math-container">$$
\eta = x + ct \\
\xi = x - ct
$$</span>
also yields a non-stationary solution. So my question is: is this right? Will you always get stationary solutions by separation of variables, and non-stationary ones with all the other methods? Is this a universal thing? Are there any other methods?</p>
|
<p>I do not understand what you mean when you say that the solutions you get with separation of variables are stationary, since they depend explicitly on time. Maybe it will help you to realize that your function
<span class="math-container">\begin{equation}
\Psi_n(t,x) = \left( A_n \sin(k_n x) + B_n \cos(k_n x) \right) e^{i c k_n t}
\end{equation}</span>
is actually of the form <span class="math-container">$f(x+ct) + g(x-ct)$</span>, which is what you seem to be referring to in the last paragraph. Using the relations
<span class="math-container">\begin{align}
\sin(k_n x) = \frac{e^{i k_n x} - e^{-i k_n x}}{2i}, && \cos(k_n x) = \frac{e^{i k_n x} + e^{-i k_n x}}{2}
\end{align}</span>
in your <span class="math-container">$\Psi_n(t,x)$</span>, you find that it becomes
<span class="math-container">\begin{equation}
\Psi_n(t,x) = \left( C_ne^{i k_n x} + \bar{C}_n e^{- ik_n x} \right)e^{ic k_n t} = C_n e^{ik_n(x+ct)} + \bar{C}_n e^{-i k_n(x-ct)},
\end{equation}</span>
where
<span class="math-container">\begin{align}
C_n = \frac{B_n -iA_n}{2}, && \bar{C}_n = \frac{B_n +iA_n}{2},
\end{align}</span>
which is of the form <span class="math-container">$f(x+ct) + g(x-ct)$</span>. Note that the wave equation is linear, which means that the sum of any number of solutions is also a valid solution. With separation of variables you found an infinite set of solutions (one for each <span class="math-container">$n$</span>), so the most general one is a linear combination of all of them. Indeed, any two functions <span class="math-container">$f(x+ct)$</span> and <span class="math-container">$g(x-ct)$</span> defined on some interval of the real line can be decomposed in Fourier modes as
<span class="math-container">\begin{align}
f(x+ct) = \sum_{n= - \infty}^{\infty} c_n e^{ik_n(x+ct)} && g(x-ct) = \sum_{n= - \infty}^{\infty} d_n e^{ik_n(x-ct)},
\end{align}</span>
which confirms that the two methods you mentioned lead to the same set of solutions for the equation.</p>
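For what it's worth, the algebra above is easy to check numerically; here is a small sketch (the values of $A$, $B$, $k$, $c$ are arbitrary choices) comparing the separated and travelling-wave forms at random points:

```python
import math, cmath, random

# Arbitrary test values for the amplitudes, wavenumber and wave speed
A, B, k, c = 1.3, -0.7, 2.0, 3.0
C    = (B - 1j*A)/2
Cbar = (B + 1j*A)/2

random.seed(0)
max_err = 0.0
for _ in range(200):
    x, t = random.uniform(-5, 5), random.uniform(0, 5)
    # separated form: (A sin(kx) + B cos(kx)) e^{ickt}
    sep  = (A*math.sin(k*x) + B*math.cos(k*x)) * cmath.exp(1j*c*k*t)
    # travelling-wave form: C e^{ik(x+ct)} + Cbar e^{-ik(x-ct)}
    trav = C*cmath.exp(1j*k*(x + c*t)) + Cbar*cmath.exp(-1j*k*(x - c*t))
    max_err = max(max_err, abs(sep - trav))
print(max_err)   # ≈ 0: the two forms are the same function
```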
| 1,030
|
differential equations
|
Application for differential equation of higher order
|
https://physics.stackexchange.com/questions/283292/application-for-differential-equation-of-higher-order
|
<p>We found some interesting insights into differential equations of the form</p>
<p>$y^{(n)}(x)+F_\lambda(y(x),y'(x),...,y^{(n-1)}(x))=0$,</p>
<p>i.e. for ordinary differential equations of $n$-th order with $n\geq2$. The function $F$ is a polynomial which can include a set of parameters $\lambda$.</p>
<p>We know that in physics the highest derivative is usually of order two(?), but we are searching for applications of this kind of differential equation with $n\geq3$ in physics, engineering, or any other area. If you have an idea, or know of models or theories in which such equations occur, your input would be appreciated very much.</p>
|
<p>That's just not true. If a linear system has $n$ independent ways in which energy can be stored as states, and energy can flow between these states, then you can model the system with an $n$th-order differential equation (equivalently, a transfer function whose denominator is an $n$th-degree polynomial).</p>
<p>Granted, some systems can be approximated by a 2nd-order rational transfer function, but a closer look often shows that higher-order fits work better. For example, you might model a spring-mass system as a lumped-parameter 2nd-order system, but find that the spring has torsional as well as bending modes, and that the rigid mass is perhaps not so rigid. We tend to model the states that matter most to our application, and so we might neglect the higher-order modes.</p>
<p>And look carefully: the real world is hardly ever linear. Nonlinear systems are more the norm.</p>
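One standard mechanism behind this (my own example, not from the answer above): eliminating variables from coupled second-order systems produces a single higher-order equation. For two equal masses coupled by three equal springs, eliminating $x_2$ gives a fourth-order ODE whose characteristic roots are exactly the two normal-mode frequencies:

```latex
% Two equal masses between three equal springs:
m\ddot{x}_1 = -2k\,x_1 + k\,x_2 , \qquad
m\ddot{x}_2 = k\,x_1 - 2k\,x_2 .
% Solving the first equation for x_2 and substituting into the second
% eliminates x_2 and leaves a single 4th-order equation for x_1:
x_2 = \frac{m}{k}\,\ddot{x}_1 + 2x_1
\;\;\Longrightarrow\;\;
\frac{m^2}{k}\,x_1^{(4)} + 4m\,\ddot{x}_1 + 3k\,x_1 = 0 ,
% whose characteristic roots give the two normal-mode frequencies
% \sqrt{k/m} and \sqrt{3k/m}.
```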
| 1,031
|
differential equations
|
Differential equation
|
https://physics.stackexchange.com/questions/575708/differential-equation
|
<p>I am trying to solve the following differential equation;</p>
<p><span class="math-container">$$\frac{d^2 x}{d t^2}=-\omega^2 x \delta(t-t^\prime).$$</span></p>
<p>I know this is of the form</p>
<p><span class="math-container">$$x(t)= A \sin(\omega t) + B \cos(\omega t).$$</span></p>
<p>However, this Dirac delta function is confusing me.
My reasoning is the following:
a spike in the acceleration at <span class="math-container">$t^\prime$</span> results in a constant increase in velocity from <span class="math-container">$t^\prime$</span> onwards, and a corresponding linear increase in distance, so</p>
<p><span class="math-container">$$x(t) = \left[A \sin(\omega t) + B \cos(\omega t)\right] t \Theta(t>t^\prime).$$</span></p>
<p>With <span class="math-container">$\Theta$</span> being the step function. This still feels more like a guess than a straightforward answer, and I am not 100% sure of the first derivative either.</p>
<p>Any help would be welcome.</p>
|
<p>You need to find the solutions for <span class="math-container">$t< t'$</span> and for <span class="math-container">$t> t'$</span> and then fix the integration constants by the matching conditions at <span class="math-container">$t=t'$</span>. The problem is actually very similar to that of a delta-function barrier in quantum mechanics, but with zero energy.</p>
<p>The solution with sines and cosines does not belong here, since it comes from the theory of ordinary differential equations with constant coefficients, which is not the case here.</p>
<p><strong>Update</strong><br />
Here is another type of problem where such an equation appears: the field of an infinite charged plane <a href="https://physics.stackexchange.com/a/540392/247642">link</a>.</p>
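To sketch the matching procedure concretely (my own addition, with arbitrary numerical values): away from $t'$ the right-hand side vanishes, so $x$ is linear in $t$ on each side; integrating the equation across $t'$ gives continuity of $x$ plus a velocity jump $\dot x(t'^+)-\dot x(t'^-)=-\omega^2 x(t')$. The piecewise solution built this way agrees with a brute-force integration in which the delta is replaced by a narrow unit-area bump:

```python
import math

w, tp  = 2.0, 1.0        # omega and the kick time t' (arbitrary test values)
x0, v0 = 1.0, 0.5        # initial conditions at t = 0

def x_matched(t):
    # Away from t' the equation is x'' = 0, so x is linear on each side.
    # Integrating x'' = -w^2 x delta(t - t') across t' gives the matching rules:
    # x continuous, and x'(t'+) - x'(t'-) = -w^2 x(t').
    if t <= tp:
        return x0 + v0*t
    xp = x0 + v0*tp
    return xp + (v0 - w*w*xp)*(t - tp)

# Cross-check: replace the delta by a narrow unit-area bump and integrate directly.
eps = 1e-3
def bump(t):
    if abs(t - tp) >= eps/2:
        return 0.0
    return (2.0/eps)*math.cos(math.pi*(t - tp)/eps)**2   # smooth, area 1

t, dt, x, v = 0.0, 1e-6, x0, v0
while t < 2.0:
    v += -w*w*x*bump(t)*dt
    x += v*dt
    t += dt

gap = abs(x - x_matched(2.0))
print(x, x_matched(2.0), gap)   # agreement up to the finite bump width
```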
| 1,032
|
differential equations
|
Why are differential equations used a lot in physics?
|
https://physics.stackexchange.com/questions/733203/why-are-differential-equations-used-a-lot-in-physics
|
<p>I have heard from my physics teacher that differential equations are very useful in physics. In what parts of physics exactly is it useful? Why are they generally useful?</p>
|
<p>I'd like to elaborate on an earlier answer.</p>
<p>In general the quantities that go into the equations come in chunks, related by differentiation. The best-known example is the trio: position, velocity, acceleration. Velocity is the first time derivative of position, acceleration being the second time derivative.</p>
<p>I think the relations we encounter are in fact <em>all</em> of the type where the rate of change of one quantity relates to the magnitude of some other quantity. There is the relation between rate of change of velocity and force: <span class="math-container">$F=ma$</span></p>
<p>Then there will be classes of cases where the rate of change of some quantity A relates to rate of change of quantity B.</p>
<p>As we know, equations with one or more derivatives in them are classified as 'differential equations'.</p>
| 1,033
|
differential equations
|
Complex exponential method of solving differential equations
|
https://physics.stackexchange.com/questions/623561/complex-exponential-method-of-solving-differential-equations
|
<p>In the <a href="https://www.feynmanlectures.caltech.edu/I_23.html" rel="nofollow noreferrer">twenty third Feynman lecture</a>, the solution of the following differential equation is discussed:</p>
<p><span class="math-container">$$ \frac{d^2 x}{dt^2} + \frac{kx}{m} = \frac{F}{m}$$</span></p>
<p>After 'complexifying' this differential equation, he gets:</p>
<p><span class="math-container">$$ \frac{d^2 x}{dt^2} + \frac{kx}{m} = \frac{\hat{F} e^{i\omega t} }{m}$$</span></p>
<p>And then it is written that we can write:</p>
<p><span class="math-container">$$ x = |x| e^{i \omega t}$$</span></p>
<p>Assuming <span class="math-container">$x$</span> is a complex number and this leads to:</p>
<p><span class="math-container">$$ \frac{dx}{dt} = i \omega x$$</span></p>
<p>However, the above result assumes that <span class="math-container">$|x|$</span> is constant, how do we rigorously justify this assumption?</p>
|
<p><span class="math-container">$x$</span> and <span class="math-container">$F$</span> can each be expressed as a Fourier integral:
<span class="math-container">$$x(t)=\int x(\omega)e^{i\omega t} d\omega$$</span>
<span class="math-container">$$F(t)=\int F(\omega)e^{i\omega t} d\omega$$</span>
This of course assumes that <span class="math-container">$F(t)$</span> is square integrable (<span class="math-container">$L^2$</span> Hilbert space).</p>
<p>The rest is just substitution into the equations of motion and comparison of coefficients (which is possible due to the orthogonality of the Fourier basis functions and the linearity of the equations of motion).</p>
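A quick numerical check of the comparison-of-coefficients step for a single Fourier mode (my own sketch; the values of $m$, $k$, $F_0$, $\omega$ are arbitrary, with $\omega$ kept away from resonance):

```python
import cmath

# Arbitrary test parameters (w away from the resonance sqrt(k/m) = 2)
m, k  = 2.0, 8.0
F0, w = 1.0, 3.0

# Comparing coefficients of e^{iwt}: (-w^2 + k/m) x_hat = F0/m
x_hat = (F0/m) / (k/m - w*w)
x = lambda t: x_hat * cmath.exp(1j*w*t)

# Plug the mode back into x'' + (k/m) x = (F0/m) e^{iwt},
# computing x'' by a central finite difference (no analytic shortcut)
t, h  = 0.37, 1e-4
xpp   = (x(t + h) - 2*x(t) + x(t - h)) / h**2
resid = abs(xpp + (k/m)*x(t) - (F0/m)*cmath.exp(1j*w*t))
print(resid)   # ≈ 0: the single mode solves the complexified equation
```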
| 1,034
|
differential equations
|
Numerical solution of differential equations, e.g. the three-body problem
|
https://physics.stackexchange.com/questions/815292/numerical-solution-of-differential-equations-e-g-the-three-body-problem
|
<p>What forms of differential equations have numerical solutions with errors that go to zero with sufficient computational power? For example, suppose I want to solve a differential equation <span class="math-container">$E$</span> for a position vector <span class="math-container">$r$</span> at time <span class="math-container">$t$</span>, where initial conditions <span class="math-container">$r(t_0)$</span>, <span class="math-container">$\dot{r}(t_0)$</span> are given. What classes of differential equations do we know to have a numerical solution in the form of an algorithm <span class="math-container">$A_E(r(t_0),\dot{r}(t_0),t,c)$</span> to predict <span class="math-container">$\hat{r}(t)$</span>, where is it true that</p>
<p><span class="math-container">$\forall \epsilon, \exists c \ni \| \hat{r}(t)-r(t) \| < \epsilon$</span> (eq. 1),</p>
<p>meaning given any desirable error <span class="math-container">$\epsilon$</span>, I can use sufficient computational cycles <span class="math-container">$c$</span> to predict <span class="math-container">$\hat{r}(t)$</span> such that <span class="math-container">$\|\hat{r}(t)-r(t)\| < \epsilon$</span>?</p>
<p>For a specific example, does the three body problem have a numerical solution</p>
<p><span class="math-container">$\hat{r}(t)=A_E(c)$</span> for <span class="math-container">$r(t)=(r_{1x}(t),r_{1y}(t),r_{1z}(t),r_{2x}(t),r_{2y}(t),r_{2z}(t),r_{3x}(t),r_{3y}(t),r_{3z}(t))^T$</span></p>
<p>that satisfies (eq. 1)?</p>
<p>And I believe if such a solution does exist for the three body problem, it is computationally intractable for larger <span class="math-container">$|t-t_0|$</span>, but that's not what I'm asking. (And I also believe certain initial configurations have been solved analytically.) What I really want to know is: what differential equations have numerical solutions <span class="math-container">$A_E(c)$</span> satisfying (eq. 1), and is the three body problem in this class?</p>
|
<p>Any algorithm for the numerical solution of ordinary differential equations provides a discretized approximation of the exact solution and should guarantee that the global error after a finite time <span class="math-container">$T$</span> vanishes as the discretization is refined. The quality of different algorithms can be measured by the computational cost of achieving a given threshold of accuracy.</p>
<p>In cases like the gravitational N-body problem, the real issue is not directly related to the numerical algorithms but to the strong dependence of the results on the initial conditions (i.e., to the existence of chaotic motions). In such cases, the form of the differential equation plays only an indirect role in the numerical algorithm. We need more accuracy in the numerical algorithms just to better control the fast divergence of the numerical solution from the exact one.</p>
<p>There is no universal recipe for the best algorithm in such cases. It is helpful to compare results with good algorithms from different classes based on the specific problem one is dealing with.</p>
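To make the "error vanishes under refinement" statement concrete, here is a small sketch (my own, on the harmonic oscillator $x''=-x$ rather than the three-body problem) comparing the global error of forward Euler and classical Runge-Kutta as the step count grows:

```python
import math

T = 10.0   # final time; exact solution of x'' = -x with x(0)=1, x'(0)=0 is cos t

def euler_step(f, y, h):
    k = f(y)
    return (y[0] + h*k[0], y[1] + h*k[1])

def rk4_step(f, y, h):
    def nudge(y, k, s): return (y[0] + s*k[0], y[1] + s*k[1])
    k1 = f(y); k2 = f(nudge(y, k1, h/2)); k3 = f(nudge(y, k2, h/2)); k4 = f(nudge(y, k3, h))
    return (y[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def global_error(step, n):
    # integrate with n steps and compare with the exact solution at t = T
    f, h, y = (lambda y: (y[1], -y[0])), T/n, (1.0, 0.0)
    for _ in range(n):
        y = step(f, y, h)
    return abs(y[0] - math.cos(T))

errs_euler = [global_error(euler_step, n) for n in (100, 1000, 10000)]
errs_rk4   = [global_error(rk4_step,   n) for n in (100, 1000, 10000)]
print(errs_euler)   # shrinks roughly like 1/n
print(errs_rk4)     # shrinks roughly like 1/n^4
```

Both errors go to zero with more computation; the higher-order method simply buys a given accuracy far more cheaply.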
| 1,035
|
differential equations
|
Hamiltonian from a differential equation
|
https://physics.stackexchange.com/questions/249567/hamiltonian-from-a-differential-equation
|
<p>In my differential equations course an example is given from the Lotka-Volterra system of equations:</p>
<p>$$ x'=x-xy$$</p>
<p>$$y'=-\gamma y+xy.\tag{1}$$</p>
<p>This is then transformed by the substitution: $q=\ln x, p=\ln y$. </p>
<p>$$ q'=1-e^p$$</p>
<p>$$p'=-\gamma +e^q.\tag{2}$$</p>
<p>Then, without any explanation, they say the Hamiltonian is equal to:
$$H(p,q)=\gamma q -e^q+p-e^p\tag{3}$$</p>
<p>How is this Hamiltonian derived?</p>
|
<p>This is explained in part II of my Phys.SE answer <a href="https://physics.stackexchange.com/a/53637/2451">here</a>, which shows that a 2D system always has a Hamiltonian description locally.</p>
<p>It turns out that, before the non-canonical transformation $(x,y) \to (q,p)$, from the pair of eoms (1) alone, the Hamiltonian and non-canonical Poisson bracket can be derived as $$H~=~\gamma \ln x -x +\ln y -y $$ and $$\{x,y\}_{PB} ~=~ xy,$$ respectively. Next the canonical coordinates $(q,p)$ can be easily determined.</p>
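As a quick sanity check of this Hamiltonian (my own sketch; $\gamma=0.5$ and the initial point are arbitrary choices), one can integrate the original system (1) numerically and watch $H=\gamma\ln x - x + \ln y - y$ stay constant along the trajectory:

```python
import math

g = 0.5                                    # the parameter gamma (arbitrary choice)

def H(x, y):                               # gamma*ln(x) - x + ln(y) - y
    return g*math.log(x) - x + math.log(y) - y

def f(s):                                  # the Lotka-Volterra system (1)
    x, y = s
    return (x - x*y, -g*y + x*y)

def rk4(s, h):
    def nudge(s, k, c): return (s[0] + c*k[0], s[1] + c*k[1])
    k1 = f(s); k2 = f(nudge(s, k1, h/2)); k3 = f(nudge(s, k2, h/2)); k4 = f(nudge(s, k3, h))
    return (s[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            s[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

s, H0, drift = (1.2, 0.8), H(1.2, 0.8), 0.0
for _ in range(20000):                     # integrate to t = 20 with h = 1e-3
    s = rk4(s, 1e-3)
    drift = max(drift, abs(H(*s) - H0))
print(drift)   # stays tiny: H is conserved along trajectories
```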
| 1,036
|
differential equations
|
Tensor differential equation
|
https://physics.stackexchange.com/questions/822202/tensor-differential-equation
|
<p>How to solve the following differential equation with tensor indices?</p>
<p><span class="math-container">$\epsilon_{\mu\nu}\partial^{\gamma}\partial_{\gamma}f-2i\epsilon_{\mu\nu}p.\partial f+ip_{\mu}x^{\gamma}\epsilon_{\nu\gamma}+ip_{\nu}x^{\gamma}\epsilon_{\mu\gamma}-i(p.x)\epsilon_{\mu\nu}+2\epsilon_{\mu\nu}=0$</span></p>
<p>Here <span class="math-container">$\epsilon_{\mu\nu}$</span> is the usual polarization tensor for gravity and only depends on the momentum <span class="math-container">$p$</span>. The derivatives are with respect to <span class="math-container">$x$</span>.</p>
| 1,037
|
|
differential equations
|
Decoupling coupled differential equations in dynamically coupled two state system
|
https://physics.stackexchange.com/questions/252692/decoupling-coupled-differential-equations-in-dynamically-coupled-two-state-syste
|
<p>Consider the following dynamically coupled two-state Hamiltonian, $$H=-B\sigma_z-V(t)\sigma_x.$$ Taking the eigenfunctions of $\sigma_z$ ($|+\rangle$ and $|-\rangle$) as basis vectors, we have the wave function $$\Phi=c_1|+\rangle + c_2|-\rangle$$ and we get coupled differential equations for the time evolution of these two coefficients.</p>
<p>$$\left[ \begin{array}{c} \frac{dc_1}{dt} \\ \frac{dc_2}{dt} \end{array} \right] = \begin{bmatrix} -B & -V(t) \\ -V(t) & B \end{bmatrix} \times \left[ \begin{array}{c} c_1 \\ c_2 \end{array} \right]$$</p>
<p>To decouple the equations I tried diagonalizing the Hamiltonian involved. But then the eigenvectors themselves involve time dependence due to $V(t)$, and thus I'm not able to decouple the differential equations. So, is there any other method to do it? Any hints are welcome. </p>
|
<p>The system can be separated, but not necessarily in nice form. For instance, the time derivative of the first eq. reads
$$
i\hbar {\ddot c}_1 = - B {\dot c}_1 - {\dot V}c_2 - V {\dot c}_2
$$
Now remove $c_2$ using again the first eq.,
$$
c_2 = -\frac{i\hbar}{V} {\dot c}_1 - \frac{B}{V} c_1
$$
and ${\dot c}_2$ using the second eq., ${\dot c_2} = \frac{i}{\hbar}Vc_1 - \frac{i}{\hbar}B c_2$:
$$
i\hbar {\ddot c}_1 = - B {\dot c_1} + i\hbar \frac{d\ln V}{dt} {\dot c}_1 + B \frac{d\ln V}{dt} c_1 - \frac{i}{\hbar}V^2c_1 + \frac{i}{\hbar} BV\left(-\frac{i\hbar}{V} {\dot c}_1 - \frac{B}{V} c_1\right)
$$
Simplify, rearrange, and obtain
$$
{\ddot c}_1 - \frac{d\ln V}{dt} {\dot c}_1 + \left[\frac{i}{\hbar}B\frac{d\ln V}{dt} + \frac{B^2 + V^2}{\hbar^2} \right]c_1 = 0
$$
Similarly for $c_2$.</p>
<p><strong>Better way</strong>:</p>
<p>Change from $c_1$, $c_2$ to
$$
c_+ = c_2 + c_1\\
c_- = c_2 - c_1
$$
such that the system becomes
$$
i\hbar {\dot c}_+ = -V(t) c_+ + B c_-\\
i\hbar {\dot c}_- = B c_+ + V(t) c_-\\
$$
Applying the same elimination procedure for $c_-$, this time using
$$
c_- = \frac{i\hbar}{B}{\dot c}_+ + \frac{V}{B}c_+\\
{\dot c_-} = - \frac{i}{\hbar}Bc_+ - \frac{i}{\hbar}Vc_- = - \frac{i}{\hbar}Bc_+ - \frac{i}{\hbar}V\left[\frac{i\hbar}{B}{\dot c}_+ + \frac{V}{B}c_+\right] = \frac{V}{B}{\dot c}_+ - \frac{i}{\hbar}\frac{B^2 + V^2}{B}c_+
$$
yields a much simpler looking eq. for $c_+$:
$$
{\ddot c}_+ - \frac{i}{\hbar}{\dot V}c_+ - \frac{i}{\hbar} V {\dot c}_+ + \frac{i}{\hbar} B {\dot c}_- = 0 \\
{\ddot c}_+ - \frac{i}{\hbar}{\dot V}c_+ - \frac{i}{\hbar} V {\dot c}_+ + \frac{i}{\hbar} V {\dot c}_+ +\frac{B^2 + V^2}{\hbar^2}c_+ = 0\\
{\ddot c}_+ + \left[\frac{B^2 + V^2}{\hbar^2} - \frac{i}{\hbar}{\dot V} \right]c_+ = 0
$$</p>
| 1,038
|
differential equations
|
Differential equation in non-uniform circular motion
|
https://physics.stackexchange.com/questions/359220/differential-equation-in-non-uniform-circular-motion
|
<p>I have a question which states</p>
<blockquote>
<p>An astronaut is conducting an experiment on a spaceship under conditions of zero gravity. A bead is threaded on a circular wire, and set in motion with angular velocity $ \omega _0 $ about the centre. If the coefficient of friction between the bead and the wire is $\mu$, show that the angular velocity $\omega$ at time $t$ satisfies the differential equation $ \dot{\omega} = -\mu \omega ^ 2 $ . Solve this equation, and hence find an expression for $\theta$, the angle turned after time $t$. Show that, according to this model, the bead will never come to a complete stop. </p>
</blockquote>
<p>I have shown that the differential equation $\dot{\omega} = - \mu \omega ^2$ is satisfied. However, I am struggling to solve the differential equation. Am I correct in thinking it is a second order non-linear ordinary differential equation? If so, how do I solve it? My textbook doesn't require you to know how to solve non-linear differential equations, so is there some special way to go about this? </p>
|
<p>I have solved the equation by doing it in two steps. First, rather than thinking of it as $\ddot{\theta} = -\mu \dot{\theta} ^2$, treat it as $\frac{d\omega}{dt} = -\mu \omega ^2$, which can be solved by separating the variables:
$$ \begin{align} \int \frac{1}{\omega ^2} \ d\omega &= \int -\mu \ dt \\ -\frac{1}{\omega} &= c - \mu t \end{align}$$ $c$ can be found by inputting the initial conditions, $t=0 , \ \omega = \omega _0$, to give $c = -\frac{1}{\omega _0}$
$$ \omega = \frac{\omega _0}{1 + \mu \omega _0 t}$$
From this, $\theta$ can be found by doing:
$$ \begin{align} \frac{d\theta}{dt} &= \frac{\omega _0}{1 + \mu \omega _0 t} \\ \theta &= \int \frac{\omega _0}{1 + \mu \omega _0 t} \ dt \\ \theta &= \frac{1}{\mu} \ln|1 + \mu \omega _0t|\end{align}$$</p>
| 1,039
|
differential equations
|
How do I know which equations can be treated as differential equations and which can't?
|
https://physics.stackexchange.com/questions/614395/how-do-i-know-which-equations-can-be-treated-as-differential-equations-and-which
|
<p>I'm sometimes mystified by the use of differentials in physics. I don't understand which formulas—on which occasions—can be thought of as differential equations and which cannot.</p>
<p>While discussing work done by a piston during an isothermal process, my textbook does not treat <span class="math-container">$PV=nRT$</span> as a differential equation. Let me illustrate how <span class="math-container">$W$</span> is derived:</p>
<p><span class="math-container">$$W=\int_{V_1}^{V_2}P\mathrm{d}V=\int_{V_1}^{V_2}\frac{nRT}{V}\mathrm{d}V=nRT\ln\frac{V_2}{V_1}$$</span></p>
<p>My question is, why could I not treat the ideal gas law as a differential equation and say <span class="math-container">$P\mathrm{d}V=nR\mathrm{d}T$</span>? If I could, I'd then say:</p>
<p><span class="math-container">$$W=\int_{V_1}^{V_2}P\mathrm{d}V=\int_{T_1}^{T_1}nR\mathrm{d}T=0$$</span></p>
<p>Since <span class="math-container">$\mathrm{d}T=0$</span> during an isothermal process. The result is erroneous, but why can I not argue in the following way? Why can't <span class="math-container">$PV=nRT$</span> be treated as a differential equation? How do I know which equations can be treated as such and which can't?</p>
|
<p><span class="math-container">$$PV=nRT\tag{1}$$</span></p>
<p>can not be considered a differential equation, simply because it <strong>contains no differentials</strong>.</p>
<p>Now, you can't just go and differentiate that equation as:</p>
<p><span class="math-container">$$P\mathrm{d}V=nR\mathrm{d}T$$</span></p>
<p>because <span class="math-container">$V=f(n,P,T)$</span> and <span class="math-container">$T=g(n,P,V)$</span> where <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are multi-variable functions. To differentiate them you would have to use <em>partial differentials</em> (<span class="math-container">$\partial$</span>).</p>
<p>Differentiating <span class="math-container">$(1)$</span> you need to apply the product rule:</p>
<p><span class="math-container">$$\mathrm{d}(PV)=\mathrm{d}(nRT)$$</span></p>
<p>Assuming <span class="math-container">$n=\text{constant}$</span>:</p>
<p><span class="math-container">$$P\mathrm{d}V+V\mathrm{d}P=nR\mathrm{d}T$$</span></p>
<p>But to derive the work done by an isothermal expansion/compression we simply use the general definition of work:</p>
<p><span class="math-container">$$\mathrm{d}W=F(x)\mathrm{d}x$$</span></p>
<p>It's easy to show that for a piston <span class="math-container">$F(x)\mathrm{d}x=P\mathrm{d}V$</span>, so:</p>
<p><span class="math-container">$$\mathrm{d}W=P\mathrm{d}V$$</span></p>
<p>Then extract <span class="math-container">$P$</span> from the Ideal Gas Law.</p>
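<p>A quick symbolic check of the isothermal work integral (a SymPy sketch):</p>

```python
import sympy as sp

V, V1, V2, n, R, T = sp.symbols('V V_1 V_2 n R T', positive=True)

# W = integral of P dV with P = nRT/V at constant T
W = sp.integrate(n*R*T/V, (V, V1, V2))
residual = sp.simplify(sp.expand_log(W - n*R*T*sp.log(V2/V1), force=True))
print(W, residual)
```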
<hr>
<p>Regarding your <strong>title question</strong>. Differential equations (DEs) typically arise to describe <em>dynamic</em> problems, where change, often (but not exclusively) in time, occurs.</p>
<p>Let's take a simple example. A mass <span class="math-container">$m$</span> sits on a rough incline, <strong>motionless</strong>. This is a static problem and requires no DEs.</p>
<p>Now we apply sufficient force on the mass for it to <strong>start moving</strong>. Newton's Second Law now states:</p>
<p><span class="math-container">$$F_{net}=ma$$</span>
or:
<span class="math-container">$$F_{net}=m\frac{\mathrm{d}v}{\mathrm{d}t}$$</span></p>
<p>This is of course a DE which allows us to calculate the <em>rate of change</em> of the velocity <span class="math-container">$v$</span>.</p>
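<p>To make the static/dynamic distinction concrete, here is a minimal numerical sketch of Newton's second law as a differential equation (illustrative numbers; the linear drag term is an added assumption, so that the velocity settles to a terminal value):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# illustrative numbers; the drag term -b*v is an assumption for this sketch
m, F0, b = 1.0, 2.0, 0.5

# Newton's second law as a first-order ODE: m*dv/dt = F0 - b*v
sol = solve_ivp(lambda t, v: [(F0 - b*v[0]) / m], (0, 50), [0.0],
                rtol=1e-10, atol=1e-12)

v_terminal = F0 / b   # velocity at which the net force vanishes
print(sol.y[0, -1], v_terminal)
```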
| 1,040
|
differential equations
|
Rewriting the Hydrogen Schrodinger Equation as a system of differential equations
|
https://physics.stackexchange.com/questions/141238/rewriting-the-hydrogen-schrodinger-equation-as-a-system-of-differential-equation
|
<p>I have only ever seen the Schrodinger equation for the hydrogen atom written out in a form like this:
$$
-\frac{\hbar^2}{2\mu}\left[\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial \psi}{\partial r}\right) + \frac{1}{r^2\sin{\theta}}\frac{\partial}{\partial \theta}\left(\sin{\theta}\frac{\partial\psi}{\partial\theta}\right)+\frac{1}{r^2\sin^2{\theta}}\frac{\partial^2\psi}{\partial \phi^2}\right]-\frac{Ze^2}{4\pi\epsilon_0 r}\psi=E\psi
$$</p>
<p>I'm still learning the necessary skills to solve PDEs, let alone get to the point of solving this problem, but I wanted to know if someone could show me what this differential equation would look like in a matrix notation or as a system of differential equations.</p>
|
<p>If you assume <a href="http://en.wikipedia.org/wiki/Separation_of_variables" rel="nofollow">separability</a> of the wave function, i.e., $\psi(\mathbf x)=u(x)v(y)w(z)$, you can solve the individual components separately:
\begin{align}
-\frac{\hbar^2}{2\mu}\frac{d^2u(x)}{dx^2}+V_1(x)u(x)&=E_1u(x)\\
-\frac{\hbar^2}{2\mu}\frac{d^2v(y)}{dy^2}+V_2(y)v(y)&=E_2v(y)\tag{1}\\
-\frac{\hbar^2}{2\mu}\frac{d^2w(z)}{dz^2}+V_3(z)w(z)&=E_3w(z)
\end{align}
with the further constraint that
$$
E_1+E_2+E_3=E
$$</p>
<p>We <em>can</em> express (1) as the <a href="http://en.wikipedia.org/wiki/Matrix_differential_equation" rel="nofollow">matrix differential equation</a>,
$$
\mathbf u''=A\mathbf u,\tag{2}
$$
in which case $A$ is clearly diagonal and $\mathbf u=(u(x),\,v(y),\,w(z))^T$. In the case that the wave-function is <em>not</em> separable, then this method is not appropriate as you'd have a single scalar equation.</p>
<p>For your case of the spherical wave function, you can solve the radial component and the angular component separately, $\psi(\mathbf r)=R(r)Y(\theta,\phi)$ with $Y(\theta,\phi)$ the <a href="http://en.wikipedia.org/wiki/Spherical_harmonics" rel="nofollow">spherical harmonics</a>, as
\begin{align}
\frac{1}{R}\frac{d}{dr}\left(r^2\frac{dR(r)}{dr}\right)+\frac{2\mu r^2}{\hbar^2}\left(E+\frac{Ze^2}{4\pi\epsilon_0 r}\right)&=\lambda \\
\frac1Y\frac{1}{\sin\theta}\frac{\partial}{\partial\theta}\left(\sin\theta\frac{\partial Y}{\partial\theta}\right)+\frac1Y\frac1{\sin^2\theta}\frac{\partial^2Y}{\partial\phi^2}&=-\lambda
\end{align}
where $\lambda$ is a parameter to be discovered. This is the typical method of solving this particular problem in quantum mechanics textbooks.</p>
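<p>As a quick symbolic sanity check of the angular equation, one can verify that an explicit spherical harmonic satisfies it with $\lambda=l(l+1)$ (a SymPy sketch; the value of $\lambda$ is the standard result, not derived above):</p>

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')
l, m = 2, 1   # any integers with |m| <= l

# explicit spherical harmonic Y_l^m(theta, phi)
Y = sp.Ynm(l, m, theta, phi).expand(func=True)

# angular part of the Laplacian acting on Y
ang = (sp.diff(sp.sin(theta)*sp.diff(Y, theta), theta)/sp.sin(theta)
       + sp.diff(Y, phi, 2)/sp.sin(theta)**2)

# eigenvalue relation (1/Y)*ang = -lambda with lambda = l*(l+1)
residual = sp.simplify(ang + l*(l+1)*Y)
print(residual)
```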
| 1,041
|
differential equations
|
Relativity and differential equation
|
https://physics.stackexchange.com/questions/488335/relativity-and-differential-equation
|
<p>I have a question regarding Misner, Charles W.; Thorne, Kip S.; Wheeler, John Archibald (1973), Gravitation ISBN 978-0-7167-0344-0. It is a book about Einstein's theory of gravitation.</p>
<p>In page 166 of chapter 6.2 about Hyperbolic Motion, the authors present a person feeling constant acceleration <span class="math-container">$g$</span> along the direction <span class="math-container">$x^1$</span>. The authors get the following equations:</p>
<blockquote>
<p><span class="math-container">$$a^0 = \frac{du^0}{d \tau} = gu^1$$</span>
<span class="math-container">$$a^1 = \frac{du^1}{d \tau} = gu^0$$</span></p>
</blockquote>
<p>And the system is solved by the authors to get:</p>
<blockquote>
<p><span class="math-container">$$t=g^{-1} \sinh{g\tau}$$</span>
<span class="math-container">$$x=g^{-1} \cosh{g\tau}$$</span></p>
</blockquote>
<p>So obviously <span class="math-container">$u^0 = t$</span> and <span class="math-container">$u^1 = x$</span>, but I am not sure about the change of variable between <span class="math-container">$t$</span> and <span class="math-container">$\tau$</span> to make the appearance of the Lorentz factor.</p>
<p>How do the authors solve the differential equation?</p>
|
<p>solution:</p>
<p><span class="math-container">$${\frac {d}{d\tau}}u_{{0}} \left( \tau \right) -gu_{{1}} \left( \tau
\right) =0\tag 1
$$</span></p>
<p><span class="math-container">$${\frac {d}{d\tau}}u_{{1}} \left( \tau \right) -gu_{{0}} \left( \tau
\right) =0\tag 2
$$</span></p>
<p>and the constraint condition that </p>
<p><span class="math-container">$$dsq=\left( {\frac {d}{d\tau}}u_{{0}} \left( \tau \right) \right) ^{2}-
\left( {\frac {d}{d\tau}}u_{{1}} \left( \tau \right) \right) ^{2}=
\epsilon\tag 3
$$</span></p>
<p>where <span class="math-container">$\epsilon=0$</span> or <span class="math-container">$1$</span> </p>
<p>Taking <span class="math-container">$\frac{d}{d\tau}$</span> of eq. (1) and using eq. (2), we obtain:</p>
<p><span class="math-container">$${\frac {d^{2}}{d{\tau}^{2}}}u_{{0}} \left( \tau \right) -{g}^{2}u_{{0}
} \left( \tau \right) =0
\tag 4$$</span>
<span class="math-container">$\Rightarrow$</span>
<span class="math-container">$$u_0(\tau)=1/2\,{\frac { \left( gA+B \right) {{\rm e}^{g\tau}}}{g}}+1/2\,{\frac {
\left( -B+gA \right) {{\rm e}^{-g\tau}}}{g}}\tag 5
$$</span>
where <span class="math-container">$A=u_0(0)$</span> and <span class="math-container">$B=\dot u_0(0)$</span> are arbitrary initial conditions </p>
<p>with equation (3) and (2) we get for dsq</p>
<p><span class="math-container">$$dsq(\tau)=\left( {\frac {d}{d\tau}}u_{{0}} \left( \tau \right) \right) ^{2}-
(g\,u_0(\tau))^2=
\epsilon
$$</span></p>
<p>thus: for <span class="math-container">$dsq(0)=\epsilon$</span> we can obtain the initial condition
<span class="math-container">$B=B(A)$</span> and get:</p>
<p><span class="math-container">$$B=\sqrt {{g}^{2}{A}^{2}+\epsilon}$$</span></p>
<p>with <span class="math-container">$A=0$</span> and <span class="math-container">$\epsilon=1$</span> we get:</p>
<p><span class="math-container">$$u_0(\tau)=1/2\,{\frac {{{\rm e}^{g\tau}}}{g}}-1/2\,{\frac {{{\rm e}^{-g\tau}}}{g
}}=\frac{1}{g}\sinh(g\tau)
$$</span></p>
<p><span class="math-container">$$u_1(\tau)={\frac {1/2\,{{\rm e}^{g\tau}}+1/2\,{{\rm e}^{-g\tau}}}{g}}=\frac{1}{g}\cosh(g\tau)
$$</span>
and for
<span class="math-container">$A=1 \,,\epsilon=0$</span> we get</p>
<p><span class="math-container">$$u_0(\tau)=e^{g\,\tau}$$</span>
<span class="math-container">$$u_1(\tau)=e^{g\,\tau}$$</span></p>
| 1,042
|
differential equations
|
Multiple time dimensions and understanding ultrahyperbolic differential equations
|
https://physics.stackexchange.com/questions/836427/multiple-time-dimensions-and-understanding-ultrahyperbolic-differential-equation
|
<p>In the article "On the dimensionality of spacetime" (<a href="https://space.mit.edu/home/tegmark/dimensions.pdf" rel="nofollow noreferrer">https://space.mit.edu/home/tegmark/dimensions.pdf</a>), Max Tegmark writes about ultrahyperbolic differential equations leading to unpredictability:</p>
<blockquote>
<p>"If an observer is to be able to make any use of its self-awareness and
information-processing abilities, the laws of physics must be such that it can make at
least some predictions. Specifically, within the framework of a field theory, it should, by
measuring various nearby field values, be able to compute field values at some more distant
spacetime points (ones lying along its future world line being particularly useful) with non infinite error bars."</p>
</blockquote>
<p>He also writes that</p>
<blockquote>
<p>"The last requirement means that the solution <span class="math-container">$u$</span> at a given point will only change by a finite amount if the boundary data is changed by a finite amount. Therefore, even if an ill-posed problem can be formally solved, this solution would in practice be useless to an observer, since it would need to measure the initial data with infinite accuracy to be able to place finite error bars on the solution (any measurement error would cause the error bars on the solution to be infinite)."</p>
</blockquote>
<p>So, if I understand correctly, he states that these equations are extremely sensitive to small changes in boundary conditions. The equation would yield very different results with boundary conditions like <span class="math-container">$y(0)=0$</span> or <span class="math-container">$y(0)=10^{-10}$</span>? I am having a hard time understanding what this really means. Does this mean that if I plugged any ultrahyperbolic differential equation into some numerical solver, I would get wildly different behavior with small changes in boundary conditions? Is there any intuition or numerical examples of such behavior?</p>
| 1,043
|
|
differential equations
|
What is the partial differential equation expansion of the Einstein Field Equations?
|
https://physics.stackexchange.com/questions/189515/what-is-the-partial-differential-equation-expansion-of-the-einstein-field-equati
|
<p>I have read that the Einstein Field Equations (<a href="http://en.wikipedia.org/wiki/Einstein_field_equations" rel="nofollow">http://en.wikipedia.org/wiki/Einstein_field_equations</a>) can be expressed as a series of differential equations. Some say 16, others say 10 (The disparity seems to stem from a simplification involving the Bianchi identities). However, no source actually lists them.</p>
<p>What are they?</p>
|
<p>As asked in the comments, here is one answer : </p>
<p>One formalism where it is somewhat common to expand the Einstein equations into a full set of equations is the Newman-Penrose formalism. Not a mainstream choice, as it uses spinors instead of tensors and the coordinates are weird complex null vectors, but it should give an idea of the whole thing. </p>
<p><a href="https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose_formalism#NP_field_equations" rel="nofollow">https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose_formalism#NP_field_equations</a> </p>
| 1,044
|
differential equations
|
How second-order differential equations do not violate causality?
|
https://physics.stackexchange.com/questions/323233/how-second-order-differential-equations-do-not-violate-causality
|
<p>Second-order differential equations are time-reversible. That means they don't distinguish the direction of the arrow of time: there is no reason for time to flow forward. </p>
<p>My professor told me that there are two solutions to such equations, one of which describes processes going forward and one backward in time. The "backward" solution violates causality, so we say that only the "forward" solution is physical and the other simply doesn't exist.</p>
<p>Is this explanation correct?</p>
<p>And a second question: How is causality not violated?</p>
|
<p>Causality is not a hard-science topic as much as it is a philosophy of science topic. Causality is actually a <em>huge</em> issue in philosophy because, while we typically want to say causality exists, it's actually <em>markedly</em> difficult to pen a description of it in a language which can stand up to the rigors of philosophy.</p>
<p>So your professor, in describing these equations, is showing an assumption he has made which is that the universe is causal. He's got a lifetime of empirical evidence to defend that assumption, but philosophy would say it isn't quite enough to be a "proof."</p>
<p>So when facing a reversible 2nd order equation, your professor is simply saying "ignore the 'other' solution as an artifact of the mathematics."</p>
<p>In "reality," it is not possible to set up a <em>perfect</em> second order system. In the real world, there's all sorts of other real life effects like thermal effects and gravitational effects that lead your real-life experimental apparatus to demonstrate a preference to <em>approximate</em> the "forward" solution from your differential equation. A great example of this is mentioned by JMac in the comments. A damped oscillator has some entropic force such as friction taking energy out of the system, and that almost compels the system to progress in the "forward" direction.</p>
<p>Similar issues show up in quantum mechanics. Some of the interpretations of QM involve a backwards propagating waveform to make all of the equations line up. Such interpretations open themselves to the philosophical question of what does a backwards propagating waveform <em>mean</em> in the real world, a world which appears to be subject to the laws of causality.</p>
| 1,045
|
differential equations
|
Differential equation for an accelerometer
|
https://physics.stackexchange.com/questions/317718/differential-equation-for-an-accelerometer
|
<p>I am having trouble deriving the 2nd order differential equation for the system below, where $r=y-s$. According to my lecture notes the differential equation is</p>
<p>$$
M\frac{d^2r}{dt^2}+b\frac{dr}{dt}+kr=-M\frac{d^2s}{dt^2}=-Ma \\
\ddot r+2\zeta \omega_0 \dot r+\omega_0^2 r=-a,
$$</p>
<p>whereas $ \omega_0 = \sqrt{\frac{k}{M}} $ and $ \zeta = \sqrt{\frac{b}{2M\omega_0}} $. </p>
<p>My understanding:
So I know that the force exerted by a spring follows $ F_F=-kr $ and the force by a damper $ F_D=-b\dot r $. The resulting force then equals $F_a = F_F + F_D$ or $F_a - F_F - F_D = 0$. This can also be seen in the formula from the lecture notes, but the $-M\frac{d^2s}{dt^2}$ on the right hand side confuses me a bit. Why does the absolute acceleration equal the differential equation on the left?</p>
<p><a href="https://i.sstatic.net/Iz7Zo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iz7Zo.png" alt="enter image description here"></a></p>
| 1,046
|
|
differential equations
|
General question about making differential equations dimensionless
|
https://physics.stackexchange.com/questions/446939/general-question-about-making-differential-equations-dimensionless
|
<p>Suppose you have a set of differential equations that you wish to normalize/make dimensionless. From what I've seen, you can usually use dimensional analysis to figure out a good choice of constants to make your variables and parameters dimensionless. However, in fluid dynamics, for example, you also have a few dimensionless quantities at your disposal e.g. Reynolds number, characteristic thermal velocity to flow velocity ratio, etc. How do you figure out how to incorporate these dimensionless quantities into your normalization factors? The goal (I think) is to make your normalized variables and parameters order 1, but I'm confused on how you figure out which dimensionless quantities get the job done.</p>
<p><a href="https://physics.stackexchange.com/questions/446795/question-about-normalizing-fluid-quantities-plasma-physics">Here's a link</a> to a different post I made if you want an example of what I'm talking about. It's a little complicated though, and I think this post boils down to what my actual question is.</p>
<p>Any and all insight/help is appreciated, including pointing me to a source which would answer this question. </p>
| 1,047
|
|
differential equations
|
Differential equations of a forced coupled spring-pendulum system
|
https://physics.stackexchange.com/questions/615333/differential-equations-of-a-forced-coupled-spring-pendulum-system
|
<p>I'm currently working on a problem and I can't really figure out how to write the differential equations for it. Here's the situation:</p>
<p><a href="https://i.sstatic.net/O1WAM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O1WAM.png" alt="Image of the system" /></a></p>
<p>So we have a mass <span class="math-container">$m$</span> tied to the wall with a spring of constant <span class="math-container">$k$</span>. The wall itself is oscillating and its position is given by <span class="math-container">$x_0 \cos(\omega t)$</span>. Tied to the mass is a pendulum of length <span class="math-container">$L$</span> and mass <span class="math-container">$m$</span>. I have to figure out the equation for the amplitude. My guess so far is the following (1 being the spring-mass and 2 being the pendulum):</p>
<p><span class="math-container">$$m_1\ddot{x}_1=-k(x_1-x_0\cos(\omega t))+\frac{mg}{L}(x_2-x_1)$$</span>
<span class="math-container">$$m_2\ddot{x}_2=\frac{-mg}{L}(x_2-x_1)$$</span></p>
<p>But I'm not entirely sure about them. Since the pendulum is tied to the moving mass, I've considered writing the second one as:</p>
<p><span class="math-container">$$m_2(x_2-x_1)''=\frac{-mg}{L}(x_2-x_1)$$</span></p>
<p>But I'm not sure the logic holds. If that's of any help, I have to prove that the equations for the amplitude are:</p>
<p><span class="math-container">$$A_1=\frac{kx_0(g-L\omega^2)}{mL\omega^4-(2mg+kL)\omega^2+kg}$$</span>
<span class="math-container">$$A_2=\frac{kgx_0}{mL\omega^4-(2mg+kL)\omega^2+kg}$$</span></p>
<p><strong>EDIT:</strong></p>
<p>It seems the first differential equations were right, I managed to obtain the correct amplitudes by substituting the complex solutions <span class="math-container">$x_1 \rightarrow z_1=A_1e^{i\omega t}$</span> and <span class="math-container">$x_2 \rightarrow z_2=A_2e^{i\omega t}$</span>. Leaving this here in case it could help someone.</p>
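<p>The claimed amplitudes can be verified symbolically by the substitution described in the edit (a SymPy sketch of the complex-exponential ansatz):</p>

```python
import sympy as sp

m, k, g, L, w, x0, t = sp.symbols('m k g L omega x_0 t', positive=True)
A1, A2 = sp.symbols('A_1 A_2')

e = sp.exp(sp.I*w*t)
x1, x2 = A1*e, A2*e   # complex ansatz; the drive is x_0*e^{i*omega*t}

# the two equations of motion, divided through by the common exponential
eq1 = sp.expand((m*sp.diff(x1, t, 2) + k*(x1 - x0*e) - m*g/L*(x2 - x1)) / e)
eq2 = sp.expand((m*sp.diff(x2, t, 2) + m*g/L*(x2 - x1)) / e)
sol = sp.solve([eq1, eq2], [A1, A2])

# compare against the target amplitudes
den = m*L*w**4 - (2*m*g + k*L)*w**2 + k*g
r1 = sp.simplify(sol[A1] - k*x0*(g - L*w**2)/den)
r2 = sp.simplify(sol[A2] - k*g*x0/den)
print(r1, r2)
```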
| 1,048
|
|
differential equations
|
Kerr geodesics differential equations in equatorial plane
|
https://physics.stackexchange.com/questions/43629/kerr-geodesics-differential-equations-in-equatorial-plane
|
<p>Together with a friend, I am writing an interactive educational simulation of a particle falling into a black hole. </p>
<p>Currently we use <a href="http://en.wikipedia.org/wiki/Schwarzschild_geodesics#Geodesic_equation">Schwarzschild geodesics</a>. However, we want to generalize it to the case of a <a href="http://en.wikipedia.org/wiki/Kerr_metric">rotating</a> (and perhaps <a href="http://en.wikipedia.org/wiki/Kerr%E2%80%93Newman_metric">rotating and charged</a>) black hole. We are mostly interested in the equatorial plane, as then we can plot it on a 2D tablet.</p>
<p>So, <strong>what are the differential equations for a particle (with given initial position and velocity) falling in the Kerr (or Kerr-Newman) metrics in the equatorial plane?</strong></p>
<p>I'm interested in an explicit form (<strong>plug & play</strong> - should work after insertion of the black hole parameters (i.e. $M, L, Q$) and the initial conditions (i.e. $\vec{x}, \vec{v}, q$); $Q$ and $q$ are optional, as the Kerr metric is nice by itself).</p>
<p>Side notes:</p>
<p>Yes, I know the general procedure. Just I'm short of time (so now I'm even no longer coding it). So I may self-answer, but rather later than sooner.</p>
<p>It's almost in <a href="http://www.roma1.infn.it/teongrav/VALERIA/TEACHING/ONDE_GRAV_STELLE_BUCHINERI/AA2010_2011/LEZIONI_MIE_BH/kerrgeod.pdf">Chapter 20 of something: Geodesic motion in Kerr spacetime</a> (i.e. (20.25) and (20.31) for the equations of motion; (20.18) and (20.19) for energy and angular momentum). However, some parameters are not introduced (perhaps they are in the previous chapters...). </p>
|
<p>I'll follow <em>Gravitation</em> by Misner, Thorne, and Wheeler (hereafter MTW), which is the standard <strike>reference textbook</strike> encyclopedic tome for the field despite its age.</p>
<p>Let $\lambda$ parametrize the path such that the derivative with respect to it gives the 4-momentum. Using Boyer-Lindquist coordinates, MTW Box 33.5 gives
$$ \left(\frac{\mathrm{d}r}{\mathrm{d}\lambda}\right)^2 = \frac{1}{r^4} \left(\alpha E^2 - 2\beta E + \gamma_0\right), $$
where
$$ E = \frac{1}{\alpha} \left(\beta + \sqrt{\beta^2 - \alpha\gamma_0 + \alpha r^4(p^r)^2}\right) $$
is a constant of the motion (energy at infinity) and we have
$$ \alpha = \left(r^2 + a^2\right)^2 - \Delta a^2 \\
\beta = \left(L_z a + qQr\right) \left(r^2 + a^2\right) - L_z a\Delta \\
\gamma_0 = \left(L_z a + qQr\right)^2 - \Delta L_z^2 - m^2r^2\Delta \\
\Delta = r^2 - 2Mr + a^2 + Q^2.
$$
Here $m$ is the test particle's rest mass and $L_z$ is its (conserved) angular momentum at infinity. $a = L/M$ is the black hole's angular momentum per unit mass. ($L$ not related to $L_z$ - sorry about that.)</p>
<p>For the azimuthal motion, I turn to MTW Eq. 33.32c, which states (after setting $\theta = \pi/2$)
$$ \frac{\mathrm{d}\phi}{\mathrm{d}\lambda} = -\frac{1}{r^2} \left(\frac{aP}{\Delta} - aE + L_z\right). $$
Here we define
$$ P = E \left(r^2 + a^2\right) - L_z a - qQr. $$</p>
<p>The final step is finding the relation between time $t$ (of the Boyer-Lindquist variety, which means it's not crazy) and $\lambda$. MTW Eq. 33.32d tells us (again after setting $\theta = \pi/2$)
$$ \frac{\mathrm{d}t}{\mathrm{d}\lambda} = \frac{1}{r^2} \left(\frac{P}{\Delta} \left(r^2 + a^2\right) - a^2E + aL_z\right). $$</p>
<p>Hope this helps. I remember coding something similar (alright, plugging the ODEs into Mathematica) once upon a time. It seemed to work reasonably well without needing any fancy numerical techniques to ensure stability... at least for a few orbits, after which I couldn't tell what it was supposed to be doing.</p>
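<p>For anyone coding this up, here is a minimal numerical sketch of the rates above (illustrative parameter values; only the $a=Q=q=0$ limit is checked, against the standard Schwarzschild relation $(dr/d\lambda)^2 = E^2-(1-2M/r)(m^2+L_z^2/r^2)$ and $dt/d\lambda = E/(1-2M/r)$):</p>

```python
import numpy as np

# all symbols follow the formulas above; parameter values are arbitrary test numbers
def kerr_rates(r, M=1.0, a=0.0, Q=0.0, q=0.0, E=0.95, Lz=3.0, m=1.0):
    Delta = r**2 - 2*M*r + a**2 + Q**2
    alpha = (r**2 + a**2)**2 - Delta*a**2
    beta  = (Lz*a + q*Q*r)*(r**2 + a**2) - Lz*a*Delta
    gam0  = (Lz*a + q*Q*r)**2 - Delta*Lz**2 - m**2*r**2*Delta
    P     = E*(r**2 + a**2) - Lz*a - q*Q*r
    drdl2  = (alpha*E**2 - 2*beta*E + gam0) / r**4          # (dr/dlambda)^2
    dphidl = -(a*P/Delta - a*E + Lz) / r**2                 # dphi/dlambda
    dtdl   = (P*(r**2 + a**2)/Delta - a**2*E + a*Lz) / r**2 # dt/dlambda
    return drdl2, dphidl, dtdl

# a = Q = q = 0 must reduce to the Schwarzschild effective-potential relation
r, M, E, Lz, m = 10.0, 1.0, 0.95, 3.0, 1.0
drdl2, dphidl, dtdl = kerr_rates(r, M=M, E=E, Lz=Lz, m=m)
schw = E**2 - (1 - 2*M/r)*(m**2 + Lz**2/r**2)
print(drdl2, schw, dtdl, E/(1 - 2*M/r))
```

<p>A full integrator would additionally have to track the sign of $dr/d\lambda$ through turning points, which is why many codes integrate the second-order form instead.</p>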
| 1,049
|
differential equations
|
Schwarzschild metric system of geodesic differential equations
|
https://physics.stackexchange.com/questions/675913/schwarzschild-metric-system-of-geodesic-differential-equations
|
<p>Suppose that we are given the Schwarzschild metric and its Lagrangian <span class="math-container">$L=-(1-\frac{R}{r})t'^2 + (1-\frac{R}{r})^{-1}r'^2+r^2 \theta'^2+r^2 \sin^2(\theta)\phi'^2$</span> where <span class="math-container">$R=r_s=2GM$</span> and <span class="math-container">$x'=\frac{dx}{d \tau}$</span> for <span class="math-container">$x \in \{t,\theta,r,\phi\}$</span>. The set of geodesic equations given by the Schwarzschild metric is <span class="math-container">$$2(1-\frac{r_s}{r})t''=0$$</span> <span class="math-container">$$2r\theta'^2-\frac{r_st'^2}{r^2}-\frac{r_sr'^2}{(r-r_s)^2}-\frac{2rr''}{r-r_s}=0$$</span> <span class="math-container">$$\phi'^2\sin(2\theta)-2r^2\theta''=0$$</span> <span class="math-container">$$-2\phi''^2\sin^2(\theta)=0.$$</span> To solve for these geodesics we need to consider a transformation of differential equation order. We need to change this system of 4 second order ODEs into a system of 8 first order ODEs. The question is: how would I accomplish this? I know how to transform a single second order ODE like <span class="math-container">$y''-4y'-2y=0$</span> into a system of 2 first order ODEs. How would I go about doing this for a system, not individual ODEs?</p>
|
<p>First, it is most definitely not necessary to transform your second order differential equations into first order differential equations in order to solve them. However, if you wish to do so for whatever reason, then the systematic way to do that is by using the Hamiltonian approach.</p>
<p>Here we define the momenta conjugate to each variable as <span class="math-container">$p_i = \partial \mathcal{ L}/\partial \dot q_i$</span>. This gives us a set of equations which we can solve for the <span class="math-container">$\dot q_i$</span> in terms of the <span class="math-container">$p_i$</span>. Then we calculate the Hamiltonian as <span class="math-container">$H = \Sigma p_i \dot q_i -\mathcal{L}$</span>, where we substitute the above expressions for <span class="math-container">$\dot q_i$</span> so that we have a function of the <span class="math-container">$p_i$</span> and the <span class="math-container">$q_i$</span> with no <span class="math-container">$\dot q_i$</span> terms remaining.</p>
<p>Once we have done that, we can get a system of first order differential equations by solving Hamilton's equations: <span class="math-container">$$\frac{dq_i}{d\tau}=\frac{\partial H}{\partial p_i}$$</span> <span class="math-container">$$\frac{dp_i}{d\tau}=-\frac{\partial H}{\partial q_i}$$</span></p>
<p>Here this gives us: <span class="math-container">$$\begin{array}{c}
\dot t=\frac{r p_t}{2 (R-r)} \\
\dot p_t=0 \\
\dot r=\frac{1}{2} p_r \left(1-\frac{R}{r}\right) \\
\dot p_r=\frac{1}{4} \left(\frac{2 \left(\csc ^2(\theta ) p_{\phi }^2+p_{\theta
}^2\right)}{r^3}-\frac{R p_r^2}{r^2}-\frac{p_t^2}{R-r}-\frac{r p_t^2}{(R-r)^2}\right) \\
\dot \theta =\frac{p_{\theta }}{2 r^2} \\
\dot p_{\theta }=\frac{\cot (\theta ) \csc ^2(\theta ) p_{\phi }^2}{2 r^2} \\
\dot \phi =\frac{\csc ^2(\theta ) p_{\phi }}{2 r^2} \\
\dot p_{\phi }=0 \\
\end{array}$$</span></p>
<p>Notice that <span class="math-container">$\dot p_t = 0$</span> gives us a conserved energy <span class="math-container">$p_t=E$</span> and <span class="math-container">$\dot p_\phi=0$</span> gives us a conserved angular momentum <span class="math-container">$p_\phi = L$</span> which we can substitute into the remaining six equations to get a system of six first order differential equations.</p>
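<p>Here is a minimal numerical sketch of the resulting first-order flow, hand-coded from the eight equations above (arbitrary illustrative initial data, units where $R=2GM=2$); conservation of the Hamiltonian along the trajectory is a useful correctness check:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 2.0  # Schwarzschild radius, in the notation of the answer

def rhs(tau, y):
    # y = (t, p_t, r, p_r, theta, p_theta, phi, p_phi)
    t, pt, r, pr, th, pth, ph, pph = y
    csc2 = 1.0/np.sin(th)**2
    dt   = r*pt/(2*(R - r))
    dpt  = 0.0
    dr   = 0.5*pr*(1 - R/r)
    dpr  = 0.25*(2*(csc2*pph**2 + pth**2)/r**3 - R*pr**2/r**2
                 - pt**2/(R - r) - r*pt**2/(R - r)**2)
    dth  = pth/(2*r**2)
    dpth = (np.cos(th)/np.sin(th))*csc2*pph**2/(2*r**2)
    dph  = csc2*pph/(2*r**2)
    dpph = 0.0
    return [dt, dpt, dr, dpr, dth, dpth, dph, dpph]

def hamiltonian(y):
    t, pt, r, pr, th, pth, ph, pph = y
    return (-r*pt**2/(4*(r - R)) + (1 - R/r)*pr**2/4
            + pth**2/(4*r**2) + pph**2/(4*r**2*np.sin(th)**2))

# illustrative near-circular equatorial initial data
y0 = [0.0, -2.0, 10.0, 0.0, np.pi/2, 0.0, 0.0, 8.0]
sol = solve_ivp(rhs, (0, 50), y0, rtol=1e-10, atol=1e-12)

H0, H1 = hamiltonian(np.array(y0)), hamiltonian(sol.y[:, -1])
print(H0, H1)  # the Hamiltonian is conserved along the flow
```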
| 1,050
|
differential equations
|
Wave propagation speed in non-linear differential equations
|
https://physics.stackexchange.com/questions/750453/wave-propagation-speed-in-non-linear-differential-equations
|
<p>Could it happen that a solitary travelling wave (soliton) has a different propagation speed in a non-linear equation than the speed seen in the usual wave equation? I mean, suppose a solution <span class="math-container">$F=f(x-vt)+g(x+vt)$</span> of the usual wave equation.</p>
<p>Could it happen that the "propagation speed" (if any) in a non-linear partial differential equation is different from <span class="math-container">$v$</span>? I suppose the general answer is "no", unless we speak of phase velocity and group velocity, but in what sense are those then the "propagation speed"? Also, if we rephrase the question in terms of dispersion relations, I suppose the dispersion relation of solitons in a non-linear equation can differ from that of waves obeying the general wave equation. Is that possible?</p>
| 1,051
|
|
differential equations
|
Solution to pendulum differential equation
|
https://physics.stackexchange.com/questions/653845/solution-to-pendulum-differential-equation
|
<p>In a chapter on oscillations in <a href="https://openstax.org/books/university-physics-volume-1/pages/15-4-pendulums" rel="noreferrer">a physics book</a>, the differential equation <span class="math-container">$$\ddot{\theta}=-\frac{g}{L}\sin(\theta)$$</span> is found and solved using the small-angle-approximation <span class="math-container">$$\sin(\theta)\approx\theta$$</span> for small values of <span class="math-container">$\theta$</span>, which yields the solution <span class="math-container">$$\theta=\sin\left(t\sqrt{\frac{g}{L}}\right).$$</span> It also mentions that this solution tends to work best with angles smaller than <span class="math-container">$15^\circ$</span>.</p>
<p>My question is: <strong>Is it possible to solve the <a href="https://en.wikipedia.org/wiki/Pendulum_(mathematics)" rel="noreferrer">pendulum</a> differential equation/do any solutions exist to it without the use of the small-angle-approximation?</strong></p>
|
<p>The pendulum problem can be solved exactly if an elliptic integral is used.</p>
<p>The elliptic integral in question is defined via
<span class="math-container">\begin{equation}
F(\phi,k)=\int_{0}^{\phi}\frac{dt}{\sqrt{1-k^{2}\sin^{2}t}}\, .
\end{equation}</span>
This integral originated when mathematicians investigated the arc length of the ellipse, which is where it gets its name.</p>
<p>In the case of the pendulum problem, the conservation of energy yields the equation of motion
<span class="math-container">\begin{equation}
\frac{1}{2}l\dot{\theta}^{2}-g\cos\theta=-g\cos\theta_{m}
\end{equation}</span>
where <span class="math-container">$\theta_{m}$</span> denotes the maximum angle reached; the equation can then be inverted to
<span class="math-container">\begin{equation}
\frac{d\theta}{dt}=\sqrt{\frac{2g}{l}}\sqrt{\cos\theta-\cos\theta_{m}}
\end{equation}</span>
this expression can be simplified by using a trigonometric identity,
<span class="math-container">\begin{equation}
\cos\theta=1-2\sin^{2}(\theta/2),
\end{equation}</span>
and then changing variables according to
<span class="math-container">\begin{equation}
\sin\left(\frac{\theta}{2}\right)=\sin\left(\frac{\theta_{m}}{2}\right)\sin s.
\end{equation}</span>
Now differentiate this new variable with respect to <span class="math-container">$t$</span> using the chain rule, substitute into the equation of motion, and integrate with respect to <span class="math-container">$t$</span>. This gives
<span class="math-container">\begin{equation}
t=\sqrt{\frac{l}{g}}{\int_{0}^{\phi}}\frac{ds}{\sqrt{1-\sin^{2}(\theta_{m}/2) \sin^{2}s}}\, ,
\end{equation}</span>
the solution of which is given by the elliptic integral stated earlier.</p>
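<p>As a numerical illustration of this result (a hypothetical sketch; the quadrature scheme and parameter values are my own, not part of the answer), the elliptic integral can be evaluated by a simple midpoint rule, giving the exact period <span class="math-container">$T = 4\sqrt{l/g}\,F(\pi/2, \sin(\theta_m/2))$</span> for comparison with the small-angle value <span class="math-container">$2\pi\sqrt{l/g}$</span>:</p>

```python
import math

def F(phi, k, n=10000):
    """Incomplete elliptic integral of the first kind,
    F(phi, k) = int_0^phi dt / sqrt(1 - k^2 sin^2 t),
    evaluated by the midpoint rule with n subintervals."""
    h = phi / n
    return sum(h / math.sqrt(1.0 - (k * math.sin((j + 0.5) * h)) ** 2)
               for j in range(n))

# Full pendulum period: T = 4 sqrt(l/g) K(k) with k = sin(theta_m/2),
# where K(k) = F(pi/2, k) is the complete elliptic integral.
g, l = 9.81, 1.0
theta_m = math.radians(60.0)
T_exact = 4.0 * math.sqrt(l / g) * F(math.pi / 2, math.sin(theta_m / 2))
T_small = 2.0 * math.pi * math.sqrt(l / g)  # small-angle approximation
# T_exact exceeds T_small, and the gap grows with theta_m.
```

<p>For a 60° amplitude the exact period is already several percent longer than the small-angle estimate.</p>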
| 1,052
|
differential equations
|
Hypergeometric Function: Differential Equation
|
https://physics.stackexchange.com/questions/299251/hypergeometric-function-differential-equation
|
<p>In Birrel & Davies: <em>QFT in curved spacetime</em> it is written that the following differential equation can be solved in terms of hypergeometric functions.
$$(\partial_t^2 +(k^2+c(t)m^2))\phi(t)=0.$$
But there is no reference and no method listed.
Could somebody please help me solve this equation for $c(t)=(a+b\cdot \operatorname{tanh}(dt))$?</p>
|
<p>This example in Birrell & Davies is quite tricky and in order to get the exact answer given, you need to manipulate the differential equation and solve it by hand as far as you can get.
This involves quite a bit of algebraic manipulation and properties of the hypergeometric functions, but the outline is this.</p>
<p>You want to solve the equation </p>
<p>$$\frac{d^2\chi_k}{d\eta^2}+[k^2+(A+B\tanh(\rho\eta)m^2)]\,\chi_k=0.\tag{1}$$</p>
<p>This can be solved with the substitution </p>
<p>$$u=\frac{1}{2}[1+\tanh(\rho\eta)].\tag{2}$$</p>
<p>Next define the variables like in Birrell & Davies:</p>
<p>$$\omega_{\mathrm{in}}^2=k^2+m^2(A-B)\\
\omega_{\mathrm{out}}^2=k^2+m^2(A+B)\\
\omega_{\pm}=\frac{1}{2}(\omega_{\mathrm{out}}\pm\omega_{\mathrm{in}}).\tag{3}$$</p>
<p>Now making the substitution $(2)$ and $(3)$ into $(1)$ and making some algebraic manipulations involving partial fractions you arrive at</p>
<p>$$\frac{d^2\chi_k}{du^2}+\Big[\frac{1}{u}-\frac{1}{1-u} \Big]\frac{d\chi_k}{du}+\frac{1}{4\rho^2}\Big[\frac{\omega_{\mathrm{in}}^2}{u}+\frac{\omega_{\mathrm{out}}^2}{1-u} \Big]\frac{\chi_k}{u(1-u)}=0.\tag{4}$$</p>
<p>You could manipulate this further into the <a href="https://en.wikipedia.org/wiki/Hypergeometric_function#The_hypergeometric_differential_equation" rel="nofollow noreferrer">hypergeometric differential equation</a>, but the easiest way (to me) is to solve this with Mathematica.</p>
<p>Now, however, notice that $(4)$ has singularities at $u=0$ and $u=1$. But these correspond to the asymptotic values $\eta\to -\infty$ and $\eta\to\infty$. So you get the asymptotic mode solutions by investigating the solutions of $(4)$ at the singular points.
Substituting $(2)$ into the solution and after quite a bit of algebra, you should arrive at the solutions $(3.87)$ and $(3.89)$.</p>
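<p>One small identity that does much of the work in the substitution $(2)$ is that $u=\frac{1}{2}[1+\tanh(\rho\eta)]$ satisfies $du/d\eta = 2\rho\, u(1-u)$, which is what turns $\eta$-derivatives into $u$-derivatives. A quick numerical check (my own hypothetical sketch, not from the book):</p>

```python
import math

rho = 1.3  # arbitrary illustrative value
u = lambda eta: 0.5 * (1.0 + math.tanh(rho * eta))

# du/deta = (rho/2) sech^2(rho*eta) = 2*rho*u*(1-u), the identity used
# when converting d/deta into d/du in the substitution above.
eta0, h = 0.7, 1e-5
lhs = (u(eta0 + h) - u(eta0 - h)) / (2 * h)   # numerical derivative
rhs = 2.0 * rho * u(eta0) * (1.0 - u(eta0))   # closed-form identity
```

<p>Note also that $u\to 0$ as $\eta\to-\infty$ and $u\to 1$ as $\eta\to+\infty$, which is why the singular points of $(4)$ encode the asymptotic mode solutions.</p>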
| 1,053
|
differential equations
|
Differential Equations - Waves (Physics self-study suggestions)
|
https://physics.stackexchange.com/questions/75506/differential-equations-waves-physics-self-study-suggestions
|
<p>I apologize ahead of time, in case this post is not allowed. </p>
<p>After taking a few courses at a community college, I've taken the fall 2013 semester off (I was accepted into a university for the spring 2014 semester). I'm really looking to spend the next 5 months on concentrated self-study to be a bit ahead of the game for next year.</p>
<p>I'm having trouble deciding if I should spend time studying differential equations from a rigorous mathematical point of view (DEs are a weak point of mine). On one hand, it couldn't ever HURT, but it might end up being an unnecessary drain on time. </p>
<p>And I'm also pondering studying from a book such as "Vibrations and Waves - A.P. French". I'm aware that MIT has a semester course taught from this book, and it could only be advantageous to know this material. I've studied from Morin's Classical Mechanics, which of course has a chapter dedicated to SHM, but this isn't exactly what I would call "in depth". </p>
<p>Once again, I'm sorry if this is off-topic.</p>
<p>Thanks for reading </p>
| 1,054
|
|
differential equations
|
Why do we use differential equations in physics instead of $h$-difference ones?
|
https://physics.stackexchange.com/questions/369481/why-do-we-use-differential-equations-in-physics-instead-of-h-difference-ones
|
<p>Since we don't know whether space and time are discrete or continuous wouldn't it be a better idea to use $h$-difference equations where the derivative is $$f'(x) =\frac{f(x+h)-f(x)}{h},$$ since they are more general and by sending $h$ to 0, we would have the usual differential equations. So why do we prefer differential equations instead? </p>
|
<p>We can name a lot of reasons why one should prefer <em>differentials</em> to <em>finite differences</em>, but I guess one of the most physical ones is relativity!</p>
<p>We assume that there is a finite speed limit for information transfer, that is the speed of light, hence any physical quantity should be local, in the sense that its constituents do not require <em>instantaneous</em> interaction.</p>
<p>The way we define, and you define above, the differences means that they depend on two positions a finite distance apart. Hence the value of $f'(x)$ requires an instantaneous interaction between the points at $x$ and $x+h$!</p>
<p>Oh, by the way, this is a problem if we use differences instead of differentials while assuming that spacetime is continuous. If we also assume that spacetime is discretized, then we need to modify <em>Special Relativity</em> (SR): As it stands, SR states that different observers measure lengths differently, hence there cannot be a naive universal minimum length. There are theories which break SR (for example Doubly Special Relativity), but the vast majority of mainstream physics, and almost all of its foundations, require differentiation rather than differences.</p>
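<p>The limiting behaviour the question mentions, that the $h$-difference quotient recovers the derivative as $h\to 0$, is also easy to see numerically. A hypothetical sketch (the test function and step sizes are my own choices):</p>

```python
import math

def forward_diff(f, x, h):
    """The h-difference quotient (f(x+h) - f(x)) / h from the question."""
    return (f(x + h) - f(x)) / h

# As h shrinks, the difference quotient of sin approaches cos(x),
# with the error falling roughly linearly in h (first-order accuracy).
x = 1.0
errors = [abs(forward_diff(math.sin, x, h) - math.cos(x))
          for h in (1e-1, 1e-2, 1e-3)]
```

<p>The error decreases by roughly a factor of ten each time $h$ does, consistent with the Taylor-expansion estimate $f'(x) + \tfrac{h}{2}f''(x) + \dots$ for the forward difference.</p>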
| 1,055
|
differential equations
|
Solving the rocket differential equation
|
https://physics.stackexchange.com/questions/449027/solving-the-rocket-differential-equation
|
<p>I'm trying to derive the rocket equation.</p>
<p>I'm pretty sure that the differential equation for the rocket equation is</p>
<p><span class="math-container">$$v(t)\delta t =\frac{m(t)\delta t }{m(t)} V_e$$</span></p>
<p>where </p>
<ul>
<li><span class="math-container">$v(t)\delta t$</span> is the rate of change in velocity of the rocket over time.</li>
<li><span class="math-container">$m(t)\delta t$</span> is the rate of change of the mass of the rocket over time.</li>
<li><span class="math-container">$m(t)$</span> is the mass of the rocket at a time <span class="math-container">$t$</span>.</li>
<li><span class="math-container">$V_e$</span> is the exhaust velocity (a constant)</li>
</ul>
<p>Now I want to solve for <span class="math-container">$v(t)$</span>, so I integrate on both sides.</p>
<p><span class="math-container">$$\int_0^t v(t)\delta t =\int_0^t \frac{m(t)\delta t }{m(t)} V_e$$</span></p>
<p>I believe I should get
<span class="math-container">$$v(t) = \ln\left(\frac{m(0)}{m(t)}\right)V_e$$</span></p>
<p>But whenever I try to actually solve the integral I come up with stuff that does not look remotely like that. </p>
<p>I tried a ton of videos and posts on the internet but most of the time there is some magic involved or some questionable not-quite-rigorous math going on.</p>
<p>So my question is:</p>
<p>Is this differential equation correctly formulated to get the rocket equation?</p>
<p>How can I go around solving this differential equation? </p>
|
<p>First of all, I believe part of the reason you're getting confused is that you're using confusing notation. Instead of <span class="math-container">$v(t) \delta t$</span>, the usual way of writing the rate of change of the rocket's velocity is <span class="math-container">$dv/dt$</span>; and for the rate of change of the mass, one usually writes <span class="math-container">$dm/dt$</span>. That way we avoid using <span class="math-container">$m(t)$</span> in two different ways.</p>
<p>OK, let's now solve the problem. Starting from your correct differential equation (where I've made the right side negative to make it easier to remember that the exhaust velocity is in the opposite direction of the rocket velocity):</p>
<p><span class="math-container">\begin{align}
\frac{dv}{dt} &= -\frac{dm/dt}{m} V_e \\
\Rightarrow dv &= -V_e \frac{dm}{m} \\
\Rightarrow \int_{v(0)}^{v(t)} dv &= -V_e \int_{m(0)}^{m(t)} \frac{dm}{m}
\end{align}</span></p>
<p>and we can now integrate both sides to obtain</p>
<p><span class="math-container">\begin{align}
v(t)-v(0) &= -V_e \left( \ln m(t) - \ln m(0) \right) \\
\Rightarrow v(t) &= v_0 + V_e \ln \frac{m(0)}{m(t)}
\end{align}</span></p>
<p>where <span class="math-container">$v_0$</span> is the rocket's initial velocity (usually zero).</p>
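<p>As a quick numerical cross-check of this result (a hypothetical sketch; the exhaust velocity, initial mass, and burn rate below are made-up numbers), Euler-stepping the differential equation reproduces the closed-form logarithm:</p>

```python
import math

# Numerically integrate dv/dt = -(dm/dt)/m * Ve for a constant burn
# rate and compare with the closed form v = Ve * ln(m0 / m).
Ve, m0, burn, dt = 3000.0, 1000.0, 5.0, 1e-3  # hypothetical numbers
m, v = m0, 0.0
for _ in range(100_000):           # 100 s of burn, 500 kg expelled
    dmdt = -burn
    v += -(dmdt / m) * Ve * dt     # the differential equation above
    m += dmdt * dt
v_closed = Ve * math.log(m0 / m)   # Tsiolkovsky rocket equation
```

<p>With a 2:1 mass ratio the numerical integration lands within a small fraction of a percent of <span class="math-container">$V_e \ln 2$</span>.</p>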
| 1,056
|
differential equations
|
Constructing differential equation from arbitrary Hamiltonian
|
https://physics.stackexchange.com/questions/190164/constructing-differential-equation-from-arbitrary-hamiltonian
|
<p>Suppose I begin with the time-independent Schrodinger equation
$$ \left(-\frac{1}{2m}\partial_x^2 + V(x)\right)\psi_n(x) = E_n\psi_n(x), $$
ordinarily we specify the function $V$ and then solve for a set of eigenfunctions and eigenvalues. And just to be slightly more general, we do the same thing with Sturm-Liouville equations, which I'll write in terms of the momentum operator and an extra function $U$,
$$ \left(\hat{p} U(\hat{x}) \hat{p} + V(\hat{x})\right)\psi_n = E_n\psi_n.$$</p>
<p>Now nothing is stopping us from defining a new Hamiltonian operator with the same eigenvectors but different arbitrary eigenvalues $\lambda_n$,</p>
<p>$$\hat{H}\psi_n = \lambda_n \psi_n$$
Under what conditions can this eigenvalue equation for the new Hamiltonian be represented as a (not-necessarily second order) differential equation in $x$ with the same eigenfunctions? In other words when does $\hat{H}$ belong to the operator algebra generated by $\hat{x}$ and $\hat{p}$?</p>
<p>I see if I define the new eigenvalues by some $n$-independent function $f$ of the original eigenvalues $\lambda_n = f(E_n)$, I can come up with a new differential equation, but does this exhaust the possibilities?</p>
|
<p>After thinking about it, as long as the original eigenvalues are non-degenerate it should be possible to have the new Hamiltonian be represented by a differential equation of arbitrarily high order. The key is that the projection operators $P_n$ onto the eigenfunctions exist in the algebra generated by the original Hamiltonian $\hat{H_0}$.</p>
<p>For instance say the nth eigenvalue is $E_n=2$, and there are no other eigenvalues between 3 and 1. Then we can choose an indicator function $f_n(x)$ such that $f_n(2)=1$ but $f_n(x)=0$ if $x$ is less than 1 or greater than 3. Given sufficient continuity the <a href="https://en.wikipedia.org/wiki/Stone%E2%80%93Weierstrass_theorem" rel="nofollow">Stone-Weierstrass theorem</a> applies and we can represent $f_n$ by a polynomial basis
$$ f_n(x) =\sum_k c_{n,k} x^k.$$
Then the operator
$$ P_n \equiv f_n(\hat{H}_0) = \sum_k c_{n,k} \hat{H_0}^k $$
will project onto the eigenfunction with eigenvalue 2. The details that this works even though we are dealing with infinite sums comes in the proofs of <a href="http://ncatlab.org/nlab/show/Gelfand+duality" rel="nofollow">Gelfand duality</a>.</p>
<p>Since the projectors are in the algebra generated by $\hat{H}_0$, the arbitrary Hamiltonian $\hat{H}$ is also in the algebra
$$H=\sum_{n} \lambda_n P_n=\sum_{n,k} \lambda_n c_{n,k}\hat{H_0}^k,$$</p>
<p>and since the original Hamiltonian can be expanded in terms of functions of $\partial_x$ and $x$, the Hamiltonian $\hat{H}$ also can, although now in general the differential equation will be of arbitrarily high order.</p>
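<p>The projector construction is easy to see in a finite-dimensional toy model (a hypothetical sketch, not part of the answer): for a diagonalizable $\hat{H}_0$ with non-degenerate eigenvalues, the spectral projectors are the Lagrange interpolation polynomials in $\hat{H}_0$, so any $\hat{H}$ with the same eigenvectors is itself a polynomial in $\hat{H}_0$:</p>

```python
# 3x3 illustration with plain nested lists: build P_n as a polynomial
# in H0 and reassemble an arbitrary-eigenvalue Hamiltonian from them.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def madd(A, B, s=1.0):
    """Entrywise A + s*B."""
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

I = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]
E = [1.0, 2.0, 4.0]  # non-degenerate eigenvalues of H0
H0 = [[E[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

def projector(n):
    """P_n = prod_{m != n} (H0 - E_m I) / (E_n - E_m)."""
    P = I
    for m in range(3):
        if m != n:
            P = matmul(P, madd(H0, I, -E[m]))
            P = [[x / (E[n] - E[m]) for x in row] for row in P]
    return P

lam = [7.0, -1.0, 3.0]  # arbitrary new eigenvalues
H = [[sum(lam[n] * projector(n)[i][j] for n in range(3))
      for j in range(3)] for i in range(3)]
```

<p>The assembled <code>H</code> is exactly the diagonal matrix of the new eigenvalues, mirroring the operator identity <span class="math-container">$H=\sum_n \lambda_n P_n$</span> above.</p>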
| 1,057
|
differential equations
|
Trouble Solving Partial Differential Equation
|
https://physics.stackexchange.com/questions/534832/trouble-solving-partial-differential-equation
|
<p>I'm solving the velocity profile of a fluid flow for a circular channel with an oscillating pressure gradient <span class="math-container">$\frac{dp}{dx}=\frac{\Delta p}{\rho L}e^{-i\omega t}$</span>. I plugged in to the Navier Stokes equations and am having trouble figuring out how to approach the solution to the partial differential equation below for u(r,t).</p>
<p><span class="math-container">$$ \frac{\partial u}{\partial t} - \frac{\Delta p}{\rho L} e^{-i\omega t} = \mu\left(\frac{\partial ^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r}\right)u$$</span></p>
<p>I know if the <span class="math-container">$\frac{\Delta p}{\rho L} e^{-i\omega t}$</span> term weren't in the equation, I could use separation of variables, but the addition of that term seems to throw a wrench in things.</p>
<p>Any advice would be greatly appreciated.</p>
|
<p>I disagree with Chet, I think complex numbers are the way to go here.</p>
<p>As ever in these kinds of problems, given that we have an oscillating pressure gradient, it makes sense to look for an oscillating velocity profile,
<span class="math-container">$$
u(r,t) = \hat{u}(r)e^{-i\omega t}.
$$</span>
Then your PDE reduces to the ODE
<span class="math-container">$$
r^2\frac{d^2\hat{u}}{dr^2} + r\frac{d\hat{u}}{dr} + \frac{i\omega r^2}{\mu}\hat{u} + \frac{\Delta p r^2}{\rho L\mu} = 0.
$$</span>
Now make the change of variables <span class="math-container">$\hat{u} = U(r) - \frac{\Delta p}{\rho L i \omega}$</span> and <span class="math-container">$x = r\sqrt{\frac{i\omega}{\mu}}$</span>. Denoting derivatives with respect to <span class="math-container">$x$</span> with primes, this becomes
<span class="math-container">$$
x^2U'' + xU' + x^2U=0.
$$</span>
This is <a href="https://en.wikipedia.org/wiki/Bessel_function" rel="nofollow noreferrer">Bessel's Equation</a>, which has solutions given by Bessel functions.</p>
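<p>One can verify the final step numerically (a hypothetical sketch; the series truncation and test point are my own): the Bessel function <span class="math-container">$J_0$</span>, the regular solution of Bessel's equation of order zero, satisfies <span class="math-container">$x^2U''+xU'+x^2U=0$</span> to within finite-difference error.</p>

```python
import math

def J0(x, terms=30):
    """Bessel function of the first kind, order zero, via its
    power series J0(x) = sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2.0) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

# Check that U = J0 solves x^2 U'' + x U' + x^2 U = 0, using central
# differences for the derivatives.
x, h = 1.7, 1e-4
U = J0(x)
U1 = (J0(x + h) - J0(x - h)) / (2 * h)
U2 = (J0(x + h) - 2 * J0(x) + J0(x - h)) / h ** 2
residual = x ** 2 * U2 + x * U1 + x ** 2 * U
```

<p>Undoing the changes of variables then gives the velocity profile <span class="math-container">$\hat{u}(r)$</span> in terms of <span class="math-container">$J_0\!\left(r\sqrt{i\omega/\mu}\right)$</span>.</p>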
| 1,058
|
differential equations
|
Why must the field equations be differential?
|
https://physics.stackexchange.com/questions/13466/why-must-the-field-equations-be-differential
|
<p>In Landau–Lifshitz's <em>Course of Theoretical Physics</em>, Vol. 2 (‘Classical Fields Theory’), Ch. IV, § 27, there is an explanation why the field equations should be linear differential equations. It goes like this:</p>
<blockquote>
<p>Every solution of the field equations gives a field that can exist in nature. According to the principle of superposition, the sum of any such fields must be a field that can exist in nature, that is, must satisfy the field equations.</p>
<p>As is well known, linear differential equations have just this property, that the sum of any solutions is also a solution. Consequently, the field equations must be linear differential equation.</p>
</blockquote>
<p>Actually, this reasoning is not logically valid. Not only do the authors forget to explain the word ‘differential’, but they also do not actually prove that the field equations must be linear. (Just in case: this observation is not due to me.) But it seems that the last issue can be easily overcome. However, it is exactly the word ‘differential’, not ‘linear’, that is bothering me.</p>
<p>There is a nice <a href="http://en.wikipedia.org/wiki/Peetre_theorem" rel="noreferrer">theorem of Peetre</a> stating that the linear operator <span class="math-container">$D$</span> that acts on (the ring of) functions and does not increase supports, that is, <span class="math-container">$\mathop{\mathrm{supp}} f \supset \mathop{\mathrm{supp}} Df$</span>, must be a differential operator. The property of preserving supports can be considered as a certain <em>locality</em> property. Hence, the field equations must be differential because all interactions must propagate with a finite velocity.</p>
<p>But there is another notion of ‘locality’ of an operator: the operator <span class="math-container">$D$</span> is called <em>local</em> if the function <span class="math-container">$Df$</span> in the neighbourhood <span class="math-container">$V$</span> can be computed with <span class="math-container">$f$</span> determined only on <span class="math-container">$V$</span> as well, i.e., <span class="math-container">$(Df)|_V$</span> is completely defined by <span class="math-container">$f|_V$</span>. The locality in this sense is not equivalent to locality in the sense of preservation of supports. (Unfortunately, I do not have an illustrative example at hand right now, so there is a possibility of mistake <span class="math-container">$M$</span> hiding here.)</p>
<p>The question is: what physical circumstances determine the correct notion of locality for a given physical problem? (Assuming there is no mistake <span class="math-container">$M$</span>.) And does my reasoning <em>really</em> justify the word ‘differential’ in the context of field equations? If so, are there any references containing a more accurate argument than the one presented in Landau–Lifshitz's Course?</p>
|
<p>This does not really answer your question why the equation should be differential. But
I think that the two notions of locality you mentioned are just equivalent, if I am not mistaken.</p>
<p>Let us prove that the second definition implies the first one. One has to show that if a point $x$ does not belong to $supp(f)$ then $x$ does not belong to $supp(Df)$. Indeed, then there exists an open neighborhood $V$ of $x$ such that $f|_V=0$. Hence by the assumption $Df|_V=0$. Hence $x\not\in supp(Df)$ as requested.</p>
<p>Let us prove now the converse statement that the first definition implies the second one. Assume that $f|_V=g|_V$. Then $(f-g)|_V=0$. Hence $supp(f-g)\cap V=\emptyset$. Consequently $supp(D(f-g))\cap V=\emptyset$, i.e. $D(f-g)|_V=0$. That means that $Df|_V=Dg|_V$.</p>
| 1,059
|
differential equations
|
Is the $Ψ$ in the Schrödinger Equation the same as the $Ψ$ in Exact Equations First Order Differential Equations?
|
https://physics.stackexchange.com/questions/416260/is-the-%ce%a8-in-the-schr%c3%b6dinger-equation-the-same-as-the-%ce%a8-in-exact-equations-fi
|
<p>The Schrödinger equations have the term $\Psi$, which is the wave function.</p>
<p><a href="http://scienceworld.wolfram.com/physics/SchroedingerEquation.html" rel="nofollow noreferrer">http://scienceworld.wolfram.com/physics/SchroedingerEquation.html</a></p>
<p>I do not know what type of equation the wave function in the Schrödinger Equations is but I noticed that the symbol $\Psi$ is also used in Exact Equation First Order Differential Equations.</p>
<p><a href="https://www.youtube.com/watch?v=iEpqcdaJNTQ&index=4&list=PL96AE8D9C68FEB902" rel="nofollow noreferrer">https://www.youtube.com/watch?v=iEpqcdaJNTQ&index=4&list=PL96AE8D9C68FEB902</a></p>
<p>Does the symbol $\Psi$ mean the same thing in the Schrödinger Equations as it does in exact Equation First Order Differential Equations? If it doesn't mean the same thing what type of equation is the wave function in the Schrödinger Equations?</p>
|
<p>The Schrödinger equation is a partial differential equation. Its type depends on the Hamilton operator and the fact whether we have time-independent or time-dependent equation. In fact, it is not even strictly speaking a wave-equation in the <a href="https://en.wikipedia.org/wiki/Wave_equation" rel="noreferrer">mathematical sense</a>, because it is at most first order in the time derivative.</p>
<p>Arguably the most important Schrödinger equation is the <a href="https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator" rel="noreferrer">harmonic oscillator</a>, which is a second order partial differential equation. Most reasonable Hamiltonians will be at least second order, since they contain a term for the kinetic energy, which is of second order.</p>
<p>Hence, no, it is hardly ever (never?) an exact equation first order differential equation. </p>
<p>Also note that the number of symbols is limited so you are bound to find the same symbols in similar but different locations. But that's okay, they are only symbols and their meaning should be clear from the context.</p>
| 1,060
|
differential equations
|
Applications of partial differential equations in material science
|
https://physics.stackexchange.com/questions/160332/applications-of-partial-differential-equations-in-material-science
|
<p>I've been asked to find a partial differential equation that has applications in material science. However we are not allowed to use the heat equation. I have found Fick's laws (basically the heat equation), and the Schrodinger equation, but I was wondering if there were any other prominent applications in material science.</p>
|
<p>The <a href="http://en.wikipedia.org/wiki/Navier%E2%80%93Stokes_equations" rel="nofollow">Navier-Stokes Equations</a> and its variants are some of the most important/relevant partial differential equations in material science.</p>
| 1,061
|
differential equations
|
Is resonance a general property of second-order differential equations?
|
https://physics.stackexchange.com/questions/749963/is-resonance-a-general-property-of-second-order-differential-equations
|
<p>I have read the following at this site, in an answer to a question about how antennas work, but that is not important:</p>
<blockquote>
<p>The resonant frequency of an antenna is determined by its constitution. Mathematically speaking, this is a general property of second order differential equations but in down-to-earth terms any AC circuit with some inductors and capacitors in it has a resonant frequency etc etc</p>
</blockquote>
<p>What is this general property?</p>
|
<p>Consider the second order differential equation
<span class="math-container">\begin{align}
f'' - \alpha^2 f = C \cos(\omega t)
\end{align}</span>
with <span class="math-container">$\alpha$</span> a real constant. (Note the sign of the second term, which makes this equation different from the equation of motion for a driven harmonic oscillator.) The particular solution to the inhomogeneous equation is
<span class="math-container">\begin{align}
f_I(t) = -\frac{C}{\alpha^2 + \omega^2}\cos(\omega t)
\end{align}</span>
Assuming we want "resonance" to mean something like "the amplitude of the system's motion becomes relatively large if the system is driven near a natural frequency scale of the system," this system does not exhibit resonance. The constant <span class="math-container">$\alpha$</span> provides a natural frequency scale of the undriven (homogeneous) system, but nothing special happens when the driving frequency <span class="math-container">$\omega$</span> is equal to this natural frequency. Instead, the amplitude of the motion is maximized when <span class="math-container">$\omega = 0$</span>, i.e. for a constant driving force.</p>
<p>The point of this example is to show that no, resonance is not a general feature of second-order differential equations.</p>
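<p>The contrast between the two sign choices is easy to tabulate (a hypothetical sketch; the constants are arbitrary). With the minus sign the particular-solution amplitude <span class="math-container">$C/(\alpha^2+\omega^2)$</span> only decreases with <span class="math-container">$\omega$</span>, while the driven-oscillator amplitude <span class="math-container">$C/|\alpha^2-\omega^2|$</span> blows up near <span class="math-container">$\omega=\alpha$</span>:</p>

```python
# Amplitude of the particular solution versus driving frequency
# for the two sign choices of the second term:
#   f'' - a^2 f = C cos(wt)  ->  |A| = C / (a^2 + w^2)    (no resonance)
#   f'' + a^2 f = C cos(wt)  ->  |A| = C / |a^2 - w^2|    (resonance at w = a)
C, a = 1.0, 2.0
amp_no_res = lambda w: C / (a ** 2 + w ** 2)
amp_res = lambda w: C / abs(a ** 2 - w ** 2)

ws = [0.5, 1.0, 1.5, 1.9, 1.99]
no_res = [amp_no_res(w) for w in ws]  # monotonically decreasing
res = [amp_res(w) for w in ws]        # grows without bound as w -> a
```

<p>The monotone decay in the first case is exactly the "amplitude maximized at <span class="math-container">$\omega=0$</span>" behaviour described above.</p>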
| 1,062
|
differential equations
|
Modifying differential equations representing a projectile system to account for an arbitrary force
|
https://physics.stackexchange.com/questions/175197/modifying-differential-equations-representing-a-projectile-system-to-account-for
|
<p>The following series of differential equations represents a projectile's path when solved (g=9.81):</p>
<p><img src="https://i.sstatic.net/1gXIu.png" alt="The system"></p>
<p>Here is some sample output from this system (with initial values x,y=0, v=1500, theta=1.33):</p>
<p><img src="https://i.sstatic.net/26pdx.png" alt="Initial output"></p>
<p>I need to modify this series of differential equations to account for a force F with components a and b acting on the projectile. I have tried to duplicate gravity's effect on the projectile and then adding terms to the equations:</p>
<pre><code>vdot = -g*sin(theta) + a*cos(theta) + b*sin(theta)
</code></pre>
<p>and</p>
<pre><code>thetadot = -g/v*cos(theta) + a/v*sin(theta) + b/v*cos(theta)
</code></pre>
<p>But this series of differential equations does not behave properly, instead resulting in the following with a force with i-component 20 and j-component -20:</p>
<p><img src="https://i.sstatic.net/jxGoH.png" alt="Incorrect series"></p>
<p>Intuitively, the positive i-component should push the projectile in the forwards x-direction, but instead, it pushes it backwards, over the y-axis into a negative x. </p>
<p>What is the proper solution to this problem? Thanks.</p>
<p><strong>Edit: Thanks to Joshua Lin, I've gotten the direction component of the second term in thetadot worked out, however, I still am not sure if this is correct (I don't understand it geometrically). Here's output from the fixed term sign:</strong></p>
<p><img src="https://i.sstatic.net/UH3FV.png" alt="Attempt 2"></p>
<p>New thetadot:</p>
<pre><code>thetadot = -g/v*cos(theta) - a/v*sin(theta) + b/v*cos(theta)
</code></pre>
|
<p>UPDATE: While I was typing the answer below, I see that you came to the same conclusion as I do in my answer below.</p>
<hr>
<p>From your $\dot \theta$ equation, it appears that a positive $x$ component of force acts to <em>increase</em> the rate of change of the angle when $0 \lt \theta \lt \pi$.</p>
<p>But, that <em>can't</em> be correct. If you set $g$ and $b$ to zero and launch the projectile with speed $v$ and positive $\theta$ and $a$, the angular rate of change $\dot \theta$ should be negative, i.e., $\theta$ should asymptotically approach zero from the initial positive value.</p>
<p>This is because $v_x$ will grow without bound while $v_y = v_0$.</p>
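<p>This limiting behaviour can be checked by simulation (a hypothetical sketch; the thrust, speed, and step size are my own illustrative numbers). With $g=b=0$ and a forward component $a>0$, the corrected equations drive $\theta$ monotonically toward zero without ever flipping sign:</p>

```python
import math

# Euler-integrate the intrinsic-coordinate equations with g = b = 0:
#   dv/dt     =  a * cos(theta)
#   dtheta/dt = -(a / v) * sin(theta)
# The flight-path angle should decay toward zero, never going negative.
a, v, theta, dt = 20.0, 1500.0, 1.0, 1e-3
history = [theta]
for _ in range(200_000):  # 200 s of flight
    v += a * math.cos(theta) * dt
    theta += -(a / v) * math.sin(theta) * dt
    history.append(theta)
```

<p>With the original (wrong-sign) term, the same loop would instead push <span class="math-container">$\theta$</span> away from zero, matching the backwards-curving trajectory in the question.</p>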
| 1,063
|
differential equations
|
Must multiple forces be expressed as a differential equation?
|
https://physics.stackexchange.com/questions/262473/must-multiple-forces-be-expressed-as-a-differential-equation
|
<p>This may be a stupidly obvious question, but can multiple forces (such as acceleration due to gravity and air resistance acting on a falling object) be expressed algebraically or must it be written in the form of a differential equation? Since I don't know much about differential equations I have struggled to figure this out.</p>
|
<p>You only need differential equations when you are trying to find the motion, and the forces change over time (or position).</p>
<p>At any given time, the forces can be written as an ordinary algebraic equation. Differential equations come into play when you look at how the forces change over time.</p>
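<p>For example (a hypothetical sketch with made-up numbers), a falling object with quadratic drag reaches terminal velocity when the forces balance algebraically, <span class="math-container">$mg = kv^2$</span>; the differential equation is only needed if you want <span class="math-container">$v(t)$</span> on the way there:</p>

```python
import math

# Algebraic force balance at terminal velocity: m*g = k*v^2, so
# v_t = sqrt(m*g / k). No differential equation required.
m, g, k = 80.0, 9.81, 0.25  # hypothetical skydiver numbers
v_t = math.sqrt(m * g / k)

# The differential equation m dv/dt = m*g - k*v^2 is only needed for
# the time history; Euler-stepping it approaches the same v_t.
v, dt = 0.0, 0.01
for _ in range(10_000):  # 100 s, plenty of time to converge
    v += (g - (k / m) * v * v) * dt
```

<p>The simulated velocity settles onto the algebraically computed terminal value, illustrating the division of labour between the two descriptions.</p>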
| 1,064
|
differential equations
|
Why can you integrate with different bounds in thermal expansion differential equations?
|
https://physics.stackexchange.com/questions/814191/why-can-you-integrate-with-different-bounds-in-thermal-expansion-differential-eq
|
<p>I am just an independent student learning thermal expansion with differential equations, and I saw someone on the internet solving the differential equation for the law like below:</p>
<p><span class="math-container">$$\frac{1}{L}\frac{dL}{dT}=\alpha$$</span>
<span class="math-container">$$\int_{L_{0}}^{L}\frac{1}{L}dL=\int_{T_{0}}^{T}\alpha \,dT$$</span>
<span class="math-container">$$[\ln(L)]_{L_{0}}^{L}=[\alpha T]_{T_{0}}^{T}$$</span>
<span class="math-container">$$\ln\left(\frac{L}{L_{0}}\right)=\alpha(T-T_{0})$$</span>
Then, it continues. The part I don't understand is why, on the second line, you can integrate both sides but with different bounds on each side.</p>
|
<p>It is recommended to use clearer notation such as the one Riley Scott Jacob has introduced, where the limits of integration are <span class="math-container">$T_A$</span> and <span class="math-container">$T_B$</span>, with <span class="math-container">$L_A=L(T_A)$</span> and <span class="math-container">$L_B = L(T_B)$</span>. That way, we are avoiding the abuse of notation where <span class="math-container">$T$</span> is both a limit of integration and a dummy variable inside the integral.</p>
<p>Starting with your first equation, with each side viewed as a function of <span class="math-container">$T$</span>, we can write
<span class="math-container">$$\int_{T_A}^{T_B} \frac 1 {L} L'(T)~dT = \int_{T_A}^{T_B} \alpha~dT$$</span>
where for clarity I've defined <span class="math-container">$L'(T)=\frac{d}{dT}L(T)$</span>.</p>
<p>Now formally apply the <a href="https://en.wikipedia.org/wiki/Integration_by_substitution#Statement_for_definite_integrals" rel="nofollow noreferrer">change of variables</a> <span class="math-container">$T\to L$</span> to the first integral. The lower and upper bounds become <span class="math-container">$L(T_A) = L_A$</span> and <span class="math-container">$L(T_B) = L_B$</span>, respectively. We also have <span class="math-container">$dL = L'(T)~dT$</span>, so we are left with the equation you are after:
<span class="math-container">$$\int_{L_A}^{L_B} \frac 1 L ~dL = \int_{T_A}^{T_B} \alpha~dT$$</span></p>
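<p>The integrated law is <span class="math-container">$L(T) = L_0\,e^{\alpha(T - T_0)}$</span>, and it is easy to confirm numerically that this closed form satisfies both the original ODE and the log relation (a hypothetical sketch; the coefficient and temperatures are arbitrary):</p>

```python
import math

# Check that L(T) = L0 * exp(alpha * (T - T0)) satisfies both
# (1/L) dL/dT = alpha and ln(L/L0) = alpha * (T - T0).
alpha, L0, T0 = 1.2e-5, 2.0, 300.0
L = lambda T: L0 * math.exp(alpha * (T - T0))

T, h = 350.0, 1e-3
dLdT = (L(T + h) - L(T - h)) / (2 * h)        # numerical derivative
ode_residual = dLdT / L(T) - alpha            # should be ~0
log_relation = math.log(L(T) / L0) - alpha * (T - T0)  # should be ~0
```

<p>For small <span class="math-container">$\alpha(T-T_0)$</span> this exponential reduces to the familiar linear law <span class="math-container">$L \approx L_0(1 + \alpha\,\Delta T)$</span>.</p>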
| 1,065
|
differential equations
|
Vortex solution to Differential Equation
|
https://physics.stackexchange.com/questions/400029/vortex-solution-to-differential-equation
|
<p>I am looking for a differential equation whose solutions are what I call "open" vortices.
These vortices are not closed in themselves, but sort of "absorb" the surrounding "fluid" and also "emit" it.
I know that the Gross–Pitaevskii equation has vortices as solutions, but these are closed vortices as far as I know.</p>
|
<p>It is not too hard to come up with dynamical systems with both attractive and repulsive fixed points. Here is a simple one: $$x'=(1-k)\sin(x)+k\sin(y)$$
$$y'=(1-k)\sin(y)-k\cos(x)$$ where $k$ is a mixing constant. Here is the vector field for $k=0.8$.</p>
<p><a href="https://i.sstatic.net/BiH0w.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BiH0w.png" alt="Vector field with numerous sources and sinks"></a></p>
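<p>For reference, here is a minimal sketch (plain Python, no plotting; the grid spacing is an arbitrary choice) of the field data behind the figure:</p>

```python
import math

k = 0.8  # mixing constant used for the figure

def field(x, y):
    """Right-hand side (x', y') of the dynamical system above."""
    xdot = (1 - k) * math.sin(x) + k * math.sin(y)
    ydot = (1 - k) * math.sin(y) - k * math.cos(x)
    return xdot, ydot

# Sample the field on a coarse grid -- the data a quiver plot would use
grid = [(i * 0.5, j * 0.5) for i in range(-6, 7) for j in range(-6, 7)]
vectors = [field(x, y) for x, y in grid]

print(field(0.0, 0.0))  # (0.0, -0.8): flow at the origin points straight down
```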
| 1,066
|
differential equations
|
Lagrangian for two coupled second order linear differential equations
|
https://physics.stackexchange.com/questions/545343/lagrangian-for-two-coupled-second-order-linear-differential-equations
|
<p>Consider a system of two coupled linear differential equations
<span class="math-container">$$
\left(
\begin{bmatrix}
\Omega
\end{bmatrix}^{-1}
+ \frac{d^2}{dt^2} \right)
\vec{V}(t)
=
\begin{bmatrix}
C
\end{bmatrix}^{-1}
\vec{J}(t)
+ \begin{bmatrix}
\Omega
\end{bmatrix}^{-1} \vec{K}(t)
$$</span>
where <span class="math-container">$\vec{V}(t)$</span> is a two-element vector describing the degree of freedom of the system, <span class="math-container">$\vec{J}(t)$</span> and <span class="math-container">$\vec{K}(t)$</span> are drive sources, and <span class="math-container">$[\Omega]^{-1}$</span> and <span class="math-container">$[C]^{-1}$</span> are constant 2x2 matrices.
This system represents two coupled harmonic resonators with time-dependent (but position independent) drive forces.
For whatever it's worth, suppose we can decompose <span class="math-container">$[\Omega]^{-1}$</span> as
<span class="math-container">$$ [\Omega]^{-1} = [C]^{-1}[L]^{-1}$$</span>
where <span class="math-container">$[L]^{-1}$</span> is another 2x2 matrix<span class="math-container">$^{[1]}$</span>.
Both <span class="math-container">$[L]$</span> and <span class="math-container">$[C]$</span> are symmetric.</p>
<p><strong>Is there a systematic way to find the Lagrangian for this system of equations?</strong></p>
<p>[1]: Both <span class="math-container">$[C]$</span> and <span class="math-container">$[L]$</span> have the property that their off-diagonal elements are smaller than their diagonal elements, which is probably useful for approximations.</p>
|
<p><span class="math-container">$\boldsymbol{\S}$</span> <strong>A. A special case : symmetric</strong> <span class="math-container">$\Omega^{\boldsymbol{-}1}$</span></p>
<p>Let the <span class="math-container">$2\times2$</span> real symmetric matrices
<span class="math-container">\begin{equation}
C^{\boldsymbol{-}1}\boldsymbol{=}
\begin{bmatrix}
\xi_1 & \xi \vphantom{\dfrac{a}{b}}\\
\xi &\xi_2 \vphantom{\dfrac{a}{b}}
\end{bmatrix}
\quad \text{and} \quad
L^{\boldsymbol{-}1}\boldsymbol{=}
\begin{bmatrix}
\eta_1 & \eta \vphantom{\dfrac{a}{b}}\\
\eta &\eta_2 \vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{A-01}\label{A-01}
\end{equation}</span>
Then
<span class="math-container">\begin{equation}
\Omega^{\boldsymbol{-}1}\boldsymbol{=}C^{\boldsymbol{-}1}L^{\boldsymbol{-}1}\boldsymbol{=}
\begin{bmatrix}
\xi_1\eta_1 \boldsymbol{+} \xi\eta & \xi_1\eta \boldsymbol{+} \xi\eta_2 \vphantom{\dfrac{a}{b}}\\
\hphantom{_1}\hphantom{_2}\xi\eta_1 \boldsymbol{+} \xi_2\eta & \hphantom{_1}\hphantom{_2}\xi\eta \boldsymbol{+}\xi_2\eta_2 \vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{A-02}\label{A-02}
\end{equation}</span>
With respect to the coordinates
<span class="math-container">\begin{equation}
\mathbf{V}
\boldsymbol{=}
\begin{bmatrix}
V_1\vphantom{\dfrac{a}{b}}\\
V_2\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{A-03}\label{A-03}
\end{equation}</span><br />
the two coupled equations are
<span class="math-container">\begin{equation}
\dfrac{\mathrm d}{\mathrm dt}\left(\mathbf{\dot{V}}\right)\boldsymbol{-}\left(C^{\boldsymbol{-}1}\mathbf{J}\boldsymbol{+}\Omega^{\boldsymbol{-}1}\mathbf{K}\boldsymbol{-}\Omega^{\boldsymbol{-}1}\mathbf{V}\right)\boldsymbol{=}\boldsymbol{0}
\tag{A-04}\label{A-04}
\end{equation}</span>
Now, if there exists a Lagrangian <span class="math-container">$\mathrm L\left(\mathbf{V},\mathbf{\dot{V}},t\right)$</span> for the problem then the Euler-Lagrange equations are
<span class="math-container">\begin{equation}
\dfrac{\mathrm d}{\mathrm dt}\left(\dfrac{\partial \mathrm L}{\partial \mathbf{\dot{V}}}\right)\boldsymbol{-}\dfrac{\partial \mathrm L}{\partial \mathbf{V}}\boldsymbol{=}\boldsymbol{0}
\tag{A-05}\label{A-05}
\end{equation}</span>
where
<span class="math-container">\begin{equation}
\dfrac{\partial \mathrm L}{\partial \mathbf{V}}\boldsymbol{=}
\begin{bmatrix}
\dfrac{\partial \mathrm L}{\partial V_1} \vphantom{\dfrac{a}{\dfrac{a}{b}}}\\
\dfrac{\partial \mathrm L}{\partial V_2} \vphantom{\dfrac{a}{b}}
\end{bmatrix}
\quad \text{and} \quad
\dfrac{\partial \mathrm L}{\partial \mathbf{\dot{V}}}\boldsymbol{=}
\begin{bmatrix}
\dfrac{\partial \mathrm L}{\partial \dot{V}_1} \vphantom{\dfrac{a}{\dfrac{a}{b}}}\\
\dfrac{\partial \mathrm L}{\partial \dot{V}_2} \vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{A-06}\label{A-06}
\end{equation}</span>
Comparing equations \eqref{A-04} and \eqref{A-05} we note that the Lagrangian <span class="math-container">$\mathrm L\left(\mathbf{V},\mathbf{\dot{V}},t\right)$</span> must satisfy, up to constants, the following two equations
<span class="math-container">\begin{align}
\dfrac{\partial \mathrm L}{\partial \mathbf{\dot{V}}} & \boldsymbol{=}\mathbf{\dot{V}}\vphantom{\dfrac{a}{\dfrac{a}{b}}}
\tag{A-07a}\label{A-07a}\\
\dfrac{\partial \mathrm L}{\partial \mathbf{V}} & \boldsymbol{=}C^{\boldsymbol{-}1}\mathbf{J}\boldsymbol{+}\Omega^{\boldsymbol{-}1}\mathbf{K}\boldsymbol{-}\Omega^{\boldsymbol{-}1}\mathbf{V}
\tag{A-07b}\label{A-07b}
\end{align}</span>
From equation \eqref{A-07a} and partly because of the first two terms in the rhs of equation \eqref{A-07b} we note that one part <span class="math-container">$\mathrm L_1\left(\mathbf{V},\mathbf{\dot{V}},t\right)$</span> of the Lagrangian would be
<span class="math-container">\begin{equation}
\mathrm L_1\left(\mathbf{V},\mathbf{\dot{V}},t\right)\boldsymbol{=}\frac12\left(\mathbf{\dot{V}}\boldsymbol{\cdot}\mathbf{\dot{V}}\right)\boldsymbol{+}\left[\left(C^{\boldsymbol{-}1}\mathbf{J}\right)\boldsymbol{\cdot}\mathbf{V}\right]\boldsymbol{+}\left[\left(\Omega^{\boldsymbol{-}1}\mathbf{K}\right)\boldsymbol{\cdot}\mathbf{V}\right]
\tag{A-08}\label{A-08}
\end{equation}</span>
while a second part <span class="math-container">$\mathrm L_2\left(\mathbf{V},\mathbf{\dot{V}},t\right)$</span> of the Lagrangian must satisfy the equation
<span class="math-container">\begin{equation}
\dfrac{\partial \mathrm L_2}{\partial \mathbf{V}} \boldsymbol{=}\boldsymbol{-}\Omega^{\boldsymbol{-}1}\mathbf{V}
\tag{A-09}\label{A-09}
\end{equation}</span>
If the matrix <span class="math-container">$\Omega^{\boldsymbol{-}1}$</span> of equation \eqref{A-02} is symmetric, that is if the elements of the matrices <span class="math-container">$C^{\boldsymbol{-}1}$</span> and <span class="math-container">$L^{\boldsymbol{-}1}$</span> satisfy the condition
<span class="math-container">\begin{equation}
\left(\xi_1\boldsymbol{-}\xi_2\right)\eta\boldsymbol{=}\left(\eta_1\boldsymbol{-}\eta_2\right)\xi
\tag{A-10}\label{A-10}
\end{equation}</span>
then
<span class="math-container">\begin{equation}
\mathrm L_2\left(\mathbf{V},\mathbf{\dot{V}},t\right) \boldsymbol{=}\boldsymbol{-}\frac12\left[\left(\Omega^{\boldsymbol{-}1}\mathbf{V}\right)\boldsymbol{\cdot}\mathbf{V}\right]
\tag{A-11}\label{A-11}
\end{equation}</span>
and so
<span class="math-container">\begin{align}
&\mathrm L\left(\mathbf{V},\mathbf{\dot{V}},t\right) \boldsymbol{=}\mathrm L_1\left(\mathbf{V},\mathbf{\dot{V}},t\right)\boldsymbol{+}\mathrm L_2\left(\mathbf{V},\mathbf{\dot{V}},t\right) \qquad \textbf{for symmetric } \Omega^{\boldsymbol{-}1}
\nonumber\\
& \boldsymbol{=}\frac12\left(\mathbf{\dot{V}}\boldsymbol{\cdot}\mathbf{\dot{V}}\right)\boldsymbol{-}\frac12\left[\left(\Omega^{\boldsymbol{-}1}\mathbf{V}\right)\boldsymbol{\cdot}\mathbf{V}\right]\boldsymbol{+}\left[\left(C^{\boldsymbol{-}1}\mathbf{J}\right)\boldsymbol{\cdot}\mathbf{V}\right]\boldsymbol{+}\left[\left(\Omega^{\boldsymbol{-}1}\mathbf{K}\right)\boldsymbol{\cdot}\mathbf{V}\right]
\tag{A-12}\label{A-12}
\end{align}</span></p>
<p><span class="math-container">$\boldsymbol{=\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!=}$</span></p>
<p><span class="math-container">$\boldsymbol{\S}$</span> <strong>B. The general case : A systematic way to find the Lagrangian for two coupled second order linear differential equations</strong></p>
<p>A direct effort to find a Lagrangian for two coupled second order linear differential equations (as in the question) would be unsuccessful because of the so-called <span class="math-container">$^{\prime\prime}$</span>cross terms<span class="math-container">$^{\prime\prime}$</span> that appear at an intermediate step, for example terms like <span class="math-container">$V_1 V_2, \dot{V}_1 \dot{V}_2, \dot{V}_1 V_2$</span>. These terms "couple" the two equations, so we must find a method to eliminate them. Doing so will give us first two uncoupled second order linear differential equations and then a well-defined Lagrangian.</p>
<p>Because of linearity we make a change of the variables from old <span class="math-container">$V_1, V_2$</span> to new <span class="math-container">$q_1, q_2$</span> via a linear transformation
<span class="math-container">\begin{align}
V_1 & \boldsymbol{=}a_{11}q_1\boldsymbol{+}a_{12}q_2
\tag{B-01a}\label{B-01a}\\
V_2 & \boldsymbol{=}a_{21}q_1\boldsymbol{+}a_{22}q_2
\tag{B-01b}\label{B-01b}
\end{align}</span>
or
<span class="math-container">\begin{equation}
\mathbf{V}\boldsymbol{=}
\begin{bmatrix}
V_1\vphantom{\dfrac{a}{b}}\\
V_2\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\boldsymbol{=}
\begin{bmatrix}
a_{11} & a_{12}\vphantom{\dfrac{a}{b}}\\
a_{21} & a_{22}\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\begin{bmatrix}
q_1\vphantom{\dfrac{a}{b}}\\
q_2\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\boldsymbol{=}A\mathbf{q}
\tag{B-02}\label{B-02}
\end{equation}</span><br />
that is
<span class="math-container">\begin{equation}
\mathbf{V}\boldsymbol{=}A\mathbf{q}
\,,\qquad
A\boldsymbol{=}
\begin{bmatrix}
a_{11} & a_{12}\vphantom{\dfrac{a}{b}}\\
a_{21} & a_{22}\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{B-03}\label{B-03}
\end{equation}</span>
and we'll try to find, if it exists, an invertible transformation <span class="math-container">$\:A\:$</span> that eliminates the cross terms, thus uncoupling the two equations.</p>
<p>If on our initial equation<br />
<span class="math-container">\begin{equation}
\mathbf{\ddot{V}}\boldsymbol{+}\Omega^{\boldsymbol{-}1}\mathbf{V}\boldsymbol{=}C^{\boldsymbol{-}1}\mathbf{J}\boldsymbol{+}\Omega^{\boldsymbol{-}1}\mathbf{K}
\tag{B-04}\label{B-04}
\end{equation}</span>
we apply from the left the transformation <span class="math-container">$\:A^{\boldsymbol{-}1}\:$</span> we have
<span class="math-container">\begin{equation}
A^{\boldsymbol{-}1}\mathbf{\ddot{V}}\boldsymbol{+}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\mathbf{V}\boldsymbol{=}A^{\boldsymbol{-}1}C^{\boldsymbol{-}1}\mathbf{J}\boldsymbol{+}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\mathbf{K}
\tag{B-05}\label{B-05}
\end{equation}</span>
Making use of \eqref{B-03} we replace <span class="math-container">$\:\mathbf{V}\:$</span> by <span class="math-container">$\:A\mathbf{q}\:$</span> so
<span class="math-container">\begin{equation}
A^{\boldsymbol{-}1}\left(A\mathbf{\ddot{q}}\right)\boldsymbol{+}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\left(A\mathbf{q}\right)\boldsymbol{=}A^{\boldsymbol{-}1}C^{\boldsymbol{-}1}\mathbf{J}\boldsymbol{+}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\mathbf{K}
\nonumber
\end{equation}</span>
that is
<span class="math-container">\begin{equation}
\mathbf{\ddot{q}}\boldsymbol{+}\left(A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1} A\right)\mathbf{q}\boldsymbol{=}\left(A^{\boldsymbol{-}1}C^{\boldsymbol{-}1 }A\right)\mathbf{j}\boldsymbol{+}\left(A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1 }A\right)\mathbf{k}
\tag{B-06}\label{B-06}
\end{equation}</span>
or
<span class="math-container">\begin{align}
&\mathbf{\ddot{q}}\boldsymbol{+}W\,\mathbf{q} \boldsymbol{=}U\,\mathbf{j}\boldsymbol{+}W\,\mathbf{k}
\tag{B-07a}\label{B-07a}\\
&\text{where} \nonumber\\
&W\boldsymbol{=}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}A\,, \quad U\boldsymbol{=}A^{\boldsymbol{-}1}C^{\boldsymbol{-}1}A\,, \quad \mathbf{j}\boldsymbol{=}A^{\boldsymbol{-}1}\mathbf{J}\,,\quad \mathbf{k}\boldsymbol{=}A^{\boldsymbol{-}1}\mathbf{K}
\tag{B-07b}\label{B-07b}
\end{align}</span>
Now, the two second order linear differential equations \eqref{B-07a} are uncoupled if the matrix <span class="math-container">$\:W\:$</span> is diagonal
<span class="math-container">\begin{equation}
W\boldsymbol{=}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1 }A\boldsymbol{=}
\begin{bmatrix}
\mathrm w_1 & 0 \vphantom{\dfrac{a}{b}}\\
0 & \mathrm w_2\vphantom{\dfrac{a}{b}}
\end{bmatrix}
\tag{B-08}\label{B-08}
\end{equation}</span>
This uncoupling is shown explicitly below
<span class="math-container">\begin{align}
\ddot{q}_1\boldsymbol{+}\mathrm w_1 q_1 &\boldsymbol{=}\left(U\,\mathbf{j}\right)_1 \boldsymbol{+}\left(W\,\mathbf{k}\right)_1
\tag{B-09a}\label{B-09a}\\
\ddot{q}_2\boldsymbol{+}\mathrm w_2 q_2 &\boldsymbol{=}\left(U\,\mathbf{j}\right)_2 \boldsymbol{+}\left(W\,\mathbf{k}\right)_2
\tag{B-09b}\label{B-09b}
\end{align}</span>
These two independent <span class="math-container">$^{\prime\prime}$</span>motions<span class="math-container">$^{\prime\prime}$</span> are called <em>normal modes</em> and the variables <span class="math-container">$q_1,q_2$</span> <em>normal coordinates</em>.</p>
<p>Now, from \eqref{B-08} the constants <span class="math-container">$\:\mathrm w_1,\mathrm w_2\:$</span> are the <em>eigenvalues</em> of the matrix <span class="math-container">$\:\Omega^{\boldsymbol{-}1}\:$</span> while the columns of the matrix <span class="math-container">$\:A\:$</span> are the <em>eigenvectors</em> respectively
<span class="math-container">\begin{align}
\mathbf{a}_1 & \boldsymbol{=}
\begin{bmatrix}
a_{11} \vphantom{\dfrac{a}{b}}\\
a_{21} \vphantom{\dfrac{a}{b}}
\end{bmatrix}\boldsymbol{=}\text{eigenvector of eigenvalue } \mathrm w_1
\tag{B-10a}\label{B-10a}\\
\mathbf{a}_2 & \boldsymbol{=}
\begin{bmatrix}
a_{12} \vphantom{\dfrac{a}{b}}\\
a_{22} \vphantom{\dfrac{a}{b}}
\end{bmatrix}\boldsymbol{=}\text{eigenvector of eigenvalue } \mathrm w_2
\tag{B-10b}\label{B-10b}
\end{align}</span>
Note that depending on the matrix <span class="math-container">$\:\Omega^{\boldsymbol{-}1}\:$</span> the eigenvalues <span class="math-container">$\:\mathrm w_1,\mathrm w_2\:$</span> could be either both real or both complex conjugates.</p>
<p>Now, since the diagonal matrix <span class="math-container">$\:W\:$</span> is symmetric we make use of the results of <span class="math-container">$\boldsymbol{\S}$</span> <strong>A</strong> and we build the Lagrangian for the Euler-Lagrange equations \eqref{B-09a},\eqref{B-09b} according to equation \eqref{A-12}<br />
<span class="math-container">\begin{equation}
\mathrm L\left(\mathbf{q},\mathbf{\dot{q}},t\right) \boldsymbol{=}
\tfrac12\left(\mathbf{\dot{q}}\boldsymbol{\cdot}\mathbf{\dot{q}}\right)\boldsymbol{-}\tfrac12\left[\left(W\mathbf{q}\right)\boldsymbol{\cdot}\mathbf{q}\vphantom{\dfrac{a}{b}}\right]\boldsymbol{+}\left[\left(U\mathbf{j}\right)\boldsymbol{\cdot}\mathbf{q}\vphantom{\dfrac{a}{b}}\right]\boldsymbol{+}\left[\left(W\mathbf{k}\right)\boldsymbol{\cdot}\mathbf{q}\vphantom{\dfrac{a}{b}}\right]
\tag{B-11}\label{B-11}
\end{equation}</span>
Explicitly
<span class="math-container">\begin{align}
\mathrm L\left(\mathbf{q},\mathbf{\dot{q}},t\right) & \boldsymbol{=}
\tfrac12\left(\dot{q}^2_1\boldsymbol{+}\dot{q}^2_2\right)\boldsymbol{-}\tfrac12\left(\mathrm w_1 q^2_1\boldsymbol{+}\mathrm w_2 q^2_2\right)
\tag{B-12}\label{B-12}\\
&\boldsymbol{+} \left[\left(U\mathbf{j}\right)_1\boldsymbol{+}\left(W\mathbf{k}\right)_1\vphantom{\dfrac{a}{b}}\right]q_1\boldsymbol{+} \left[\left(U\mathbf{j}\right)_2\boldsymbol{+}\left(W\mathbf{k}\right)_2\vphantom{\dfrac{a}{b}}\right]q_2
\nonumber
\end{align}</span>
Note that the above Lagrangian doesn't contain <span class="math-container">$^{\prime\prime}$</span>cross terms<span class="math-container">$^{\prime\prime}$</span> like <span class="math-container">$q_1 q_2, \dot{q}_1 \dot{q}_2, \dot{q}_1 q_2$</span> etc.
Use of this Lagrangian in the equations below
<span class="math-container">\begin{align}
\dfrac{\mathrm d}{\mathrm dt}\left(\dfrac{\partial \mathrm L}{\partial \dot{q}_1}\right)\boldsymbol{-}\dfrac{\partial \mathrm L}{\partial q_1}\boldsymbol{=}0
\tag{B-13a}\label{B-13a}\\
\dfrac{\mathrm d}{\mathrm dt}\left(\dfrac{\partial \mathrm L}{\partial \dot{q}_2}\right)\boldsymbol{-}\dfrac{\partial \mathrm L}{\partial q_2}\boldsymbol{=}0
\tag{B-13b}\label{B-13b}
\end{align}</span>
yields equations \eqref{B-09a} and \eqref{B-09b} as expected.</p>
<p>Now, based on \eqref{B-11} we can build the Lagrangian <span class="math-container">$\:\mathrm L\left(\mathbf{V},\mathbf{\dot{V}},t\right)\:$</span> for the initial coordinates <span class="math-container">$\:V_1,V_2\:$</span> from <span class="math-container">$\:\mathrm L\left(\mathbf{q},\mathbf{\dot{q}},t\right)$</span>. We simply replace <span class="math-container">$\:\mathbf{q}\:$</span> by <span class="math-container">$\:A^{\boldsymbol{-}1}\mathbf{V}\:$</span> in \eqref{B-11} and we have
<span class="math-container">\begin{align}
&\mathrm L\left(\mathbf{V},\mathbf{\dot{V}},t\right)\boldsymbol{=}
\tag{B-14}\label{B-14}\\
&\tfrac12\left[\left(A^{\boldsymbol{-}1}\mathbf{\dot{V}}\right)\boldsymbol{\cdot}\left(A^{\boldsymbol{-}1}\mathbf{\dot{V}}\right)\vphantom{\dfrac{a}{b}}\right]\boldsymbol{-}\tfrac12\left[\left(A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\mathbf{V}\right)\boldsymbol{\cdot}\left(A^{\boldsymbol{-}1}\mathbf{V}\right)\vphantom{\dfrac{a}{b}}\right]
\nonumber\\
&\boldsymbol{+}\left[\left(A^{\boldsymbol{-}1}C^{\boldsymbol{-}1}\mathbf{J}\right)\boldsymbol{\cdot}\left(A^{\boldsymbol{-}1}\mathbf{V}\right)\vphantom{\dfrac{a}{b}}\right]\boldsymbol{+}\left[\left(A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}\mathbf{K}\right)\boldsymbol{\cdot}\left(A^{\boldsymbol{-}1}\mathbf{V}\right)\vphantom{\dfrac{a}{b}}\right]
\nonumber
\end{align}</span>
If <span class="math-container">$\:\Omega^{\boldsymbol{-}1}\:$</span> is (real) symmetric then the Lagrangian of \eqref{B-14} must yield that of \eqref{A-12}. But these two expressions look very different, so it seems that we have a contradiction here. There is no contradiction: in the case of a symmetric matrix <span class="math-container">$\:\Omega^{\boldsymbol{-}1}\:$</span> the eigenvalues <span class="math-container">$\:\mathrm w_1,\mathrm w_2\:$</span> are both real, the eigenvectors <span class="math-container">$\:\mathbf{a}_1,\mathbf{a}_2 $</span> of equations \eqref{B-10a},\eqref{B-10b} are orthogonal, and the matrix <span class="math-container">$\:A\:$</span> of equations \eqref{B-02},\eqref{B-03} is orthogonal. For this matrix we have <span class="math-container">$\:A^{\boldsymbol{-}1}\boldsymbol{=}A^{\boldsymbol{\top}}\boldsymbol{=}\text{transpose of }A$</span>. Replacing <span class="math-container">$\:A^{\boldsymbol{-}1}\:$</span> by <span class="math-container">$\:A^{\boldsymbol{\top}}\:$</span>, the expression \eqref{B-14} becomes identical to \eqref{A-12}. In other words, since <span class="math-container">$\:A^{\boldsymbol{-}1}\:$</span> is also orthogonal it leaves the inner product of two vectors invariant, so in \eqref{B-14} we could replace any inner product <span class="math-container">$\:\left(A^{\boldsymbol{-}1}\mathbf{x}\right)\boldsymbol{\cdot}\left(A^{\boldsymbol{-}1}\mathbf{y}\right)\vphantom{\dfrac{a}{b}}\:$</span> by
<span class="math-container">$\:\left(\mathbf{x}\boldsymbol{\cdot}\mathbf{y}\right)\vphantom{\dfrac{a}{b}}$</span>.</p>
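<p>The uncoupling step of <span class="math-container">$\boldsymbol{\S}$</span> <strong>B</strong> is easy to verify numerically. The sketch below (plain Python; the symmetric matrix <span class="math-container">$\Omega^{\boldsymbol{-}1}$</span> is made up, with dominant diagonal elements as in footnote [1]) builds the modal matrix <span class="math-container">$A$</span> from a rotation and checks that <span class="math-container">$W\boldsymbol{=}A^{\boldsymbol{-}1}\Omega^{\boldsymbol{-}1}A$</span> is diagonal:</p>

```python
import math

# Hypothetical symmetric Omega^{-1}; the off-diagonal element is the coupling
M = [[2.0, 0.3],
     [0.3, 1.5]]

# Rotation angle that diagonalizes a symmetric 2x2 matrix
theta = 0.5 * math.atan2(2 * M[0][1], M[0][0] - M[1][1])
A = [[math.cos(theta), -math.sin(theta)],   # columns = eigenvectors,
     [math.sin(theta),  math.cos(theta)]]   # cf. eqs. (B-10a), (B-10b)

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

At = [[A[j][i] for j in range(2)] for i in range(2)]  # A^{-1} = A^T (orthogonal)
W = matmul(At, matmul(M, A))                          # cf. eq. (B-08)

print(W)  # off-diagonal entries vanish: the normal modes are uncoupled
```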
<p><span class="math-container">$\boldsymbol{=\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!==\!=\!=\!=}$</span></p>
<p>Related 1 : <a href="https://physics.stackexchange.com/questions/34241/deriving-lagrangian-density-for-electromagnetic-field/270950#270950">Deriving Lagrangian density for electromagnetic field</a>.</p>
<p>Related 2 : <a href="https://physics.stackexchange.com/questions/89002/why-treat-complex-scalar-field-and-its-complex-conjugate-as-two-different-fields/487935#487935">The Lagrangian Density of the Schroedinger equation</a>.</p>
<p>Related 3 : <a href="https://physics.stackexchange.com/questions/201462/obtain-the-lagrangian-from-the-system-of-coupled-equation">Obtain the Lagrangian from the system of coupled equation</a>.
</p>
| 1,067
|
differential equations
|
Numerical solution of two coupled second order differential equations of motion
|
https://physics.stackexchange.com/questions/100368/numerical-solution-of-two-coupled-second-order-differential-equations-of-motion
|
<p>Is there a numerical algorithm for solving a pair of coupled second order differential equations?</p>
<p>This question arises from a homework problem that I have that involves two dimensional projectile motion. The problem is as follows:</p>
<blockquote>
<p><em>An object is fired through a viscous fluid that has a damping force proportional to the velocity raised to the $n$'th power. For what values of $n$ and magnitude of the force is the maximum range achieved for a launch angle greater than $\pi$/4?</em></p>
</blockquote>
<p>The equations of motion I worked out are below.</p>
<p>$ m \ddot{x}=-km\dot x(\dot x^2+\dot y^2)^{\frac{n-1}2}$</p>
<p>$ \ddot{x}=-k\dot x(\dot x^2+\dot y^2)^{\frac{n-1}2}$</p>
<p>$ m \ddot{y}=-km\dot y(\dot x^2+\dot y^2)^{\frac{n-1}2}-mg$</p>
<p>$ \ddot{y}=-k\dot y(\dot x^2+\dot y^2)^{\frac{n-1}2}-g$</p>
<p>If you make the substitutions $\tilde x=\frac{x}{k^{n-1}}$, $\tilde y=\frac{y}{k^{n-1}}$, and $\tilde g=g\,k^{n-1}$ you then eliminate one degree of freedom <em>k</em> and the equations become:</p>
<p>$ \ddot{\tilde{x}}=-\dot{\tilde{x}}(\dot{\tilde{x}}^2+\dot{\tilde{y}}^2)^{\frac{n-1}2}$</p>
<p>$ \ddot{\tilde{y}}=-\dot{\tilde{y}}(\dot{\tilde{x}}^2+\dot{\tilde{y}}^2)^{\frac{n-1}2}-\tilde g$</p>
<p>There should be a numerical algorithm for solving a pair of coupled second order differential equations with the following starting conditions, with the answer ending at some time $t$:</p>
<p>$\tilde{x}_0=0, \tilde{y}_0=0, v_0=1, \dot{\tilde{x}}_0=\cos(\theta _0), \dot{\tilde{y}}_0=\sin(\theta _0)$</p>
<p>Then I would use a root-finding algorithm (I planned on bisection) to find the $t_f$ for which $y=0$, with $x(t_f)$ giving the range. I would then cycle through different values of $\theta_0$ and compare the resulting range to the range for $\theta _0=\pi /4$, noting whether any of the values are larger. This process could then be repeated for varying $n$.</p>
<p>Does this sound solid? And if so is there a numerical recipe for coupled second order differential equations?</p>
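<p>The plan above can be sketched in plain Python: a fixed-step classical RK4 integrator for the scaled system, with linear interpolation standing in for the proposed bisection at the <span class="math-container">$y=0$</span> crossing. The values <span class="math-container">$n=2$</span> and <span class="math-container">$\tilde g=1$</span> are illustrative assumptions:</p>

```python
import math

def deriv(state, n, g):
    """Scaled equations of motion; state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    drag = (vx * vx + vy * vy) ** ((n - 1) / 2)
    return (vx, vy, -vx * drag, -vy * drag - g)

def rk4_step(state, h, n, g):
    def shift(s, k, c):
        return tuple(si + c * ki for si, ki in zip(s, k))
    k1 = deriv(state, n, g)
    k2 = deriv(shift(state, k1, h / 2), n, g)
    k3 = deriv(shift(state, k2, h / 2), n, g)
    k4 = deriv(shift(state, k3, h), n, g)
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

def rng(theta, n=2.0, g=1.0, h=1e-3):
    """Range: integrate until y crosses zero from above, then interpolate."""
    state = (0.0, 0.0, math.cos(theta), math.sin(theta))
    while True:
        prev, state = state, rk4_step(state, h, n, g)
        if state[1] < 0.0 <= prev[1]:
            f = prev[1] / (prev[1] - state[1])   # fraction of the last step
            return prev[0] + f * (state[0] - prev[0])

print(rng(math.pi / 4))  # less than the drag-free range sin(2*theta)/g = 1
```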
| 1,068
|
|
differential equations
|
Help recognizing partial differential equation
|
https://physics.stackexchange.com/questions/192341/help-recognizing-partial-differential-equation
|
<p>I would be very grateful if someone could tell me something about the following partial differential equation:</p>
<p>$$
\frac{\partial U}{\partial t} = K * (\frac{\partial^2 U}{\partial r^2} + (1/r)\frac{\partial U}{\partial r}).
$$</p>
<p>A friend told me that the equation models the <a href="http://en.wikipedia.org/wiki/Heat_equation" rel="nofollow">heat equation</a>, but I don't think he is right.</p>
<p>Any help?</p>
|
<p>That is the heat equation in polar coordinates with axial symmetry. The (isotropic) heat equation without sources or sinks is</p>
<p>$$
\frac{\partial U}{\partial t} - K\nabla^2U =0.
$$</p>
<p>If you look up the Laplacian operator in cylindrical coordinates, you will find that your expression matches this exactly.</p>
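<p>The match can also be verified numerically: take any smooth radial profile <span class="math-container">$U=f(r)$</span> (a Gaussian is chosen arbitrarily below) and compare the Cartesian Laplacian against <span class="math-container">$f''+f'/r$</span> by finite differences:</p>

```python
import math

def f(r):                      # arbitrary smooth radial profile
    return math.exp(-r * r)

def u(x, y):                   # the same profile as a function of (x, y)
    return f(math.hypot(x, y))

h = 1e-3
x0, y0 = 1.2, 0.5
r0 = math.hypot(x0, y0)        # r0 = 1.3

# Cartesian Laplacian u_xx + u_yy by central differences
lap_xy = (u(x0 + h, y0) + u(x0 - h, y0) + u(x0, y0 + h) + u(x0, y0 - h)
          - 4.0 * u(x0, y0)) / h**2

# Radial form f'' + f'/r by central differences
fpp = (f(r0 + h) - 2.0 * f(r0) + f(r0 - h)) / h**2
fp = (f(r0 + h) - f(r0 - h)) / (2.0 * h)
lap_r = fpp + fp / r0

print(lap_xy, lap_r)  # the two agree: same operator in different coordinates
```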
| 1,069
|
differential equations
|
Solving differential equation in perturbation theory
|
https://physics.stackexchange.com/questions/774534/solving-differential-equation-in-perturbation-theory
|
<p>The differential equation of an anharmonic Oscillator with Newtonian friction is
<span class="math-container">$$
\ddot{x}+\varepsilon \dot{x}^2+x=0
.$$</span>
The initial conditions of the system are
<span class="math-container">$$
\begin{align*}
x(0)&=1\\
\dot{x}(0)&=0
.\end{align*}
$$</span>
The system can be approximated with perturbation theory to first order, so
<span class="math-container">$$
x=x_0+\varepsilon x_1
.$$</span>
Plugging this into the differential equation and comparing the coefficients leads to
<span class="math-container">$$
\begin{align*}
\ddot{x}_0+x_0&=0\\
\ddot{x}_1+x_1&=-\dot{x}_0^2
.\end{align*}
$$</span>
The new initial conditions are
<span class="math-container">$$
\begin{align*}
x_0(0)&=1\\
\dot{x}_0(0)&=0\\
x_1(0)&=0\\
\dot{x}_1(0)&=0
.\end{align*}
$$</span>
All this leads to the solution for <span class="math-container">$x_0$</span> and <span class="math-container">$x_1$</span>
<span class="math-container">$$
\begin{align*}
x_0(t)&=\cos (t)\\
x_1(t)&=-\dfrac{1}{3}\left(\cos (t)-1\right)^2
.\end{align*}
$$</span></p>
<p><strong>Question:</strong> How is the differential equation for <span class="math-container">$x_1(t)$</span> solved?</p>
<p><strong>Attempt:</strong> I can solve the equation for <span class="math-container">$x_0(t)$</span>. My attempt for <span class="math-container">$x_1(t)$</span> was to split it into the homogeneous and inhomogeneous part and add them together in the end. I can solve the homogeneous part, but not the inhomogeneous part. I tried an approach with the variation of constants, but it doesn't seem to work. The problem could be that variation of constants only works with first-order differential equations, but that is just a guess.</p>
<p><strong>Note:</strong> I'm citing the German Wikipedia on perturbation theory <a href="https://de.wikipedia.org/wiki/St%C3%B6rungstheorie_(klassische_Physik)" rel="nofollow noreferrer">https://de.wikipedia.org/wiki/St%C3%B6rungstheorie_(klassische_Physik)</a>. It is the first example.</p>
|
<p>You want to solve the ODE</p>
<p><span class="math-container">$$\ddot x+x=-\sin^2(t)=-\frac 12\,(1-\cos(2t))$$</span></p>
<p>with the initial conditions <span class="math-container">$~x(0)=0~,\dot x(0)=0~$</span></p>
<p>Transform it to the Laplace domain, obtain the partial-fraction decomposition, and transform each fraction back to the time domain; this yields the solution <span class="math-container">$~x(t)~$</span>:</p>
<p><span class="math-container">$$X(s)\,(s^2+1)=-\frac {2}{s\,(s^2+4)}\quad\Rightarrow$$</span>
<span class="math-container">$$X(s)=-\frac{1}{s^2+1}\frac {2}{s\,(s^2+4)}=\underbrace{-\frac{1}{2s}}_{-\frac 12}+\underbrace{\frac 23\frac{s}{s^2+1}}_{+\frac 23\cos(t)}
\underbrace{-\frac 16\frac{s}{s^2+4}}_{-\frac 16\,\cos(2t)}$$</span></p>
<p><span class="math-container">$$x(t)=-\frac 12+\frac 23\cos(t)-\frac 16\cos(2t)=-\frac 13+\frac 23\cos(t)-\frac 13\cos^2(t)=-\frac 13(\cos(t)-1)^2$$</span></p>
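<p>As a cross-check, the closed form <span class="math-container">$-\frac 13(\cos(t)-1)^2$</span> can be compared against a direct numerical integration of <span class="math-container">$\ddot x + x = -\sin^2(t)$</span> (plain Python, classical RK4; the step count is an arbitrary choice):</p>

```python
import math

def closed_form(t):
    # The x_1(t) from the text: -(cos t - 1)^2 / 3
    return -(math.cos(t) - 1.0) ** 2 / 3.0

def rk4(t_end, n=2000):
    """Integrate x'' + x = -sin(t)^2 with x(0) = x'(0) = 0 by classical RK4."""
    h = t_end / n
    t, x, v = 0.0, 0.0, 0.0
    f = lambda t, x, v: (v, -x - math.sin(t) ** 2)
    for _ in range(n):
        k1x, k1v = f(t, x, v)
        k2x, k2v = f(t + h/2, x + h/2 * k1x, v + h/2 * k1v)
        k3x, k3v = f(t + h/2, x + h/2 * k2x, v + h/2 * k2v)
        k4x, k4v = f(t + h, x + h * k3x, v + h * k3v)
        x += h/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += h
    return x

print(rk4(2.0), closed_form(2.0))  # the two agree closely
```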
| 1,070
|
differential equations
|
Differential equation with step function
|
https://physics.stackexchange.com/questions/810499/differential-equation-with-step-function
|
<p>I want to solve the equations of motion for a system with a unit step function. Are there any methods that can be used to solve these? As a toy model, I picked a sliding mass bouncing off a spring.
The setup for the problem is:
<span class="math-container">$$mx''= -\Theta(x)kx \\ x(0)=u \qquad u > 0 \\ x'(0)=-v \qquad v > 0$$</span></p>
<p>The non-linearity of the step function prevents the use of most methods to solve ODEs. However, the function can be transformed into linear regions.</p>
<p>Region 1: sliding freely towards the spring:
<span class="math-container">$$mx'' = 0, x(0)=u, x'(0)=-v $$</span></p>
<p>Region 2: in contact with the spring:
<span class="math-container">$$mx'' = -kx, x(T1)=0, x'(T1)=-v$$</span></p>
<p>Region 3: sliding freely away from the spring:
<span class="math-container">$$mx'' = 0, x(T2)=0, x'(T2)=v$$</span></p>
<p>These are all solvable however for more complicated problems it will become difficult to predict what the future boundary conditions will be without iteratively solving through the regions before it (e.g. a second spring mirrored to this one would oscillate forever creating endless regions). At that point I may as well compute the solution numerically which I would like to avoid.</p>
<p>Are there any methods to solve differential equations with a step function that don't have this limitation?</p>
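<p>For this particular toy model the three regions can be stitched together in closed form, which at least makes the matching conditions explicit. A sketch with made-up values of <span class="math-container">$m, k, u, v$</span>, using <span class="math-container">$T1=u/v$</span> and <span class="math-container">$T2=T1+\pi\sqrt{m/k}$</span>:</p>

```python
import math

m, k, u, v = 1.0, 4.0, 2.0, 1.0   # illustrative parameters
w = math.sqrt(k / m)

T1 = u / v             # region 1 -> 2: mass reaches the spring at x = 0
T2 = T1 + math.pi / w  # region 2 -> 3: contact lasts half a spring period

def x(t):
    if t < T1:                          # region 1: free flight toward spring
        return u - v * t
    if t < T2:                          # region 2: harmonic contact
        return -(v / w) * math.sin(w * (t - T1))
    return v * (t - T2)                 # region 3: free flight away

print(x(0.0), x(T1), x(T2 + 1.0))  # position and velocity match at T1 and T2
```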
| 1,071
|
|
differential equations
|
Differential equation for radiation absorption
|
https://physics.stackexchange.com/questions/673412/differential-equation-for-radiation-absorption
|
<p>Let the radiation absorbed by a material be given as a function <span class="math-container">$N(x)$</span>, where <span class="math-container">$x$</span> is the material's layer thickness. In a piece with a thickness of <span class="math-container">$dx$</span>, <span class="math-container">$dN$</span> particles are absorbed. This number is proportional to the number <span class="math-container">$N(x)$</span> and to the layer thickness <span class="math-container">$dx$</span> with a proportionality constant of <span class="math-container">$\alpha$</span>.</p>
<p>How can I write down the differential equation for <span class="math-container">$N(x)$</span>?</p>
|
<p>Given that <span class="math-container">$dN$</span> is proportional to <span class="math-container">$N(x)$</span>, we must account for the beam's loss as it passes through a thickness <span class="math-container">$dx$</span> of the absorbing medium, to get the following:</p>
<p><span class="math-container">$$ dN \propto - N(x) dx $$</span></p>
<p>Rearranging the relation above after inserting the constant of proportionality <span class="math-container">$\alpha$</span>, you get:</p>
<p><span class="math-container">$$ dN = -\alpha N(x) dx$$</span></p>
<p>Upon rearranging, you get the differential equation:</p>
<p><span class="math-container">$$dN/dx + \alpha N(x) = 0$$</span></p>
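<p>The resulting attenuation law is the exponential <span class="math-container">$N(x) = N_0 e^{-\alpha x}$</span>. A minimal numerical check (plain Python, with made-up values of <span class="math-container">$\alpha$</span> and <span class="math-container">$N_0$</span>) that this solves the differential equation:</p>

```python
import math

alpha, N0 = 0.5, 1000.0    # illustrative absorption coefficient and N(0)

def N(x):
    """Candidate solution of dN/dx + alpha*N = 0 with N(0) = N0."""
    return N0 * math.exp(-alpha * x)

# Residual of the ODE, with dN/dx taken by central differences
h = 1e-6
for x in (0.0, 1.0, 3.0):
    dNdx = (N(x + h) - N(x - h)) / (2 * h)
    print(x, dNdx + alpha * N(x))  # residual is ~0 at every depth
```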
| 1,072
|
differential equations
|
What are the differential equations that model a self-propagating gravitational wave in space-time?
|
https://physics.stackexchange.com/questions/704462/what-are-the-differential-equations-that-model-a-self-propagating-gravitational
|
<p>Light is a self-propagating wave, but it's very complicated.</p>
<p>Imagine, if you will, a wave in space-time that by assumption was self-propagating like light, except that it was a <a href="https://en.wikipedia.org/wiki/Gravitational_wave" rel="nofollow noreferrer">gravitational wave</a>.</p>
<p>What are the differential equations and boundary conditions that would govern the transfer of a wave between two absorption points?</p>
<p>I'm familiar with differential equations, but not the specifics of differential geometry that might better address this.</p>
|
<p>The wave equation for a gravitational wave (GW) comes from the Einstein field equation in general relativity, in the linearized approximation. The Einstein equation is originally a non-linear DE, but we can approximate it by a linear one. Set the metric:</p>
<p><span class="math-container">$$g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \; |h_{\mu\nu}| \ll 1$$</span></p>
<p>Calculate the Christoffel symbols, the Riemann curvature tensor, and so on. These objects consist of many derivative terms of the metric, but we keep only the first-order <span class="math-container">$\mathcal{O}(h_{\mu\nu})$</span> terms. Then the Einstein equation reduces to a linear DE. (I skip many details of the process of deriving the wave equation from the Einstein equation.)</p>
<p>The small metric perturbation <span class="math-container">$h_{\mu\nu}$</span> can be decomposed into the components <span class="math-container">$h_{00}, \; h_{0i}, \; h_{ij}$</span>. For simplicity, we will use only the spatial part, in the transverse gauge. Also, assume the vacuum case. Then the DE reduces to:</p>
<p><span class="math-container">$$\square h_{\mu\nu} = 0 $$</span></p>
<p>where <span class="math-container">$h_{\mu\nu}$</span> satisfies <span class="math-container">$h_{0 \nu} = 0$</span> (purely spatial), <span class="math-container">$ \eta^{\mu\nu} h_{\mu\nu} = 0$</span> (traceless), <span class="math-container">$\partial_{\mu} h^{\mu\nu} = 0$</span> (transverse).</p>
<p>The general solution of this DE is a plane wave,</p>
<p><span class="math-container">$$h_{\mu\nu} = C_{\mu\nu} e^{i k_{\sigma} x^{\sigma}}$$</span></p>
<p>and <span class="math-container">$C_{\mu \nu}$</span> takes the form (assuming propagation in the <span class="math-container">$z$</span> direction)</p>
<p><span class="math-container">$$C_{\mu\nu} = \begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & C_1 & C_2 & 0 \\
0 & C_2 & -C_1 & 0 \\
0 & 0 & 0 & 0
\end{pmatrix} $$</span></p>
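As a quick sanity check (not part of the original derivation), one can verify symbolically that a plane-wave profile with a null wave vector satisfies the flat-space wave equation; the sketch below uses SymPy, a mostly-plus Minkowski metric, and a hypothetical wave vector along $z$:

```python
import sympy as sp

t, x, y, z, w = sp.symbols('t x y z omega', real=True)
eta = sp.diag(-1, 1, 1, 1)          # mostly-plus Minkowski metric (a convention choice)
coords = [t, x, y, z]
k = sp.Matrix([w, 0, 0, w])         # null wave vector: propagation along z

# phase k_sigma x^sigma, with the index lowered by eta
phase = sum(eta[i, i]*k[i]*coords[i] for i in range(4))
h = sp.exp(sp.I*phase)              # scalar profile multiplying C_{mu nu}

# d'Alembertian eta^{mu nu} d_mu d_nu (diagonal metric, so a single sum)
box_h = sum(eta[i, i]*sp.diff(h, coords[i], 2) for i in range(4))
print(sp.simplify(box_h))           # 0: the plane wave solves box h = 0
```

The cancellation works precisely because $k_\sigma k^\sigma = 0$; a non-null wave vector would leave a term $-k_\sigma k^\sigma\, h \neq 0$.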
<p>If we restore the inhomogeneous term on the RHS,</p>
<p><span class="math-container">$$\square h_{\mu\nu} \simeq 8 \pi G T_{\mu\nu} $$</span></p>
<p>then the solution can be expressed with the Green function and the retarded time,</p>
<p><span class="math-container">$$h_{\mu\nu}(t, \vec{x}) \simeq 8 \pi G \int \frac{1}{4\pi |\vec{x}-\vec{y}|} T_{\mu\nu}(t',\vec{y}) d^3 y $$</span></p>
<p>where <span class="math-container">$t' = t - |\vec{x}-\vec{y}|$</span> is the retarded time.</p>
<p><span class="math-container">$\textbf{Edit}$</span></p>
<p>Here is a brief outline of the linearization:</p>
<p>In the Christoffel symbols, the metric <span class="math-container">$g_{\mu\nu}$</span> is replaced by <span class="math-container">$h_{\mu\nu}$</span> in the derivative terms:</p>
<p><span class="math-container">$$\Gamma^{\rho} _{\mu \nu} = \frac{1}{2} \eta^{\rho \lambda} (\partial_{\mu} h_{\nu \lambda} + \partial_{\nu} h_{\mu \lambda} - \partial_{\lambda} h_{\mu\nu} ) $$</span></p>
<p>The <span class="math-container">$O((h_{\mu\nu})^2)$</span> terms in the Riemann curvature tensor are neglected. The Ricci tensor <span class="math-container">$R_{\mu\nu}$</span> and the Ricci scalar <span class="math-container">$R$</span> then have linear forms, too.</p>
<p><span class="math-container">$$R_{\mu\nu\rho\sigma} = \eta_{\mu\lambda} \partial_{[\rho,} \Gamma^{\lambda}_{\sigma],\nu} + O((h_{\mu\nu})^2) $$</span></p>
<p>Now, put them all into the Einstein equation; the linearized form is then yielded.</p>
<p><span class="math-container">$$ R_{\mu\nu} - \frac{1}{2} Rg_{\mu\nu}
= 8\pi G T_{\mu\nu} $$</span></p>
<p><span class="math-container">$$\frac{1}{2} (\partial_{\sigma}\partial_{(\nu,} h^{\sigma} \; _{\mu)} -\partial_{\mu} \partial_{\nu} h - \square h_{\mu\nu} - \eta_{\mu\nu} \partial_{\rho}\partial_{\sigma} h^{\rho \lambda} + \eta_{\mu\nu} \square h ) = 8\pi G T_{\mu\nu} $$</span></p>
<p>Some redundant terms can be removed with the transverse gauge assumption.</p>
| 1,073
|
differential equations
|
Specific differential equation in RLC circuit
|
https://physics.stackexchange.com/questions/112713/specific-differential-equation-in-rlc-circuit
|
<p>I have been studying differential equations in RLC circuits: specifically I am looking at </p>
<p><strong><em>a generator with fixed EMF $=E$,<br>a capacitor $C$, <br>an inductor with inductance $L$ and internal resistance $r$,<br> and a separate resistor $R$</em></strong> </p>
<p>with the elementary cases accounting for <br>
$q$ (the charge on the capacitor), <br>
$V_c$ its voltage or <br>
$i$ the current flowing through the circuit</p>
<p>For example $$\ddot q+\frac{R+r}{L}\dot q+\frac{q}{LC}=E$$</p>
<p>I've been trying to find such a differential equation for the compound voltage </p>
<p>$$V_{L,r}=V_L +V_r=ri+L\frac{di}{dt}$$ </p>
<p>which didn't seem to satisfy the criteria for a "regular ODE": $$\fbox{$\ddot V_{L,r}+\frac{R}{L}\dot V_{L,r}+\alpha V_{L,r}=\frac{\alpha r}{L}e^{-rt/L}\int e^{rt/L} \ V_{L,r} \ dt$}$$ with $\alpha=\frac{-Rr}{L^2}+\frac{1}{LC}$</p>
<p>I started with trying to express $i$ through $V_{L,r}$ as all relevant voltages are expressed in $i$ (resistor), $q$ (capacitor) and $\frac{di}{dt}$ ($V_{L,r}$). At first through this relation by applying regular ODE properties: $V_{L,r}=ri+L\frac{di}{dt} \rightarrow \fbox{$i=\frac{1}{L}e^{-rt/L} \int e^{rt/L} \ V_{L,r} \ dt$}$, and then replaced in : $E=V_{L,r}+Ri+\frac{q}{C} \rightarrow 0=\frac{dV_{L,r}}{dt}+R\frac{di}{dt}+\frac{i}{C}$ and obtained the aforementioned DE. </p>
<p>Should I be using any other physical relation?</p>
|
<p>It is not clear to me why you want to do such a complicated thing. But if you want to follow this route, a slightly easier approach is to solve for the voltage $V_L$. The KVL for your circuit is
$$\tag{1}
E=V_L+(R+r)i+\frac{q}{C}
$$
Now assuming zero initial conditions you have to express $i$ and $q$ in terms of $V_L$. The current $i$ is easily derived from the constitutive relation of the inductor:
$$\tag{2}
i(t)=\frac{1}{L}\int_0^{t}V_L(t')dt'
$$
while from (2) the charge $q(t)$ is
$$\tag{3}
q(t)=\int_0^{t}i(t')dt'=\frac{1}{L}\int_0^{t}dt'\int_0^{t'}V_L(t'')dt''
$$
substituting (2) and (3) in (1) you have the equation</p>
<p>$$\tag{4}
E=V_L+\frac{R+r}{L}\int_0^{t}V_L(t')dt'+\frac{1}{LC}\int_0^{t}dt'\int_0^{t'}V_L(t'')dt''
$$</p>
<p>Differentiating (4) twice you get:
$$
\tag{5}
\ddot{V_L}+\frac{R+r}{L}\dot{V_L}+\frac{1}{LC}V_L=0
$$</p>
| 1,074
|
differential equations
|
Confusion about Coulomb Gauge Differential Equations For $\vec{A}$ and $V$
|
https://physics.stackexchange.com/questions/307845/confusion-about-coulomb-gauge-differential-equations-for-veca-and-v
|
<p>I am currently reading Griffiths, 'Introduction to Electrodynamics', 3rd ed., Chapter 10.1.3, the section on Gauge Invariance, and have reached a point of confusion. In particular, the differential equations that arise from choosing the Coulomb gauge $\nabla \cdot \vec{A}=0$:</p>
<p>$$\nabla^2 V=-\frac{1}{\epsilon_0}\rho \tag{10.9}$$ </p>
<p>$$\nabla^2\vec{A}-\mu_0\epsilon_0\frac{\partial^2\vec{A}}{\partial t^2}=-\mu_0\vec{J} + \mu_0\epsilon_0\nabla(\frac{\partial V}{\partial t}) \tag{10.11}$$</p>
<p>I am confused about the structure of $\vec A$ and $V$, shouldn't they have some sort of $\nabla \lambda$ and $-\frac{\partial \lambda}{\partial t}$ to account for the gauge choice?</p>
<p>Here is my logical progression:</p>
<ol>
<li>We have differential equations directly from Maxwell's equations (no choice of gauge yet):</li>
</ol>
<p>$$\nabla^2 V+\frac{\partial}{\partial t}(\nabla \cdot \vec{A})=- \frac{1}{\epsilon_0}\rho\tag{10.4}$$</p>
<p>$$(\nabla^2 \vec{A} - \mu_0 \epsilon_0 \frac{\partial^2 \vec{A}}{\partial t^2})-\nabla(\nabla \cdot \vec{A}+\mu_0 \epsilon_0 \frac{\partial V}{\partial t})=-\mu_0\vec{J}\tag{10.5}$$</p>
<ol start="2">
<li>We choose the Coulomb gauge of $\nabla \cdot \vec{A_C}=0$ and therefore choose a scalar function, $\lambda$, which has the following properties:</li>
</ol>
<p>\begin{align*}
\nabla \cdot \vec{A_C}& =\nabla \cdot (\vec{A_0}+\nabla\lambda_C)\\
& =\nabla \cdot \vec{A_0} + \nabla^2\lambda_C\\
&=0
\end{align*}</p>
<p>$$ \nabla \cdot \vec{A_0} =- \nabla^2\lambda_C$$
So, in order for the Coulomb gauge to be satisfied, we must add a scalar field, $\lambda_C$, that satisfies the last expression above. This should have consequences for $V$ as well, since we need to subtract $\frac{\partial \lambda_C}{\partial t}$ from it.</p>
<ol start="3">
<li>Now, we try simplifying the scalar and vector potential differential equations 10.4 and 10.5 with our new gauge choice (terms with $\nabla \cdot \vec{A}$ will be zero):</li>
</ol>
<p>$$\nabla^2 (V-\frac{\partial \lambda_C}{\partial t})+\frac{d}{\partial t}(0)=- \frac{1}{\epsilon_0}\rho\tag{New 10.4}$$</p>
<p>$$(\nabla^2 (\vec{A}+\nabla\lambda_C) - \mu_0 \epsilon_0 \frac{\partial^2 (\vec{A}+\nabla\lambda_C)}{\partial t^2})-\nabla(0+\mu_0 \epsilon_0 \frac{\partial (V-\frac{\partial \lambda_C}{\partial t})}{\partial t})=-\mu_0\vec{J}\tag{New 10.5}$$</p>
<p>Now, this is where my confusion starts. How does Griffiths go from my "New 10.4 and New 10.5" to his "10.4 and 10.5"? It seems like he just ignored the Coulomb scalar function $\lambda_C$, but I doubt that can be it.</p>
|
<p>You have done all the work. Now just set $$\vec{A_c} = \vec{A} + \nabla\lambda_C$$ and $$V_c = V - \frac{\partial\lambda_C}{\partial t}\,,$$ and your "New" equations reduce to the Coulomb gauge equations.</p>
| 1,075
|
differential equations
|
Differential Equations in a Discharging RC Circuit in Parallel
|
https://physics.stackexchange.com/questions/556163/differential-equations-in-a-discharging-rc-circuit-in-parallel
|
<p>Please consider the following RC circuit as context:</p>
<p><a href="https://i.sstatic.net/o0gAI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o0gAI.png" alt="enter image description here"></a>
Assume that the circuit has been connected for a long time. If switch S has been opened at <span class="math-container">$t=0,$</span> the differential equation used to solve for the charge on the capacitor <span class="math-container">$Q$</span> would be, by using Kirchhoff's loop rule:
<span class="math-container">$$\frac{Q}{C}-R\frac{dQ}{dt}=0, \quad Q(0)=C\mathcal{E}$$</span></p>
<p>since, discarding the portion to the left of the capacitor, the voltage drop through the capacitor would oppose that of the resistor following the clockwise current flow. However, a professor told me that the right differential equation in this case would be:
<span class="math-container">$$\frac{Q}{C}+R\frac{dQ}{dt}=0, \quad Q(0)=C\mathcal{E}$$</span>
I simply do not understand how the voltage drop across the capacitor is negative from its bottom to its top plate. Wouldn't it act the same as a battery? Namely, wouldn't it add to the voltage in the circuit?</p>
|
<p>I believe that your professor is correct. The equation by BobD is correct, but what you are missing is that the charge on the capacitor at that time is <span class="math-container">$Q$</span>, and therefore the current flowing at that instant is <span class="math-container">$d(Q(0)-Q)/dt$</span>; that gives your correct equation.
Even if you considered your equation correct, integrating it would make the charge increase exponentially, which is not possible.
Hope this helps.</p>
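A quick numerical check of the professor's sign convention, using illustrative component values (not from the original circuit), confirms the expected exponential decay:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative values (not from the original circuit)
R, C, Q0 = 1000.0, 1e-6, 1e-6          # ohms, farads, coulombs

# Professor's equation: Q/C + R dQ/dt = 0  =>  dQ/dt = -Q/(R C), which decays
sol = solve_ivp(lambda t, Q: -Q/(R*C), (0, 5*R*C), [Q0],
                rtol=1e-10, atol=1e-15)
print(sol.y[0][-1], Q0*np.exp(-5))     # both ~ Q0 e^{-5}: exponential decay
```

Flipping the sign (the asker's version) gives $dQ/dt = +Q/(RC)$, whose solution grows exponentially, which is the unphysical behavior the answer points out.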
| 1,076
|
differential equations
|
Existence of a solution for geodesic differential equations for a singular metric
|
https://physics.stackexchange.com/questions/220298/existence-of-a-solution-for-geodesic-differential-equations-for-a-singular-metri
|
<p>In order to determine the geodesics, one must solve the following set of differential
equations
\begin{align}
\frac{d^2 x^j}{ds^2} + {j\brace h\,\,k}\frac{dx^h}{ds}\frac{dx^k}{ds} = 0,
\end{align}
where ${j\brace h\,\,k}$ is the Christoffel symbol of the second kind, which is defined as
\begin{align*}
{j\brace h\,\,k} = \frac{1}{2} g^{jl}\left[\frac{\partial g_{hl}}{\partial x^k} +
\frac{\partial g_{kl}}{\partial x^h} -
\frac{\partial g_{hk}}{\partial x^l}\right].
\end{align*}
Often, these equations can not be solved analytically, and we can only solve them
numerically.</p>
<p>When solving the geodesic differential equations, <strong>one of the main concerns seems to be the conjugate metric $g^{jk}$ and its existence</strong>. If the metric $g_{jk}$ is singular, i.e. $\det(g) = 0$, then the conjugate metric does not exist, as it is defined by
\begin{align*}
g_{jl}\,g^{lk} = \delta_j^{\phantom{j}k}.
\end{align*}</p>
<p>My <strong>question</strong> is the following:</p>
<p>Given an initial condition, does there still exist a geodesic that is a unique solution, or is it simply not possible to solve the equation?</p>
| 1,077
|
|
differential equations
|
Non-integrable differential equation and non-holonomic contraints
|
https://physics.stackexchange.com/questions/482949/non-integrable-differential-equation-and-non-holonomic-contraints
|
<p><a href="https://i.sstatic.net/4hemJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4hemJ.png" alt="enter image description here"></a></p>
<p>From the rolling constraint <span class="math-container">$v=a\dot{\phi}$</span> of a disk rolling on a plane, where <span class="math-container">$a$</span> is the radius of the disk, we can derive two differential equations of constraint:</p>
<p><span class="math-container">$dx=a\sin\theta \, d\phi$</span> </p>

<p><span class="math-container">$dy=-a\cos\theta \, d\phi$</span></p>
<p>Can you rigorously explain why I can't integrate these equations? Is it their non-integrability that makes these constraints non-holonomic?</p>
|
<p>HINT: Rewrite the first constraint as
<span class="math-container">$$
f \left[ dx +(- a \sin \theta) d\phi + (0) d\theta \right] = 0.
$$</span>
where <span class="math-container">$f(x, \theta, \phi)$</span> is some unknown integrating function. We want to know whether this can be written as
<span class="math-container">$$
dg = \frac{\partial g}{\partial x} dx + \frac{\partial g}{\partial \phi} d\phi+ \frac{\partial g}{\partial \theta} d\theta
$$</span>
Assuming that <span class="math-container">$g$</span> is a "nicely-behaved" function of the coordinates, its mixed partial derivatives are independent of the order of differentiation. Using this fact, can you show that this implies <span class="math-container">$f = 0$</span>?</p>
<p>To answer your second question: there are different definitions of "non-holonomic" constraints used by different authors. If you're following Goldstein, any constraint that cannot be written in the form <span class="math-container">$f(x_i) = 0$</span> is non-holonomic; but there are also non-holonomic constraints that can only be written as equalities among the higher derivatives, or as inequalities.</p>
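The hint can be followed through symbolically; the sketch below (a SymPy calculation, not part of the original answer) derives the two mixed-partial conditions and shows that together they force $f=0$:

```python
import sympy as sp

x, theta, phi, a = sp.symbols('x theta phi a')
f = sp.Function('f')(x, theta, phi)

# If f*(dx - a sin(theta) dphi) were an exact differential dg, then
#   g_x = f,   g_phi = -a f sin(theta),   g_theta = 0
gx, gphi, gtheta = f, -a*f*sp.sin(theta), sp.S(0)

# Equality of mixed partials gives two conditions:
c1 = sp.diff(gx, theta) - sp.diff(gtheta, x)      # forces f_theta = 0
c2 = sp.diff(gphi, theta) - sp.diff(gtheta, phi)  # -a (f_theta sin + f cos) = 0
# Using f_theta = 0, the second condition reduces to -a f cos(theta) = 0,
# so f = 0 identically: no integrating factor exists
c2_reduced = c2.subs(sp.Derivative(f, theta), 0)
print(sp.simplify(c2_reduced))                    # -a*f(x,theta,phi)*cos(theta)
```

Since $-a f\cos\theta$ can vanish for all $\theta$ only if $f\equiv 0$, the constraint one-form admits no integrating factor, which is exactly the non-holonomy the question asks about.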
| 1,078
|
differential equations
|
Solution of quadratic stochastic differential equation
|
https://physics.stackexchange.com/questions/842189/solution-of-quadratic-stochastic-differential-equation
|
<p>One can write down the solution of a linear stochastic differential equation in Ito convention of the form
<span class="math-container">$$
d\vec{x} = F\vec{x}dt+G\vec{x}dW
$$</span>
where <span class="math-container">$G,F$</span> are constant matrices and <span class="math-container">$dW$</span> is a Wiener process, e.g. as a stochastic Magnus series (assuming <span class="math-container">$F$</span> and <span class="math-container">$G$</span> do not commute). If I Taylor expand this series up to first order in <span class="math-container">$dt$</span> I will get back to the typical form of the solution
<span class="math-container">$$
\vec{x}(t) = e^{\left(F-\frac{G^2}{2}\right)t+GW(t)}\vec{x}(0)
$$</span>
My question is the following: I know that there exists no general solution for a non-linear stochastic differential equation, but is it possible to write down a general solution if I only keep terms up to first order in <span class="math-container">$dt$</span>?</p>
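Since the question has no posted answer, here is a numerical sketch of the scalar case (with illustrative coefficients), comparing an Euler–Maruyama integration of the Ito SDE against the closed-form solution quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)
F, G, x0 = 0.05, 0.2, 1.0       # illustrative scalar coefficients
T, N = 1.0, 200_000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)   # one Brownian path

# Euler-Maruyama (Ito) integration of dx = F x dt + G x dW
x = x0
for dw in dW:
    x += F*x*dt + G*x*dw

# Exact scalar solution: x(T) = x0 exp((F - G^2/2) T + G W(T))
x_exact = x0*np.exp((F - 0.5*G**2)*T + G*dW.sum())
print(x, x_exact)               # agree up to the O(sqrt(dt)) strong error
```

The $-G^2/2$ drift correction is the Ito term; dropping it (or flipping its sign) makes the two trajectories visibly disagree even on a fine grid.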
| 1,079
|
|
differential equations
|
Integrating Differential equations in General relativity
|
https://physics.stackexchange.com/questions/763593/integrating-differential-equations-in-general-relativity
|
<p>Let's say we have an equation of the form</p>
<p><span class="math-container">\begin{equation}
\nabla _\vec{u}u^\mu(\tau)=F^\mu\big[x(\tau)\big]
\end{equation}</span></p>
<p>The operation on the left hand side is the usual covariant derivative along the worldline</p>
<p><span class="math-container">\begin{equation}
\nabla_\vec{u}=u^\mu \nabla_\mu
\end{equation}</span></p>
<p>and <span class="math-container">$u$</span> is the usual 4-velocity</p>
<p><span class="math-container">\begin{equation}
u^\mu\equiv \frac{dx^\mu}{d\tau}
\end{equation}</span></p>
<p>with <span class="math-container">$x(\tau)$</span> the trajectory of the particle and <span class="math-container">$\tau$</span> proper time along its worldline. The term in the right hand side is just a forcing term evaluated at the position of the particle. Its specific form is not relevant, only that it is a vector field evaluated on the worldline.</p>
<p><strong>My question is: Is there a sense in general relativity where we can integrate this to obtain the 4-velocity?</strong></p>
<p>I'm thinking something like</p>
<p><span class="math-container">\begin{equation}
\int_{t_0}^{t_f}d\tau \nabla _\vec{u}u^\mu(\tau) = \int_{t_0}^{t_f}d\tau F^\mu\big[x(\tau)\big]\\
u^\mu(t_f)-u^\mu(t_0)=\int_{t_0}^{t_f}d\tau F^\mu\big[x(\tau)\big]
\end{equation}</span></p>
<p>I understand that integration of tensors is ill defined, but I'm wondering if there's some generalization of the fundamental theorem of calculus or tensors in curved spacetime.</p>
<p><strong>EDIT:</strong></p>
<p>I am aware of the Stokes theorem in General relativity and integrating forms against a volume element. However, in my question I'm refering to integrating over proper time along a worldline, not integrating over spacetime. Also, I'd like the result to be <span class="math-container">$u^\mu$</span> with its index free, and not contracted with some normal versor.</p>
<p><strong>EDIT:</strong></p>
<p>If there's no such notion of integration. Then the question is: How do you find the 4-velocity given that differential equation? Do parallel propagators do the job?</p>
| 1,080
|
|
differential equations
|
Number of differential equations and unknown functions in spherically symmetric black hole solution
|
https://physics.stackexchange.com/questions/620661/number-of-differential-equations-and-unknown-functions-in-spherically-symmetric
|
<p>In General Relativity, when we obtain the Schwarzschild solution, we get from Einstein's equations three differential equations but only two unknown functions [A(r) and B(r)]:</p>
<p><span class="math-container">$R_{00}=-\frac{A''}{2B}+\frac{A'}{4B}\left(\frac{A'}{A}+\frac{B'}{B}\right)-\frac{A'}{rB}=0,\\
R_{11}=\frac{A''}{2A}-\frac{A'}{4A}\left(\frac{A'}{A}+\frac{B'}{B}\right)-\frac{B'}{rB}=0,\\
R_{22}=\frac{1}{B}-1+\frac{r}{2B}\left(\frac{A'}{A}-\frac{B'}{B}\right)=0.$</span></p>
<p>Shouldn't the number of differential equations be equal to the number of unknown functions?</p>
<p>Here A(r) and B(r) are defined as</p>
<p><span class="math-container">$ds^2=A(r)dt^2-B(r)dr^2-r^2(d\theta^2+\sin\theta^2d\phi^2).$</span></p>
|
<p>You're correct that there are only two functions, meaning we only need two differential equations. Therefore one must be redundant, which is the case. Showing this can be slightly awkward though.</p>
<p>One trick is to take the derivative of the equations and work with these too. With the Einstein tensor it's a lot easier to do, but of course it can also be done with your differential equations here. I'm not sure there's any foolproof method of approaching this though.</p>
<p>A hand-wavy method is to take the derivative of all three equations (which I'll label <span class="math-container">$R'_{00}$</span>, <span class="math-container">$R'_{11}$</span> and <span class="math-container">$R'_{22}$</span>). Then if you solve <span class="math-container">$R'_{11}$</span> for <span class="math-container">$A'''(r)$</span>, solve <span class="math-container">$R'_{22}$</span> for <span class="math-container">$B''(r)$</span> and solve <span class="math-container">$R_{11}$</span> for <span class="math-container">$A''(r)$</span>, you can plug these into <span class="math-container">$R'_{00}$</span> and see it vanishes. Perhaps there's a better method here but not one that's obvious to me.</p>
<p>Alternatively, note that you can use just two of the equations to solve for <span class="math-container">$A(r)$</span> and <span class="math-container">$B(r)$</span> completely, then the third equation is automatically satisfied. Therefore one is made redundant, answering your question. However, when you have field equations that you can't find closed form solutions for, the method above is also useful for verifying the number of independent equations.</p>
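As an aside (not part of the original answer), one can check with SymPy that $A = 1 - 2M/r$, $B = 1/A$ (units with $G=1$) satisfies all three equations simultaneously, illustrating that the system is consistent despite looking overdetermined:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
A = 1 - 2*M/r                  # Schwarzschild solution (units with G = 1)
B = 1/A
Ap, Bp, App = sp.diff(A, r), sp.diff(B, r), sp.diff(A, r, 2)

# The three equations from the question, evaluated on the solution
R00 = -App/(2*B) + Ap/(4*B)*(Ap/A + Bp/B) - Ap/(r*B)
R11 = App/(2*A) - Ap/(4*A)*(Ap/A + Bp/B) - Bp/(r*B)
R22 = 1/B - 1 + r/(2*B)*(Ap/A - Bp/B)

print([sp.simplify(e) for e in (R00, R11, R22)])   # [0, 0, 0]
```

The cancellation in $R_{00}$ and $R_{11}$ hinges on $A'/A + B'/B = 0$, i.e. $AB = \text{const}$, which is exactly what combining those two equations gives.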
| 1,081
|
differential equations
|
Drawing the circuit from a differential equation
|
https://physics.stackexchange.com/questions/77258/drawing-the-circuit-from-a-differential-equation
|
<p>Can you please help me in modelling a circuit using the differential equation? In the following equation, $u(t)$ is the input voltage and $y(t)$ is the output voltage.</p>
<p>$$y(t)=2u(t)+3\frac{du(t)}{dt}+4\int_0^tu(t)dt.$$</p>
<p>How do I draw a circuit such that the input voltage $u(t)$ and the output voltage $y(t)$ are related by this differential equation?</p>
|
<p>I believe you simply need 4 op-amps. Here is what you need for differentiation and integration:</p>
<p><a href="http://en.wikipedia.org/wiki/Operational_amplifier_applications#Integration_and_differentiation" rel="nofollow">http://en.wikipedia.org/wiki/Operational_amplifier_applications#Integration_and_differentiation</a></p>
<p>So you simply amplify $u$ by a factor of 2 with one op-amp circuit, then use two op-amps for the differentiation and the integration, and at the end add them all together with another op-amp. Here is what you need for amplification and addition:</p>
<p><a href="http://en.wikipedia.org/wiki/Operational_amplifier_applications#Amplifiers" rel="nofollow">http://en.wikipedia.org/wiki/Operational_amplifier_applications#Amplifiers</a></p>
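Independently of the op-amp realization, the input-output relation itself can be sanity-checked numerically; the sketch below feeds in a test input $u(t)=\sin t$ and compares against the closed form $y = 2\sin t + 3\cos t + 4(1-\cos t)$ (the derivative is taken analytically here; a real circuit would differentiate the signal itself):

```python
import numpy as np

t = np.linspace(0, 10, 100_001)
dt = t[1] - t[0]
u = np.sin(t)
du = np.cos(t)                                 # exact derivative of u
# running trapezoid approximation of the integral term
iu = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1])/2)*dt))

y = 2*u + 3*du + 4*iu                          # y = 2u + 3u' + 4 int_0^t u
y_exact = 2*np.sin(t) + 3*np.cos(t) + 4*(1 - np.cos(t))
print(np.max(np.abs(y - y_exact)))             # tiny discretization error
```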
| 1,082
|
differential equations
|
Differential equation from Schwarzschild metric
|
https://physics.stackexchange.com/questions/837660/differential-equation-from-schwarzschild-metric
|
<p>I tried to solve an exercise related to the Schwarzschild metric, and at some point found the following question: <a href="https://physics.stackexchange.com/q/620576/">Question</a></p>
<p>I can't figure out how the first line turns out.</p>
<blockquote>
<p>With studying Schwarzschild metric geodesics one can easily come up with the following differential equation
<span class="math-container">\begin{equation}
\dfrac{dr}{d\tau} = - \sqrt{C^2-\left( 1-\dfrac{2GM}{r}\right)}
\end{equation}</span>
which relates the radial coordinate and the proper time outside the event horizon <span class="math-container">$r_H=2GM$</span> (I'm using, of course, <span class="math-container">$c=1$</span>).</p>
</blockquote>
<p>Can anyone explain how we get this?</p>
|
<p>It looks like I found an answer in the book</p>
<blockquote>
<p>Chandrasekhar S. The Mathematical Theory of Black Holes. Vol. 1. Oxford Univ. Press, 1983. 107 p.</p>
</blockquote>
<p>, but I would still appreciate any help in answering some questions (well known to physicists) that come up in the proof.</p>
<p>Let us take the Schwarzschild metric
<span class="math-container">$$\mathrm{d} s^2=(1-2 M / r)(\mathrm{d} t)^2-\frac{(\mathrm{d} r)^2}{1-2 M / r}-r^2\left[(\mathrm{~d} \theta)^2+(\mathrm{d} \varphi)^2 \sin ^2 \theta\right]$$</span></p>
<p>The Lagrangian in our case looks like this:
<span class="math-container">$$\mathscr{L}=1 / 2\left[(1-2 M / r) \dot{t}^2-\dot{r}^2 /(1-2 M / r)-r^2 \dot{\theta}^2-\left(r^2 \sin ^2 \theta\right) \dot{\varphi}^2\right],$$</span> where the dot means differentiation with respect to <span class="math-container">$\tau$</span>.</p>
<p>We are interested in the canonical momentum <span class="math-container">$p_t$</span>:
<span class="math-container">$$
p_t=\frac{\partial \mathscr{L}}{\partial \dot{t}}=\left(1-\frac{2 M}{r}\right)\dot{t} ,
$$</span></p>
<p><span class="math-container">$$
\frac{\mathrm{d} p_t}{\mathrm{~d} \tau}=\frac{\partial \mathscr{L}}{\partial t}=0 \; (\text{since } \mathscr{L} \text{ does not depend on } t),
$$</span></p>
<p><span class="math-container">$$
p_t=\left(1-\frac{2 M}{r}\right) \frac{\mathrm{d} t}{\mathrm{~d} \tau}=\mathrm{const}=E,
$$</span></p>
<p>At this point everything became clear. In this problem there is no angular motion, so the Schwarzschild metric reduces to
<span class="math-container">$$\mathrm{d} s^2=(1-2 M / r)(\mathrm{d} t)^2-\frac{(\mathrm{d} r)^2}{1-2 M / r}$$</span></p>
<p>Since <span class="math-container">$ds^2=c^2d\tau^2=d\tau^2$</span> (for a timelike worldline, with <span class="math-container">$c=1$</span>), we have:</p>
<p><span class="math-container">$$\mathrm{d} \tau^2=(1-2 M / r)(\mathrm{d} t)^2-\frac{(\mathrm{d} r)^2}{1-2 M / r}$$</span></p>
<p>And substituting <span class="math-container">$\frac{dt}{d\tau}$</span> into the expression we simply get:
<span class="math-container">$$
\left(\frac{d r}{d \tau}\right)^2=2 M / r-\left(1-E^2\right)
$$</span></p>
<p>I am not sure about all the signs here, but is it mostly right?</p>
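For the special case $E = 1$ (radial fall from rest at infinity), the equation $dr/d\tau = -\sqrt{2M/r}$ integrates in closed form, which gives a check on the derivation; a numerical sketch in units $G=c=1$ with an arbitrary starting radius:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Radial infall from rest at infinity: E^2 = 1, so dr/dtau = -sqrt(2M/r)
M, r0 = 1.0, 10.0                     # units with G = c = 1
sol = solve_ivp(lambda tau, r: [-np.sqrt(2*M/r[0])],
                (0, 5.0), [r0], rtol=1e-10, atol=1e-12)

# Closed form: r(tau)^(3/2) = r0^(3/2) - (3/2) sqrt(2M) tau
r_exact = (r0**1.5 - 1.5*np.sqrt(2*M)*sol.t[-1])**(2/3)
print(sol.y[0][-1], r_exact)          # numeric and exact solutions agree
```

Note the minus sign in $dr/d\tau$ selects the infalling branch, matching the sign convention in the quoted equation.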
| 1,083
|
differential equations
|
Book recommendations for Fourier Series, Dirac Delta Function and Differential Equations?
|
https://physics.stackexchange.com/questions/518442/book-recommendations-for-fourier-series-dirac-delta-function-and-differential-e
|
<p>I'm a second-year undergrad currently taking a course in Mathematical Physics which covers Dirac delta functions, Fourier series, Fourier transforms, and differential equations. The recommended text is Boas' "Mathematical Methods in the Physical Sciences". However, I find the book too "wishy-washy" and focused on irrelevant points for my taste. Can anyone suggest a book which explains the concepts in a straightforward, to-the-point manner, with real-world examples?</p>
|
<p>For a down to earth but rigorous account distributions and delta functions (but not so much differential equations) you can't beat James Lighthill's <em>Introduction to Fourier analysis and generalised functions</em>, Cambridge University Press. ISBN 978-0-521-05556-7.</p>
<p>The book is quite thin, only 70 pages or so. It is written at the undergraduate level. Although he uses test functions (he calls them "good functions") to define how distributions work --- just as in advanced books for mathematicians --- there is not much sophisticated mathematical formalism, and what there is, is well matched to what physics students learn.</p>
<p>The book has many applications to Fourier series and Fourier integrals of exactly the type one meets in physics papers and that are not often explained in regular mathematical methods classes. Lighthill is a great expositor (I took his "Waves in fluids" class when I was an undergrad and it was one of the best classes I had) and the book is well set out for self study. Amazon has used copies for about $14.</p>
| 1,084
|
differential equations
|
Understanding the Terms in Coupled Springs Differential Equation
|
https://physics.stackexchange.com/questions/392542/understanding-the-terms-in-coupled-springs-differential-equation
|
<p>I am teaching differential equations and I got myself totally confused about the physics of a problem.</p>
<p>Consider a coupled spring system in series: there is a mass $m_1$ on a horizontal track which is connected to a wall by a spring (with natural length $L_1$ and spring constant $k_1$). Also attached to the first mass is a second mass $m_2$ on the same horizontal track and is connected to the first mass by a spring (with natural length $L_2$ and spring constant $k_2$).</p>
<p>The reference that I'm using goes through the standard derivation where $x_1$ is the displacement of the first mass and $x_2$ is the displacement of the second mass, deriving
\begin{align*}
m_1\frac{d^2x_1}{dt^2}&=-(k_1+k_2)x_1+k_2x_2\\
m_2\frac{d^2x_2}{dt^2}&=k_2x_1-k_2x_2.
\end{align*}</p>
<p>I wanted to rewrite this system in terms of the stretch/compression of each spring. In particular, $y_1=x_1$ is the displacement of the first spring from its natural length and $y_2=x_2-x_1$ is the displacement of the second spring from its natural length. Substituting these into the differential equations, we get
\begin{align*}
m_1\frac{d^2y_1}{dt^2}&=-k_1y_1+k_2y_2\\
m_2\left(\frac{d^2y_1}{dt^2}+\frac{d^2y_2}{dt^2}\right)&=-k_2y_2.
\end{align*}
The first equation makes sense to me (in terms of the net force on the first mass), but I don't see where the force
$$
m_2\frac{d^2y_1}{dt^2}
$$
is coming from. I tried to think of the second half of the system moving rigidly as $m_1$ moves, but this didn't lead to this differential equation.</p>
<p><strong>TL;DR</strong></p>
<p>What is the physical significance of the
$$
m_2\frac{d^2y_1}{dt^2}
$$
term?</p>
|
<p>When you apply Newton's laws of motion you are assuming (usually without even thinking about it) that the frame(s) of reference are inertial frames. </p>
<p>In this case you can think of your coordinates $x_1, \, x_2$ and $y_1$ being measured in inertial frames of reference which are all fixed to the Earth. </p>
<p>However note that your coordinate $y_2$ has its zero referenced to a position $x_1$ which is accelerating relative to the Earth. </p>
<p>You therefore cannot directly apply Newton's laws of motion in that non-inertial frame of reference in which $y_2$ is measured. </p>
<p>In that non-inertial frame of reference you can measure a position coordinate $y_2$ of mass $m_2$ and you know the force applied by the spring on that mass $-k_2y_2$. </p>
<p>To use Newton's laws a pseudo force of magnitude $-m_2\ddot y_1 \,(=-m_2 \ddot x_1)$ must be introduced. </p>
<p>Applying Newton's second law in that non-inertial frame of reference one gets $$-k_2y_2-m_2\ddot y_1 =m_2 \ddot y_2$$ which is your fourth equation. </p>
<hr>
<p>Just imagine the second spring not being there and mass $m_2$ being connected by a rigid rod to mass $m_1$. </p>
<p>You are observing mass $m_2$ in the non-inertial frame in which you measure $y_2$. </p>
<p>In that non-inertial frame $y_2$ does not change but there is a force on mass $m_2$ due to the rod and that force is equal to $m_2\ddot y_1$.</p>
<p>If you now include the pseudo force and apply Newton's second law in that non-inertial frame, $m_2\ddot y_1- m_2\ddot y_1 = m_2\ddot y_2 \Rightarrow \ddot y_2 =0$, which is what is observed.</p>
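The bookkeeping behind the puzzling term can also be verified symbolically; the sketch below (SymPy, not from the original answer) shows that the fourth equation, $m_2(\ddot y_1 + \ddot y_2) = -k_2 y_2$, is identical to Newton's second law for $m_2$ written in the inertial coordinates:

```python
import sympy as sp

t = sp.symbols('t')
m2, k2 = sp.symbols('m2 k2', positive=True)
x1, x2 = sp.Function('x1')(t), sp.Function('x2')(t)

y1, y2 = x1, x2 - x1                 # spring-stretch coordinates
# the equation in question: m2 (y1'' + y2'') + k2 y2 = 0
lhs = m2*(sp.diff(y1, t, 2) + sp.diff(y2, t, 2)) + k2*y2
# Newton's law for m2 in inertial coordinates: m2 x2'' = k2 x1 - k2 x2
newton2 = m2*sp.diff(x2, t, 2) - (k2*x1 - k2*x2)
print(sp.simplify(lhs - newton2))    # 0: the two equations coincide
```

This makes the point algebraically: $\ddot y_1 + \ddot y_2 = \ddot x_2$, so the "extra" $m_2\ddot y_1$ term is just part of the ordinary inertial acceleration of $m_2$.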
| 1,085
|
differential equations
|
Dimensional analysis in differential equations
|
https://physics.stackexchange.com/questions/273711/dimensional-analysis-in-differential-equations
|
<p>I know how to use the Buckingham Pi Theorem to, for example, derive the functional equation for a simple pendulum, with the usual methods also described <a href="https://projects.exeter.ac.uk/fluidflow/Courses/FluidDynamics3211-2/DimensionalAnalysis/dimensionalLecturese4.html" rel="nofollow">here</a>:</p>
<p>$1=fn\left[T_{period}, m, g, L\right]$</p>
<p>$1=fn\left[\frac{g}{L}T_{period}^2\right]=fn\left[\Pi_1\right]$</p>
<p>Everything seemed to work well until I tried to apply the theorem to the governing differential equation:</p>
<p>$(I): m\frac{d^2\Theta}{dt^2}L = - sin(\Theta)mg$</p>
<p>$1 = fn\left[\Theta, m, g, L, t\right]$</p>
<p>$1 = fn\left[\Theta, \frac{g}{L}t^2\right] = fn\left[\Theta, \Pi_1\right]$</p>
<p>Seems to work so far, but when I now try to rewrite $I$ in terms of $\Theta, \Pi_1$, I can't deal with the derivative.</p>
<hr>
<p>I tried something myself and it <em>seems</em> to work out, but I don't know how rigorous the argumentation is and if the result can be stated in a better way. See the following:</p>
<p>Since we want to take a derivative with respect to time, we define a new dimensionless variable $\xi$ with $\xi\bar{t} = t$ for an arbitrary $\bar{t} \neq 0$. We also introduce a new function $\Omega\left(\xi\right) = \Theta\left(\xi\bar{t}\right)=\Theta\left(t\right)$. Computing, $\frac{d\Omega}{d\xi} = \bar{t}\,\Theta'\left(\xi\bar{t}\right)=\bar{t}\,\Theta'\left(t\right)$, and likewise for higher derivatives.</p>
<p>We now write the functional equation including derivatives like so</p>
<p>$1 = fn\left[\frac{d^2\Theta}{dt^2}, \Theta, m, g, L, t\right]$</p>
<p>substituting $t=\xi\bar{t}, \frac{d^2\Theta}{dt^2}=\frac{1}{\bar{t}^2}\frac{d^2\Omega}{d\xi^2}, \Theta=\Omega$ we have the new functional equation</p>
<p>$1 = fn\left[\frac{1}{\bar{t}^2}\frac{d^2\Omega}{d\xi^2}, \Omega, m, g, L, \bar{t}, \xi\right]$</p>
<p>because the derivative is now in terms of two nondimensional parameters, everything else seems to work great with Buckingham Pi:</p>
<p>$1 = fn\left[\frac{d^2\Omega}{d\xi^2}, \Omega, \frac{g}{L}\bar{t}^2, \xi\right] = fn\left[\frac{d^2\Omega}{d\xi^2}, \Omega, \Pi_2, \xi\right]$</p>
<p>When resubstituting in the actual equation we get</p>
<p>$\frac{d^2\Omega}{d\xi^2} = -sin\left(\Omega\right)\Pi_2$</p>
<p>which indeed yields the correct results.</p>
<hr>
<p>Although I can't seem to find any mistakes in my reasoning, I am not quite happy with the arbitrary choice of $\bar{t}$. The choice <em>does</em> influence the group $\Pi_2$ (although only in magnitude) and also <em>will</em> influence the boundary conditions when they are given. As far as I can see, it does not influence the result when given in natural parameter form (in terms of $m, g, L, t$), because every $\bar{t}$ will get paired again with one $\xi$, yielding just $t$. But I have not been able to prove this point.</p>
<hr>
<p>I remain with three questions:</p>
<ol>
<li>What is the standard approach in literature that I can study?</li>
<li>Is there a problem with my derivations, so far?</li>
<li>Can I somehow prove that the choice of $\bar{t}$ does not matter in the final solution?</li>
</ol>
|
<p>You have done more than half the work yourself. It is convenient to define $\Pi_1\equiv \sqrt{\frac{g}{L}}t$. There is nothing wrong with the way you have defined it, but my definition reduces work in what follows. Rewrite the derivatives as:</p>
<p>$\frac{d\theta}{dt}=\frac{d\theta}{d\Pi_1}\frac{d\Pi_1}{dt}=\frac{d\theta}{d\Pi_1}\sqrt{\frac{g}{L}}$</p>
<p>$\frac{d^2\theta}{dt^2}=\frac{d^2\theta}{d\Pi_1^2}\left(\frac{d\Pi_1}{dt}\right)^2=\frac{d^2\theta}{d\Pi_1^2}\frac{g}{L}$</p>
<p>So your differential equation becomes</p>
<p>$\frac{d^2\theta}{d\Pi_1^2}=-\sin \theta$.</p>
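<p>A quick numerical sanity check (my addition, with arbitrary sample values): integrating the dimensional and the dimensionless forms separately, $\theta(t)$ should coincide with the dimensionless solution evaluated at $\Pi_1=\sqrt{g/L}\,t$:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

g, L, theta0 = 9.81, 2.0, 0.5   # arbitrary sample values, released from rest

# dimensional form: theta'' = -(g/L) sin(theta)
dim = lambda t, y: [y[1], -(g / L) * np.sin(y[0])]
# dimensionless form: theta'' = -sin(theta), with Pi_1 = sqrt(g/L) t
nondim = lambda p, y: [y[1], -np.sin(y[0])]

t_end = 5.0
sol_t = solve_ivp(dim, (0, t_end), [theta0, 0.0],
                  rtol=1e-10, atol=1e-12, dense_output=True)
sol_p = solve_ivp(nondim, (0, np.sqrt(g / L) * t_end), [theta0, 0.0],
                  rtol=1e-10, atol=1e-12, dense_output=True)

# compare theta(t) with the dimensionless solution at Pi_1 = sqrt(g/L) t
t = np.linspace(0, t_end, 50)
err = np.max(np.abs(sol_t.sol(t)[0] - sol_p.sol(np.sqrt(g / L) * t)[0]))
print(err)   # tiny, at the level of the solver tolerance
```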
| 1,086
|
differential equations
|
A differential equation of Buckling Rod
|
https://physics.stackexchange.com/questions/40885/a-differential-equation-of-buckling-rod
|
<p>I tried to solve a differential equation, but unfortunately got stuck at some point. </p>
<p>The problem is to solve the differential equation of a rod that is rigidly clamped at both ends,
with a force compressing the rod at both ends.
The solution $v(x)$ is the deflection I need.</p>
<p>I assume that the differential equation of the buckling rod is
$$ EI_{x}v''''+Pv''=0$$
where $P$ is the force and $EI_x$ is the flexural rigidity.</p>
<p>Then I find the solution of the differential equation:
$$v(x) = \frac{1}{\sqrt{P}}\left(\frac{c_2 \sin(\sqrt{P} x)}{\sqrt{P}}+\frac{c_1 \cos(\sqrt{P} x)}{\sqrt{P}}\right)+c_4 x+c_3$$
The boundary conditions $v(0)=v(l)=v'(0)=v'(l)=0$ give only the trivial solution for $c_{1},c_{2},c_{3},c_{4}$,
but I need a non-trivial solution.</p>
<p>Could you please help me to find the mistake or explain what's wrong in my equation?</p>
|
<p>First, the solution to your equation is not exactly what you got, but,</p>
<p>$$v(x) = C_1 \cos ax + C_2 \sin ax + C_3x + C_4$$</p>
<p>where $a^2 = \frac{P}{EI_x}$. And then you need to look more carefully at your boundary conditions...</p>
<p>$$v(0) = C_1 + C_4 = 0,\ C_4 = -C_1$$
$$v'(0) = C_2 a + C_3 = 0,\ C_3 = -aC_2$$
$$v(l) = C_1 \cos al + C_2 \sin al + C_3 l + C_4 = C_1(\cos al - 1) +C_2(\sin al -al) = 0$$
$$v'(l) = -C_1 a \sin al + C_2 a \cos al +C_3 = -C_1 a \sin al + C_2 a (\cos al - 1) = 0$$</p>
<p>The last two equations have the trivial solution, but may have a non-trivial solution if the system is degenerate. In this case it equates to the determinant of the coefficient matrix being 0, or:</p>
<p>$$(\cos al -1)^2 + \sin al (\sin al - al)=0$$</p>
<p>Working on this, you can eventually reach that there is a non-trivial solution if</p>
<p>$$\cos al = \frac{4 \pm a^2l^2}{4+a^2l^2}$$.</p>
<p>The simplest of the two solutions comes when $\cos al = 1$; then $al = 2\pi n$. For $n=1$, you get a non-trivial solution for </p>
<p>$$P = \frac{4 \pi^2 EI}{L^2}$$</p>
<p>which is the critical load for a doubly clamped buckling rod.</p>
<p>The other branch, $\cos al = \frac{4 - a^2l^2}{4+a^2l^2}$, together with the determinant condition reduces to $\tan(al/2) = al/2$, which also has infinitely many solutions: one at $al=0$ and the next at $al \approx 8.99$; see the graph below where both sides of the equation have been plotted. (The intersection near $al \approx 4$ does not satisfy the full determinant equation, because $\sin al$ has the wrong sign there.) Since $8.99 > 2\pi$, the effective critical load is the one at $al = 2\pi$.</p>
<p><img src="https://i.sstatic.net/c9EYb.png" alt="enter image description here"></p>
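<p>A quick numerical check (my addition): expanding the determinant condition gives $2 - 2\cos x - x\sin x = 0$ with $x = al$, and its smallest positive roots can be located with a standard bracketing root finder:</p>

```python
import numpy as np
from scipy.optimize import brentq

# determinant condition, with x = a*l; expanding
# (cos x - 1)^2 + sin x (sin x - x) gives 2 - 2 cos x - x sin x
det = lambda x: 2.0 - 2.0 * np.cos(x) - x * np.sin(x)

root1 = brentq(det, 6.0, 7.0)   # the cos(al) = 1 branch: expect 2*pi
root2 = brentq(det, 8.5, 9.5)   # the tan(al/2) = al/2 branch: expect ~8.9868
print(root1, root2)
```

<p>The first root is $2\pi$, reproducing the critical load $P = 4\pi^2 EI/L^2$; the next root sits near $x \approx 8.99$.</p>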
| 1,087
|
differential equations
|
Search for differential equation from Green function
|
https://physics.stackexchange.com/questions/496283/search-for-differential-equation-from-green-function
|
<p>Let's consider the following: </p>
<blockquote>
<p>We have a Green function <span class="math-container">$G$</span>, and we want to know what linear differential equation is solved by <span class="math-container">$G$</span>. </p>
</blockquote>
<p>How to do this? The question is: if I know <span class="math-container">$G$</span>, is there a method that allows solving the equation <span class="math-container">$LG=\delta$</span> with respect to <span class="math-container">$L$</span>? In other words, normally we have the differential equation, and we try to get the Green function <span class="math-container">$G$</span> in order to solve it. Here I know the Green function of the equation, but try to obtain the equation that is solved by <span class="math-container">$G$</span>.</p>
|
<p>The following is a bit of an inductive approach and it would probably not work for all Green functions. The basic equation that you want to solve is
<span class="math-container">$$ \hat{D} G(\mathbf{x}) = \delta(\mathbf{x}) , $$</span>
where <span class="math-container">$\hat{D}$</span> is the differential operator that you want to find and <span class="math-container">$G$</span> is the Green function, which is known. Say for instance your Green function is given by
<span class="math-container">$$ G(\mathbf{x})=\int \frac{1}{m^2+|\mathbf{k}|^2}\exp(i \mathbf{k}\cdot\mathbf{x}) d^3k . $$</span>
If one can somehow get rid of the denominator inside the integral, one can see that the result would produce the Dirac delta function. So, the differential operator must produce <span class="math-container">$m^2+|\mathbf{k}|^2$</span> when it operates on the exponential function inside the integral. For each factor of <span class="math-container">$\mathbf{k}$</span>, we need a gradient operator, which would bring down an <span class="math-container">$i\mathbf{k}$</span>. It then follows that the required differential operator is
<span class="math-container">$$ \hat{D}=m^2-\nabla^2 . $$</span></p>
<p>This inductive approach is perhaps not very useful for a general case, but it should cover most of the typical cases that one finds. If there is another case that you are interested in that cannot be treated in this way, please include it in the question and then we can think how to deal with it.</p>
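<p>As an illustration, here is a one-dimensional analogue of this check (my own, not from the question): the Green function of <span class="math-container">$m^2 - d^2/dx^2$</span> in 1D is <span class="math-container">$e^{-m|x|}/(2m)$</span>, and applying the operator numerically should give something that vanishes away from the origin and integrates to 1, like <span class="math-container">$\delta(x)$</span>:</p>

```python
import numpy as np

m = 1.3
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
G = np.exp(-m * np.abs(x)) / (2 * m)   # 1D Green function of (m^2 - d^2/dx^2)

# apply the candidate operator with central differences
D2G = np.zeros_like(G)
D2G[1:-1] = (G[2:] - 2 * G[1:-1] + G[:-2]) / dx**2
res = m**2 * G - D2G

# away from the origin the residual vanishes ...
print(np.max(np.abs(res[np.abs(x) > 0.5])))
# ... and it integrates to 1, as a delta function should
print(np.sum(res) * dx)
```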
| 1,088
|
differential equations
|
Dimensionless expression for differential equation
|
https://physics.stackexchange.com/questions/521952/dimensionless-expression-for-differential-equation
|
<p>I am working through <em>Nonlinear Dynamics and Chaos</em> by Steven H Strogatz. In chapter 3.5 (overdamped beads on a rotating hoop), a differential equation is converted into a dimensionless form. I am trying to work out which dimensions the initial equations had, and why the converted form is dimensionless.</p>
<p>Initial equation:</p>
<p><span class="math-container">$mr \ddot{\phi} = -b \dot{\phi} -mg \sin\phi + mr \omega^2 \sin\phi \cos \phi $</span></p>
<p><span class="math-container">$m$</span> is mass, <span class="math-container">$r$</span> is radius, <span class="math-container">$\phi$</span> is an angle, <span class="math-container">$b,g$</span> are arbitrary, positive constants, and <span class="math-container">$\omega$</span> is angular velocity.</p>
<p>Using a characteristic time <span class="math-container">$T$</span>, a dimensionless time <span class="math-container">$\tau$</span>, with <span class="math-container">$\tau = \frac{t}{T}$</span> is introduced. </p>
<p><span class="math-container">$\dot{\phi}$</span> and <span class="math-container">$\ddot{\phi}$</span> then become <span class="math-container">$\frac{1}{T}\frac{d\phi}{d\tau}$</span> and <span class="math-container">$\frac{1}{T^2}\frac{d^2\phi}{d\tau^2}$</span>, respectively.</p>
<p>Then the initial equation becomes </p>
<p><span class="math-container">$\frac{mr}{T^2}\frac{d^2\phi}{d\tau^2} = -\frac{b}{T}\frac{d\phi}{d\tau} - m g \sin\phi + mr \omega^2 \sin\phi \cos \phi$</span></p>
<p>This is made dimensionless by dividing through by the force <span class="math-container">$mg$</span>:</p>
<p><span class="math-container">$(\frac{r}{gT^2})\frac{d^2\phi}{d\tau^2} = (-\frac{b}{mgT})\frac{d\phi}{d\tau} - \sin\phi + (\frac{r \omega^2}{g}) \sin\phi \cos \phi$</span></p>
<p>And all the expressions in the brackets are dimensionless.</p>
<p>I understand why the expressions in the brackets are dimensionless, but what about the differentials?</p>
<p><span class="math-container">$\phi$</span> is dimensionless.</p>
<p>but would
<span class="math-container">$\dot{\phi}$</span> not have dimension <span class="math-container">$\frac{1}{s}$</span>, and <span class="math-container">$\ddot{\phi}$</span> <span class="math-container">$\frac{1}{s^2}$</span>?</p>
|
<p>Because you take the derivative with respect to <span class="math-container">$\tau$</span>. Since <span class="math-container">$\tau$</span> is dimensionless, the derivative is too.</p>
| 1,089
|
differential equations
|
Setting up differential equations for two-level Rabi problem
|
https://physics.stackexchange.com/questions/388677/setting-up-differential-equations-for-two-level-rabi-problem
|
<p>I try to follow the derivation of Rabi two-level problem but I went into trouble when attempting to set up the equations as many notes have suggested.</p>
<p>I am using the book I am reading (Laser cooling and trapping by Metcalf and Straten). We start by writing down Schrodinger's equation for a two-level system where the Hamiltonian is given by $H=H_0 + H'$, and $H_0$ absorbs all diagonal terms from $H'$ so that the resulting Schrodinger equations are a pair of coupled differential equations.
$$i\hbar\frac{\partial\psi}{\partial t} = H \psi$$
and $$|\psi\rangle = c_g |g\rangle + c_e e^{-i\omega_{eg}t}|e\rangle$$
where $|g\rangle$ is the ground state, $|e\rangle$ is the excited state, and $\omega_{eg}=\omega_e-\omega_g$. </p>
<p>The coupled equations are
$$\begin{align}
i\hbar \dot{c}_g(t) &= c_e(t)H'_{ge}(t) e^{-i\omega_{eg}t}\\
i\hbar \dot{c}_e(t) &= c_g(t)H'_{eg}(t) e^{i\omega_{eg}t}
\end{align}
$$</p>
<p>In the book, the author uses $H'(t)=-e\vec{E}(\vec{r},t) \cdot \vec{r}$ and with a plane wave travelling in the $z$-direction, the electric field operator becomes $\vec{E}(\vec{r},t)=E_0\hat{\epsilon}\cos(kz-\omega_l t)$, where $\omega_l$ is the laser frequency. Now if we define Rabi frequency as
$$\Omega\equiv \frac{-eE_0}{\hbar}\langle e|r|g\rangle,$$
the element of $H'$ becomes $H'_{eg}=\hbar \Omega \cos (kz-\omega_l t)$.</p>
<p>Here is where I run into problems, I plug in the expression for $H'_{eg}$, differentiate the second equation and use both first order equations to eliminate $c_g(t)$. However the resulting equation contains $z$ dependence that I do not know how to get rid of. </p>
<p>Referring to <a href="http://community.dur.ac.uk/thomas.billam/PreviousNotes_MPAJones.pdf" rel="nofollow noreferrer">this note</a>, I see they use the same expression for $H'$ but they also ignored the $z$ dependence. I am wondering what I have missed in considering setting up the equations.</p>
|
<p>You are probably implicitly making the assumption that the wavelength of this wave is much larger than the size of the atom, so that $kz \ll 1$. Here is why I think that:</p>
<p>I see that you are using the interaction picture, so that $i\hbar \frac{\partial}{\partial t} \lvert \psi \rangle = H'\lvert \psi \rangle$, then, applying $\langle g \rvert$ and $\langle e \rvert$ we obtain the two equations:</p>
<p>$$
i\hbar \frac{\partial}{\partial t}C_g(t) = C_g(t)\langle g \rvert H' \lvert g \rangle + C_e(t)e^{-i\omega_{eg}t}\langle g \rvert H' \lvert e \rangle
$$
$$
i\hbar \frac{\partial}{\partial t}C_e(t) = C_g(t)e^{i\omega_{eg}t}\langle e \rvert H' \lvert g \rangle + C_e(t)\langle e \rvert H' \lvert e \rangle
$$</p>
<p>But in your calculations $\langle g \rvert H' \lvert g \rangle = \langle e \rvert H' \lvert e \rangle = 0$. This usually follows (as far as I know) from a parity argument, because, for $g$ (and similarly for $e$):</p>
<p>$$
\langle g \rvert H' \lvert g \rangle = \langle g \rvert -e E_0 \cos(kz-\omega t) \hat{\epsilon} \cdot \vec{r} \lvert g \rangle = -eE_0\hat{\epsilon}\cdot \int d^3 \vec{r} |\psi_ g(\vec{r})|^2\vec{r} \cos(kz-\omega t)
$$</p>
<p>If $kz \ll 1$, we can consider the cosine to be constant for the integration, take it out of the integral and get 0 as a result because the resulting integrand will be an odd function ($|\psi|^2$ is even and $\vec{r}$ is odd).</p>
<p>Now, assuming that we are taking $kz \ll 1$,</p>
<p>$$
H'_{ge} = \langle g \rvert -eE_0 \cos(kz-\omega t)\hat{\epsilon} \cdot \vec{r} \rvert e \rangle = -eE_0 \langle g \lvert \cos(kz - \omega t) \hat{\epsilon} \cdot \vec{r} \rvert e \rangle
$$
$$
H'_{ge} \approx -eE_0 \cos(\omega t) \langle g \rvert \hat{\epsilon} \cdot \vec{r} \lvert e \rangle
$$</p>
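<p>To see what the approximation buys you, one can integrate the coupled equations numerically with the full $\cos(\omega t)$ coupling and compare with the rotating-wave prediction $P_e(t)=\sin^2(\Omega t/2)$ on resonance. This is my own sketch with arbitrary parameters, in the regime $\Omega \ll \omega$:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

Omega = 1.0   # Rabi frequency (arbitrary choice)
w = 50.0      # w_l = w_eg = w >> Omega, i.e. driving on resonance

# coupled equations from the question with H'_eg = hbar*Omega*cos(w t),
# written in terms of the real and imaginary parts of c_g, c_e
def rhs(t, y):
    cg, ce = y[0] + 1j*y[1], y[2] + 1j*y[3]
    dcg = -1j * Omega * np.cos(w*t) * np.exp(-1j*w*t) * ce
    dce = -1j * Omega * np.cos(w*t) * np.exp(+1j*w*t) * cg
    return [dcg.real, dcg.imag, dce.real, dce.imag]

t = np.linspace(0, 2*np.pi/Omega, 400)   # one full Rabi cycle
sol = solve_ivp(rhs, (t[0], t[-1]), [1, 0, 0, 0], t_eval=t,
                rtol=1e-9, atol=1e-11)
Pe = sol.y[2]**2 + sol.y[3]**2

# rotating-wave prediction for the excited-state population
err = np.max(np.abs(Pe - np.sin(Omega*t/2)**2))
print(err)   # small, of order Omega/w (counter-rotating corrections)
```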
| 1,090
|
differential equations
|
Should every physical problem formulated as a differential equation have a mathematical solution?
|
https://physics.stackexchange.com/questions/354051/should-every-physical-problem-formulated-as-a-differential-equation-have-a-mathe
|
<p>I encountered the following statement in Boyce's <em>Elementary Differential Equations and Boundary Value Problems</em> : </p>
<blockquote>
<p>Not all differential equations have solutions; nor is the question of existence purely mathematical. If a meaningful physical problem is correctly formulated mathematically as a differential equation, then the mathematical problem should have a solution. </p>
</blockquote>
<p>Is this true? </p>
|
<p>Maybe there is more context that qualifies this statement, but taken as is, it's completely false. In general, when we talk about existence of solutions to a differential equation, we're talking about existence given a certain set of boundary conditions. It's perfectly possible, in practical real-world problems, that we can have constraints on the boundary conditions, and if our boundary conditions don't satisfy those constraints, there is no solution.</p>
<p>For example, I could write down a differential equation representing the free motion of a body in a viscous medium. I could then specify the following boundary conditions: at $t=0$ its velocity is zero, and at $t=t_f>0$ its velocity is nonzero. There is no such solution.</p>
<p>In physical problems where we specify the initial conditions, we want not just existence but uniqueness of solutions.</p>
<p>Less trivially, we can have examples of inconsistency or indeterminism (solution exists, but is not unique) in physical problems that come up in interesting, "meaningful" contexts. Examples include Norton's dome, naked singularities, and the Novikov consistency principle.</p>
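<p>The viscous-medium example can be made concrete in a few lines (a sketch, with arbitrary constants): since the only solution of $m\dot v=-bv$ with $v(0)=0$ is $v\equiv 0$, no integration ever reaches a nonzero $v(t_f)$:</p>

```python
from scipy.integrate import solve_ivp

# free motion in a viscous medium: m dv/dt = -b v
m, b = 1.0, 0.5
sol = solve_ivp(lambda t, v: [-(b/m) * v[0]], (0, 10.0), [0.0])
print(sol.y[0, -1])   # 0.0: with v(0) = 0 the solution stays identically zero,
                      # so a boundary condition v(t_f) != 0 cannot be met
```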
| 1,091
|
differential equations
|
Differential equation for describing a moving disk
|
https://physics.stackexchange.com/questions/732826/differential-equation-for-describing-a-moving-disk
|
<p>I'm doing some self-study on physics and came across this problem:</p>
<blockquote>
<p>A disk rolls without slipping across a horizontal plane. The plane of the disk remains vertical, but it is free to rotate about a vertical axis. What generalized coordinates may be used to describe the motion? Write a differential equation describing the rolling constraint. Is this equation integrable? Justify your answer by a physical argument. Is the constraint holonomic?</p>
</blockquote>
<p>This is a problem in the book Thornton/Marion's Classical Dynamics of Particles and Systems,
and there's a <a href="https://www.chegg.com/homework-help/student-solutions-manual-for-thornton-marion-s-classical-dynamics-of-particles-and-systems-5th-edition-chapter-7-solutions-9780534408978" rel="nofollow noreferrer">Chegg solution</a> which goes:</p>
<p><a href="https://i.sstatic.net/VBuLM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VBuLM.png" alt="enter image description here" /></a></p>
<p>My question is: How did they arrive at the differential equation <span class="math-container">$$ dx\cos(\phi) + dy\sin(\phi) = R\,d\theta~? $$</span></p>
|
<p><a href="https://i.sstatic.net/tERZN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tERZN.png" alt="enter image description here" /></a></p>
<p>the disk can rotate about the z-axis with the angle <span class="math-container">$~\varphi~$</span> and about the y-axis with the angle <span class="math-container">$~\theta~$</span>, thus the rotation matrix is:</p>
<p><span class="math-container">$$\mathbf R=\mathbf R_z(\varphi)\,\mathbf R_y(\theta)$$</span></p>
<p>from here you obtain the angular velocity in inertial system</p>
<p><span class="math-container">$$\vec\omega= \left[ \begin {array}{c} -\sin \left( \varphi \right) \dot\theta
\\ \cos \left( \varphi \right) \dot\theta
\\ \dot\varphi \end {array} \right]
$$</span></p>
<p>the disk can move in the x-y plane with the velocities <span class="math-container">$~\dot x~,\dot y$</span></p>
<p><span class="math-container">$$\vec v_d=\left[ \begin {array}{c} {\dot x}\\ {\dot y}
\\ 0\end {array} \right]
$$</span></p>
<p>the velocity at contact point with the plane is zero (roll condition) . you obtain
<span class="math-container">$$\vec v_c=\vec v_d-\vec \omega\times\, \begin{bmatrix}
0 \\
0 \\
-R \\
\end{bmatrix}=\vec 0\quad\Rightarrow$$</span></p>
<p><span class="math-container">$${\dot x}=-\cos \left( \varphi \right) \dot\theta \,R\tag 1$$</span>
<span class="math-container">$$ {\dot y}=-\sin \left( \varphi \right) \dot\theta \,R\tag 2$$</span> <span class="math-container">$\quad\Rightarrow$</span>
<span class="math-container">$$\frac{dy}{dx}=\tan(\varphi)$$</span></p>
<p>equations (1) and (2) are the nonholonomic constraint equations</p>
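<p>A numerical way to see the nonholonomy (my addition): integrate the constraint equations along two paths that end at the same <span class="math-container">$(\theta,\varphi)$</span> but order the rolling and the pivoting differently; the contact point <span class="math-container">$(x,y)$</span> ends up in different places, so the constraint cannot be integrated to a relation among the coordinates alone. Sample values are arbitrary:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

R, Theta, Phi = 1.0, 2.0, np.pi/2   # sample roll angle and heading change

def roll(phi, theta_total):
    # integrate xdot = -R cos(phi) thetadot, ydot = -R sin(phi) thetadot
    # while rolling by theta_total at fixed heading phi; pivoting in place
    # has thetadot = 0 and moves nothing, so it is omitted
    rhs = lambda t, s: [-R*np.cos(phi)*theta_total, -R*np.sin(phi)*theta_total]
    return solve_ivp(rhs, (0, 1), [0.0, 0.0], rtol=1e-10).y[:, -1]

a = roll(0.0, Theta)   # path A: roll first, then pivot to Phi
b = roll(Phi, Theta)   # path B: pivot to Phi first, then roll

print(a, b)   # same final (theta, phi), different contact points (x, y)
```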
| 1,092
|
differential equations
|
Frobenius method for fourth-order differential equation
|
https://physics.stackexchange.com/questions/839455/frobenius-method-for-fourth-order-differential-equation
|
<p>I am trying to reproduce some results from a paper
<a href="https://iopscience.iop.org/article/10.1209/epl/i1998-00235-7" rel="nofollow noreferrer">https://iopscience.iop.org/article/10.1209/epl/i1998-00235-7</a></p>
<p>The authors solved a 4th order partial differential equation
<span class="math-container">$$\nabla^4 u+(2-\lambda)\nabla^2 u+2\lambda(1+u)=0$$</span>
where <span class="math-container">$u(\theta,\phi)$</span> is a function of angular position, and <span class="math-container">$\nabla^2=\frac{1}{\sin\theta}\partial_\theta(\sin\theta\ \partial_\theta)+\frac{1}{\sin^2\theta}\partial_\phi^2$</span> is the angular Laplacian. First they expressed the general solution as a multipole expansion as follows
<span class="math-container">$$u=-1+\sum_{n=0}^\infty u_n(\theta)\cos(n\phi)$$</span>
then they defined <span class="math-container">$u_n$</span> as a Frobenius series
<span class="math-container">$$u_n=\left(\frac{1+s}{1-s}\right)^{n/2}\sum_{k=0}^\infty\chi_{nk}(1+s)^k$$</span>
where <span class="math-container">$s=\cos\theta$</span>. From this they found the following recurrence relation
<span class="math-container">$$\chi_{nk}=\frac{2k(k-2)+\lambda}{2k(k+n)}\chi_{nk-1}-\frac{k(k-1)(k-2)(k-3)+\lambda(k^2-3k+4)}{4k(k-1)(k+n)(k+n-1)}\chi_{nk-2}$$</span>
I wasn't able to reproduce this result. I am assuming they did a calculation similar to the usual Frobenius method for second-order differential equations. What surprises me is that despite the biharmonic operator they ended up with the same style of recurrence relation as for the second-order equation. I worked it out myself and ended up with extra terms like <span class="math-container">$\chi_{nk-3}$</span> and <span class="math-container">$\chi_{nk-4}$</span>, as expected. Can someone figure out what I missed here?</p>
| 1,093
|
|
differential equations
|
How are the differential forms for Maxwell's Equations used?
|
https://physics.stackexchange.com/questions/466189/how-are-the-differential-forms-for-maxwells-equations-used
|
<p>I am currently reading up on Maxwell's Equations (specifically Ampere's Circuital Law- with Maxwell's Addition) for a presentation on differential equations.</p>
<p>I chose the topic ignorant of how the differential form of these equations is used, and I cannot seem to find a digestible use of their differential form anywhere.</p>
<p>My understanding of the differential forms is that their mathematical representations in these forms are easier to grasp than their integral forms (they make more physical sense). However, I am certain they are used for more than just this, but I cannot seem to find any examples of this.</p>
<p>That being said, my questions are:</p>
<ol>
<li>Are the differential forms for Maxwell's Equations used in practice?</li>
<li>How are they used? (If possible, please provide a mathematical model and/or link to its derivation or result)</li>
</ol>
<p>In particular, I am focused on Ampere's Law, so answers involving just this equation are okay too.</p>
|
<p>The integral forms of Maxwell's equations are fairly useless unless you have situations with very high degrees of symmetry and/or fields aligned along co-ordinate axes. e.g. The beloved examples of undergraduate physics everywhere of spherical and cylindrical charge and current distributions.</p>
<p>Once you move away from these situations then the integral forms become extremely difficult to use in practice because they do not apply at a point. If you wish to numerically solve the equations then it is far easier to do that starting off with a set of <em>differential</em> equations that are already in the form that are amenable to solving on a "grid".</p>
<p>A second reason to move to the differential forms is to show how electromagnetic waves can exist and can be generated from accelerating charge and current distributions. </p>
<p>The differential forms also allow you to intuitively grasp some aspects of electromagnetism far more easily. e.g. If I ask you whether the field <span class="math-container">$\vec{B} = x\vec{i}$</span> is a valid description for a magnetic field, it is far easier to say that it can't be because the divergence is non-zero than to perform closed surface integrals for (potentially) an infinite number of possible closed surfaces.</p>
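<p>The divergence check described in the last paragraph takes only a few lines in a computer algebra system; a sketch in SymPy:</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
Bx, By, Bz = x, 0, 0   # candidate field B = x i
div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
print(div_B)   # 1, i.e. nonzero, so B = x i cannot be a magnetic field
```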
| 1,094
|
differential equations
|
Is it possible that classical propagator be used as an integrating factor for solving differential equations?
|
https://physics.stackexchange.com/questions/816994/is-it-possible-that-classical-propagator-be-used-as-an-integrating-factor-for-so
|
<p>I have two questions about the picture.</p>
<p>1)
I think the classical propagator itself is not a function, but an operator.</p>
<p>And "(operator)(function)" means the operator acting on the function, not the product "(operator) × (function)".</p>
<p>So it seems that the product rule can't be applied when differentiating.</p>
<p>Then, is it possible that the classical propagator be used as an integrating factor for solving differential equations?</p>
<p>2)
The first term in the differential equation is partial derivative.</p>
<p>Then, is it possible to integrate both sides with respect to x ("dx") while solving a differential equation using integrating factor?</p>
<p><a href="https://i.sstatic.net/Dal2Wio4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Dal2Wio4.png" alt="enter image description here" /></a></p>
|
<p>I'm answering, assuming <span class="math-container">$L_0$</span> is a scalar.</p>
<h2>Integration in space</h2>
<blockquote>
<p>Then, is it possible to integrate both sides with respect to x ("dx") while solving a differential equation using integrating factor?</p>
</blockquote>
<p>If you perform this operation, you're integrating over the space variable and thus computing a multiple of the average field <span class="math-container">$\int_x f(x,t) dx = L \overline{f}(t)$</span>. Thus, you'd get a dynamical equation for the average value of the function: you're losing the information on the point value of <span class="math-container">$f(x,t)$</span>, without solving the evolution in time. You can do it, but I'm not sure this is what you want.</p>
<h2>Integration in time</h2>
<p>The solution of the linear equation is the <strong>convolution</strong> (not the "usual" product) of the "impulsive response" (the propagator) and the forcing term in time domain, and the product of the transforms in transformed domains (here Laplace). You can prove it both in time and transformed domains.</p>
<p><strong>Time domain.</strong> Starting from the equation</p>
<p><span class="math-container">$$\left( \frac{\partial}{\partial t} + i L_0 \right) \Delta f(x,t) = - i \Delta L(t) F(x) \ ,$$</span></p>
<p>you can multiply by <span class="math-container">$e^{i L_0 t}$</span> in order to get</p>
<p><span class="math-container">$$\frac{\partial}{\partial t} \left( e^{i L_0 t} \Delta f(x,t) \right) = e^{i L_0 t} \left( -i \Delta L(t) F(x) \right) \ ,$$</span></p>
<p>that, after integration in time, becomes</p>
<p><span class="math-container">$$e^{i L_0 t} \Delta f(x,t) - e^{i L_0 t_0} \Delta f(x,t_0) = \int_{\tau=t_0}^t e^{i L_0 \tau} \left( -i \Delta L(\tau) F(x) \right) d \tau$$</span></p>
<p><span class="math-container">$$\Delta f(x,t) = e^{i L_0 (t_0 - t)} \Delta f(x,t_0) + \int_{\tau=t_0}^t e^{i L_0 (\tau - t)} \left( -i \Delta L(\tau) F(x) \right) d \tau$$</span></p>
<p><strong>Laplace domains.</strong></p>
<p><span class="math-container">$$s\Delta \hat{f}(x,s) - \Delta \hat{f}(x,0) + i L_0 \Delta \hat{f}(x,s) = - i \Delta \hat{L}(s) F(x)$$</span></p>
<p><span class="math-container">$$\hat{f}(x,s) = \frac{1}{s + i L_0} \Delta \hat{f}(x,0) +\frac{1}{s + i L_0} (-i \Delta \hat{L}(s) F(x)) \ .$$</span></p>
| 1,095
|
differential equations
|
Physical meaning of this boundary value differential equation
|
https://physics.stackexchange.com/questions/371393/physical-meaning-of-this-boundary-value-differential-equation
|
<p>(I originally posted this on math stack exchange but was advised to post it here)</p>
<p>I am considering the following boundary value problem:
$$-\frac{\mathrm{d}}{\mathrm{d}x} \left[ a(x) \frac{\mathrm{d}}{\mathrm{d}x}(u(x)) \right] + c(x)u(x) = f(x),$$
where $x \in [0,1]$ and $u(0) = u(1) = 0.$</p>
<p>I searched through the Boyce and DiPrima differential equations book but did not find any physical interpretation of the differential equation above with the given boundary conditions. From what I've seen, the equation above arises as a result of solving PDEs. I'm looking for a physical interpretation for the diff eq itself.</p>
<p>More specifically, it'd be great if someone can point me to a reference which specifies what $a(x),c(x),f(x),u(x)$ can mean. I already have an idea of how my $a(x)$ will be represented. I intend to play around with $c(x),f(x)$ to obtain interesting looking solutions $u(x)$. However, I don't want to just blindly play around with $c(x),f(x)$ not knowing what they mean.</p>
<p>Suggestions appreciated.</p>
|
<p>I encountered the type of equation you mention in lectures I heard on "Methods for the solution of ordinary & partial differential equations". As an example of its usefulness, the "heat equation in thermodynamic equilibrium" was cited. In thermodynamic equilibrium the partial time derivative of the temperature "field" is zero, whereas an (in time) constant heat source $f(x)$ is assumed to be non-zero. The solution of this boundary problem is the temperature distribution $u(x)$. The coefficient $a(x)$ corresponds to an anisotropic diffusion coefficient (the coefficient $c$ is zero in this example, I regret). If you consult Wikipedia on "heat equation" you will get a good overview of most of the details of this type of problem.
In the meantime it is considered more of an engineering problem, which might explain why you did not find much information on it in the physics literature. (Problems of this type also exist in elasticity theory.)</p>
<p>It is a typical engineering problem where, for instance, some material part of a mechanical device (defined by a region $\Omega$) is exposed to a constant heat source, and the question is whether a certain (unacceptable) temperature limit is exceeded somewhere in $\Omega$. The approach is often by the use of finite elements (FE), as presented in the lecture I heard; plenty of commercial software for doing this job already exists.
Therefore this problem type, I would say, has already left the field physicists are really interested in.</p>
<p>But the heat equation only differs from the Schroedinger equation by an imaginary $i$ in the time-dependent part; this $i$ is no longer present in the time-independent Schroedinger equation, so the time-independent Schroedinger equation (SE) also has the form of the equation you cite. However, very often the functions $c(x)$ in the SE have a singular point, which requires different solution techniques from the first example I mentioned.</p>
<p>Of course I only mentioned one or two examples; there are plenty of other uses of this equation. However, as already mentioned for the SE, the properties of the functions $a$, $c$, $u$ and $f$ assumed in the mathematical lectures are often not fulfilled in equations of this type in physics, so physicists use other techniques to solve such equations. Those are often so different that you will hardly recognize them compared to those typically shown in mathematical lectures (e.g. the one I heard).</p>
| 1,096
|
differential equations
|
Two parameter differential equation solution
|
https://physics.stackexchange.com/questions/810734/two-parameter-differential-equation-solution
|
<p>I am working on the paper titled "Energetic and entropic effects of bath-induced coherences" (<a href="https://arxiv.org/abs/1905.02013" rel="nofollow noreferrer">https://arxiv.org/abs/1905.02013</a>) and there is a two-parameter differential equation for calculating the population rates:
<span class="math-container">$$
\dot{P}_{+} = -4[G(\omega) - G(-\omega)]P_{o} - 4[2G(\omega) + G(-\omega)]P_{+} + 4G(\omega)r \\
\dot{P}_{o} = 4G(\omega)P_{+} - 4G(-\omega)P_{o}
$$</span>
However, the paper defines a new quantity to obtain the results,
<span class="math-container">$$q^{\pm} = {P}_{+} + (1 \pm \sqrt{G(-\omega)/G(\omega)})P_{o} $$</span>
with the corresponding eigenvalues
<span class="math-container">$$a^{\pm} = 4 (\pm \sqrt{G(-\omega)G(\omega)}-G(\omega)-G(-\omega)) $$</span>
My question is: is this an approach to solving the main equations, or am I making a mistake?</p>
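<p>One consistency check that can be done without solving anything (my addition, not from the paper): the quoted <span class="math-container">$a^{\pm}$</span> should be the eigenvalues of the coefficient matrix of the homogeneous part of the two rate equations. A quick numerical check with arbitrary sample values:</p>

```python
import numpy as np

Gp, Gm = 0.7, 0.2   # sample values for G(w) and G(-w) (my choices)

# coefficient matrix of the homogeneous part, acting on (P_+, P_o)
A = 4*np.array([[-(2*Gp + Gm), -(Gp - Gm)],
                [Gp,           -Gm      ]])

eig = np.sort(np.linalg.eigvals(A))
quoted = np.sort(4*np.array([+np.sqrt(Gm*Gp) - Gp - Gm,
                             -np.sqrt(Gm*Gp) - Gp - Gm]))
print(eig, quoted)   # the two lists agree
```

<p>The trace and determinant of the matrix match the sum and product of the quoted <span class="math-container">$a^{\pm}$</span>, so the <span class="math-container">$q^{\pm}$</span> construction is indeed a diagonalization of the linear system.</p>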
| 1,097
|
|
differential equations
|
Trouble solving partial differential equation with Laplacian squared
|
https://physics.stackexchange.com/questions/582593/trouble-solving-partial-differential-equation-with-laplacian-squared
|
<p>I am working in extensions of General Relativity Theory and at the moment of taking the Newtonian limit of this extension theory (essentially, mathematically speaking, this is just linearizing the field equations obtained via the variational principle, but this is not important) I arrive at the following partial differential equation:
<span class="math-container">\begin{equation}
\nabla^2 h+b\nabla^4h=\alpha \delta^3(\vec r)+\beta \dfrac{1}{r}e^{-r/\gamma}.
\end{equation}</span>
Here, <span class="math-container">$\nabla^2$</span> is the Laplacian operator, <span class="math-container">$\nabla^4=\nabla^2\nabla^2$</span> is the ''squared'' of Laplacian operator, <span class="math-container">$\alpha,\beta$</span> and <span class="math-container">$\gamma$</span> are just real constants, <span class="math-container">$\delta^3(\vec r)$</span> is the three dimensional Dirac delta, <span class="math-container">$r$</span> is the variable and <span class="math-container">$h(r)$</span> is the function we're solving for (physically, it is basically the Newtonian potential).</p>
<p>I am having a lot of trouble solving this differential equation. I tried to solve it by Fourier transforms but I have not been able to find the analytic expression for <span class="math-container">$h(r)$</span>. Even so, I know the solution must be what in physics we call Yukawa-like potentials, i.e., the solution must be of the form <span class="math-container">$\dfrac{k_1}{r}$</span> plus terms like <span class="math-container">$\dfrac{k_2}{r}e^{-mr}$</span>.</p>
<p>Can someone help me out to find the solution?</p>
|
<p>Take the Fourier transform of each side with
<span class="math-container">$$
h(x) = \int \tilde h(k) e^{-ikx} \frac{d^3k}{(2\pi)^3}
$$</span>
so that
<span class="math-container">$$
\nabla^2 h(x)= \int \left\{-|k^2|\tilde h(k)\right\} e^{-ikx} \frac{d^3k}{(2\pi)^3}, \quad etc.
$$</span></p>
<p>Here <span class="math-container">$|k^2| = k_x^2+k_y^2+k_z^2$</span>.</p>
<p>As (I think!)
<span class="math-container">$$
\frac{e^{-mr}}{4\pi r}= \int e^{-ikx}\frac 1 {|k^2|+m^2}\frac{d^3k}{(2\pi)^3}, \qquad r=|x|,
$$</span></p>
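<p>As a sanity check on that identity, one can verify numerically that the Yukawa function <span class="math-container">$e^{-mr}/(4\pi r)$</span> obeys <span class="math-container">$(\nabla^2-m^2)\phi=0$</span> away from the origin, which is exactly what the momentum-space factor <span class="math-container">$1/(|k^2|+m^2)$</span> encodes. This is a minimal sketch; the values of <span class="math-container">$m$</span> and the sample radii are arbitrary test numbers, and the radial Laplacian <span class="math-container">$\nabla^2 f = \frac{1}{r}\frac{d^2}{dr^2}(rf)$</span> is approximated by a finite difference.</p>

```python
import math

def yukawa(r, m):
    # Yukawa potential e^{-m r} / (4 pi r)
    return math.exp(-m * r) / (4 * math.pi * r)

def radial_laplacian(f, r, h=1e-4):
    # For a spherically symmetric f, nabla^2 f = (1/r) d^2/dr^2 [r f(r)];
    # the second derivative is approximated with a central finite difference.
    g = lambda s: s * f(s)
    return (g(r + h) - 2 * g(r) + g(r - h)) / (h * h) / r

m = 2.0
for r in (0.5, 1.0, 3.0):
    lhs = radial_laplacian(lambda s: yukawa(s, m), r)
    rhs = m * m * yukawa(r, m)
    # (nabla^2 - m^2) applied to the Yukawa vanishes away from r = 0
    assert abs(lhs - rhs) < 1e-5 * abs(rhs) + 1e-10
```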
<p>we get
<span class="math-container">$$
(-|k^2|+ b |k^2|^2)\, \tilde h(k) = \alpha +\beta \frac{1}{|k^2|+m^2} .
$$</span>
This leads to
<span class="math-container">$$
\tilde h(k)= -\left(\alpha +\beta \frac{1}{|k^2|+m^2}\right) \left(\frac {1}{|k^2|-b |k^2|^2}\right)\\
= -\left(\alpha +\beta \frac{1}{|k^2|+m^2}\right) \left(\frac {1}{|k^2|(1-b |k^2|)}\right).
$$</span>
Now invert the FT to get
<span class="math-container">$$
h(x)= -\int \left(\alpha +\frac{\beta}{|k|^2+m^2}\right)\left(\frac{e^{-ikx}}{|k|^2(1-b |k|^2)}\right) \frac{d^3k}{(2\pi)^3}.
$$</span>
Resolve the various factors using partial fractions to get a sum of Yukawas.</p>
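<p>The first split needed here is <span class="math-container">$\frac{1}{|k|^2(1-b|k|^2)} = \frac{1}{|k|^2} + \frac{b}{1-b|k|^2}$</span>; each resulting factor then inverts to a Coulomb or Yukawa-type term. A quick numerical check of that identity (with <span class="math-container">$b$</span> and the sample values of <span class="math-container">$|k|^2$</span> chosen arbitrarily, away from the pole at <span class="math-container">$|k|^2=1/b$</span>) is sketched below.</p>

```python
# Verify the partial-fraction split 1/(k2*(1 - b*k2)) = 1/k2 + b/(1 - b*k2)
def lhs(k2, b):
    return 1.0 / (k2 * (1.0 - b * k2))

def rhs(k2, b):
    return 1.0 / k2 + b / (1.0 - b * k2)

b = 0.3
for k2 in (0.1, 1.0, 7.5):
    assert abs(lhs(k2, b) - rhs(k2, b)) < 1e-12 * abs(lhs(k2, b))
```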
<p>I have not tried doing the latter, so I have not yet seen your divergence problem.</p>
| 1,098
|
differential equations
|
Amplitude of Oscillation Without Solving Differential Equation
|
https://physics.stackexchange.com/questions/406265/amplitude-of-oscillation-without-solving-differential-equation
|
<p>I am currently working on a problem in my own research. There seems to be a weak analogy between my problem and motion on a spring. Therefore, I am exploring this question for a mass oscillating on a spring, in the hope of gaining further insight into my own system.</p>
<p>Here is the idea: We can write out the differential equation of motion for a mass on a spring</p>
<p>$m\ddot x=-kx$</p>
<p>Even though we can find an analytic solution to this equation, let's assume that we can only solve this equation numerically.</p>
<p>Let's say we start the mass at rest at some non-zero initial position. We know that the amplitude of the oscillation will be equal to the magnitude of the initial position (for example, if we start at x = 5 m, then the amplitude will be 5 m). </p>
<p>My question is this: Is there a way to determine that this is true of the amplitude without actually solving the differential equation? In other words, can we use the equation (and perhaps other things we know about the system, such as the velocity being 0 and the acceleration being at a maximum at the maximum position) to determine the maximum position without actually solving the differential equation?</p>
<p>I know this is kind of vague, so if more information is needed please let me know.</p>
|
<p>The answer is yes, because this equation of motion conserves energy. At any time, $E = \frac{1}{2}m\dot{x}^2 + \frac{1}{2}kx^2$ is constant because
$$\frac{dE}{dt} = \dot{x}\left(m\ddot{x}+kx\right) = 0$$</p>
<p>This means that if we know the initial conditions $x(t=0)$ and $\dot{x}(t=0)$ we know the energy, and we can use that to find the position and velocity at later times. In particular, as $\dot{x}^2$ decreases, $x^2$ must increase, and since $\dot{x}^2$ cannot be smaller than $0$, $|x|$ is maximal when
$$\frac{1}{2}kx^2 = E$$
or
$$x_{max} = \sqrt{\frac{2E}{k}}$$</p>
<p>If you don't know both the position and the velocity at some point in time then this technique does not work, but then there's very little you can say about the motion anyway.</p>
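<p>This energy bound is easy to confirm against a purely numerical solution. The sketch below (with arbitrary test values $m=1$, $k=4$, $x_0=5$, $v_0=0$) integrates $m\ddot x = -kx$ with a classic fourth-order Runge–Kutta step and checks that the largest $|x|$ encountered matches $\sqrt{2E/k}$, here $5$.</p>

```python
import math

# Integrate m x'' = -k x with RK4, starting at rest at x0 = 5,
# and compare max |x| against the energy bound sqrt(2E/k).
m_, k_ = 1.0, 4.0
x, v = 5.0, 0.0
E = 0.5 * m_ * v * v + 0.5 * k_ * x * x  # conserved energy

def acc(x):
    return -k_ * x / m_

dt, steps = 1e-3, 20000  # covers several oscillation periods
xmax = abs(x)
for _ in range(steps):
    # classic RK4 update for the pair (x, v)
    k1x, k1v = v, acc(x)
    k2x, k2v = v + 0.5 * dt * k1v, acc(x + 0.5 * dt * k1x)
    k3x, k3v = v + 0.5 * dt * k2v, acc(x + 0.5 * dt * k2x)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x)
    x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
    v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    xmax = max(xmax, abs(x))

# amplitude agrees with sqrt(2E/k) = 5 to numerical accuracy
assert abs(xmax - math.sqrt(2 * E / k_)) < 1e-5
```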
| 1,099