Columns: category, title, question_link, question_body, answer_html, __index_level_0__
differential equations
Why are differential equations called differential equations?
https://math.stackexchange.com/questions/4631/why-are-differential-equations-called-differential-equations
<p>Why are differential equations called differential equations?</p>
<p>Because they are equations (with the variable being a function, not a number) that involve a function and its derivatives (the functions obtained by differentiating it).</p>
400
differential equations
Coddington&#39;s An Introduction to Differential Equations, Tenenbaum&#39;s Ordinary Differential Equations or Ince&#39;s Ordinary Differential Equations?
https://math.stackexchange.com/questions/3539971/coddingtons-an-introduction-to-differential-equations-tenenbaums-ordinary-dif
<p>Which of these books, Coddington's An Introduction to Differential Equations, Tenenbaum's Ordinary Differential Equations, or Ince's Ordinary Differential Equations, is best for learning differential equations (at least ordinary differential equations)?</p>
<p>Zill &amp; Wright's book &quot;Differential Equations with Boundary-Value Problems&quot; is pretty good. I'm still using this book in my final year of undergrad studies.</p> <p>Wish I could attach the textbook here as I have a pdf copy.</p>
401
differential equations
Martin Braun&#39;s Differential Equations and Their Applications or Arnold&#39;s Ordinary Differential Equations plus Evans&#39; Partial Differential Equations?
https://math.stackexchange.com/questions/3540034/martin-brauns-differential-equations-and-their-applications-or-arnolds-ordinar
<p>My objective is to study Ordinary and Partial Differential Equations in a theoretical way. I have 2 options on buying differential equations books in mind. These options are: (1) Differential Equations and Their Applications by Martin Braun and (2) Ordinary Differential Equations by V. I. Arnold plus Partial Differential Equations by Lawrence C. Evans. The first option is without doubt more affordable. However, the second may be more complete and more suitable for my purpose of theoretical study. What do you recommend? </p>
402
differential equations
Systems of Differential Equations and higher order Differential Equations.
https://math.stackexchange.com/questions/391002/systems-of-differential-equations-and-higher-order-differential-equations
<p>I've seen how one can transform a higher order ordinary differential equation into a system of first-order differential equations, but I haven't been able to find the converse. Is it true that one can transform any system into a higher-order differential equation? If so, is there a general method to do so?</p>
<p>If I am understanding your question, you just reverse the process on the last equation of the system.</p> <p>An $n^{th}$ order differential equation can be converted into an $n$-dimensional system of first order equations.</p> <p>There are various reasons for doing this, one being that a first order system is much easier to solve numerically (using computer software), and most differential equations you encounter in “real life” (physics, engineering, etc.) don’t have nice exact solutions.</p> <p>If the equation is of order $n$ and the unknown function is $y$, then set:</p> <p>$$x_1 = y, \quad x_2 = y', \quad \ldots, \quad x_n = y^{(n-1)}.$$</p> <p>Note (and then note again) that we only go up to the $(n-1)^{st}$ derivative in this process. Let's do an example in both directions (practice on some equations that have been converted to such systems, and make sure you can work backward).</p> <p><strong>Forward Approach</strong></p> <p>$$\tag 1 y^{(4)} - 3y' y'' + \sin(t y'') -7ty^2 = e^t$$</p> <p>Let: $x_1 = y, x_2 = y', x_3 = y'', x_4 = y'''$ and substitute into $(1)$, yielding:</p> <ul> <li>$x_1' = y' = x_2$</li> <li>$x_2' = y'' = x_3$</li> <li>$x_3' = y''' = x_4$</li> <li>$x_4' = y^{(4)} = 3y'y''-\sin(ty'')+7ty^2+e^t = 3x_2x_3-\sin(tx_3)+7tx_1^2+e^t$</li> </ul> <p><strong>Backward Approach</strong></p> <p>Looking at the last equation of the system, we let: $y = x_1, y' = x_2, y''=x_3, y'''=x_4$ and substitute into the system's last equation above, yielding:</p> <ul> <li>$y^{(4)} = 3y'y''-\sin(ty'')+7ty^2+e^t$</li> </ul>
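To complement the answer above, here is a minimal numerical sketch (using scipy, with made-up zero initial conditions) of the forward direction: equation (1) rewritten as the four-dimensional first-order system and integrated on $[0,1]$.

```python
# Sketch: integrating the 4th-order ODE from the answer as a first-order
# system. The initial conditions here are arbitrary, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x):
    # x = [y, y', y'', y''']
    x1, x2, x3, x4 = x
    return [x2, x3, x4,
            3*x2*x3 - np.sin(t*x3) + 7*t*x1**2 + np.exp(t)]

sol = solve_ivp(rhs, (0.0, 1.0), [0.0, 0.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])  # y(1)
```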
403
differential equations
Differential Equations self-study
https://math.stackexchange.com/questions/3356898/differential-equations-self-study
<p>I have become a TA for a professor who teaches differential equations. The course is basically a self study course where students are to use their previous knowledge to explore and teach themselves differential equations. There is no book for the class which makes this hard for some students. I am reaching out to the community to see if there are any differential equation textbooks out there that really help a student self study differential equations. In doing some research I have found "A First Course in Differential Equations with Modeling Applications by Dennis G. Zill" as well as "Fundamentals of Differential Equations by R. Kent Nagle" to be somewhat decent. Thank you for your recommendations.</p>
<p>I think Hirsch and Smale's First edition book is absolutely amazing. It progresses nicely starting with linear systems, and generalizing the treatment to non-linear ODEs. It treats the linear algebra clearly, it's very geometric, and the theorems are stated clearly, and proven very nicely. In this book they're not really concerned with the billions of techniques of solving ODEs, rather they are focused on a few key principles, and they elucidate them very nicely. (as you can tell, I'm very fond of this book)</p> <p>A second book which treats the material in a similar spirit is Lawrence Perko's <a href="https://www.springer.com/gp/book/9780387951164" rel="noreferrer">Differential Equations and Dynamical Systems</a>. I found Perko's and Hirsch/Smale to be very nice complementary texts.</p> <p>Finally, if you want a fearlessly general glimpse to the subject of ODEs, Henri Cartan's book Differential Calculus has a small chapter devoted to the main setup of the theory; existence, uniqueness, smooth dependence on initial conditions, linear equations etc in the general context of Banach spaces.</p> <hr> <p>But really, the subject of ODEs is very big, and different books have different goals. The books I mentioned focus on the geometric aspect and linear algebra (for the first two), but if you/the prof have different intentions, then obviously, you should consider a different source.</p>
404
differential equations
Differential equations notation
https://math.stackexchange.com/questions/108544/differential-equations-notation
<p>I've always wondered why the differential equation notation for linear equations differs from the standard terminology of vector spaces.</p> <p>We all know that the equation $y&#39;&#39; + p(x) y&#39; + q(x)y = g(x)$ for some function $g$ is called <em>linear</em> and that the associated equation $y&#39;&#39; + p(x)y&#39; + q(x) y = 0$ is called <em>homogeneous</em>. But why is that? WHY should mathematicians explicitly cause confusion with the rest of the theory of vector spaces?</p> <p>What I mean by that is: why not call the equation $y&#39;&#39; + p(x)y&#39; + q(x)y = g(x)$ an <em>affine</em> equation and call $y&#39;&#39; + p(x) y&#39; + q(x) y = 0$ a <em>linear</em> equation? Because linear equations (in the sense of differential equations) are not linear in the sense of vector spaces unless they're homogeneous; and linear equations (in the sense of differential equations) remind me more of a linear system of the form $Ax = b$ (which is called an affine equation in vector space theory) than of a linear equation at all. </p> <p>Just so that I make myself clear: I perfectly know the difference between linear equations in linear algebra and linear equations in differential equations theory; I'm asking for the reason behind the name.</p> <p>Thanks in advance,</p>
<p>There is no confusion at all: these are in fact the same concepts when viewed in the right light.</p> <p>First of all, we need to find a vector space to put all these functions in. Let $C^\infty(\mathbb{R})$ be the space of all smooth functions $\mathbb{R} \to \mathbb{R}$; this is naturally a $\mathbb{R}$-vector space, albeit of infinite dimension. Consider the operator $D : C^\infty (\mathbb{R}) \to C^\infty (\mathbb{R})$ defined by $$D f = \sum_{k=0}^{n} a_k f^{(k)}$$ where $a_k : \mathbb{R} \to \mathbb{R}$ are some smooth functions and $f^{(k)}$ is the $k$-th derivative of $f$. $D$ is easily seen to be a <em>linear</em> operator, and a differential equation of the form $$D f = g$$ is precisely a linear ODE of order $n$. That is to say, the word ‘linear’ refers to the linearity of $D$ as an operator! In this light, solving a linear ODE consists of two steps:</p> <ol> <li><p>Finding the kernel of $D$, i.e. solving the homogeneous linear ODE $D f = 0$, and</p></li> <li><p>Finding a ‘particular integral’ $f_0$ such that $D f_0 = g$.</p></li> </ol> <p>This is exactly the same as solving a system of (possibly inhomogeneous) linear equations in ordinary linear algebra! </p> <p>Now, one might be tempted to find an analogue of Gaussian elimination to work with linear ODEs, but the fact that $C^\infty (\mathbb{R})$ has infinite dimension and no natural basis tends to screw things up a little...</p>
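A small sympy sketch of this kernel-plus-particular picture, for the arbitrarily chosen linear ODE $y'' + y = x$: the general solution is the kernel of $D$ plus one particular integral.

```python
# Sketch: kernel of D (homogeneous solutions) plus a particular integral,
# for the made-up example y'' + y = x.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x, 2) + y(x), x)

# Kernel of D: solutions of the homogeneous equation y'' + y = 0
homogeneous = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), 0))

# A particular integral: y0(x) = x satisfies y0'' + y0 = x
particular = x

# dsolve returns kernel + particular in one go
general = sp.dsolve(ode)
print(homogeneous)
print(general)
```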
405
differential equations
First order ordinary differential equations?
https://math.stackexchange.com/questions/2201171/first-order-ordinary-differential-equations
<p>Please tell how one can identify 1st order</p> <p>1) Homogeneous differential equations.</p> <p>2) Homogeneous linear differential equations.</p> <p>3) Non Homogeneous differential equations.</p> <p>4) Non Homogeneous linear differential equations.</p>
<ul> <li>First-order means the highest order of derivative appearing is one. That is, we only have $y'$ or $\frac{dy}{dx}$. No $y'', y'''$, etc.</li> <li><p>Linear means that the differential equation is of the form $A(x)y'+B(x)y+C(x)=0$. That is, there is no $y^2,yy',(y')^2,$ etc.</p></li> <li><p>Homogeneous means that in the above form we also have $C(x)=0$. So the general form of a first-order linear homogeneous equation is $y'+P(x)y=0$ </p></li> </ul>
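As a supplement, sympy's `classify_ode` can report which of these classes a given first-order equation falls into (the two equations below are made-up examples):

```python
# Sketch: letting sympy classify first-order ODEs.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Linear homogeneous: y' + 2y = 0
hints = sp.classify_ode(sp.Eq(y(x).diff(x) + 2*y(x), 0), y(x))
print(hints)  # includes '1st_linear'

# Nonlinear (contains y^2): y' = y^2, so not classified as 1st_linear
hints2 = sp.classify_ode(sp.Eq(y(x).diff(x), y(x)**2), y(x))
print(hints2)
```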
406
differential equations
Getting into differential equations
https://math.stackexchange.com/questions/723964/getting-into-differential-equations
<p>I'm just getting into differential equations now and I've got to show that the given $y(x)$ is a solution to the differential equation: $$u'+u = 0 \ , \ y(x) = Ce^{-x}$$</p> <p>How do I tackle this? I know nothing about differential equations and my book has the strangest explanations.</p>
<p>The point is that you plug $y(x) = Ce^{-x}$ into the differential equation in place of $u$, and of course the derivative of $y(x)$ in place of $u'$ (do you know how to differentiate $y(x)$?). Then you must show that the equality in the differential equation holds, that is, $y'(x) + y(x) = 0$. So \begin{align} y'(x) + y(x) = -Ce^{-x} + Ce^{-x} = (-C+C)e^{-x} = 0\cdot e^{-x} = 0. \end{align} So it is, in fact, a solution, since it equals zero as it should.</p>
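The substitution above can be checked mechanically with sympy:

```python
# Sketch: verifying y(x) = C e^{-x} satisfies u' + u = 0 by substitution.
import sympy as sp

x, C = sp.symbols('x C')
y = C * sp.exp(-x)

# y' + y should simplify to 0
residual = sp.simplify(y.diff(x) + y)
print(residual)  # 0
```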
407
differential equations
Distinction between &quot;measure differential equations&quot; and &quot;differential equations in distributions&quot;?
https://math.stackexchange.com/questions/294696/distinction-between-measure-differential-equations-and-differential-equations
<p>Is there a universally recognized term for ODEs considered in the sense of distributions used to describe impulsive/discontinuous processes? I noticed that some authors call such ODEs "measure differential equations" while others use the term "differential equations in distributions". But I don't see a major difference between them. Can anyone please make the point clear? Thank you.</p>
<p>As long as the distributions involved in the equation are (signed) measures, there is no difference, and both terms can be used interchangeably. This is the case for impulsive source equations like $y''+y=\delta_{t_0}$. </p> <p>Conceivably, an ODE could also involve distributions that are not measures, such as the derivative of $\delta_{t_0}$. In that case only "differential equation in distributions" would be correct. But I can't think of a natural example of such an ODE at this time. </p>
408
differential equations
Differential Equations -&gt; Residue
https://math.stackexchange.com/questions/82304/differential-equations-residue
<p>I am currently taking an Engineering course (differential equations), in which the concept of "Residue" has been introduced. Having missed part of the lecture, and reviewed both the class textbook (no help) and my Anton Bivens Calculus book, I have found almost no information on how to actually calculate the residue with regards to a differential equation, nor what the resulting equation (minus the residue) would look like. A Google search has been aggravating, and the Differential Equations for Dummies book I purchased does not appear to make any mention of this method either.</p> <p>Could anyone explain, or point me to some idiot-level lecture notes to help explain this concept to me?</p> <p>Regards, -R</p>
<p>The residues typically appear when you solve the differential equations via Laplace Transformation. You can take a look at the <a href="http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/video-lectures/" rel="nofollow">video lectures of Arthur Mattuck</a> for a neat introduction to the subject. If I remember correctly the relevant part starts from Lecture 19.</p>
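As an illustration of where residues enter, here is a sympy sketch (not from those lectures, just a standard example) inverting the Laplace transform $F(s) = 1/(s^2+1)$ by summing the residues of $F(s)e^{st}$ at its poles, which recovers $\sin(t)$:

```python
# Sketch: the residue calculus behind inverting a Laplace transform.
# The inverse transform of F(s) is the sum of residues of F(s) e^{st}
# over the poles of F; here the poles are s = i and s = -i.
import sympy as sp

s, t = sp.symbols('s t')
F = 1 / (s**2 + 1)

inv = sum(sp.residue(F * sp.exp(s * t), s, pole)
          for pole in (sp.I, -sp.I))
print(sp.simplify(inv))
```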
409
differential equations
Book recommendation: Differential equations with differential geometry
https://math.stackexchange.com/questions/3374801/book-recommendation-differential-equations-with-differential-geometry
<p>I have been doing some self-study of differential equations and have finished Habermans' elementary text on linear ordinary differential equations and about half of Strogatz's nonlinear differential equations book. The thing that I am noticing is just how much these text avoid engaging the underlying differential geometry/topology of phase spaces. It also feels like the further I got in this differential equations, the more important it is to understand the underlying differential topology--for instance understanding Hamiltonian systems and symplectic manifolds, etc. </p> <p>Indeed, the only text that I have seen that seems to engage the differential topology of phase spaces seems to be Arnold's 1973 book on Ordinary Differential Equations. This seems to be a really good book. The challenge is that Arnold can be a bit terse sometimes, so I was hoping to find a book to supplement Arnold's text. </p> <p>I have enough background in differential topology by watching Fredric Schuller's lectures and then working through some of Renteln and John Lee's books. </p> <p>Hence, I was hoping to find a book that elaborates on the differential topology side of differential equations. So all of these topics about vector fields on a manifold are fair game. Now I looked at Hirsch and Smale 1974, but this did not really get into the differential topology stuff. Perko's book was also pretty terse and did not systematically develop the topology. </p> <p>If anyone has any good recommendations, that would be appreciated. </p> <p>Thanks. </p>
410
differential equations
Differential Equations
https://math.stackexchange.com/questions/27896/differential-equations
<p>How would I solve these differential equations? Thanks so much for the help!</p> <p>$$P&#39;_0(t) = \alpha P_1(t) - \beta P_0(t)$$ $$P&#39;_1(t) = \beta P_0(t) - \alpha P_1(t)$$</p> <p>We also know $P_0(t)+P_1(t)=1$</p>
<p>Note that from the equation you have $$P&#39;_0(t) = \alpha P_1(t) - \beta P_0(t) = -P&#39;_1(t)$$ which gives us $P&#39;_0(t) + P&#39;_1(t) = 0$ which gives us $P_0(t) + P_1(t) = c$. We are given that $c=1$. Use this now to eliminate one in terms of the other.</p> <p>For instance, $P_1(t) = 1-P_0(t)$ and hence we get, $$P&#39;_0(t) = \alpha (1-P_0(t)) - \beta P_0(t) \Rightarrow P&#39;_0(t) = \alpha - (\alpha + \beta)P_0(t)$$</p> <p>Let $Y_0(t) = e^{(\alpha + \beta)t}P_0(t) \Rightarrow Y&#39;_0(t) = e^{(\alpha + \beta)t} \left[P&#39;_0(t) + (\alpha + \beta) P_0(t) \right] = \alpha e^{(\alpha + \beta)t}$</p> <p>Hence, $Y_0(t) = \frac{\alpha}{\alpha + \beta}e^{(\alpha + \beta)t} + k$ i.e. $$P_0(t) = \frac{\alpha}{\alpha + \beta} + k e^{-(\alpha+\beta)t}$$ $$P_1(t) = 1 - P_0(t) = \frac{\beta}{\alpha + \beta} - k e^{-(\alpha+\beta)t}$$</p>
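A quick numerical sanity check of this closed form, with arbitrarily chosen $\alpha = 2$, $\beta = 1$, and $P_0(0) = 1$:

```python
# Sketch: comparing the closed-form solution for P_0(t) against a direct
# numerical integration. Parameter values are made up for illustration.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta = 2.0, 1.0
p0_init = 1.0

def rhs(t, p):
    p0 = p[0]
    return [alpha * (1 - p0) - beta * p0]

sol = solve_ivp(rhs, (0.0, 5.0), [p0_init], rtol=1e-10, atol=1e-12)

# Closed form: P_0(t) = alpha/(alpha+beta) + k e^{-(alpha+beta)t},
# with k fixed by the initial condition P_0(0) = 1.
k = p0_init - alpha / (alpha + beta)
exact = alpha / (alpha + beta) + k * np.exp(-(alpha + beta) * sol.t)
print(np.max(np.abs(sol.y[0] - exact)))
```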
411
differential equations
Functional vs Differential Equations?
https://math.stackexchange.com/questions/3350892/functional-vs-differential-equations
<h2>Background</h2> <p>I'm under the impression that a differential equation has locality hardwired into it (which is why we see them more in physics). However, if I wanted to write something non-local I'd use a functional equation. I am aware there are cases where the differential equation has a functional equation form as well.</p> <h2>Question</h2> <ul> <li>Is my initial assumption true: "a differential equation has locality hardwired into it"? </li> <li>Is the cardinality of the set of functional equations more than that of the differential equations? </li> <li>Is there any functional equation which does not have any differential equation equivalent? </li> <li>Is there any differential equation which does not have any functional equation equivalent?</li> </ul> <h2>Caveat</h2> <p>While different theories of physics have different definitions of locality (for example Bell locality in quantum mechanics and micro-causality in QFT) they all agree <strong>one cannot send information (a non-random message) from point A to point B without a mediator</strong> (which is what I mean by locality in the above). </p>
<p>Functional equations include the usual differential equations as a specialized subclass.</p> <p>One of my favorite functional equations is <span class="math-container">$$ y=f(x)\\ y' = \sin(f(y)) $$</span> It is clear that <span class="math-container">$f(x) = n\pi$</span> for any integer <span class="math-container">$n$</span> is a solution; it is much less clear whether any other solutions exist. This equation has no differential equation equivalent.</p> <p>Differential equations only have locality built in if you are considering the class of well-posed initial value problems. Boundary value problems do not necessarily exhibit locality (although I guess that is a cheat because the boundary condition itself is a non-local matter). Thus some of the most important equations in physics, including the steady-state Schroedinger equation in spherical or cylindrical coordinates, are non-local when the wave function is considered as a function of the angle about some axis.</p> <p>In between functional equations (which when non-trivial are often impossibly difficult to work with) and differential equations (which often can be attacked by perturbation theory and a host of other cool tools) I offer differential difference equations. For example, for some given initial value curve <span class="math-container">$P(t) : [0,1) \mapsto \Bbb{R}$</span> and some initial velocity value <span class="math-container">$v$</span>, <span class="math-container">$$ \left. \frac{d^2 x(t)}{dt^2} \right|_{t = t_0} = - x(t_0 -1) \\ t \in [0,1) \implies x(t) = P(t) \\ \left. \frac{d}{dt}x(t) \right|_{t = 1}=v $$</span></p> <p>Differential difference equation problems can have many of the same features as ordinary eigenvalue problems, yet can exhibit some of the same headaches as full-blown non-trivial functional equation problems.</p> <p>The cardinality issue is not easy and may be subtle. I believe the cardinality of the set of all differential equations is the same as the cardinality of the set of functions, but when you expand to functional equations, that may change. </p>
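To make the delay-equation idea concrete, here is a minimal "method of steps" integration for the simpler first-order analogue $x'(t) = -x(t-1)$, with the made-up history $x \equiv 1$ on $[0,1)$; on $[1,2]$ the exact solution is $x(t) = 1-(t-1)$, so $x(2) = 0$.

```python
# Sketch: forward-Euler "method of steps" for the first-order delay
# equation x'(t) = -x(t - 1), with history x(t) = 1 on [0, 1).
import numpy as np

dt = 1e-3
t = np.arange(0.0, 3.0 + dt, dt)
x = np.ones_like(t)           # history: x = 1 on [0, 1)
delay = int(round(1.0 / dt))  # number of steps per delay interval

for i in range(delay, len(t) - 1):
    x[i + 1] = x[i] - dt * x[i - delay]   # x'(t) = -x(t - 1)

print(x[2 * delay])  # x(2); the exact value is 0
```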
412
differential equations
1st order q-differential equations
https://math.stackexchange.com/questions/4324810/1st-order-q-differential-equations
<p>Can we solve 1st order q-differential equations using the usual methods of 1st order differential equations? For example, can we use integration factor method to solve this q-differential equations?</p> <p><span class="math-container">$$\text D_qy(x)=a(x)y(qx)+b(x)$$</span></p>
<p>Here is how to get a functional equation since you asked. Starting with a direct reference from <a href="https://mathworld.wolfram.com/q-Derivative.html" rel="nofollow noreferrer">q-Derivative from Wolfram MathWorld</a>:</p> <blockquote> <p><a href="https://i.sstatic.net/rBiKH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rBiKH.jpg" alt="enter image description here" /></a></p> </blockquote> <p>Therefore:</p> <p><span class="math-container">$$\text D_q y(x)=\frac{y(qx)-y(x)}{x(q-1)}=a(x) y(qx)+b(x)$$</span></p> <p>which works for <span class="math-container">$q\ne 1$</span>. Writing <span class="math-container">$h=x(q-1)$</span>, so that <span class="math-container">$qx=x+h$</span> and <span class="math-container">$h\to0$</span> as <span class="math-container">$q\to1$</span>, we recover the ordinary derivative in the limit:</p> <p><span class="math-container">$$\lim_{q\to1}\text D_q y(x)= \lim_{q\to1} \frac{y(qx)-y(x)}{x(q-1)} =\lim_{h\to0}\frac{y(x+h)-y(x)}{h}=y'(x)= \text D y(x)=\frac{dy(x)}{dx}$$</span></p> <p>but this limit is not needed when the q-derivative is taken. Therefore we have our functional equation, rewritten:</p> <p><span class="math-container">$$\boxed{\text D_q y(x)=a(x)y(qx)+b(x)\iff\frac{y(qx)-y(x)}{x(q-1)}=a(x) y(qx)+b(x)}$$</span></p> <p>Here is another form, which also gives a recursive solution:</p> <p><span class="math-container">$$\text D_q y(x)=a(x)y(qx)+b(x)\iff y(qx)-y(x)=a(x) y(qx)x(q-1)+b(x)x(q-1)\\\implies y(x)=y(qx)(a(x)x(1-q)+1)-b(x)x(q-1) $$</span></p> <p>Please correct me and give me feedback!</p>
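A quick numerical check of the q-derivative definition on a monomial, using the known rule $\text D_q x^n = \frac{q^n-1}{q-1}\,x^{n-1}$ (the values of $q$, $n$, $x$ below are arbitrary):

```python
# Sketch: checking D_q x^n = [n]_q x^{n-1}, with [n]_q = (q^n - 1)/(q - 1).
def q_derivative(f, x, q):
    return (f(q * x) - f(x)) / (x * (q - 1))

q, n, x = 0.5, 3, 2.0
lhs = q_derivative(lambda u: u**n, x, q)
rhs = (q**n - 1) / (q - 1) * x**(n - 1)
print(lhs, rhs)
```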
413
differential equations
Differential equations viewed rigorously.
https://math.stackexchange.com/questions/3452685/differential-equations-viewed-rigorously
<p>When I first encountered differential equations, solving them seemed somewhat mechanical and I could not enjoy any taste of it. But after taking a linear algebra course, I think there is more to understand than differentiating and integrating. I now want to revisit differential equations and study them with a rigorous approach, finding the reasons behind every subtle thing. First I took up linear differential equations, which are quite easy to understand in view of linear algebra, treating differentiation as a linear operator on <span class="math-container">$C^\infty$</span>. But there were many more things, like exact differential equations, separable and homogeneous equations, and some of those solution methods involve partial derivatives. Since I have not yet studied the Riemann integral and multivariable calculus, I find it difficult to analyze these situations. Can anyone help me by suggesting a good reference that explains the reason behind every subtle thing?</p>
414
differential equations
Free differential equations textbook?
https://math.stackexchange.com/questions/279835/free-differential-equations-textbook
<p>I've seen questions on what are some good differential equations textbooks, and people generally point to Ordinary Differential Equations by Morris Tenenbaum and Harry Pollard and so on.</p> <p>I was wondering if there are any free (GNU Free Documentation License, CC, or alike) textbooks on the subject. A good example, but not for differential equations, is <a href="http://joshua.smcvt.edu/linearalgebra/">http://joshua.smcvt.edu/linearalgebra/</a></p> <p><strong>Disclaimer</strong>: I'm a student currently taking a course on the subject (and our textbook is, quite frankly, very badly written) and actually need to cover the following topics:</p> <ul> <li>First order equations</li> <li>Second order equations</li> <li>Linear systems. Homogeneous linear systems</li> <li>Sequences, series and convergence. </li> <li>Fourier series</li> </ul> <p>But it looks like I'm gonna have to do a majority of this on my own.</p>
<p>I think a number of people have been in a very similar situation to you, in a course on differential equations or otherwise, and have looked for an alternate source for the material. I myself am very familiar with the problem. </p> <p>There are a number of excellent textbooks on the subject that sell for less than $15, my personal favorite of which is Tenenbaum/Pollard, but this opinion is obviously not objective, and you, along with a number of other answerers, seem to find significant faults with their development of the subject. </p> <p>I will list some other possible inexpensive or free resources that I am at least mildly familiar with:</p> <ul> <li><a href="http://rads.stackoverflow.com/amzn/click/0070005990" rel="noreferrer">Agnew's Differential Equations</a> is an old book that treats the subject very classically in a way similar to Tenenbaum/Pollard. One of the greatest aspects of this book is its index, which is quite extensive. It relies heavily on physical applications. And you can get it for less than \$5 from Amazon including shipping. This is not a particularly famous choice, as I purchased a copy at a local used bookstore for \$1, but it is nonetheless excellent.</li> <li><a href="http://ocw.mit.edu/courses/mathematics/18-03-differential-equations-spring-2010/index.htm" rel="noreferrer">MIT OCW 18.03 course</a>, as others have pointed out, is a complete set of lectures, notes, and problem sets that would basically make up a course taken from a spectacular lecturer. I would not recommend buying the textbook suggested on the syllabus page, however. The supplemental notes, from what I remember, are excellent. </li> <li><a href="http://tutorial.math.lamar.edu/Classes/DE/DE.aspx" rel="noreferrer">Paul's Math Notes</a> have a set of lecture notes on differential equations that cover all of the topics you are asking for. I have never read these, although I've seen them referenced quite frequently and they are considered excellent from what I have seen.</li> <li><a href="https://www.google.com/#hl=en&amp;safe=off&amp;tbo=d&amp;sclient=psy-ab&amp;q=site%3a.edu+filetype%3apdf&amp;oq=site%3a.edu+filetype%3apdf&amp;gs_l=hp.3...22390.22390.2.22658.1.1.0.0.0.0.86.86.1.1.0.les;..0.0...1c.1.tjLAhSkHTtA&amp;pbx=1&amp;bav=on.2,or.r_gc.r_pw.r_cp.r_qf.&amp;bvm=bv.41524429,d.eWU&amp;fp=8e0d657e9b9006fd&amp;biw=958&amp;bih=926" rel="noreferrer">Google site:.edu filetype:.pdf</a>. Google is an incredible tool, and is far more extensive than most people imagine. They have a number of operators that refine your search, and their engine is so powerful that <a href="http://en.wikipedia.org/wiki/Google_hacking" rel="noreferrer">there is an entire area of computer security devoted to using Google to hack websites.</a> By using the operators <strong>site:edu</strong> and <strong>filetype:pdf</strong> we restrict our search to .pdf files from academic institutions. By selecting a query such as "Bernoulli equations" with the operators described (i.e., type <code>"Bernoulli equations" site:edu filetype:pdf</code> into the Google search bar) you will receive a plethora of lecture notes and descriptions of whatever topic you are looking for. Read one, read 5, read 1000. By reading the lecture notes of many different lecturers you can grasp a topic from a number of different viewpoints and methods of development simultaneously, and this provides an excellent supplement to your course and/or any of the resources I described above. </li> </ul> <p>I wish you luck with your course. Have fun learning. </p>
415
differential equations
differential equations and physical intuition
https://math.stackexchange.com/questions/590296/differential-equations-and-physical-intuition
<p>Often when you study differential equations, you find phenomena in nature modeled by those equations. Sometimes an insight into a physical problem can help you to solve a differential equation. My question is: If you are a pure mathematician studying differential equations, do you have to be good at physics (biology, finance) too?</p>
<p>By looking at physical phenomena, you'll see an application of a (partial) differential equation expressed in a particular coordinate system, or with particular boundary conditions.</p> <p>The solutions for these applications will likely be a subset of a more general solution.</p> <p>Take Gauss's Law:</p> <p>$$\vec{\nabla} \centerdot \vec{E}(\vec{r}) = 4 \pi \rho(\vec r).$$</p> <p>Solving this differential equation for the electric field $\vec{E}$ might be made much easier if you consider that the electric field is derived from a scalar potential function that depends only on the magnitude of $\vec{r}$. Then you can choose your surface of integration to make the dot product trivial (a sphere centered at $\vec{r} = 0$). Then your partial differential equation becomes a relatively easy ordinary differential equation in one variable.</p> <p>But this might be a bit divorced from what you need or want. Physicists use differential equations to explain something: electromagnetic fields, mechanics, heat flow, etc. Once they have the model (the equation) they apply it, and see if it matches reality. If you're staying within the realm of mathematics, then you might not want to do that, but instead rely on formal proof, or other "things you know" from related areas of mathematics. Using physical intuition in pure mathematics might lead you to a wrong conclusion, because your application might not be correct.</p>
416
differential equations
differential inclusions vs differential equations
https://math.stackexchange.com/questions/1334904/differential-inclusions-vs-differential-equations
<p>Can someone please clarify what the difference (no pun intended) between the two is? </p> <p>I am reading <a href="http://www.emis.de/journals/RSMT/RSMT/63-3/197.pdf" rel="nofollow" title="this"><em>this</em></a> tutorial, and at the very start they state that a differential inclusion is a relation of the form</p> <p>$ \frac{\mathrm{d}}{\mathrm{d}t}x(t) \in F(t, x(t)) $</p> <p>So that means that the derivative of $x(t)$ is included in $F$, which is a function of the argument $t$ and of $x(t)$ itself. Then a solution is a family of functions, right? But why is this different from just a differential equation? I suppose it's meant to be more general, but right now I'm failing to see where. </p> <p>P.S. no tag for differential inclusions, so differential equations it is.</p>
<ul> <li><p>An ordinary differential equation says what the derivative must be, in terms of the function itself and its variable.</p></li> <li><p>An ordinary differential inclusion says the derivative must lie in a specified set, which may also depend on the function and independent variable. </p></li> </ul> <p>So, if the set always consists of one point, the inclusion is in fact an equation. </p> <p>The distinction is blurred if one allows implicit differential equations, in which the derivative is not isolated. These can be interpreted as inclusions, since the derivative is constrained to a set. Conversely, every inclusion can be written as an implicit differential equation by using the indicator function of the set on the right hand side of the inclusion. </p> <p>But this notational juggling is pretty pointless: as you keep reading beyond the first pages of the survey, you'll see that the theory of differential inclusions is quite a bit different in its goals and methods. </p> <p>For comparison: every graph can be thought of as a matrix, but this does not mean we want to do that, nor does it make graph theory a part of linear algebra. </p> <blockquote> <p>Then a solution is a family of functions, right?</p> </blockquote> <p>A solution is a function, in either case. </p>
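A tiny sketch of the freedom an inclusion allows: for the made-up inclusion $x'(t) \in [-1,1]$, each measurable "selection" of a value from the right-hand side set gives one solution, so many distinct solutions pass through the same initial point.

```python
# Sketch: the inclusion x'(t) in [-1, 1] with x(0) = 0. Two different
# selections from the set, each integrated by forward Euler, give two
# different (both valid) solutions.
import numpy as np

dt, T = 1e-3, 1.0
t = np.arange(0.0, T + dt, dt)

def integrate(selection):
    x = np.zeros_like(t)
    for i in range(len(t) - 1):
        v = selection(t[i], x[i])       # pick one value in F(t, x) = [-1, 1]
        x[i + 1] = x[i] + dt * v
    return x

x_up   = integrate(lambda t, x: 1.0)    # selection v = +1  ->  x(t) = t
x_down = integrate(lambda t, x: -1.0)   # selection v = -1  ->  x(t) = -t
print(x_up[-1], x_down[-1])
```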
417
differential equations
Differential equations systems
https://math.stackexchange.com/questions/3265116/differential-equations-systems
<p>We have this system of differential equations: <span class="math-container">$$ \left\{ \begin{array}{ll} x'=-x \\ y'=-2y \\ \end{array} \right. $$</span> I have to solve the system, but I only know the method of differentiating the first equation and substituting into the second to get a second-order differential equation, which I know how to solve. But it seems that that method doesn't work here. How can I solve it? I'm a beginner at this kind of exercise. Thanks!</p>
<p>These are two uncoupled DEs and you can solve them independently. The answer is <span class="math-container">$x(t)=ce^{-t}$</span> and <span class="math-container">$y=de^{-2t}$</span>, where <span class="math-container">$c$</span> and <span class="math-container">$d$</span> are constants. </p>
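A quick symbolic check of these two solutions (a sympy sketch, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t')
c, d = sp.symbols('c d')

x = c * sp.exp(-t)       # proposed solution of x' = -x
y = d * sp.exp(-2 * t)   # proposed solution of y' = -2y

# Each equation is solved on its own; substitute back to verify.
assert sp.simplify(sp.diff(x, t) + x) == 0
assert sp.simplify(sp.diff(y, t) + 2 * y) == 0
print("both solutions check out")
```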
418
differential equations
Books for ordinary differential equations.
https://math.stackexchange.com/questions/3143493/books-for-ordinary-differential-equations
<p>Every book on ordinary differential equations that I have presents the existence and uniqueness theory only locally, i.e. existence of a solution in some neighbourhood of the given point. I am searching for results and theorems regarding global existence of solutions of ordinary differential equations. Please suggest some books for self-study that cover such material. Thanks. </p>
419
differential equations
What comes after Differential Equations?
https://math.stackexchange.com/questions/577488/what-comes-after-differential-equations
<p>First of all, please do excuse the lack of correct terminology; I haven't learnt Differential Equations at school (yet), so this question comes from just a bit of research I did for my own enjoyment.</p> <p>I was reading up on differential equations and the first thing I read was that their result is either a function or a family of functions. So I thought: if the results of ordinary equations are numbers and the results of differential equations are families of functions, is there anything whose results are families of differential equations?</p> <p>Since I don't know the terminology of the subject, I don't know what to search on Google to find the answer, so I come to you for help. What comes after differential equations?</p> <p>Thanks a lot</p> <p>EDIT: I didn't word the question correctly. Sorry about that, I'll try to give an example.<br> In this normal equation $x^2+2x-3=0$ the solutions are $x_{1}=-3$ and $x_{2}=1$. The solutions are numbers. </p> <p>In this differential equation $\frac{dx}{dt} = 5x -3$ the solution is $$x(t) = Ce^{5t}+ \frac{3}{5}.$$</p> <p>The solution is a function.<br> (Took the example for the differential equation from this page <a href="http://mathinsight.org/ordinary_differential_equation_introduction_examples">http://mathinsight.org/ordinary_differential_equation_introduction_examples</a> )</p> <p>What I want to know is if there is a type of equation whose solutions are differential equations.</p>
<p>Mathematics is not a hierarchy of ever more complicated kinds of equations.</p> <p>Differential equations are important because among other things they have provided a language for physics to discuss many problems and understand the behaviour of their systems. From that point of view, differential equations are nothing but numerical equations that hold at many points.</p> <p>But mathematics is way way way more than that. It's about logical structures (loosely motivated by number systems) and their relations.</p> <p>To answer your question more specifically as suggested by nayrb, I don't think there is some standard kind of frame where one writes equations of differential equations. Note that the term &quot;equation&quot; implies that you have some object that you do operations with: in your numeric and differential equations, you can add and multiply numbers and functions respectively. To write equations of differential equations, you should define operations between differential equations. I'm not saying it is not possible, but it is hard for me to imagine how to do it.</p>
420
differential equations
Recommended Books for differential equations?
https://math.stackexchange.com/questions/2784205/recommended-books-for-differential-equations
<p>I am planning to take Differential Equations next semester, but due to a timetable issue I want to study most of it this summer in my spare time to make it easier.</p> <p>These are the topics that will be included, which I think represent about half of a typical differential equations course at other universities:</p> <p><em>Ordinary differential equations. Explicitly solvable equations, exact and linear equations. Well-posedness of the initial value problem, existence, uniqueness, continuous dependence on initial values. Approximate solution methods. Linear systems of equations, variational system. Elements of stability theory, stability, asymptotic stability, Lyapunov functions, stability by the linear approximation. Phase portraits of planar autonomous equations. Laplace transform, application to solve differential equations. Discrete-time dynamical systems.</em></p> <p>I am not only looking for textbooks with rigorous exercise sets; any books are welcome, theory-heavy ones as well.</p>
<p>Arnold’s book “Ordinary Differential Equations” is absolutely fantastic and contains many of the topics you are looking for. </p>
421
differential equations
Solving Coupled Differential Equations
https://math.stackexchange.com/questions/1676569/solving-coupled-differential-equations
<p>I have the following differential equations, for modeling predator-prey relationships:</p> <p>$$\frac{dx}{dt} = Ax - Bxy$$ $$\frac{dy}{dt} = Cxy - Dy$$</p> <p>Where A, B, C, and D are constants. How could I go about solving this? I've only really worked with basic first order differential equations before, and I've found little to help me figure this out. Any help would be much appreciated.</p>
<blockquote> <p>While a complete analytic solution is not likely, it's curious that we can reduce this system to a couple of nonlinear first order ODEs for $x(t)$ and $y(t)$.</p> </blockquote> <p>Let's do this for $y$ only, since the same procedure can be applied to $x$ as well.</p> <p>First, we write down some useful relationships:</p> <p>$$\dot{x}=x(A-By)$$</p> <p>$$Cx-D=\frac{\dot{y}}{y}$$</p> <p>$$Cxy=\dot{y}+Dy$$</p> <p>Now we differentiate the second equation w.r.t. $t$:</p> <p>$$\ddot{y}=(Cx-D) \dot{y}+Cy \dot{x}=\frac{\dot{y}^2}{y}+(A-By)(\dot{y}+Dy)$$</p> <p>Since the equation doesn't contain $t$ explicitly, we can reduce the order by the usual substitution:</p> <p>$$\dot{y}=u(y), \qquad \ddot{y}=u \cdot u'$$</p> <p>We obtain:</p> <p>$$u u'=\frac{u^2}{y}+(A-By)(u+Dy)$$</p> <p>Now we introduce another function:</p> <p>$$u=y \cdot v(y), \qquad u'=v+yv'$$</p> <p>We get:</p> <p>$$yv(v+yv')=yv^2+(A-By)(v+D)y$$</p> <p>Simplifying, we obtain:</p> <p>$$y v v'=(A-By)(v+D)$$</p> <p>But this is a separable equation. So:</p> <p>$$\frac{vdv}{v+D}=\left(\frac{A}{y}-B \right)dy$$</p> <p>$$v-D \ln (v+D)=A \ln y-B y+c_1$$</p> <p>Getting back to the original function, we have:</p> <p>$$\frac{\dot{y}}{y}-D \ln \left(\frac{\dot{y}}{y}+D \right)=A \ln y-B y+c_1$$</p> <p>Or:</p> <blockquote> <p>$$\dot{y}-D y \ln (\dot{y}+D y)=(A-D) y \ln y-B y^2+c_1 y \tag{1}$$</p> </blockquote> <p>This doesn't look like anything solvable, but it is indeed a 1st order ODE for $y$ only.</p> <p>We also need to determine $c_1$ from the original system somehow, because the extra constant shouldn't be here.</p> <hr> <p>We can actually resolve (1) for the derivative using the Lambert W (product logarithm) function. 
Transforming the equation:</p> <p>$$(\dot{y}+Dy)^{-Dy} e^{\dot{y}}=y^{(A-D)y}e^{-B y^2+c_1 y}$$</p> <p>$$\frac{\dot{y}+Dy}{Dy} \exp \left(-\frac{\dot{y}+Dy}{Dy} \right)= \frac{e^{-1-c_1/D}}{ D} y^{-A/D} \exp \left(\frac{B}{D} y \right)$$</p> <p>This has a solution:</p> <p>$$\frac{\dot{y}+Dy}{Dy}=-W \left(-\frac{e^{-1-c_1/D}}{D} y^{-A/D} \exp \left(\frac{B}{D} y \right) \right)$$</p> <blockquote> <p>$$\dot{y}=- Dy \left(1+W \left(-\frac{e^{-1-c_1/D}}{D} y^{-A/D} \exp \left(\frac{B}{D} y \right) \right) \right) \tag{2}$$</p> </blockquote>
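Even though a closed form stays out of reach, the original system is easy to check numerically. A scipy sketch (the constants and initial data below are illustrative assumptions, not from the post), verifying that the standard first integral of the Lotka-Volterra system, $H = Cx - D\ln x + By - A\ln y$, stays constant along an orbit:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lotka-Volterra with illustrative constants (an assumption, not from the post)
A, B, C, D = 1.0, 0.5, 0.5, 1.0

def rhs(t, z):
    x, y = z
    return [A * x - B * x * y, C * x * y - D * y]

sol = solve_ivp(rhs, (0.0, 20.0), [2.0, 1.0], rtol=1e-10, atol=1e-12)
x, y = sol.y

# First integral of the system: H = C*x - D*ln(x) + B*y - A*ln(y)
H = C * x - D * np.log(x) + B * y - A * np.log(y)
assert np.max(np.abs(H - H[0])) < 1e-6
print("H is conserved along the orbit:", H[0])
```

That $\dot H = (Cx-D)(A-By) + (By-A)(Cx-D) = 0$ follows directly from the two relationships written down at the start of the answer.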
422
differential equations
Differential equations book
https://math.stackexchange.com/questions/2022841/differential-equations-book
<p>I have no knowledge of differential equations, but I have the background in differential geometry/topology and analysis that one acquires in a PhD program. I.e. I have a foundational knowledge of Lie groups (roughly equivalent to Knapp's book), Riemannian geometry (roughly equivalent to do Carmo's book) and a similar foundational knowledge of real, complex and functional analysis. I'm looking for a book on differential equations to read in my spare time, but I'm finding it difficult to find one that is written with the level of care and detail that I look for in a textbook, the kind of care and detail one finds in John Lee's books, for instance. In many books I've looked at, symbols will be displayed in equations without precise definitions, functions will be mentioned without stating their domain and co-domain, and the exposition suffers from many other similar deficiencies in clarity. </p> <p>Is there a carefully written DE book for someone with some mathematical maturity?</p>
<p>Here are two books to read:</p> <ul> <li>Vladimir Arnold, ODE</li> <li>Hirsch and Smale, Differential Equations, Dynamical Systems, and Linear Algebra, first edition</li> </ul> <p>These are, arguably, two best introductory ODE books that are well suited for a good graduate course. The first one is very much intuitive, with many illustrations and sometimes lack of technical details. Which is more important, this book uses more modern language compare to many other textbooks, in particular, from the very beginning the discussion proceeds in terms of flows, transformation groups, and vector fields. The second book has a similar flavor, but written in a completely different style, with inclusion of many many mathematical details that are often left omitted. It also has a lot of very manageable exercises, which directly test the understanding. </p> <p>Taken together, these two books will give you a very solid foundation in ODE, and also show connections to many other fields (differential geometry, Lie groups, analysis, etc).</p>
423
differential equations
Things I must know before taking differential equations course
https://math.stackexchange.com/questions/41051/things-i-must-know-before-taking-differential-equations-course
<p>I intend to take this course named "Differential Equations" and per the department the following contents will be taught</p> <pre><code>* First Order Differential Equations * Second Order Linear Equations * Series Solutions of Second Order Linear Equations * Higher Order Linear Equations * The Laplace Transform * System of First Order Linear Equations * Partial Differential Equations and Fourier Series * Boundary Value Problems and Sturm Liouville Theory * Non Linear Differential Equations </code></pre> <p>and this is also given in the course outline</p> <p>"<br> In this course the students will learn how to solve boundary value problems analytically. This will enable them to develop command over one of the two techniques, namely: - Analytical Techniques - Numerical Techniques for the solution of boundary value problems "</p> <p>Now I haven't taken a math course in a while (10 years ago I took Calculus in high school and I don't remember most of it, unfortunately). </p> <p>So I have two questions that I hope some of you can answer for me</p> <ul> <li>What topics in Calculus must I know before taking this course? </li> <li>What is the best Differential Equations book for a person like me, given the above course outline?</li> </ul> <p>Please pardon my ignorance. I will really appreciate all the help. Thanks, and I look forward to hearing from you.</p>
<p>The best thing for you to do would be to look at the prerequisites of the course as specified by your University. Only they will be able to tell you what knowledge you should have before taking the course. From the description give, the course could be at first year undergraduate level, or it could be at graduate student level.</p> <p>However, I will have a stab. This is, and necessarily must be, an incomplete list:</p> <ul> <li>You should have facility with the calculus of basic functions, eg $x^n$, $\exp x$, $\log x$, trigonometric and hyperbolic functions, including derivatives and definite and indefinite integration</li> <li>The chain rule, product rule, integration by parts</li> <li>Taylor series and series expansions</li> <li>Differentiation from first principles, as the limit of ratio of differences</li> <li>Riemann integrals</li> <li>Linear algebra at the level where you're comfortable with the notions of a linear transformation, representing a linear transformation as a matrix, eigenvalues and eigenvectors, and matrix inverse</li> <li>Complex numbers, including cartesian and polar representation, Euler's formula, and relations with trigonometric and hyperbolic functions</li> </ul> <p>Others should feel free to edit with anything I've left out.</p>
424
differential equations
Exact Differential Equations
https://math.stackexchange.com/questions/51458/exact-differential-equations
<p>I was revising differential equations and came across the topic of exact differential equations. I have a doubt concerning it. Suppose the differential equation $M(x,y)dx + N(x,y)dy=0$ is exact. Then the solution is given by: $\int Mdx +\int (N-\frac{\partial}{\partial y}\int Mdx)dy = c$. I understand that the integrand in the second term is a function of y alone and also understand the derivation of this solution. What I don't understand is the following paragraph:</p> <p>My book then says "Since all the terms of the solution that contain x must appear in $\int Mdx$, its derivative w.r.t. y must have all the terms of N that contain x. Hence the general rule to be followed is: Integrate $\int Mdx$ as if y were constant. Also integrate the terms of N that do not contain x w.r.t. y. Equate the sum of these integrals to a constant."</p> <p>I don't understand the justification that is provided for the general rule. Can someone please explain this? </p>
<p>There seemed to be a misunderstanding as people tried to explain to me why $\int Mdx +\int (N-\frac{\partial}{\partial y}\int Mdx)dy = c$ is the solution of the exact ODE, something which I had already understood perfectly. My problem was with the next statement in the book which gave a working rule that essentially said that the solution could be expressed as $\int M dx $ (y constant) $ + \int N&#39; dy = c$ (N' are the terms of N not containing x) is the solution. After some online searches I have hence discovered the solution. The book is wrong. The rule it quotes works so often in practice that people adopt it but there are cases when it fails and we have to take recourse to $\int Mdx +\int (N-\frac{\partial}{\partial y}\int Mdx)dy = c$ to write the solution. For example the ODE $\frac{dx}{\sqrt{x^2+y^2}} +(\frac{1}{y}-\frac{x}{y\sqrt{x^2+y^2}})dy=0$ would give a wrong answer on applying the working rule and we have to take recourse to directly computing $\int Mdx +\int (N-\frac{\partial}{\partial y}\int Mdx)dy = c$.</p>
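The counterexample can be verified symbolically. A sympy sketch (my own check, not from the original post): the equation is exact, and the potential $F=\ln\bigl(x+\sqrt{x^2+y^2}\bigr)$ already reproduces all of $N$, including its $x$-free term $1/y$, which is exactly why the working rule would double-count that term by adding $\int (1/y)\,dy$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)

M = 1 / r
N = 1 / y - x / (y * r)

# The equation M dx + N dy = 0 is exact: M_y = N_x
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# Potential F = ln(x + sqrt(x^2 + y^2)); note F_y already equals
# ALL of N, including the x-free term 1/y:
F = sp.log(x + r)
assert sp.simplify(sp.diff(F, x) - M) == 0
assert sp.simplify(sp.diff(F, y) - N) == 0
print("F = ln(x + sqrt(x^2+y^2)) = c is the solution; adding ln(y) would be wrong")
```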
425
differential equations
Help Differential Equations
https://math.stackexchange.com/questions/1424768/help-differential-equations
<p>I need to solve this differential equation $$(2x+y)dx + (x-2y)dy=0$$ as an exact differential equation, and I know it's exact because I verified the equality $$ \frac{\partial(2x+y)}{\partial y} = 1$$ and $$\frac{\partial(x-2y)}{\partial x} = 1$$ so following the steps to solve this kind of equation I have: $$ x^2+g'(y) = x-2y $$ and $$g'(y) = \frac {x-2y}{x^2}$$ To be honest I have many doubts about the next steps, so if you can guide me I'll appreciate it.</p>
<p>The solution is $F(x,y) = C$ such that $$\frac{\partial F}{\partial x} = 2x + y$$ $$\frac{\partial F}{\partial y} = x - 2y$$</p> <p>Treating $y$ as constant, integrate the first equation with respect to $x$ $$F(x,y) = \int (2x + y) \, dx = x^2 + xy + g(y) $$</p> <p>Treating $x$ as constant, take the partial w.r.t. $y$ of the above expression $$ \frac{\partial F}{\partial y} = x + g'(y) = x - 2y $$</p> <p>This gives $$ g'(y) = -2y \Rightarrow g(y) =-y^2$$ $$ F(x,y) = x^2 + xy - y^2 $$</p> <p>so the solution is $$ x^2 + xy - y^2 = C $$</p>
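A quick symbolic verification of this solution (a sympy sketch, not part of the original answer): the partial derivatives of $F$ must reproduce the coefficients of $dx$ and $dy$.

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x**2 + x*y - y**2

# F_x must equal the dx coefficient, F_y the dy coefficient
assert sp.expand(sp.diff(F, x)) == 2*x + y
assert sp.expand(sp.diff(F, y)) == x - 2*y
print("F(x, y) = x^2 + x*y - y^2 = C solves the exact equation")
```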
426
differential equations
Tensor differential equations
https://math.stackexchange.com/questions/739261/tensor-differential-equations
<p>I am reading Ringstrom's book <em>The Cauchy problem in General Relativity</em>, but I don't really understand Chapter 12, which concerns tensor equations. I want to read some other material about this. Could anyone suggest some books about tensor differential equations?</p> <p>I want to learn something about the analysis of the existence and uniqueness of solutions to tensor equations.</p>
427
differential equations
Adjoint differential equations
https://math.stackexchange.com/questions/2083209/adjoint-differential-equations
<p>Consider the vector differential equations \begin{equation} \mathbf{x}^{\prime}=\mathbf{A}(t)\cdot\mathbf{x}\tag{1} \end{equation} and \begin{equation} \mathbf{y}^{\prime}=-\mathbf{A}^{\ast}(t)\cdot\mathbf{y},\tag{2} \end{equation} where $\mathbf{A}^{\ast}$ is the complex conjugate transpose of $\mathbf{A}$ and $\mathbf{x},\mathbf{y}$ are column vectors. It is well-known that (1) and (2) are said to be <em>adjoint</em> to one another. Further, we know that if $\mathbf{x}$ and $\mathbf{y}$ are solutions of (1) and (2), respectively, then \begin{equation} \mathbf{y}^{\ast}\cdot\mathbf{x}=\text{constant}.\notag \end{equation}</p> <p>Now, consider the higher-order (scalar) differential equations \begin{equation} \sum_{i=1}^{n}p_{i}(t)x^{(i)}(t)=0,\tag{3} \end{equation} where $p_{n}(t)\neq0$, and \begin{equation} \sum_{i=1}^{n}(-1)^{(i)}[p_{i}y]^{(i)}(t)=0.\tag{4} \end{equation} Also, (3) and (4) are said to be <em>adjoint</em> to one another. Further, if $x$ and $y$ are solutions of (3) and (4), respectively, then (see [1, (8.17) on pp. 67]) \begin{equation} \sum_{i=0}^{n}\sum_{j=0}^{i-1}(-1)^{j}x^{(i-j-1)}(t)[p_{i}z]^{(j)}(t)=\text{constant}.\label{hmfeq}\tag{*} \end{equation}</p> <p>The inner sum in \eqref{hmfeq} resembles the matrix multiplication formula. So, recognizing the similarities between systems and scalar equations, is it possible to obtain the result for higher-order equations by transforming them into vector equations? I could <strong>not</strong> establish any bridge here. I am experiencing problems in transforming (4) into a useful matrix representation.</p> <h2>References</h2> <p>[1]. P. Hartman, <em>Ordinary Differential Equations</em>, SIAM, 2002.</p>
<p>Okay, I believe, I have constructed the bridge between the adjoint of a vector equation and the adjoint of a scalar equation. I will explain it step by step here for those who might be interested.</p> <p><strong>Higher-Order Two-Term Scalar Equations</strong></p> <p>Now, consider the following higher-order two-term scalar differential equation \begin{equation} x^{(n)}(t)+p(t)x(t)=0,\label{hottceeq1}\tag{1} \end{equation} where $p$ is a complex-valued continuous function. Then, the matrix representation for \eqref{hottceeq1} is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &amp;1&amp;&amp;\\ &amp;&amp;\ddots&amp;\\ &amp;&amp;&amp;1\\ -p(t)&amp;&amp;&amp; \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\label{hottceeq2}\tag{2} \end{equation} Thus, we obtain \begin{equation} (-1) \left( \begin{array}{cccc} &amp;1&amp;&amp;\\ &amp;&amp;\ddots&amp;\\ &amp;&amp;&amp;1\\ -p(t)&amp;&amp;&amp; \end{array} \right)^{\ast} = \left( \begin{array}{cccc} &amp;&amp;&amp;\overline{p}(t)\\ (-1)&amp;&amp;&amp;\\ &amp;\ddots&amp;&amp;\\ &amp;&amp;(-1)&amp; \end{array} \right)\notag \end{equation} and \begin{equation} \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &amp;&amp;&amp;\overline{p}(t)\\ (-1)&amp;&amp;&amp;\\ &amp;\ddots&amp;&amp;\\ &amp;&amp;(-1)&amp; \end{array} \right) \cdot \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right),\label{hottceeq3}\tag{3} \end{equation} where we have constructed the unknown matrix from bottom to the top. This system gives us the adjoint equation \begin{equation} (-1)^{n}y^{(n)}(t)+\overline{p}(t)y(t)=0.\notag \end{equation} Note that, if $n=\text{even}$, then both \eqref{hottceeq2} and \eqref{hottceeq3} represent \eqref{hottceeq1}, i.e., \eqref{hottceeq1} is self-adjoint. 
Further, using the definition of the inner product in the first post, we get \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} -y^{(n-1)}\\ y^{(n-2)}\\ \vdots\\ (-1)^{n}y \end{array} \right)\right\rangle =\sum_{j=0}^{n-1}(-1)^{n-j}x^{(n-1-j)}\overline{y}^{(j)}=\text{constant}.\notag \end{equation}</p> <p><strong>Higher-Order Autonomous Scalar Equations</strong></p> <p>Next, consider the \begin{equation} x^{(n)}(t)+p_{1}x^{(n-1)}(t)+\cdots+p_{n}x(t)=0,\label{hoaseeq1}\tag{4} \end{equation} where $p_{1},p_{2},\cdots,p_{n}$ are complex numbers. Then, the matrix representation for \eqref{hoaseeq1} is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &amp;1&amp;&amp;\\ &amp;&amp;\ddots&amp;\\ &amp;&amp;&amp;1\\ -p_{n}&amp;-p_{n-1}&amp;\cdots&amp;-p_{1} \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\notag \end{equation} On the other hand, by using the adjoint coefficient matrix, we form the differential system \begin{equation} \begin{aligned}[] &amp;\left( \begin{array}{c} (-1)^{n-1}y^{(n-1)}+(-1)^{n-2}\overline{p}_{1}y^{(n-2)}+\cdots+\overline{p}_{n-1}y\\ (-1)^{n-2}y^{(n-2)}+(-1)^{n-3}\overline{p}_{2}y^{(n-3)}+\cdots+\overline{p}_{n-2}y\\ \vdots\\ y \end{array} \right)^{\prime}\\ =&amp; \left( \begin{array}{cccc} &amp;&amp;&amp;\overline{p}_{n}\\ (-1)&amp;&amp;&amp;\overline{p}_{n-1}\\ &amp;\ddots&amp;&amp;\vdots\\ &amp;&amp;(-1)&amp;\overline{p}_{1} \end{array} \right) \cdot \left( \begin{array}{c} (-1)^{n-1}y^{(n-1)}+(-1)^{n-2}\overline{p}_{1}y^{(n-2)}+\cdots+\overline{p}_{n-1}y\\ (-1)^{n-2}y^{(n-2)}+(-1)^{n-3}\overline{p}_{2}y^{(n-3)}+\cdots+\overline{p}_{n-2}y\\ \vdots\\ y \end{array} \right), \end{aligned} \notag \end{equation} which gives the scalar equation \begin{equation} 
(-1)^{n}y^{(n)}(t)+(-1)^{n-1}\overline{p}_{1}y^{(n-1)}(t)+\cdots+\overline{p}_{n}y(t)=0.\notag \end{equation} Further, putting $p_{0}:=1$ for simplicity, we have \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}\overline{p}_{n-1-j}y^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}\overline{p}_{n-2-j}y^{(j)}\\ \vdots\\ y \end{array} \end{array} \right)\right\rangle =\sum_{k=0}^{n-1}\sum_{j=0}^{k}(-1)^{j}p_{k-j}x^{(n-1-k)}\overline{y}^{(j)}=\text{constant}.\notag \end{equation}</p> <p><strong>Higher-Order Scalar Equations with Variable Coefficients</strong></p> <p>Finally, we consider \begin{equation} x^{(n)}(t)+p_{1}(t)x^{(n-1)}(t)+\cdots+p_{n}(t)x(t)=0,\notag \end{equation} where $p_{i}(t)$ ($i=1,2,\cdots,n$) is complex-valued and is $i$ times continuously differentiable function. Thus, the matrix representation is \begin{equation} \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right)^{\prime} = \left( \begin{array}{cccc} &amp;1&amp;&amp;\\ &amp;&amp;\ddots&amp;\\ &amp;&amp;&amp;1\\ -p_{n}(t)&amp;-p_{n-1}(t)&amp;\cdots&amp;-p_{1}(t) \end{array} \right) \cdot \left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right).\notag \end{equation} Then, the associated adjoint matrix is \begin{equation} - \left( \begin{array}{cccc} &amp;1&amp;&amp;\\ &amp;&amp;\ddots&amp;\\ &amp;&amp;&amp;1\\ -p_{n}(t)&amp;-p_{n-1}(t)&amp;\cdots&amp;-p_{1}(t) \end{array} \right)^{\ast} = \left( \begin{array}{cccc} &amp;&amp;&amp;\overline{p}_{n}(t)\\ (-1)&amp;&amp;&amp;\overline{p}_{n-1}(t)\\ &amp;\ddots&amp;&amp;\vdots\\ &amp;&amp;(-1)&amp;\overline{p}_{1}(t) \end{array} \right),\notag \end{equation} which yields the system \begin{equation} \begin{aligned}[] &amp;\left( \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} 
\right)^{\prime}\\ &amp;= \left( \begin{array}{cccc} &amp;&amp;&amp;\overline{p}_{n}(t)\\ (-1)&amp;&amp;&amp;\overline{p}_{n-1}(t)\\ &amp;\ddots&amp;&amp;\vdots\\ &amp;&amp;(-1)&amp;\overline{p}_{1}(t) \end{array} \right) \cdot \left( \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} \right), \end{aligned}\notag \end{equation} where we put $p_{0}(t):\equiv1$ for simplicity. Transforming this into the differential equation, we get \begin{equation} (-1)^{n}y^{(n)}(t)+(-1)^{n-1}[\overline{p}_{1}y]^{(n-1)}(t)+\cdots+[\overline{p}_{n-1}y]^{\prime}(t)+\overline{p}_{n}(t)y(t)=0.\notag \end{equation} Using the inner product in the first post gives us \begin{equation} \left\langle\left( \begin{array}{c} x\\ x^{\prime}\\ \vdots\\ x^{(n-1)} \end{array} \right), \left( \begin{array}{c} \begin{array}{c} \sum_{j=0}^{n-1}(-1)^{j}[p_{n-1-j}\overline{y}]^{(j)}\\ \sum_{j=0}^{n-2}(-1)^{j}[p_{n-2-j}\overline{y}]^{(j)}\\ \vdots\\ \overline{y} \end{array} \end{array} \right)\right\rangle =\sum_{k=0}^{n-1}\sum_{j=0}^{k}(-1)^{j}x^{(n-1-k)}[p_{k-j}\overline{y}]^{(j)}=\text{constant},\label{finaleq}\tag{#} \end{equation} which is the desired identity.</p> <p>I believe that \eqref{finaleq} and <a href="https://math.stackexchange.com/questions/2083209/adjoint-differential-equations#mjx-eqn-hmfeq">(*)</a> in the first post are equivalent.</p>
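For the vector case that started this discussion, the conservation of $\mathbf{y}^{\ast}\cdot\mathbf{x}$ is easy to check numerically. A numpy/scipy sketch (my own check, assuming for simplicity a constant random matrix $\mathbf{A}$, so that matrix exponentials give the solutions of both systems):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x0 = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y0 = rng.standard_normal(3) + 1j * rng.standard_normal(3)

# x' = A x   has solution  x(t) = exp(t A) x0       (A constant here)
# y' = -A* y has solution  y(t) = exp(-t A^H) y0
for t in np.linspace(0.0, 2.0, 9):
    x = expm(t * A) @ x0
    y = expm(-t * A.conj().T) @ y0
    # y^* x should equal its value at t = 0
    assert np.isclose(np.vdot(y, x), np.vdot(y0, x0))
print("y^* x is constant along adjoint solutions")
```

The identity follows since $\bigl(e^{-tA^{\ast}}\bigr)^{\ast} e^{tA} = e^{-tA} e^{tA} = I$.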
428
differential equations
Textbook recommendation for differential equations
https://math.stackexchange.com/questions/2215194/textbook-recommendation-for-differential-equations
<p>I'm in need of a textbook/workbook that covers differential equations with plenty of practice questions and exercises. A textbook with a particular focus on Nonlinear Ordinary Differential Equations and Dynamical Systems would be great.</p> <p>Thanks</p>
429
differential equations
Euler differential equations
https://math.stackexchange.com/questions/1554087/euler-differential-equations
<p>I have the following three equations $$ u'' - 2u = -2v$$ $$ u(0)=0 $$ $$ u'(1)=0 $$ and from these 3 equations I am trying to find $u(v)$. </p> <p>It looks to me like a "Cauchy-Euler differential equation - nonhomogeneous case", but I am not sure about that because it is not exactly in Cauchy form. Could you help me figure out $u(v)$?</p> <p>Thanks in advance</p>
<p>The solution to the homogeneous case is </p> <p>$u_c(v)=c_{1}e^{\sqrt{2}v}+c_{2}e^{-\sqrt{2}v}$ and by inspection, we observe that a particular solution is $u_p(v)=v$ so the solution to the general case is</p> <p>$u(v)=c_{1}e^{\sqrt{2}v}+c_{2}e^{-\sqrt{2}v}+v$. Then,</p> <p>$u(0)=0\Rightarrow c_1+c_2=0$ </p> <p>$u'(1)=0\Rightarrow \sqrt{2}c_{1}e^{\sqrt{2}}-\sqrt{2}c_{2}e^{-\sqrt{2}}+1=0$</p> <p>from which </p> <p>$c_1=\dfrac{-1}{\sqrt{2}e^{\sqrt{2}}+\sqrt{2}e^{-\sqrt{2}}}\ $</p> <p>and </p> <p>$c_2=\dfrac{1}{\sqrt{2}e^{\sqrt{2}}+\sqrt{2}e^{-\sqrt{2}}}$ </p> <p>Now $u(v)=u_c(v)+u_p(v)$</p>
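A quick numerical check of these constants (a numpy sketch, not part of the original answer), confirming that the resulting $u(v)$ satisfies both boundary conditions and the ODE itself:

```python
import numpy as np

s = np.sqrt(2.0)
denom = s * np.exp(s) + s * np.exp(-s)
c1, c2 = -1.0 / denom, 1.0 / denom

u   = lambda v: c1 * np.exp(s * v) + c2 * np.exp(-s * v) + v
du  = lambda v: s * c1 * np.exp(s * v) - s * c2 * np.exp(-s * v) + 1.0
d2u = lambda v: 2.0 * c1 * np.exp(s * v) + 2.0 * c2 * np.exp(-s * v)

# Boundary conditions from the question
assert abs(u(0.0)) < 1e-12
assert abs(du(1.0)) < 1e-12

# The ODE u'' - 2u = -2v at a few sample points
for v in [0.0, 0.3, 0.7, 1.0]:
    assert abs(d2u(v) - 2.0 * u(v) + 2.0 * v) < 1e-12
print("solution satisfies the ODE and both boundary conditions")
```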
430
differential equations
Solving Differential Equations w/ polynomials
https://math.stackexchange.com/questions/3997332/solving-differential-equations-w-polynomials
<p>I'm new to solving differential equations.</p> <p>How would we go about solving a differential equation like</p> <p><span class="math-container">$y'=-4+5y-y^2$</span> ?</p>
<p><span class="math-container">$$\frac{dy}{dt} = -4+5y-y^2$$</span></p> <p><span class="math-container">$$\frac{dy}{-4+5y-y^2}=dt$$</span></p> <p><span class="math-container">$$\int \frac{dy}{-4+5y-y^2} = \int dt$$</span></p> <p><span class="math-container">$$\frac{1}{3}\ln\left(\left|\frac{y-1}{y-4}\right|\right)=t+c_1$$</span></p> <p><span class="math-container">$$\ln\left(\left|\frac{3}{y-4} + 1\right|\right)=3t+c_2$$</span></p> <p>From here you can isolate for $y$, or just leave the solution implicit.</p>
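A numerical sanity check of the implicit solution (a scipy sketch; the initial condition $y(0)=2$ is my own illustrative choice, picked between the equilibria $y=1$ and $y=4$):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y' = -4 + 5y - y^2, starting between the equilibria y = 1 and y = 4
sol = solve_ivp(lambda t, y: -4 + 5 * y - y**2, (0.0, 1.0), [2.0],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0.0, 1.0, 50)
y = sol.sol(t)[0]

# Implicit solution: (1/3) ln|(y-1)/(y-4)| - t should stay constant
inv = np.log(np.abs((y - 1) / (y - 4))) / 3 - t
assert np.max(np.abs(inv - inv[0])) < 1e-6
print("implicit relation holds along the trajectory")
```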
431
differential equations
Terminology of differential equations
https://math.stackexchange.com/questions/2051288/terminology-of-differential-equations
<p>I am trying to get a better understanding of the terminology of differential equations. As I understand it, I can characterize differential equations along different categories:</p> <ul> <li>linear vs. nonlinear</li> <li>separable vs. nonseparable</li> <li>homogeneous vs. inhomogeneous</li> <li>ordinary vs. partial</li> </ul> <p>First of all, is it useful to think in these categories? Are there any relations among these categories? For example, is there any attribute in one category which rules out a particular attribute in another category?</p>
<p>The main reason students are introduced to these terms in an elementary course in differential equations is that they serve as a clue to what methods may be needed to solve the equation.</p> <p>When learning how to find the derivative of functions in a beginning calculus course one finds that one only needs a dozen or so differentiation rules to find the derivative of just about any type of commonly encountered function. But when the student takes a course in the integral calculus, they realize that it is not so simple as just learning a dozen or so rules. Evaluating integrals involves techniques and approaches as well as rules or formulas. One learns techniques such as partial fraction decomposition, trigonometric substitution, integration by parts, etc. and must gain some experience to understand when to attempt which method or technique. It is not so simple as just plugging into a formula.</p> <p>With differential equations it is more of the same. In fact the term "integral" is broadly defined as "the solution of a differential equation." </p> <p>In a course on differential equations students learn which methods are useful for solving certain frequently encountered types of equations. Giving those types a name makes it easier to associate the type of equation with the appropriate methods.</p>
432
differential equations
Distribution theory and differential equations.
https://math.stackexchange.com/questions/1177480/distribution-theory-and-differential-equations
<p>How does distribution theory play a role in solving differential equations? This question might seem very general; I will try to explain, so please bear with me.</p> <p>I understand that distributions make it possible to differentiate functions whose derivatives do not exist in the classical sense, and that any locally integrable function has a distributional derivative. In terms of differential equations, if the coefficients of a differential operator are piecewise continuous, then we make use of distributions (how and why does this work?).</p> <p>I am more interested in their relation with Green's functions. Please help me understand how I can use distribution theory for solving differential equations. </p>
<p>The most basic application is the use of the fundamental solution (also known as the Green's function) to solve inhomogeneous linear problems. When $*$ is convolution and $\delta$ is the Dirac delta centered at zero, $\delta * f=f$ for a wide class of $f$. On the other hand, if $L$ is a linear differential operator, then $Lu * f=L(u*f)$. (Or at least, this is definitely true when $u,f$ are smooth.) This means that if you could find a solution to $Lu=\delta$, then you could convolve it with $f$ on both sides to get $L(u*f)=f$. So $u*f$ is the solution to $Lv=f$ if $u$ is the solution to $Lw=\delta$. This $u$ is called the fundamental solution or Green's function for the operator $L$.</p> <p>Duhamel's principle lets us extend this to time-dependent problems, provided the spatial differential operator is constant (and again linear). Cf. <a href="http://en.wikipedia.org/wiki/Duhamel%27s_principle#General_considerations" rel="noreferrer">http://en.wikipedia.org/wiki/Duhamel%27s_principle#General_considerations</a></p>
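A small numerical illustration of the first paragraph (a numpy/scipy sketch; the operator $L = d/dt + a$ and the right-hand side $f(t)=\sin t$ are my own toy choices, not from the answer). The fundamental solution of $L$ is $u(t)=e^{-at}H(t)$, and discretizing the convolution $u*f$ reproduces the solution of $Lv=f$:

```python
import numpy as np
from scipy.signal import fftconvolve

a = 1.5
dt = 1e-4
t = np.arange(0.0, 10.0, dt)

w = np.exp(-a * t)   # fundamental solution of L = d/dt + a, on t >= 0
f = np.sin(t)        # sample right-hand side

# Discrete (causal) convolution approximating (w * f)(t)
v = fftconvolve(w, f)[: t.size] * dt

# Exact solution of v' + a v = sin(t) with v(0) = 0
exact = (a * np.sin(t) - np.cos(t) + np.exp(-a * t)) / (a**2 + 1)
assert np.max(np.abs(v - exact)) < 5e-3
print("convolving with the Green's function solves L v = f")
```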
433
differential equations
Can these equations be considered as differential equations?
https://math.stackexchange.com/questions/2879351/can-these-equations-be-considered-as-differential-equations
<p>Consider a differential equation with a term containing $y(x_0)$, for example $$y'' - 2y' + y = y(x_0)$$ $x_0 \in \mathbb{R}$ is a constant. My question is, does such equations fall under the category of differential equations? I have never studied any equation with such a term. If its a differential equation, then $y(x_0)$ can be considered a constant coefficient?</p>
<p>Although the term $y(x_0)$ depends on the solution $y$, it is still a constant, since it does not depend on $x$.</p> <p>Let's try to solve your equation $y'' - 2y' + y = C,$ where $C=y(x_0).$ One particular solution is $y(x) = C.$ The solutions of the homogeneous equation are $y(x) = (Ax+B) e^x.$ Therefore the general solutions are $$y(x) = (Ax+B) e^x + C.$$</p> <p>The self-referential condition $y(x_0) = C$ requires $Ax_0+B=0$, i.e. $B=-Ax_0$. Thus the solutions of the original equation form the two-parameter family $$y(x) = A(x-x_0)e^x + C,$$ where $A$ and $C = y(x_0)$ are arbitrary constants.</p>
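As a quick check, one can verify with sympy that every function of the form $y = A(x-x_0)e^x + C$ satisfies $y''-2y'+y=C$ together with $y(x_0)=C$ (a hedged sketch; the symbol names are my own):

```python
import sympy as sp

x, x0, A, C = sp.symbols('x x0 A C')
y = A*(x - x0)*sp.exp(x) + C          # candidate two-parameter family

# Residual of y'' - 2y' + y = C
residual = sp.diff(y, x, 2) - 2*sp.diff(y, x) + y - C
print(sp.simplify(residual))          # 0: the equation holds for all A, C
print(sp.simplify(y.subs(x, x0)))     # C: so the constant really equals y(x0)
```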
434
differential equations
Deriving Stochastic differential equations
https://math.stackexchange.com/questions/2370518/deriving-stochastic-differential-equations
<p>I am having a difficulty in deriving stochastic differential equations from geometric Brownian motion dynamics.</p> <p>Assume S follows the geometric Brownian motion dynamics, dS = μSdt + σSdZ, with μ and σ constants. Derive the stochastic differential equation satisfied by y = 2S, y = S^2, y=e^S</p> <p>Doing any of these examples will help me. Thanks in advance.</p>
<p>It seems like you want us to do your homework; instead, here I will explain how to do it.</p> <p>The key is Ito's formula: if $ f \in \mathcal{C}^2$ then $$f(S_t) = f(S_0) + \int_0^t f'(S_s)dS_s + \frac{1}{2} \int_0^t f''(S_s) d[S]_s$$</p> <p>And so, considering the differential form: $$ df(S_t) = f'(S_t)dS_t + \frac{1}{2} f''(S_t) d[S]_t $$ </p> <p>Where of course $[S]_t$ denotes the quadratic variation of $S$. In your case: $[S]_t= \int_0^t (\sigma S_s)^2 ds $, assuming $Z_t$ is a Brownian motion. </p> <p>You can apply this formula to $f(x)=2x$, $f(x)=x^2$, $f(x)=e^x$, etc. </p> <p>Can you finish the exercise?</p>
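Here is a small sympy sketch of the pattern (the helper name `ito` is my own): for $dS=\mu S\,dt+\sigma S\,dZ$, Ito's formula gives the drift and diffusion coefficients of $f(S)$ directly from $f'$ and $f''$.

```python
import sympy as sp

x, mu, sigma = sp.symbols('x mu sigma', positive=True)

def ito(f):
    # For dS = mu*S dt + sigma*S dZ, Ito's formula reads
    # df(S) = (mu*S*f'(S) + (1/2)*sigma^2*S^2*f''(S)) dt + sigma*S*f'(S) dZ
    drift = mu*x*sp.diff(f, x) + sp.Rational(1, 2)*sigma**2*x**2*sp.diff(f, x, 2)
    diffusion = sigma*x*sp.diff(f, x)
    return sp.expand(drift), sp.expand(diffusion)

print(ito(2*x))        # d(2S)   = 2*mu*S dt + 2*sigma*S dZ
print(ito(x**2))       # d(S^2)  = (2*mu + sigma^2)*S^2 dt + 2*sigma*S^2 dZ
print(ito(sp.exp(x)))  # d(e^S): drift and diffusion both carry a factor e^S
```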
435
differential equations
Differential Equations background
https://math.stackexchange.com/questions/247282/differential-equations-background
<p>What are the prereqs for differential equations? Do you need to know integral calculus too, and if so, to what extent? I want to learn about DE's as quick as possible but I'm not sure if I'm ready yet, my differential calculus is up to par I believe but my integral calculus is pretty weak, is that going to be a problem?</p>
<p>When solving DEs we can often leave the integrals in the solutions unsimplified, since simplifying the integrals is not the key point when solving DEs, but that doesn't mean we can completely ignore integral calculus, because some DEs require integral calculus to solve. For example:</p> <ol> <li><p>Some types of DEs require integration on both sides when solving, e.g. the DEs of the type <a href="http://eqworld.ipmnet.ru/en/solutions/ode/ode0328.pdf" rel="nofollow">http://eqworld.ipmnet.ru/en/solutions/ode/ode0328.pdf</a>.</p></li> <li><p>When solving the DEs of the type $\sum\limits_{k=0}^n(a_kx+b_k)y^{(k)}(x)=0$ without the aid of known special functions, quite a lot of the cases can be solved by assuming an integral kernel of the form $y=\int_Ce^{xs}K(s)~ds$, and some of the steps involve <a href="http://en.wikipedia.org/wiki/Differentiating_under_the_integral_sign" rel="nofollow">differentiation under the integral sign</a>.</p></li> </ol> <p>So you have the freedom to ignore integral calculus when solving DEs, but you will be at a disadvantage with some types of DEs.</p>
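To make the first point concrete, here is a minimal sympy sketch (the chosen equation is my own illustration, not from the linked page): solving a separable DE such as $y\,y'=x$ is essentially an integration exercise, integrating $y\,dy = x\,dx$ on both sides.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Separable equation y*y' = x, written as y' = x/y: solving it amounts to
# integrating y dy = x dx, giving y^2/2 = x^2/2 + const
ode = sp.Eq(y(x).diff(x), x / y(x))
sols = sp.dsolve(ode)
sols = sols if isinstance(sols, list) else [sols]
for s in sols:
    print(s)  # the two branches y = +/- sqrt(x^2 + const)
```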
436
differential equations
Linearity of system of differential equations?
https://math.stackexchange.com/questions/4654977/linearity-of-system-of-differential-equations
<p>I am learning how to solve differential equations (ordinary and partial) and why they are so important for physics. One thing I have noticed so far is that we know so little about the nature of the solutions of a differential equation; only very few forms of differential equations are solvable, and even fewer forms of systems of differential equations are solvable.</p> <p>I have always wondered: small changes in the starting conditions of a system cause big changes only if the system isn't linear. Can small changes in the coefficients of the terms of a system of differential equations cause huge changes in the solution? Is there any related research going on?</p>
<p>Indeed, I have never used an analytical solution of differential equations in practice, or at least for engineering applications. As you pointed out, only the simplest problems have known solutions, and these are nowhere near the complex problems we face in the real world.</p> <p>Most of the time, engineers resort to empirical or numerical solutions. Empirical solutions come from experiments, and numerical solutions are obtained through simulations such as the Finite Element Method (FEM), Finite Volume Method (FVM), etc. As my mentor told me, if we waited for analytical solutions, we wouldn't have airplanes for another couple of centuries.</p> <p>As for your question, there is plenty of research in that field, e.g. sensitivity analysis, stochastic models, etc. For example, one may study how much a change in a structural beam's thickness (related to a coefficient of the DE) affects the overall structure.</p>
437
differential equations
Linear equation and linear differential equations
https://math.stackexchange.com/questions/1420439/linear-equation-and-linear-differential-equations
<p>I remember noting from an algebra class that $x$ and $y$ of a linear equation neither divide nor multiply with each other, which is somewhat clear from the forms of linear equations:</p> <p>General form of a linear equation:</p> <p>$Ax + By + C = 0$</p> <p>Slope intercept form:</p> <p>$y = mx + b$</p> <p>Is this also true for <em>linear differential equations</em>?</p> <p>The definition goes like this: "A differential equation is said to be linear if the dependent variable and its differential coefficients (derivatives) occur only in the first degree and are <em>not multiplied</em> together."</p> <p>${dy \over dx} = {Py + Q}$ </p> <p>Where $P$, $Q$ are functions of $x$ only. What exactly does this mean?</p> <p>Does the algebraic linear equation have something to do with the linear differential equation?</p>
<p>It's saying that if $x$ and $y$ (or their derivatives) are multiplied together in any way, it's not considered a linear differential equation, because it's not solvable in the usual ways that linear ODEs are.</p> <p>This relates to ordinary linear equations in that if you have an equation where $x$ and $y$ are multiplied or otherwise modify each other in a way that prevents strict separation into first-degree terms, they do not have a linear relationship. For example, the plot of $y=1/x$ is not a line, while that of $y=x$ is.</p>
438
differential equations
Transform Differential Equation to System of Differential Equations
https://math.stackexchange.com/questions/2532783/transform-differential-equation-to-system-of-differential-equations
<p>So this problem may be really simple and there's one small thing I'm missing, but I'm just stumped. </p> <p>Find 4 solutions of the ODE $$y^{(4)} − 4 = 0$$ by transforming the ODE into a system of 4 first order differential equations.</p> <p>I've had no problem solving similar equations, but the lack of any other $y$ value and just a constant has left me confused.</p>
<p><strong>HINT</strong></p> <p>Let the variables be $y, x = y', w = y'', v = y'''$ and introduce equations like $x=y', w=x'$, etc.</p>
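Following the hint, a minimal numerical sketch (variable names are my own): writing $u=(y,\,y',\,y'',\,y''')$ turns $y^{(4)}=4$ into the first-order system $u' = (y', y'', y''', 4)$; with zero initial data the exact solution is $y=x^4/6$, which the integrator reproduces.

```python
from scipy.integrate import solve_ivp

def rhs(x, u):
    # u = (y, y', y'', y''');  y'''' - 4 = 0  =>  y'''' = 4
    y, yp, ypp, yppp = u
    return [yp, ypp, yppp, 4.0]

# Zero initial data: y(0) = y'(0) = y''(0) = y'''(0) = 0
sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
print(sol.y[0, -1], 2.0**4 / 6)  # numeric y(2) vs exact y = x^4/6
```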
439
differential equations
Logistic Equation - Differential Equations
https://math.stackexchange.com/questions/1502909/logistic-equation-differential-equations
<p>I have a question about logistic equations in differential equations and setting up the problem accordingly. This was the question presented:</p> <blockquote> <p>The number $N(t)$ of people in a community who are exposed to a particular advertisement is governed by the logistic equation. Initially, $N(0) = 500$, and it is observed that $N(1) = 1000$. Solve for $N(t)$ if it is predicted that the limiting number of people in the community ho will see the advertisement is 50,000</p> </blockquote> <p>So from the question, we know that the differential equation we are going to solve is $$\frac{dN}{dt} = N(a-bN)$$</p> <p>My question is why are logistic equations setup such that $$\frac{dN}{dt} = kN(50000-N)$$ and not something like $$\frac{dN}{dt} = N(500-1000N)$$ or something similar?</p> <p>Perhaps the root of the question is I don't have a clear understanding of what the logistic equation is, but any help in understanding this would be greatly appreciated. Thank you so much in advanced for your help!</p>
440
differential equations
Set of coupled partial differential equations
https://math.stackexchange.com/questions/1653992/set-of-coupled-partial-differential-equations
<p>I've read that the Einstein equation is a set of 10 coupled partial differential equations. I know what a partial differential equation is, but I don't know what a set of coupled partial differential equations is. Please shed some light into this question.</p>
<p>A set of equations like $$\frac{\partial f}{\partial x} + \frac{\partial f}{\partial y}=a$$ $$\frac{\partial g}{\partial x} + \frac{\partial g}{\partial y}=b$$ where $g$ &amp; $f$ are the dependent variables, $x$ &amp; $y$ the independent variables, and $a$ &amp; $b$ constants,<br> is a system of partial differential equations which are not coupled, which means that one could be solved without considering the second one. The solution for $f$ is independent of $g$. </p> <p>A set of equations like $$\frac{\partial f}{\partial x} + \frac{\partial g}{\partial x} + \frac{\partial f}{\partial y}=a$$ $$\frac{\partial g}{\partial x} + \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y}=b$$ is a system of coupled partial differential equations, which means that the solution for one dependent variable also depends on the other dependent variable, and thus they should be solved together in order to obtain a correct solution. </p>
441
differential equations
A method of solving differential equations - terminology
https://math.stackexchange.com/questions/2419031/a-method-of-solving-differential-equations-terminology
<p>I have an iterative method of solving differential equations. I would like to emphasize that the method returns <strong>approximate solutions</strong> of differential equations. How should I call this method? I have the following ideas:</p> <ol> <li>An iterative approximate method of solving differential equations</li> <li>An iterative approximation method of solving differential equations</li> <li>An iterative method of solving differential equations approximately</li> </ol> <p>Which one is correct in English? </p>
442
differential equations
how to start studing differential equations
https://math.stackexchange.com/questions/4247702/how-to-start-studing-differential-equations
<p>Hello everyone, I have trouble understanding differential equations. I am in the second year of a programming engineering degree, and this semester I have differential equations. My level in math is quite low, but I am trying to do my best. My question is: what subjects should I understand before taking a differential equations course?</p>
443
differential equations
Are differential-algebraic equations more expressive than ordinary differential equations?
https://math.stackexchange.com/questions/3571788/are-differential-algebraic-equations-more-expressive-than-ordinary-differential
<p>I am interested in systems of differential-algebraic equations (DAE), i.e., systems of equations of the following form <span class="math-container">$$\dot{x} = f(x,y,t)\\0 = g(x,y,t)$$</span></p> <p>I am confused about their relation to ordinary differential equations (ODE): Are there functions that can be described by DAEs but not by ODEs? I.e., functions that are a solution for some DAE but not for any ODE?</p>
<h2>DAE vs. ODE</h2> <p>Almost any DAE system can be reduced to an ODE system. As this requires derivatives of the equation, the equations themselves have to be differentiable to the required order.</p> <p><span class="math-container">$\newcommand{\pd}[2]{\frac{\partial#1}{\partial#2}}$</span> In your example, you could, as per comment, solve the second equation for <span class="math-container">$y$</span> and insert into the first one. This is the same as taking the derivative of the second equation to get a differential equation for <span class="math-container">$y$</span>, <span class="math-container">$$ \pd{g}{t}(x,y,t)+\pd{g}{x}(x,y,t)\cdot f(x,y,t)+\pd{g}{y}(x,y,t)\cdot \dot y=0. $$</span> As is visible, and also demanded by the implicit function theorem, <em>this only works if <span class="math-container">$\pd{g}{y}$</span> is invertible.</em> If that is not the case, further derivatives of the equations may give rise to a complete ODE system, the maximal number of necessary differentiations of any equation is the index of the DAE.</p> <p>Consequently, <em>any ODE system is an index-0 DAE system.</em></p> <p>This process towards an ODE may fail, either because the equations are not smooth enough like in <span class="math-container">$x_1'=x_2,~ x_1=q$</span>, when <span class="math-container">$q$</span> is not differentiable. But also the process of index determination can fail to stop, that is, there is no differentiation order at which one can extract explicit equations for the highest order derivatives. In other words, there may not be any consistent system state, consistent with all equations and their derivatives.</p> <hr> <h2>Usefulness of DAE</h2> <p>Especially physical systems can be encoded more closely to the physical description, the first principles, using DAE systems. 
This enables software like Modelica where large systems are constructed from basic building blocks having an inner dynamic of their state and pins/variables connecting to the outside and other building blocks.</p> <p>For instance, consider the pendulum as a mechanical system restrained to a circle, <span class="math-container">\begin{align} \ddot x+\lambda x&amp;=0 \\ \ddot y + g/m + \lambda y &amp;= 0 \\ x^2+y^2-r^2&amp;=0 \end{align}</span> or the corresponding first order system. While the algebraic equation is solvable for one of the variables, this will not give a dynamical equation for the Lagrangian multiplier <span class="math-container">$\lambda$</span>; one needs 2 derivatives of the constraint to eliminate <span class="math-container">$\lambda$</span> and <span class="math-container">$3$</span> derivatives of the equations for an ODE for <span class="math-container">$\lambda$</span>. </p> <p>This system directly expresses the physical situation in Cartesian coordinates, containing the gravity force as the gradient of the potential and the gradient of the surface with its multiplier as virtual force. While mathematically simpler, the transformation to polar coordinates as in the reduced pendulum equation loses this direct physical context.</p>
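The index reduction for the pendulum can be sketched numerically (an illustration with assumed values $g/m=9.81$, $r=1$, not a production DAE solver): differentiating the constraint $x^2+y^2=r^2$ twice gives $\lambda=(\dot x^2+\dot y^2-(g/m)\,y)/r^2$, after which the system is a plain ODE; the price is that the constraint is only preserved approximately, with a small drift.

```python
from scipy.integrate import solve_ivp

g_over_m, r = 9.81, 1.0   # assumed illustrative values

def rhs(t, s):
    x, y, vx, vy = s
    # Lambda obtained by differentiating x^2 + y^2 = r^2 twice (index reduction):
    # -lam*r^2 - (g/m)*y + vx^2 + vy^2 = 0
    lam = (vx**2 + vy**2 - g_over_m * y) / r**2
    return [vx, vy, -lam * x, -g_over_m - lam * y]

s0 = [r, 0.0, 0.0, 0.0]   # initial state consistent with the constraint
sol = solve_ivp(rhs, (0.0, 5.0), s0, rtol=1e-10, atol=1e-12)
drift = abs(sol.y[0]**2 + sol.y[1]**2 - r**2).max()
print(drift)  # small but nonzero: the constraint is only enforced implicitly
```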
444
differential equations
Relation between differential equations and difference equations
https://math.stackexchange.com/questions/3254891/relation-between-differential-equations-and-difference-equations
<p>While going over problems in differential equations and difference equations, I realized that most of the techniques that we use to solve them are very similar. This has led me to wonder - is there a deeper theory that unifies both of them?</p> <p>I tried to look around, and all that I could find is something called Time-Scale Calculus. How successful is this theory, and are there any other theories that try to unify Differential and Difference Equations?</p> <p>(PS: By successful, I mean that whether it has been able to explain why exactly Difference and Differential Equations have similar methods of solving them, and also what exactly differentiates them)</p>
445
differential equations
Differential Equations Recipes
https://math.stackexchange.com/questions/2253813/differential-equations-recipes
<p>I have many questions I'd like to ask today. I'm currently studying for my A-levels which is a qualification mainly based in England. I'm taking A-level Further Maths, which involves the study of First and Second Order Differential Equations. It seems we are just being taught to apply recipes to these types of equations and are given no intuition as to why these things even work? This isn't just for teaching in England, I've watched many American lectures and lecture notes and the same thing is done. </p> <p>Let's take a look at <strong>Non-Linear Separable First Order Differential Equations.</strong> </p> <p>We have defined these equations to take the form $$ N(y)\frac{dy}{dx} = M(x)$$</p> <p>We are taught to solve these equations by multiplying $\ dx $ to both sides and then integrate by their respective variables. Like so: </p> <p>$$ \int N(y)\ dy = \int M(x) \ dx $$</p> <p>I have just accepted this for some time, but I'd like to get to the bottom of wtf is going on. We are taught from Year 12/Calculus 1 days that $\frac{dy}{dx} $ is not a ratio but can be treated like one in many cases. Ok... then could you explain really what we just did when solving the above differential equation? I can accept the fact that we can treat $\frac{dy}{dx}$ as a ratio but it's not really a ratio. So, can we be told what we are really doing? Is it some kind of hidden secret? I just want to know why we multiplied both sides by $\ dx$, what are we actually doing? </p> <p>I can assume that the reason we are not told is because it involves higher level math, that someone taking Calculus would not understand. But will actual reasoning behind why these recipes work be exposed to us during University math courses? </p>
446
differential equations
Single approach to solving differential equations
https://math.stackexchange.com/questions/2422800/single-approach-to-solving-differential-equations
<p>I am an engineer who uses mathematics for applications. I have learnt how to solve differential equations, both ordinary and partial. My impression has been that solving differential equations is all about knowing a bag of diverse tricks: separation of variables, reduction in order, power series method, etc.</p> <p>I would like to know if there is a single approach that would work for differential equations. I don't mind if the approach is tedious or if it involves successive approximations. All I wish for is that the procedure of solving differential equations be mechanical in nature, and applicable to widest possible variety of differential equations. I first thought that writing unknown function as Taylor series and successively finding the unknown coefficients is a very general, although tedious (which is alright with me), approach to solving differential equations. However I later learnt that it works only if the expansion is carried about a regular point, otherwise it gives nonsensical answer.</p> <p>Recently I have begun studying one-parameter group theoretic method for solving differential equations, and the author of a book promises it is a very general method. I wished to ask your opinion regarding this and whether there are any other general approaches which could be learnt with minimum prerequisites. Thanks in advance for any advice.</p>
<p>It turns out that this question has been asked in one form or another by many people through the years, and it's complicated. </p> <p>First, it depends on what is meant by solving the equation. Differential equations can describe a vast range of phenomena, from turbulent flow to crystal growth to dynamic plasticity. The "closed form" solutions that can be written down explicitly turn out to be inadequate to describe all of that.</p> <p>A natural next step is to look for series solutions, but as you noted, many equations develop irregularities, for instance shock waves, which cannot really be described with series easily. People have tried things like shock tracking that handle these singularities separately, but it is hard. </p> <p>Another approach is using Lie groups, which you have alluded to. This does unify a lot of existing methods, but it is still essentially limited to situations where a tractable closed form solution is available. </p> <p>The most common modern approach is not to expect a closed form or series approximation in general (although this is sometimes possible and useful), but instead to look for either useful properties of the solution (e.g. existence, bounds on derivatives, etc.) or to evaluate the solution at different points via numerical simulation. Another perspective on this technique is that numerical discretizations provide the sequential approximation you are looking for. </p> <p>A bit disappointing, but that is the state of things. Lie group methods are cool though. Definitely study them :)</p>
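A minimal example of the "evaluate the solution at different points" approach (my own illustration): forward Euler for $y'=y$, $y(0)=1$, whose value at $t=1$ converges at first order to $e$ as the step size shrinks.

```python
import math

def euler(f, y0, t0, t1, n):
    # Forward Euler: y_{k+1} = y_k + h * f(t_k, y_k), the simplest discretization
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

approx = euler(lambda t, y: y, 1.0, 0.0, 1.0, 100000)
print(approx, math.e)  # error is O(h), roughly e*h/2 here
```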
447
differential equations
Stability of delay differential equations
https://math.stackexchange.com/questions/2768028/stability-of-delay-differential-equations
<p>I have encountered a 2-dimensional system of differential equations. One of them is a <a href="https://en.wikipedia.org/wiki/Delay_differential_equation" rel="nofollow noreferrer">delay differential equation</a> (DDE). Can anybody explain to me how to analyze the stability of a DDE? </p>
<p>In a nutshell, to analyse the stability of a DDE <span class="math-container">\begin{equation} \dot{x}=A_0 x(t)+\sum_{i=1}^m A_i x(t-\tau_i), \end{equation}</span> where <span class="math-container">$A_0$</span> and <span class="math-container">$A_i$</span> are <span class="math-container">$n\times n$</span> matrices, you insert a solution of the form <span class="math-container">$x=e^{\lambda t} v$</span>. This leads to a characteristic equation <span class="math-container">\begin{equation} \rm{det}\, \Delta(\lambda)=0 \end{equation}</span> where <span class="math-container">\begin{equation} \Delta(\lambda)=\lambda I-A_0-\sum_{i=1}^m A_i e^{-\lambda \tau_i}. \end{equation}</span> The characteristic equation is a transcendental equation with infinitely many characteristic roots. The stability of the DDE is set by the spectral abscissa, that is, the root with the largest real part. Once that root crosses the imaginary axis (<span class="math-container">$\rm{Re}(\lambda)&gt;0$</span>), the system is unstable. In general, the characteristic roots need to be determined numerically. There are standard tools for this, for instance the <a href="https://twr.cs.kuleuven.be/research/software/delay/ddebiftool.shtml" rel="nofollow noreferrer">DDE-BIFTOOL</a>.</p> <p>Have a look at</p> <ul> <li>Michiels and Niculescu - Stability, Control, and Computation for Time-Delay Systems: An Eigenvalue-Based Approach</li> <li>Smith - An Introduction to Delay Differential Equations with Applications to the Life Sciences</li> <li>Breda, Maset, Vermiglio - Stability of Linear Delay Differential Equations A Numerical Approach with MATLAB</li> </ul>
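For the scalar case $\dot x=-a\,x(t-\tau)$ the rightmost characteristic root can even be written in closed form via the Lambert W function: $\lambda=-a e^{-\lambda\tau}$ rearranges to $\lambda\tau\,e^{\lambda\tau}=-a\tau$, so $\lambda=W_0(-a\tau)/\tau$, with the well-known stability boundary at $a\tau=\pi/2$. A small sketch (assuming scipy is available; the standard result that the principal branch gives the rightmost root is taken for granted here):

```python
from scipy.special import lambertw

def rightmost_root(a, tau):
    # Characteristic equation of x'(t) = -a*x(t - tau):
    #   lambda = -a*exp(-lambda*tau)
    #   => lambda*tau*exp(lambda*tau) = -a*tau  =>  lambda = W0(-a*tau)/tau
    return lambertw(-a * tau, 0) / tau

print(rightmost_root(1.0, 1.0))  # Re < 0: stable, since a*tau = 1 < pi/2
print(rightmost_root(2.0, 1.0))  # Re > 0: unstable, since a*tau = 2 > pi/2
```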
448
differential equations
Power series solution for differential equations
https://math.stackexchange.com/questions/3890164/power-series-solution-for-differential-equations
<p>Is the solution to differential equations using power series applicable only to homogeneous differential equations? I mean equations of the form: <span class="math-container">$$a_2 \phi ''(x)+a_1 \phi '(x)+ \phi(x) = 0$$</span></p>
<p>If you have a source term <span class="math-container">$f$</span> that admits a power series decomposition, you can apply the power series method to find a solution of your equation.</p> <p>Also, this method extends to differential equations of any order. It also works if your coefficients are polynomials in <span class="math-container">$x$</span>, i.e. of the form <span class="math-container">$a_i x^k$</span> for any <span class="math-container">$k$</span>.</p> <p>Finally, I would add that you need to be careful with this method if you have a non-linear O.D.E.</p>
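To illustrate the first point, a short sketch (my own example): for $y''+y=x$ with $y(0)=y'(0)=0$, matching coefficients gives the recurrence $(n+2)(n+1)\,a_{n+2}+a_n=f_n$, where $f_n$ are the series coefficients of the source term; the result reproduces the expansion of the exact solution $y=x-\sin x=\tfrac{x^3}{6}-\tfrac{x^5}{120}+\dots$

```python
from fractions import Fraction

N = 10
a = [Fraction(0)] * (N + 1)   # a0 = y(0) = 0, a1 = y'(0) = 0
f = [Fraction(0)] * (N + 1)
f[1] = Fraction(1)            # source term x = 0 + 1*x + 0*x^2 + ...

# y'' + y = f  =>  (n+2)(n+1)*a_{n+2} + a_n = f_n
for n in range(N - 1):
    a[n + 2] = (f[n] - a[n]) / ((n + 1) * (n + 2))

print(a[:6])  # coefficients of x - sin x: [0, 0, 0, 1/6, 0, -1/120]
```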
449
differential equations
Differential Equations without Analytical Solutions
https://math.stackexchange.com/questions/210346/differential-equations-without-analytical-solutions
<p>In many talks, I have heard people say that the differential equation they are interested in has no analytical solution. Do they really mean that? That is:</p> <blockquote> <p>Can you prove a differential equation has no analytical solution? </p> </blockquote> <p>I suspect what they mean is that no one has been able to derive one, but I could be wrong. I also have a question related to the former case.</p> <blockquote> <p>What are some simple examples of differential equations with no known analytical solution?</p> </blockquote> <p>The differential equations courses at my university are method based (identify the DE and use the method provided) which is completely fine. However, I'd like to have some examples which look easy (or look similar to ones for which the given methods will work) in order to show students that not all differential equations are so easily solved.</p> <hr> <p><strong>Added later:</strong> Taking the comments into account, I suppose the type of differential equations I am looking for in the second question are ones which, at this point in time, can only be solved using numerical methods (which, as Emmad Kareem points out, would be good motivation for learning such methods).</p> <hr> <p><strong>The kind of thing I'm looking for:</strong> I was talking to my friend who does Fluid Mechanics and he suggested the Blasius equation $$f''' + \frac{1}{2}ff'' = 0.$$ Apart from $f(x) = ax + b$, there are no known (as far as he knows) analytical solutions.</p>
<p>Take the initial value problem $$y'=\cases{x\bigl(1+2\log|x|\bigr)\quad &amp;$(x\ne0)$ \cr 0&amp;$(x=0)$\cr}\ ,\qquad y(0)=0\ .$$ This example obviously fulfills the assumptions of the existence and uniqueness theorem, so there is exactly one solution. As is easily checked this solution is given by $$y(x)=\cases{x^2\&gt;\log|x|\quad&amp;$(x\ne0)$\cr 0&amp;$(x=0)$\cr}\ .$$ This function is not analytic in any neighborhood of $x=0$.</p>
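A sympy check of this example on the branch $x>0$ (my own sketch): the candidate solves the ODE there, while $y''=2\log x+3$ is unbounded as $x\to 0^+$, so $y$ is not even twice differentiable at $0$, let alone analytic.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = x**2 * sp.log(x)   # the x > 0 branch of the claimed solution

# The ODE y' = x*(1 + 2*log x) holds identically on x > 0
residual = sp.diff(y, x) - x*(1 + 2*sp.log(x))
print(sp.simplify(residual))

# y'' = 2*log(x) + 3 blows up at 0+, ruling out a power series at x = 0
print(sp.limit(sp.diff(y, x, 2), x, 0, '+'))
```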
450
differential equations
SIR model differential equations
https://math.stackexchange.com/questions/2834514/sir-model-differential-equations
<p>I'm trying to solve the SIR model differential equations by separation of variables to get $S$, $I$, $R$ as functions of time. For example, I solved the infected differential equation as follows: $$ dI/dt= BIS-YI, \\ dI/dt = I(BS-Y), \\ dI/I= (BS-Y)\, dt, \\ \int dI/I = \int (BS-Y) \, dt \implies \ln{I} = BSt-Yt+C $$ Is that integration by separation of variables possible? (Please answer)</p>
<p>No, you can't do that. The integral $\int (BS-Y) \, dt$ is not equal to $(BS-Y)t+C$, since $S$ depends on $t$.</p>
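Because $S$ depends on $t$, the three equations have to be treated as a coupled system; the standard route is numerical. A minimal sketch (with assumed illustrative rates $\beta=0.5$, $\gamma=0.1$) that also confirms the conservation of $S+I+R$:

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.5, 0.1   # assumed illustrative rates

def sir(t, u):
    S, I, R = u
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

u0 = [0.99, 0.01, 0.0]   # fractions of the population
sol = solve_ivp(sir, (0.0, 100.0), u0, rtol=1e-8, atol=1e-10)
total = sol.y.sum(axis=0)
print(total.min(), total.max())  # S + I + R stays 1 for all t
```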
451
differential equations
Simple Differential Equations
https://math.stackexchange.com/questions/563820/simple-differential-equations
<p>I want to solve the two following differential equations:</p> <p>(1) $ f''(t) = 3f'(t) - f(t)$</p> <p>(2) $ f''(t) = 2f'(t) - f(t)$</p> <p>I chose the approach $f(t) = e^{\lambda t}$ and hence arrive for the first case at</p> <p>$\lambda^2 e^{\lambda t} = e^{\lambda t}(3\lambda - 1) \rightarrow \lambda^2 = 3 \lambda - 1$</p> <p>and for the second differential equation I arrive at $\lambda^2 = 2 \lambda - 1$</p> <p>Is this correct? How do I find $\lambda$ now per hand easily? Also, in the exercise it says "find the solution and determine what $\lambda$ is" - but isn't that the same thing?</p> <p>Also: For the second case, is $t*e^{\lambda t}$ also a solution? Thanks</p>
<p>There are some ways to find $\lambda$ easily, such as the quadratic formula or doing some algebra. In this case, let's do the latter, so, for example: $$ \lambda^2 -2\lambda + 1 =0 \\ (\lambda - 1)^2 = 0 \\ \lambda = 1 $$</p> <p>But $\lambda$ is not the solution of the differential equation. Instead:</p> <p>$$ f(x) = C_1 e^{\lambda_1x} + C_2 e^{\lambda_2x} + \dots + C_n e^{\lambda_nx} $$</p> <p>Where $C_i$ are constants. That's true if the $\lambda_i$ are pairwise distinct. In our case, $\lambda_1 = \lambda_2$, so both solutions would be linearly dependent. To fix that, the solution should be:</p> <p>$$ f(x) = C_1 e^{\lambda_1x} + x\,C_2 e^{\lambda_1x} + \dots + x^{n-1} \,C_n e^{\lambda_1x} $$</p> <p>Where $n$ is the multiplicity of $\lambda_1$ (the powers of $x$ run from $0$ to $n-1$).</p>
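A quick sympy confirmation of the repeated-root case (my own sketch): for $f''-2f'+f=0$ the solver returns exactly the $(C_1+C_2 t)e^t$ family predicted by the double root $\lambda=1$.

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')

ode = sp.Eq(f(t).diff(t, 2) - 2*f(t).diff(t) + f(t), 0)
sol = sp.dsolve(ode)
print(sol)  # f(t) = (C1 + C2*t)*exp(t), the repeated root lambda = 1
```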
452
differential equations
Extracting differential equations
https://math.stackexchange.com/questions/1115136/extracting-differential-equations
<p>$$\frac{dx}{dy} = \frac{x(\alpha - \beta y)}{y(\delta x - \gamma)}$$ </p> <p>How do I extract two differential equations (y as a function of x and x as a function of y) from the equation above? I could separate the variables, but I don't see how that would help.</p>
<p>I am certain (having recognised the system) that the functions $x, y$ depend on a parameter, say $t$. Writing \begin{equation} \frac{dx}{dy} = \frac{dx/dt}{dy/dt} \end{equation} and separating out the numerator and denominator into their constituent equations, we find \begin{eqnarray} \frac{dx}{dt} &amp;=&amp; x(\alpha - \beta y) \\ \frac{dy}{dt} &amp;=&amp; y(\delta x - \gamma) \end{eqnarray} These are simpler to form a solution to, and are known as the <a href="http://en.wikipedia.org/wiki/Lotka%E2%80%93Volterra_equation" rel="nofollow">Lotka-Volterra System</a>. They are incredibly difficult to solve, and I looked into solving them numerically with Mathematica, see <a href="http://reference.wolfram.com/language/tutorial/NDSolveLotkaVolterra.html" rel="nofollow">Wolfram's page here</a>.</p> <p>Perturbation methods are available for this kind of study; for example the methodology in <a href="http://www.m-hikari.com/imf-2010/53-56-2010/raoIMF53-56-2010.pdf" rel="nofollow">Rao &amp; Thorani</a> presents algebraic ways of finding solutions to the perturbed system. <a href="http://books.google.co.uk/books/about/Evolutionary_Dynamics.html?id=YXrIRDuAbE0C" rel="nofollow">Evolutionary Dynamics</a> by Nowak explains how to find solutions with a more natural-sciences bent but is worth the read.</p>
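A small numerical sketch of the resulting system (parameter values are my own illustration): the quantity $V=\delta x-\gamma\ln x+\beta y-\alpha\ln y$ is a first integral of the Lotka-Volterra system, since $\dot V=(\delta-\gamma/x)\dot x+(\beta-\alpha/y)\dot y=0$, so checking that $V$ stays constant is a good sanity test on the integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, delta, gamma = 1.0, 0.5, 0.2, 0.4   # assumed illustrative values

def lv(t, u):
    x, y = u
    return [x * (alpha - beta * y), y * (delta * x - gamma)]

sol = solve_ivp(lv, (0.0, 30.0), [3.0, 1.0], rtol=1e-10, atol=1e-12)
x, y = sol.y
V = delta * x - gamma * np.log(x) + beta * y - alpha * np.log(y)
print(V.max() - V.min())  # ~0: V is conserved, orbits are closed curves
```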
453
differential equations
Duplication/ Differential Equations
https://math.stackexchange.com/questions/309532/duplication-differential-equations
<p>I have a question about a rule in my textbook related to differential equations.</p> <p>If we are considering a differential equation of the form $y''+ay'+by= f(x)$ such that $f(x)=P(x)e^{rx}\cos kx$ with $\deg(P)\leq m$, it is written that in order to determine the particular solution, I should set $y(x)=x^s[(A_0+A_1 x+...+A_mx^m)e^{rx}\cos kx+(B_0+B_1 x+...+B_mx^m)e^{rx}\sin kx]$ where $s$ is the smallest nonnegative integer such that no term in the particular solution duplicates a term in the general solution.</p> <p>Could someone explain to me what they mean by "duplicate"?</p> <p>Thank you</p>
<p>By "duplicate" they mean a term in the trial particular solution that already appears in the complementary (homogeneous) solution $y_c$: such a term solves the homogeneous equation, so it can never produce the forcing term $f(x)$, and you multiply by $x^s$ until no term of the trial solution overlaps with $y_c$. The full solution contains a part $y_c$ coming from the associated homogeneous equation and another part, the particular solution $y_p$; we must keep these two parts distinguishable because their terms must all be linearly independent of each other, and that's why the method tells you to do that. For these kinds of equations, I personally prefer to apply <a href="http://en.wikipedia.org/wiki/Annihilator_method" rel="nofollow">the annihilator method</a>.</p>
454
differential equations
Approximation of differential equations
https://math.stackexchange.com/questions/432144/approximation-of-differential-equations
<p>Can someone provide me a good reference about approximation techniques in the continuous domain (not piecewise nor numerical methods) for differential equations?</p>
<p>There really would be different types of methods for this, particularly dependent on how and what your differential equation is and also what you want to achieve.</p> <p>I did some time approximations by studying systems of differential equations (also non-linear) via corresponding stochastic methods such as master equations and Fokker-Planck equations (vs. Langevin equations). For references on both types of approach you just need to start on Wikipedia to get primary information and go forward from there.</p> <p>However, this is one way. What is often interesting as well is to apply Fourier techniques. These are quite often used by electrical engineers (signals and systems) and in physics. The references there are really massive; you just need to Google the terms. It is rather difficult to sieve out the best technique for your case of equations.</p> <p>Hope this helps.</p>
455
differential equations
Two differential equations
https://math.stackexchange.com/questions/239844/two-differential-equations
<p>How would I solve these differential equations?</p> <p>$$y'+2y^2=\frac{6}{x^2}$$</p> <p>I tried finding integral product but couldn't find its integral. And also tried to trasform into homogen equation. </p> <p>and the second one is: </p> <p>$$xe^{2y}y'+e^{2y}=\frac{\ln x}{x}$$ </p> <p>How can I start? Thanks.</p>
<p>Well, I can figure out the second one. My guess was we could get the left side of the equation to look like</p> <p>$$\frac d{dx}(f(x)e^{2y})=2f(x)e^{2y}y'+f'(x)e^{2y}$$</p> <p>through the use of an integrating factor. So we have</p> <p>$$\frac{f'(x)}{2f(x)}=\frac1x$$</p> <p>$$\ln f(x)=2\ln x$$ $$f(x)=x^2$$</p> <p>To get the equation into the proper form, multiply both sides by $2x$.</p> <p>$$2x^2e^{2y}y'+2xe^{2y}=\frac d{dx}(x^2e^{2y})=2\ln x$$</p> <p>I assume you can take this one from here?</p>
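Carrying the integration through (my continuation, not part of the original answer): $x^2 e^{2y} = 2x\ln x - 2x + C$, i.e. $y = \tfrac12\ln\!\big((2x\ln x - 2x + C)/x^2\big)$. A quick sanity check that this really solves the original equation, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C = sp.symbols('C')

# From d/dx (x^2 e^{2y}) = 2 ln x we get x^2 e^{2y} = 2x ln x - 2x + C.
y = sp.log((2*x*sp.log(x) - 2*x + C) / x**2) / 2

# Plug back into the original ODE: x e^{2y} y' + e^{2y} - ln(x)/x should vanish.
residual = x*sp.exp(2*y)*sp.diff(y, x) + sp.exp(2*y) - sp.log(x)/x
print(sp.simplify(residual))  # 0
```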
456
differential equations
Differential Equations with Deviating Argument
https://math.stackexchange.com/questions/197569/differential-equations-with-deviating-argument
<p>Is there literature available on solving differential equations of the type $$f(x,y(x),y(\kappa x),y'(x))=0,$$ where $\kappa$ is a given constant? I know about the book Introduction to the Theory and Application of Differential Equations with Deviating Arguments by L.E. El'sgol'ts and S.B. Norkin from the year 1973 [1], but I wonder if there are more recently published books available as well.</p> <p>Specifically, I would be interested in solving for example $$u(2t)-2u'(t)u(t)=0$$ without guesswork.</p> <p>[1] Introduction to the Theory and Application of Differential Equations with Deviating Arguments, L.E. El'sgol'ts and S.B. Norkin, Mathematics in Science and Engineering, Volume 105, Academic Press, New York, 1973</p>
<p>In regards to more recent literature (more recent than this question in fact), one interesting paper may be "<a href="https://arxiv.org/abs/1402.3084" rel="nofollow noreferrer">A new kind of functional differential equations</a>" by Kong and Zhang. This covers your situation, given $\kappa \in (0,1)$, as well as some more general functional delays. The approach in this paper is that of (the unfortunately named) Retarded Delay Differential Equations, where the delayed functions are not derivatives. To cover the case of $\kappa \in (1,\infty)$, you may follow a similar approach with Advanced Delay Differential Equations, where the delayed functions may include derivatives, by doing a change of variables $x\rightarrow \xi/\kappa$. You may need some background in geometric flow theory for Section 1.3, but outside of that it doesn't seem to require any differential geometry knowledge. The paper covers both ODEs and PDEs.</p> <p>In general, this question falls under the blanket of Functional Differential Equations, though Delay Differential Equations is the current poster-child for the subject. Usually those delays come in the form of $\phi (t) = t-\tau$ for some constant $\tau \in (0,t)$. However, more research into general delays is up and coming, and may be a popular area of DE research in the coming years.</p>
457
differential equations
Solving strange differential equations?
https://math.stackexchange.com/questions/1455876/solving-strange-differential-equations
<p>Whilst my knowledge of differential equations is somewhat limited, I was under the impression that the following was a valid equation to be solved yet it is unrecognised by wolfram alpha and I have no clue how to solve it: $$dy = a\times x \times dx + b\times y$$ Is such a differential equation that contains differentials combined with variables valid in the sense that using differentials as terms as shown is mathematically acceptable and if so how can the equation be solved?</p>
458
differential equations
Easy to read partial differential equations book?
https://math.stackexchange.com/questions/4027697/easy-to-read-partial-differential-equations-book
<p>I'm looking for an easy to read undergraduate book on partial differential equations, ideally something that is not much harder than a multivariable calculus/ordinary differential equations book.</p> <p>I am preparing for a course which is using the text by Walter Strauss, but I found this text a bit difficult to read. Specifically, I found that many of the derivations were missing steps that were not obvious to me or provided little justification for the manipulations. When I searched for introductory books however the Strauss book seems to be recommended.</p> <p>Google seems to recommend, among others:</p> <ul> <li>Partial Differential Equations for Scientists and Engineers by Farlow</li> <li>Introduction to Partial Differential Equations by Peter Olver</li> </ul> <p>I have ruled out:</p> <ul> <li>Partial Differential Equations: An Introduction by Walter Strauss</li> <li>An Introduction to Partial Differential Equations by Michael Renardy</li> <li>Partial Differential Equations by Fritz John</li> <li>Partial Differential Equations by Lawrence C Evans</li> </ul> <p>My background is having read A First Course in Differential Equations with Modelling Applications by Dennis Zill. Would I be better off reading the extended version of this book (Differential Equations with Boundary Value Problems)? I was a little bit hesitant because I was not sure how relevant the book is to PDEs. The chapters I have not read are Fourier Series, Boundary Value Problems in Rectangular Coordinates, Boundary Value Problems in Other Coordinate Systems, Integral Transforms and Numerical Solutions of Partial Differential Equations.</p> <p>EDIT: I ended up finishing about half the book, which is everything covered in my course, Chapters 1 through 7.</p>
<p>Personally I am quite fond of Evans's <strong>Partial Differential Equations</strong> as an introductory textbook.</p> <p>I would not expect to find something really &quot;easy to read&quot;, though, simply because the subject matter tends not to be all that easy.</p>
459
differential equations
Impossible Kinds Of Differential Equations
https://math.stackexchange.com/questions/2337380/impossible-kinds-of-differential-equations
<p>Today my professor told me that there are some differential equations that cannot be solved. Is this true? If it is true, why can they not be solved? How complex would that kind of differential equation have to be?</p>
<p><strong><em>Ordinary Differential Equations</em></strong> generally admit solutions in the event that the defining functions are "reasonable", i.e. possessed of <a href="https://en.wikipedia.org/wiki/Lipschitz_continuity" rel="nofollow noreferrer">Lipschitz continuity</a>. Differentiable functions are generally locally Lipschitz, so we know that equations of the form</p> <p>$\dot {\vec x} = \vec f(\vec x, t) \tag{1}$</p> <p>where $\vec x \in \Bbb R^n$, with differentiable $\vec f$, have unique solutions when the initial data</p> <p>$\vec x(t_0) = \vec x_0 \tag{2}$</p> <p>is specified. Most equations of practical interest, say in the sciences or engineering, fall into this category so there's really no problem in the applications.</p> <p><strong><em>Partial Differential Equations</em></strong>, on the other hand, yield us no such good fortune. There are even very simple, first order PDEs which <strong><em>admit no solutions whatsoever</em></strong>. See, for instance, <a href="https://en.wikipedia.org/wiki/Lewy%27s_example" rel="nofollow noreferrer">Lewy's example</a>. Lewy showed that even such a simple PDE as</p> <p>$\dfrac{\partial u}{\partial \bar z} - iz \dfrac{\partial u}{\partial t} = \phi'(t) \tag{3}$</p> <p>admits <em>no local solutions</em> near $0$ on $\Bbb R \times \Bbb C$ when $\phi$ is not analytic. </p>
460
differential equations
Relating Differential equations and exact differential forms
https://math.stackexchange.com/questions/2791046/relating-differential-equations-and-exact-differential-forms
<p>I'm reading Fundamentals of Differential Equations by Nagle. Given the equation $$\frac{dy}{dx}=f(x,y)$$ Nagle says at times we can rewrite as an exact differential form $$M(x,y)dx+N(x,y)dy=0$$ So it seems this is the case if it holds that $$f(x,y)= \frac{-M(x,y)}{N(x,y)}$$ and if we do cross multiplying, we go from the differential equation to the differential form. But why is this so? I think I was taught in my calculus classes that $dy/dx$ is notation for $y'$ and not literally a ratio of $dy$ and $dx$ which the professor at that time said were "deep concepts" and that sometimes you can't do "intuitive" algebraic operations with them. So why can we do it here? <p> As a side note, I have read Analysis on Manifolds by Munkres about 1.5 years ago, and he talks about (exact) differential forms. Sadly I forgot a lot after that and looking again at the definition of differential forms, it's pretty abstract, and takes many other ideas to "build up to" (ie alternating tensors, dual transformations, tangent spaces, etc). So maybe there's some neat connection between differential equations and differential forms that I'm missing? <p> Thanks a lot in advance, any guidance is greatly appreciated.</p>
461
differential equations
differential equations null solutions
https://math.stackexchange.com/questions/2006710/differential-equations-null-solutions
<p>A book I'm using to teach myself differential equations claims the following:</p> <p>If $y_{1}$ and $y_{2}$ are solutions to the differential equation $y' - a(t)y = q(t)$, then $y = y_{1} - y_{2}$ will be a null solution by linearity.</p> <p>I understand there exists some linear combination of $y_{1}$ and $y_{2}$ that would provide the null solution, but how can I be sure it is exactly $y = y_{1} - y_{2}$?</p>
<p>You can check this by hand:</p> <p>Suppose $y_1$ and $y_2$ are solutions, so that</p> <p>$y_1' - a(t) y_1 = q(t)$ and $y_2'- a(t) y_2 = q(t).$</p> <p>Then, subtracting these two equations yields</p> <p>$$y_1'-y_2' - a(t) y_1 +a(t) y_2 = 0.$$</p> <p>A slight rearranging shows</p> <p>$$(y_1-y_2)' - a(t) (y_1-y_2) = 0,$$</p> <p>so indeed, $y=y_1-y_2$ is a null solution.</p>
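As a concrete sanity check of the claim (a sketch with an example equation of my own, assuming SymPy is available): take $a(t)=t$, $q(t)=t$. Both $y_1 = e^{t^2/2}-1$ and $y_2 = 2e^{t^2/2}-1$ solve $y'-ty=t$, and their difference solves the homogeneous equation:

```python
import sympy as sp

t = sp.symbols('t')

def residual(y):
    # Left-hand side of y' - t*y = t, moved to one side.
    return sp.simplify(sp.diff(y, t) - t*y - t)

y1 = sp.exp(t**2/2) - 1
y2 = 2*sp.exp(t**2/2) - 1

print(residual(y1), residual(y2))        # 0 0 -> both are solutions
d = y1 - y2
print(sp.simplify(sp.diff(d, t) - t*d))  # 0   -> y1 - y2 is a null solution
```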
462
differential equations
About differential equations.
https://math.stackexchange.com/questions/3509715/about-differential-equations
<ol> <li>While solving differential equations, is it okay to find an expression containing <span class="math-container">$y$</span> and <span class="math-container">$x$</span> (without <span class="math-container">$y'$</span>) from which it's rather difficult to express <span class="math-container">$y$</span>, for example, <span class="math-container">$$ \frac{\ln^2(y)}{y}-\frac{x}{y}= C?$$</span></li> <li>Does it make sense to try to find <span class="math-container">$x(y)$</span> in some problems where it's easier?</li> </ol>
<p>1) Yes, it is typical to find the solution in a manner where <span class="math-container">$y$</span> and <span class="math-container">$x$</span> are implicitly defined. 2) Yes, in some cases it is easier to express <span class="math-container">$x=f(y)$</span> instead of <span class="math-container">$y=g(x)$</span>.</p>
463
differential equations
System of differential equations
https://math.stackexchange.com/questions/1993377/system-of-differential-equations
<p>Can you help me solve this system of differential equations</p> <p>$ x'= 4x - 2y $</p> <p>$ y'= 3x - y - 2e^{3t} $</p> <p>Initial conditions are $ x(0) = y(0) = 0 $</p>
<p>$\mathbf x' + \begin{bmatrix} -4&amp;2\\-3&amp;1\end{bmatrix} \mathbf x = \begin{bmatrix}0\\-2e^{3t}\end{bmatrix}$</p> <p>$e^{At} \mathbf x' + A e^{At} \mathbf x = e^{At}g(t)$</p> <p>$A = P^{-1}DP$</p> <p>$A = \begin{bmatrix}1&amp;2\\1&amp;3\end{bmatrix}\begin{bmatrix}-2&amp;0\\0&amp;-1\end{bmatrix}\begin{bmatrix}3&amp;-2\\-1&amp;1\end{bmatrix},\quad e^{At} = P^{-1} e^{Dt}P$</p> <p>$\mathbf x = e^{-At}C + e^{-At} \int e^{At}g(t)\, dt = e^{-At}C + P^{-1}e^{-Dt} \int e^{Dt}Pg(t)\, dt$</p> <p>$e^{Dt}Pg(t) =\begin{bmatrix}4e^t\\-2e^{2t}\end{bmatrix}$</p> <p>$\mathbf x =e^{-At}C + P^{-1}\begin{bmatrix}4\\-1\end{bmatrix} e^{3t},\quad P\mathbf x(0) =C + \begin{bmatrix}4\\-1\end{bmatrix},\quad C = \begin{bmatrix}-4\\1\end{bmatrix}$</p> <p>$\mathbf x = \begin{bmatrix}-4e^{2t}+2e^{t}+2e^{3t}\\-4e^{2t}+3e^t + e^{3t}\end{bmatrix}$</p>
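A quick check (a sketch, assuming SymPy is available) that the final pair really satisfies $x' = 4x - 2y$, $y' = 3x - y - 2e^{3t}$ and the initial conditions $x(0)=y(0)=0$:

```python
import sympy as sp

t = sp.symbols('t')

# The solution claimed above.
x = -4*sp.exp(2*t) + 2*sp.exp(t) + 2*sp.exp(3*t)
y = -4*sp.exp(2*t) + 3*sp.exp(t) + sp.exp(3*t)

# Both equations of the system should reduce to 0.
print(sp.simplify(sp.diff(x, t) - (4*x - 2*y)))                # 0
print(sp.simplify(sp.diff(y, t) - (3*x - y - 2*sp.exp(3*t))))  # 0

# Initial conditions.
print(x.subs(t, 0), y.subs(t, 0))  # 0 0
```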
464
differential equations
Ordinary differential equations of order zero?
https://math.stackexchange.com/questions/1124912/ordinary-differential-equations-of-order-zero
<p>Is $x+y+2=0$ a differential equation without derivatives of order $n$, $n&gt;0$? Could it be called a differential equation (for unknown $y(x)$) of order $0$? </p> <p>If not, can we define differential equations of order zero?</p>
<p>You <em>could</em> call such equations "zero-order differential equations" but since no derivative is actually involved, the name is more misleading than helpful. I would call it an <em>algebraic equation</em> for $y$, to emphasize the difference between it and differential equations. Shorter and less confusing. </p> <p>But if one generalizes from differential operators (like the $k$th derivative) to <a href="http://en.wikipedia.org/wiki/Pseudo-differential_operators" rel="nofollow noreferrer">pseudo-differential operators</a>, things get more interesting. There are important pseudo-differential operators of order $0$, such as the <a href="https://math.stackexchange.com/q/813605/">Hilbert transform</a>, and one could consider equations involving them: say, solving $$ \int_{\mathbb R} \frac{u(x)}{x-t}\,dt = f(t) $$ for $u$. The nature of pseudo-differential operators of order $0$ is such that they are usually thought of as (singular) <em>integral</em> operators, and so the equations would be usually called <em>integral equations</em>. But occasionally one sees "pseudodifferential equations of order $0$", for <a href="http://link.springer.com/article/10.1007%2FBF01218505" rel="nofollow noreferrer">example here</a>:</p> <blockquote> <p>In the present paper we prove the stability of a nodal spline collocation method for (locally) strongly elliptic zero order pseudodifferential equations </p> </blockquote>
465
differential equations
Multiplied differential equations
https://math.stackexchange.com/questions/3741720/multiplied-differential-equations
<p>How to get <span class="math-container">$x(t),y(t)$</span> solutions for &quot;product differential equations&quot; (dotted on <span class="math-container">$t$</span>):</p> <p><span class="math-container">$$\dot x \dot y= xy,\; \dot y^2-\dot x ^2= 1;$$</span></p> <p>we have by solving quadratics</p> <p><span class="math-container">$$ (2\dot y^2, 2 \dot x^2)= + 1 \pm \sqrt{1+4x^2y^2} ,\; - 1 \pm \sqrt{1+4x^2y^2} $$</span></p>
<p><strong>Hint:</strong></p> <p>You can eliminate <span class="math-container">$y$</span>.</p> <p><span class="math-container">$$\dot y^2-\dot x^2=1\to \dot x^2\dot y^2-\dot x^4=\dot x^2\to x^2y^2-\dot x^4=\dot x^2$$</span></p> <p>and</p> <p><span class="math-container">$$y^2=\frac{\dot x^4+\dot x^2}{x^2}.$$</span></p>
466
differential equations
Problem in Differential Equations
https://math.stackexchange.com/questions/933301/problem-in-differential-equations
<p>Solution curves for the differential equations: 1) $y' = \max\{y,y^2\}$ 2) $y' = \min\{y,y^2\}$</p> <p>Please can anybody help me, because I am really confused.</p>
<p>Hints: On which region is $y &gt; y^2$? On which region is $y &lt; y^2$? Do you know what the solutions of $y' = y$ and $y' = y^2$ are?</p>
467
differential equations
Stiff Nonlinear Differential Equations
https://math.stackexchange.com/questions/1584388/stiff-nonlinear-differential-equations
<p>As far as I know, the concept of stiffness is hard to define rigorously, but there are plenty of handwavy descriptions and motivating examples in the literature when it comes to <em><strong>linear</strong> differential equations</em>. At the same time I have never seen an explicit and straightforward definition of a <em>stiff <strong>nonlinear</strong> differential equation</em>. That being said, I feel like there should be one, and I just haven't seen it yet. To outline, my questions are:</p> <blockquote> <p>Is there such a thing as a <em>stiff <strong>nonlinear</strong> differential equation</em>? If so, how is it defined? </p> </blockquote> <p>The most straightforward approach to define one is to use linearization, but I am not sure if this is a good idea, as the accuracy of the linearization will probably have a decisive impact on the region of absolute stability. </p>
<p>There is a recent paper in which the problem of defining stiffness is discussed. Söderlind, Gustaf; Jay, Laurent; Calvo, Manuel Stiffness 1952–2012: sixty years in search of a definition. BIT 55 (2015), no. 2, 531–558. </p>
468
differential equations
Differential equations in function
https://math.stackexchange.com/questions/1244729/differential-equations-in-function
<p>Equation (1): $xy'+(1-x)y=1$. Let $z=xy+1$.</p> <p>Determine and solve the differential equation (2) whose general solution is the function $z$.</p> <p>Then determine the general solution of (1).</p>
<p>If $z=xy+1$ then \begin{eqnarray} \frac{dz}{dx} &amp;=&amp; \frac{d}{dx}\left(xy+1\right) \\ &amp;=&amp; y+x\frac{dy}{dx} \end{eqnarray} Now: \begin{eqnarray} x\frac{dy}{dx}+y-xy &amp;=&amp; 1 \\ \implies x\frac{dy}{dx}+y &amp;=&amp; 1+xy\\ \implies \frac{dz}{dx} &amp;=&amp; z \end{eqnarray} Now you solve this equation for $z$ and then change back the variables to $x$ and $y$ at the end.</p>
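Solving $z'=z$ gives $z = Ce^x$, hence $y = (Ce^x - 1)/x$ after changing back the variables. A quick check (a sketch, assuming SymPy is available) that this indeed solves equation (1):

```python
import sympy as sp

x, C = sp.symbols('x C')

z = C * sp.exp(x)   # general solution of dz/dx = z
y = (z - 1) / x     # change back via z = x*y + 1

# Equation (1): x*y' + (1 - x)*y = 1, so the residual below should vanish.
residual = x*sp.diff(y, x) + (1 - x)*y - 1
print(sp.simplify(residual))  # 0
```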
469
differential equations
Notation of differential equations
https://math.stackexchange.com/questions/310481/notation-of-differential-equations
<p>I have just started a course on differential equations, and unfortunetely enough for me, we immediately used notation foreign for me, for example:</p> <p>$$ x^2 \left(\dfrac{d^2y}{dx^2}\right)^2 = \sin( x)\;\textrm{ and}\;\; y \times \dfrac{d^2 y}{dx^2} = \sin(x)$$ were used as examples of non-linear ordinary differential equations. My questions are</p> <ul> <li><p>Is $\large \frac{dy}{dx}$ equal to $y'$? Also, what then is $\large\frac{d^2 y}{dx^2}\;$? <br><br>I have looked up the formal mathematical definition but it somewhat confuses me,</p></li> <li><p>Why on earth is this notation used, instead of just using $\;'\;$ or $\;''\;$? It seems to me much more confusing and unnecessarily messy.</p></li> </ul>
<p>Yes, your interpretation is correct: $$\frac{dy}{dx} = y' \;\text{ and likewise,}\;\;\frac{d^2y}{dx^2} = y''$$</p> <p>It's often simpler to use $y'$, I agree. But there are contexts, for example in implicit differentiation, or when $x$ and $y$ are defined in terms of a parameter, like $t$, in which using $\frac{dy}{dx}$, e.g., makes explicit that we want to differentiate $y$ with respect to $x$. There are other advantages, but when no confusion or ambiguity results from using $y'$ to denote $\frac{dy}{dx}$, by all means, use it!</p> <p>Please also see the previous thread <a href="https://math.stackexchange.com/questions/25102/why-is-the-2nd-derivative-written-as-frac-mathrm-d2y-mathrm-dx2">Why the second derivative is written as $\dfrac{d^2y}{dx^2}$?</a> for more comprehensive information as to what exactly the notation denotes, and its origin.</p>
470
differential equations
Measure-driven differential equations
https://math.stackexchange.com/questions/187327/measure-driven-differential-equations
<p><strong>Background:</strong> I need some help to understand the concept behind measure-driven differential equations. The solution of an ordinary differential equation is continuous. In order to describe discontinuous trajectories we use the concept of <em>distributions</em> (very well described in the book "Functional Analysis" by W. Rudin). In brief, every function $x:\Re\to\Omega\subseteq \Re^n$ that is locally integrable over the open set $\Omega$ is mapped to a functional $T_x:\mathcal{D}(\Re^n)\to\Re^n$ as follows:</p> <p>$$ T_x(\phi)= \int_\Omega x\phi d\mu $$</p> <p>where $\mathcal{D}(\Re^n)$ is the set of <em>test functions</em> (infinitely many times differentiable and with compact support). $\mu$ is the Lebesgue measure over $\Omega$ - meaning that the integration is carried out in the Lebesgue sense.</p> <p>Every distribution (i.e. a functional $T\in\mathcal{D}^\star(\Omega)$) has a derivative given by:</p> <p>$$ (DT)(\phi)=-T(D\phi) $$</p> <p>In that sense we can construct generalized differential equations that look like:</p> <p>$$ DT=g(T) $$</p> <p>Using this framework we can describe solutions that encounter jumps such as impulsive differential equations. This is accomplished using the Dirac functional $\delta(\phi)=\phi(0)$. (I don't want to go into much detail to keep the question short).</p> <p><strong>The problem:</strong> I recently stumbled on a thing called "Measure-driven differential equations". These have the form:</p> <p>$$ dx=f(x(t),u(t))dt+g(x(t))d\mu(t) $$</p> <p>where $\mu:\mathcal{B}([t_0,t_1])\to\Re_+$ is a positive measure with the property $\mu(A)\in K$ for all $A\subseteq [t_0,t_1]$ where $K$ is compact. $u$ and $\mu$ here serve as external "signals". 
The solution of such an equation is reportedly:</p> <p>$$ x(t)=x(t_0) + \int_{t_0}^t f(x(s),u(s))ds + \int_{[t_0,t]}g(x(s))\mu(ds) $$</p> <p>(see <a href="http://www.dcce.ibilce.unesp.br/~gsilva/INVARIANCIA/inv_periodic.pdf">this article</a> for example).</p> <p><strong>The questions:</strong> (i) I'm a bit puzzled with the notation $d\mu(t)$ and $\mu(ds)$. Can someone elaborate a bit on that? Since $\mu$ is a measure, what exactly is the meaning of $\mu(ds)$? (ii) Is there any advantage from using measures instead of distributions to describe phenomena with discontinuous trajectories? (iii) I would appreciate some reference (preferably a book) to get started with these things.</p>
<p>Intuitively, if your variable $s$ is taken in some $\Omega$, the notation $\mu(ds)$ means that you evaluate the measure on an infinitesimal segment $ds$ (you can think of it as an open ball as small as you like near $s=s^*$) and multiply this value by the value of the function evaluated at $s^*$, so for example $f(s^*)\mu(ds)$. But this is just a particular notation, because the true integral is the limit of simple functions that approximate $f(s)$, with the necessary specifications about sigma-algebras, etc. For the other questions I can't help you right now...</p>
471
differential equations
Solving ordinary differential equations using the differential operator D
https://math.stackexchange.com/questions/23425/solving-ordinary-differential-equations-using-the-differential-operator-d
<p>I know how to solve linear homogeneous ordinary differential equations with constant coefficients using the differential operator D, by using <a href="http://math.ucsd.edu/~dmeyer/teaching/20Dspring07/diffop2.pdf" rel="nofollow">this</a> method.</p> <p>Is it possible to use a similar method (using the differential operator) to solve more advanced ODEs? I'm thinking of both more advanced linear ODEs, such as Euler-Cauchy differential equations, as well as non-linear ODEs.</p> <p>Are there any articles on the web on this topic, or even textbooks that use this method to solve ODEs?</p>
<p>There is a large literature on "<a href="http://en.wikipedia.org/wiki/Operational_calculus" rel="nofollow noreferrer">operational calculus</a>" dating back to Heaviside's pioneering work. One particularly powerful operator factorization technique is the Infeld - Hull ladder method - which plays a big role in unifying many classes of special functions that arise in physics (mainly via separation of variables in various coordinate systems). Willard Miller showed that this method is equivalent to the representation theory of four local Lie groups. This lie-theoretic approach served to powerfully unify and "explain" all prior similar attempts to provide a unfied theory of such classes of special functions. See <a href="https://math.stackexchange.com/questions/622/importance-of-representation-theory/2803#2803">my post here</a> for further details and references.</p>
472
differential equations
What exactly are partial differential equations?
https://math.stackexchange.com/questions/209691/what-exactly-are-partial-differential-equations
<p>I know what differential equations (DEs) are, but what exactly are partial differential equations (PDEs)?</p> <p>I know the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6dinger_equation" rel="nofollow noreferrer"><em>Schrödinger equation</em></a> is a PDE.</p> <p>I'm also looking for an intuitive understanding. Also, what are good resources which explain PDEs for beginners?</p>
<p>Given that the Wiki definition may be too mathematically formal for the OP, let me give some intuition of the partial differential equation starting from the first order case. </p> <p>First, consider a first order ordinary differential equation $$ \frac{\mathrm{d}}{\mathrm{d}t} X = F(t,X) $$ where $X$ takes values in, say, $\mathbb{R}^n$ and $F(t,X)$ is some Lipschitz continuous function. (In other words, this is a dynamical system.) What it says is that it tells us how $X$ <em>ought</em> to change, at an instant in time $t$, based on the time $t$ and the current value of $X$. This is what we call an "evolutionary point of view". </p> <p>The analogue of a partial differential equation that is "evolutionary" is an equation for $X$, which now depends on not only the time $t$ but also some spatial coordinates $(x_1, \ldots, x_N)$ would be something like</p> <p>$$ \frac{\partial}{\partial t} X = F\left(t,x_1, \ldots, x_N, X, \frac{\partial}{\partial x_1} X, \ldots, \frac{\partial}{\partial x_N} X\right) $$</p> <p>Now, we have that how $X$ ought to change at an instant in time $t$ <em>and at position</em> $(x_1,\ldots, x_N)$, is based on a function of not only the coordinate values of $t$ and $(x_1, \ldots, x_N)$, but also the value of $X$ at that space-time point, and also the value of its spatial directional derivatives at that space-time point. </p> <p>There is a different point of view, however, for ordinary differential equations. This is the "constraint point of view". For this we consider the equation $$ X'' = F(s,X) $$ and try to solve it while prescribing boundary conditions $X(0) = f_1$ and $X(1) = f_2$. What we should think of is that the differential equation describe some "compatibility condition" for a certain physical system in stasis. For example, the above equation can be used to describe the distribution of temperature along a rod that is kept at temperature $f_1$ at one end and temperature $f_2$ at the other. 
The equation says that the second derivative of the temperature function depends on the physical characteristic of the rod at the point $s$ as well as the current temperature at that point $x$. In other words, the laws of nature <em>constrains</em> what temperature profiles are possible. </p> <p>From this point of view, we also get a type of partial differential equations that describes a constraint. In this case, the PDE is usually written as an analytic expression relating the various partial derivatives of a function. What this says is that for the question we are considering, not all functions are admissible as solutions. That some <em>law</em> (most frequently a physical law) requires that the only admissible functions describing the situation (this is a constraint) obey certain relationships imposed upon their Taylor coefficients up to some order $k$ <em>at every point</em>. In other words, the function is not allowed to wiggle willy-nilly. Its rates of changes between the various different directions are tied together. </p> <hr> <p>Intuition aside, the mathematical formulation of a PDE can be stated relatively simply. </p> <p>A <strong>partial differential equation</strong> is a equation which expresses an <em>equality</em> between expressions involving partial derivatives of a given function. 
More precisely, taking one of the simpler cases, a partial differential equation on a scalar function $u$ defined on some subset $U\subseteq \mathbb{R}^N$ is the equation $$ F(x,u,\nabla u, \nabla^2 u, \ldots , \nabla^k u) = 0 $$ where $x\in U\subseteq \mathbb{R}^N$ are the independent variables, $\nabla^ju$ are the <em>tensors</em> representing the $j$-th fold partial derivatives of $u$ ($\nabla^2 u$ is the Hessian matrix, $\nabla u$ is the gradient vector), and $F$ is some function $$ F: U \times \mathbb{R} \times \mathbb{R}^N \times \mathbb{R}^{N^2} \times \cdots \times \mathbb{R}^{N^k} \to \mathbb{R} $$ The number $k$, the maximum order of the derivatives involved in the equation, is called the "order" of the equation. </p> <p>For some simple examples:</p> <ul> <li><p>The <em>transport</em> equation (or <em>linear advection</em> equation) are cases where $k = 1$, and where $$ F(x,u,p) = V(x)\cdot p $$ where $p\in \mathbb{R}^N$ and $V(x)$ is some vector field on $U$. </p></li> <li><p>The <em>Laplace</em> equation is when $k = 2$ and $$F(x,u,p,q) = \operatorname{trace} q $$ where $p\in\mathbb{R}^N$ and $q\in \mathbb{R}^{N^2}$ is interpreted as an $N\times N$ matrix. </p></li> <li><p>The <em>wave</em> equation is when $k = 2$ and $$F(x,u,p,q) = \operatorname{trace} q - T^\dagger q T $$ where $\dagger$ is the matrix transpose, $T$ is a vector with $\|T\|^2 &gt; 1$</p></li> <li><p>The <em>linear Schroedinger</em> equation is also when $k = 2$ and $$F(x,u,p,q) = \operatorname{trace} q - T^\dagger q T - i T\cdot p $$ where $T$ is a vector with $\|T\|^2 = 1$. If we remove the imaginary $i$ from the equation, we end up with the <em>linear heat</em> equation instead. Note that necessarily for Schrodinger's equation we need $u$ to take values in the complex number $\mathbb{C}$, and so its gradient and Hessian will be complex-valued vector and complex-valued matrix. 
</p></li> </ul> <hr> <p>And now, for an extremely high-brow definition (which is a bit beyond the "beginner's scope" asked by the original poster, but nonetheless interesting):</p> <p>A <strong>partial differential relation</strong> (of which a partial differential equation is a special type) for a fibre-bundle $F$ over some smooth manifold $M$ is a subset $\mathcal{R}\subseteq F^{(r)}$ of the $r$-th jet bundle of $F$ over $M$. A <em>partial differential equation</em> is one where $\mathcal{R}$ has co-dimension 1. To bring it back to the simplest case defined above the cut: a class of simple fibre-bundles are the trivial bundles $F = M\times N$. Here $M$ is the domain of independent variables (what is $U$ in the definition above). $N$ is the domain of dependent variables (what is $\mathbb{R}$ or $\mathbb{C}$ above, but we can also think of vector valued dependent variables taking values in, say, $\mathbb{R}^n$ or $\mathbb{C}^n$, then we get what are sometimes called <em>systems</em> of partial differential equations). The $k$-th jet bundle is, roughly speaking, the set of all possible $k$-th order Taylor expansions; in other words, it represents the space $\mathbb{R}\times \mathbb{R}^N \times \mathbb{R}^{N^2}\times \cdots \times \mathbb{R}^{N^k}$ of the value of the function and all its (partial) derivatives up to order $k$. </p> <p>Then the single equation $F(x,u,p,q,r,\ldots,s) = 0$, the partial differential equation, should carve out a codimension 1 subset of $U \times \mathbb{R} \times \cdots \times\mathbb{R}^{N^k}$. (See <a href="https://mathoverflow.net/questions/76620/jet-bundles-and-partial-differential-operators">my question on MO</a> for some tangentially related discussions.) </p> <hr> <p><em>Further readings</em></p> <ul> <li><p>Sergiu Klainerman's <a href="https://web.math.princeton.edu/~seri/homepage/papers/gws-2006-3.pdf" rel="nofollow noreferrer">essay</a>, an abridged version of which appeared in the <em>Princeton Companion to Mathematics</em>. 
It assumes a little more than an absolute beginner, but not too much more. </p></li> <li><p>Jürgen Jost's <a href="http://www.mis.mpg.de/jjost/publications/partdiffeq-2.html" rel="nofollow noreferrer"><em>Partial Differential Equations</em></a> textbook, while on the whole it may be a bit too advanced for the OP, has a short introductory chapter titled "What are Partial Differential Equations?", which should also give some intuition. </p></li> <li><p>Ka Kit Tung's <em>Partial differential equations and Fourier analysis - A short introduction</em> is a textbook aimed at students who have had one year of calculus and one course of ordinary differential equations. It has a decent first chapter reviewing ODEs, and a second chapter explaining the physical origins of partial differential equations while comparing and contrasting them to ordinary differential equations, which the OP understood better. This may be a reasonable first book for the OP to consult. </p></li> </ul>
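As a concrete illustration of the Laplace example above (a sketch, assuming SymPy is available): with $F(x,u,p,q)=\operatorname{trace} q$, a function is admissible exactly when the trace of its Hessian vanishes, as for the harmonic function $u = x_1^2 - x_2^2$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# A classic harmonic function on R^2: the trace of its Hessian vanishes,
# i.e. it satisfies F(x, u, grad u, Hess u) = trace(Hess u) = 0.
u = x1**2 - x2**2
print(sp.hessian(u, (x1, x2)).trace())  # 0

# A non-harmonic function fails the constraint:
v = x1**2 + x2**2
print(sp.hessian(v, (x1, x2)).trace())  # 4
```

This is the "constraint" point of view in miniature: the equation carves out which second-order Taylor data are allowed at each point.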
473
differential equations
I need help with Differential equations.
https://math.stackexchange.com/questions/2126685/i-need-help-with-differential-equations
<p>What is the implicit answer for this differential equation? </p> <p>$\frac{dy}{dx} = y^{2}-4$</p> <p>Help me, I'm a newbie in differential equations.</p>
<p>Here is how you solve equations like yours. You are trying to solve $\frac{dy}{dx}=f(y)$. Rewrite it as $\frac{dy}{f(y)}=dx$, integrate both sides, $\int\frac{1}{f(y)}dy=\int dx=x+c$, and invert to get an expression for $y(x)$. If you have a condition on the value of $y(x)$ at some particular $x$, like $x=0$, then you can determine $c$ as well.</p> <p>Applying this to your example, you have $\int\frac{1}{y^2-4}dy=x+c$. By partial fractions the integral is $\frac{1}{4}\log{\left|\frac{2-y}{2+y}\right|}$, so that $\frac{2-y}{2+y}=\pm\exp{(4x+4c)}$. From here solving for $y$ is straightforward.</p>
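Completing the last step: solving $\frac{2-y}{2+y}=\exp(4x+4c)$ for $y$ gives $y=-2\tanh(2x+2c)$, and note that dividing by $y^2-4$ silently discards the constant solutions $y=\pm 2$. A quick symbolic check, assuming sympy is available:

```python
import sympy as sp

x, c = sp.symbols('x c')

# candidate solution obtained by inverting (2 - y)/(2 + y) = exp(4x + 4c)
y = -2*sp.tanh(2*x + 2*c)

# the residual of dy/dx = y^2 - 4 should vanish identically
assert sp.simplify(y.diff(x) - (y**2 - 4)) == 0

# the equilibria y = +-2, lost when dividing by y^2 - 4, are solutions too
for y0 in (sp.Integer(2), sp.Integer(-2)):
    assert y0**2 - 4 == 0
```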
474
differential equations
Question regarding finding differential equations
https://math.stackexchange.com/questions/2328125/question-regarding-finding-differential-equations
<p>I have a bit of a general question regarding differential equations. Right now I'm working my way through "Differential Equations with Applications and Historical Notes (1972 edition)" by Simmons. There was a question about the orthogonal trajectories of the family of curves given by $y^2=4c(x+c)$ that asked me to prove that substituting the negative reciprocal of $y'$ into the differential equation does NOT alter the differential equation (I state the question verbatim below for reference). After hours of trying to crack it, I found the solution online. However, after going back through my work, I noticed one of the expressions I got for the derivative of the orthogonal trajectory curve did seem to act as a solution to the differential equation originally obtained. So, my question is the following: given a trajectory family $y=y(x,c)$ and its orthogonal trajectory $y_0$, is proving that $y_0^{'}$ and $y_0$ satisfy the differential equation for the family determined by $y$ equivalent to proving that substituting $-(1/y')$ for $y'$ leaves the differential equation determined by $y$ unaltered?</p> <p>[Side question: After checking other texts, I also wonder why the same $y$ symbol is used when referring to finding orthogonal trajectories by substituting $-(1/y')$ for $y'$. The treatments I find don't seem to address this specifically. Is this, in your opinion, more indicative that it's understood that it's a different $y$ and they are just playing fast and loose with the notation, or is there a more significant reason for this?]</p> <p>$Statement\;of\;the\;question\;in\;Simmons:$ Sketch the family $y^2=4c(x+c)$ of all parabolas with axis the $x$ axis and focus at the origin, and find the differential equation for the family. Show that this differential equation is unaltered when $\frac{dy}{dx}$ is replaced by $-\frac{dx}{dy}$. What conclusion can be drawn from this fact.</p>
475
differential equations
Fourier and differential equations
https://math.stackexchange.com/questions/2788663/fourier-and-differential-equations
<p>Hey, right now I'm practising Fourier series and I found this problem. Just so you know, it's my first time using Fourier methods to solve differential equations. $$ f''(x) + f(x) = 3\cos(2x) $$</p>
<p>$$ F[f''(x)+f(x)]=F[3\cos(2x)] $$</p> <p>or</p> <p>$$ ((i \omega)^2+1)F(\omega) = 3 \sqrt{\frac{\pi }{2}} \delta (\omega-2)+3 \sqrt{\frac{\pi }{2}} \delta (\omega+2) $$</p> <p>and then</p> <p>$$ F(\omega) = \frac{3 \sqrt{\frac{\pi }{2}} \delta (\omega-2)+3 \sqrt{\frac{\pi }{2}} \delta (\omega+2)}{((i \omega)^2+1)} $$</p> <p>and finally</p> <p>$$ F^{-1}\left(\frac{3 \sqrt{\frac{\pi }{2}} \delta (\omega-2)+3 \sqrt{\frac{\pi }{2}} \delta (\omega+2)}{((i \omega)^2+1)}\right) = -\cos(2 x) $$</p>
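Whatever transform conventions are used, the final claim is easy to confirm by direct substitution; a quick sympy check (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
f = -sp.cos(2*x)

# f'' + f should equal the forcing term 3*cos(2x)
assert sp.simplify(f.diff(x, 2) + f - 3*sp.cos(2*x)) == 0
```

This is a particular solution; the general solution adds the homogeneous part $A\cos x + B\sin x$.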
476
differential equations
How to reduce second order nonlinear differential equations into sets of first order differential equations
https://math.stackexchange.com/questions/2039621/how-to-reduce-second-order-nonlinear-differential-equations-into-sets-of-first-o
<p>I have a nonlinear differential equation of the kind:</p> <p>$ f(\ddot{x},\dot{x},x) =0$</p> <p>I would like to know if there is <em>always</em> a way to write such a differential equation in a form like:</p> <p>$ \dot{y} = g(y) $</p> <p>that is to put the equation in the form of a set of first order differential equations. What sort of complications arise in the nonlinear case compared to the linear second order ordinary differential equation? Also, what problems may arise when there are terms like $ \dot{x} x $ ? </p>
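Whenever the equation can be solved explicitly for $\ddot{x}$ (which the implicit function theorem guarantees locally wherever $\partial f/\partial \ddot{x}\neq 0$), the rewriting is mechanical: set $y=(x,\dot{x})$. A sketch using the van der Pol oscillator as an illustrative nonlinear example (my choice; it is not from the question):

```python
import numpy as np
from scipy.integrate import solve_ivp

# van der Pol: x'' - mu*(1 - x^2)*x' + x = 0, already solved for x''.
# With y = (x, x') this becomes the first-order system y' = g(y).
mu = 1.0

def g(t, y):
    x, xdot = y
    return [xdot, mu*(1.0 - x**2)*xdot - x]

sol = solve_ivp(g, (0.0, 20.0), [2.0, 0.0], rtol=1e-8)
assert sol.success
```

Terms like $\dot{x}x$ cause no difficulty for the rewriting itself; the genuinely new complications in the nonlinear case are that solutions may exist only locally (finite-time blow-up) and that $f$ may fail to be solvable for $\ddot{x}$ globally or uniquely.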
477
differential equations
Differential Equations reference for Putnam preparation
https://math.stackexchange.com/questions/90006/differential-equations-reference-for-putnam-preparation
<p>I have two problem collections I am currently working through, the "Berkeley Problems in Mathematics" book, and the first of the three volumes of Putnam problems compiled by the MAA. These both contain many problems on basic differential equations.</p> <p>Unfortunately, I never had a course in differential equations. Otherwise, my background is reasonably good, and I have knowledge of real analysis (at the level of baby Rudin), basic abstract algebra, topology, and complex analysis. I feel I could handle a more concise and mathematically mature approach to differential equations than the "cookbook" style that is normally given to first and second year students. I was wondering if someone with access to the above books that I am working through could suggest a concise reference that would cover what I need to know to solve the problems in them. In particular, it seems I need to know basic solution methods and the basic existence and uniqueness theorems. On the other hand, I have no desire to specialize in differential equations, so a reference work like V.I. Arnold's book on ordinary differential equations would not suit my needs, and I certainly don't have any need for knowledge of, say, the Laplace transform or PDEs. </p> <p>To reiterate, I just need a concise, high level overview of the basic (by hand) solution techniques for differential equations, along with some discussion of the basic uniqueness and existence theorems. I realize this is rather vague, but looking through the two problem books I listed above should give a more precise idea of what I mean. Worked examples would be a plus. I am very unfamiliar with the subject matter, so thanks in advance for putting up with my very nebulous request.</p> <p>EDIT: I found Coddington's "Introduction to Ordinary Differential Equations" to be what I needed. Thanks guys.</p>
<p>My instructor used these <a href="http://people.cs.uchicago.edu/~lebovitz/eodesnotes.html" rel="nofollow">notes</a> for the lectures of a basic theory of ODE course. Personally, I'm not a huge fan of the notes, but it does cover the uniqueness and existence (e.g. Gronwall's inequality) theorems pretty well and has a good mixture of computational and more theoretical exercises. The notes are based off of <a href="http://rads.stackoverflow.com/amzn/click/0471860034" rel="nofollow">Ordinary Differential Equations By Birkhoff and Rota</a></p>
478
differential equations
Differential Equations Reference Request
https://math.stackexchange.com/questions/400806/differential-equations-reference-request
<p>Currently I'm taking the Differential Equations course at college; however, the problem is the book used. I'll try to make my point clear, but sorry if this question is silly or anything like that: the textbook used (William Boyce's book) seems to assume that the reader isn't familiar with abstract math, so it lacks that structure of presenting motivations, then definitions, then theorems and corollaries as we see in books like Spivak's Calculus or Apostol's Calculus.</p> <p>I've already seen a question like this on Math Overflow; however, unfortunately some people felt offended somehow and said: "are you saying that Boyce is easy? It doesn't matter if you know how to prove things, you must learn to compute", and my point is not that: Apostol and Spivak teach how to compute <em>also</em>, however since their books are aimed at mathematicians they take care to build everything very carefully and their main preoccupation is indeed the theoretical aspects.</p> <p>I really don't like the approach of: well, in some strange way we found that this equation works, so memorize it, know how to compute things and everything is fine. I really want to understand what's going on, and until now I didn't find this possible with Boyce's book (certainly there are people who find this possible, but I'm used to books like Spivak's and Apostol's, so I don't really do well with books like Boyce's).</p> <p>I've already seen Arnold's book on Differential Equations, but the prerequisites for reading it are bigger: he uses diffeomorphisms many times, and although I'm currently also studying differential geometry, I don't yet feel comfortable reading a book like this one. </p> <p>Can someone recommend a book that covers ordinary differential equations, systems of differential equations, partial differential equations, and so on, but that can be read without many prerequisites, and that still has the structure of a book of mathematics? And when I say "structure of a book of mathematics" I mean one like Spivak's and Apostol's books: not mixing up definitions, theorems and examples inside histories of how the theory developed. Since I'm a student of Mathematical Physics, <em>of course</em> motivations and examples from Physics are welcome, but not all mixed up in the text.</p> <p>In truth I don't really believe that a book like the one I described exists (if a book on this topic is good from my point of view, I believe it'll have a lot of prerequisites). Anyway, I hope that I don't get misunderstood about what my question is, and I'm really sorry if this question is silly in some way.</p>
<p>Try Birkhoff/Rota and Hirsch/Smale (I actually don't know the latest edition with Devaney).</p>
479
differential equations
Partial Differential Equations Course And Differential Geometry Prerequisites
https://math.stackexchange.com/questions/1076091/partial-differential-equations-course-and-differential-geometry-prerequisites
<p>Is the ordinary differential equations course a prerequisite for the partial differential equations course for a person who has passed the integral calculus course?<br/> Is it really required to have passed an ordinary differential equations course to be able to learn Fourier theory?<br/> What is the prerequisite for a person who wants to learn differential geometry and has passed basic linear algebra and single variable calculus?<br/> My problem with most of the available books is that all of them are based on introducing pure theorems with a very small amount of practice problems, which does not help much in learning to apply those ideas and make them sink in.<br/> I wish someone could explain the set of skills needed to tackle books based on theory rather than problem solving, since most of them are hard to deal with.</p> <blockquote> <p>I expect both guidance and advice on the questions asked; please do not bombard me with lists of books that you have googled. Instead, share your invaluable experiences regarding these questions and strategies to reduce time wasted on thinking about how to think and learn.</p> </blockquote>
480
differential equations
System of differential equations
https://math.stackexchange.com/questions/655744/system-of-differential-equations
<p>Can you please help me to solve this system of differential equations?</p> <p>$$\begin{cases} \dot{x_{1}}=2x_{1}-x_{2}\\ \dot{x_{2}}=4x_{1}-2x_{2}-2t^{-3} \end{cases}$$</p> <p>Thanks :) </p>
<p><strong>HINT</strong> </p> <p>Extract $x_2$ from the first equation and plug into the second. This will give you a very simple second order differential equation in $x_1$. </p> <p>I am sure you can take from here.</p>
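Carrying the hint through: the first equation gives $x_2 = 2x_1 - x_1'$, and substituting into the second collapses everything to $x_1'' = 2t^{-3}$, hence $x_1 = t^{-1} + C_1 + C_2 t$. A sympy verification of this worked-out version of the hint (assuming sympy is available):

```python
import sympy as sp

t, C1, C2 = sp.symbols('t C1 C2')

# general solution of x1'' = 2/t**3 obtained from the hint
x1 = 1/t + C1 + C2*t
# x2 recovered from the first equation: x2 = 2*x1 - x1'
x2 = 2*x1 - x1.diff(t)

# both original equations should be satisfied identically
assert sp.simplify(x1.diff(t) - (2*x1 - x2)) == 0
assert sp.simplify(x2.diff(t) - (4*x1 - 2*x2 - 2/t**3)) == 0
```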
481
differential equations
decouple differential equations
https://math.stackexchange.com/questions/2570271/decouple-differential-equations
<p>I have a system of two second-order differential equations $$ r^2 \ddot{r} - r^3(\dot{\varphi}^2 +\omega^2) =-GM $$ $$ r \ddot{\varphi} + 2\dot{r}(\dot{\varphi}+\omega)=0 $$ which I am supposed to decouple using the conserved quantity $ (\dot{\varphi}+\omega)r^2 $. I have shown that it is indeed conserved, as its derivative is $r$ times the second equation and therefore zero. However, I don't know how this is supposed to help me decouple the two equations.<br> I would be very thankful for hints.</p>
<p>From the second equation: $$ r\ddot{\varphi} + 2\dot{r}(\dot{\varphi}+\omega)=0\quad\to\quad \frac{\ddot{\varphi}}{\dot{\varphi}+\omega}=-2\frac{\dot{r}}{r} $$ $$\ln|\dot{\varphi}+\omega|=-2\ln|r|+c$$ $$\dot{\varphi}=\frac{C}{r^2}-\omega$$ which is exactly the conserved quantity $C=(\dot{\varphi}+\omega)r^2$ from the problem statement. Substituting into the first equation eliminates $\varphi$: $$ r^2 \ddot{r} - r^3\left(\left(\frac{C}{r^2}-\omega\right)^2 +\omega^2\right) =-GM $$ $$ \ddot{r} = \frac{C^2}{r^3}-\frac{GM}{r^2} -\frac{2C\omega}{r}+2r\omega^2 =: F(r)$$ so the system is decoupled: this is a second-order equation in $r$ alone. Multiplying by $\dot{r}$ and integrating once gives the energy integral $$\tfrac{1}{2}\dot{r}^2=\int F(r)\,dr = -\frac{C^2}{2r^2}+\frac{GM}{r}-2C\omega\ln r+r^2\omega^2+E$$ hence $$t=\int \frac{dr}{\sqrt{2\int F(r)\,dr}}+c_1.$$ Because of the logarithmic term this quadrature has no elementary closed form in general.</p> <p>For the orbit one uses $$\frac{d\varphi}{dr}=\frac{\dot{\varphi}}{\dot{r}}=\frac{C/r^2-\omega}{\sqrt{2\int F(r)\,dr}}$$ $$\varphi(r)=\int \frac{C/r^2-\omega}{\sqrt{2\int F(r)\,dr}}\,dr.$$</p> <p>So, the problem is solved in parametric form:</p> <ul> <li><p>The function $\varphi(r)$, given by a quadrature.</p></li> <li><p>The implicit function $t(r)$.</p></li> </ul> <p>There is probably no closed form for the inverse function $r(t)$.</p>
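The conservation law underlying the reduction can also be checked numerically on the original system; a scipy sketch, where $GM$, $\omega$ and the initial state are arbitrary illustrative choices of mine:

```python
import numpy as np
from scipy.integrate import solve_ivp

GM, omega = 1.0, 0.3  # illustrative values only

def rhs(t, s):
    r, rdot, phi, phidot = s
    # r^2 r'' - r^3 (phi'^2 + omega^2) = -GM
    rddot = (r**3*(phidot**2 + omega**2) - GM)/r**2
    # r phi'' + 2 r' (phi' + omega) = 0
    phiddot = -2.0*rdot*(phidot + omega)/r
    return [rdot, rddot, phidot, phiddot]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0, 0.5],
                rtol=1e-10, atol=1e-12)

r, rdot, phi, phidot = sol.y
C = (phidot + omega)*r**2          # should stay constant along the solution
assert np.allclose(C, C[0], rtol=1e-6)
```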
482
differential equations
Freedom of second order differential equations
https://math.stackexchange.com/questions/126986/freedom-of-second-order-differential-equations
<p>Usually for second order differential equations, we get a general solution with 2 constants that are obtained by using the boundary conditions. What about for a coupled pair of second order differential equations? Would I be right to think that the same applies? </p>
<p>You will have $4$ constants, which can be taken as the values of the dependent variables and their derivatives at the initial point.</p>
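Concretely, rewriting the coupled pair as a first-order system in $(x,\dot{x},y,\dot{y})$ makes the four constants visible as four initial values. A small numerical illustration with the system $x''=y$, $y''=x$ (an example of my choosing):

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' = y, y'' = x as a first-order system in s = (x, x', y, y'):
def rhs(t, s):
    x, xp, y, yp = s
    return [xp, y, yp, x]

# four initial values <-> four constants of integration; with
# x(0)=y(0)=1 and x'(0)=y'(0)=0 the exact solution is x = y = cosh(t)
sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0, 1.0, 0.0],
                rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[0][-1], np.cosh(1.0), atol=1e-6)
```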
483
differential equations
Explicit differential equations
https://math.stackexchange.com/questions/89727/explicit-differential-equations
<p>Can you help me to find all solutions of differential equation $y&#39;^2-(x+y)y&#39;+xy=0$?</p> <p>I wrote this equation as product of explicit equations:</p> <p>$$(y&#39;-x)(y&#39;-y)=0$$</p> <p>Then I found zeroes: $y&#39;-x=0 \Longrightarrow y&#39;=x \Longrightarrow y=\frac{x^2}2+C_1$</p> <p>$y&#39;=0$ I don't know what to do here. Maybe to solve as equation 'without $x$'? Am I doing this right?</p>
<p>Your second equation is not $y&#39;=0$ as you write in the question, but $y&#39;-y=0$, in other words $y&#39;=y$. This has the well-known solution $y=C_2e^x$.</p> <p>So now you have the solutions $y=\frac{x^2}2+C_1$ and $y=C_2 e^x$. Now, for the most difficult part of the trick, you need to find all ways to glue <em>intervals</em> of these solutions together so the derivative matches across the glue point ... which means (consult the differential equations again!) that the gluing point(s) has to lie on the line $x=y$.</p> <p>For example one solution among many would be $y=\cases{e^{x-1}&amp;\text{for }x\le 1\\ \frac{x^2+1}2&amp;\text{for }x\ge 1}$.</p> <p>On the other hand, this gluing-together doesn't involve anything specific to differential equations, so perhaps you can take it from here?</p>
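The matching conditions for the example above can be checked symbolically; a sympy sketch (assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
left = sp.exp(x - 1)        # solves y' = y
right = (x**2 + 1)/2        # solves y' = x

# value and derivative agree at the gluing point x = 1 (on the line y = x)
assert left.subs(x, 1) == 1 and right.subs(x, 1) == 1
assert left.diff(x).subs(x, 1) == 1 and right.diff(x).subs(x, 1) == 1

# each branch annihilates one factor of (y' - x)(y' - y)
assert sp.simplify(left.diff(x) - left) == 0
assert sp.simplify(right.diff(x) - x) == 0
```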
484
differential equations
Defining Homogeneous Differential Equations
https://math.stackexchange.com/questions/3215032/defining-homogeneous-differential-equations
<p>I am putting together a list of types of first and second order differential equations and I am struggling with the definition of homogeneous and nonhomogeneous. Can anyone clarify the definitions for me please? </p>
<p>The word <em>homogeneous</em>, somewhat confusingly, is used in two different ways when describing differential equations.</p> <p>In the case of <strong>first-order equations</strong>, a homogeneous equation is usually something of the form</p> <p><span class="math-container">$$\frac{dy}{dx} = f(\frac{y}{x})$$</span></p> <p>i.e. an equation in which <span class="math-container">$y$</span> and <span class="math-container">$x$</span> appear only in the combination <span class="math-container">$\frac{y}{x}$</span>. ‘Homogeneous’ in this case refers to the fact that if <span class="math-container">$y(x)$</span> is a solution then <span class="math-container">$\lambda y(\frac{x}{\lambda})$</span> is also a solution for any <span class="math-container">$\lambda \neq 0$</span>.</p> <p>An example of such an equation would be <span class="math-container">$$\frac{dy}{dx} = \frac{y^2 + 4xy - x^2}{4x^2}$$</span></p> <p>and we can divide the numerator and the denominator by <span class="math-container">$x^2$</span> to obtain</p> <p><span class="math-container">$$\frac{dy}{dx} = \frac{(\frac{y}{x})^2 + 4(\frac{y}{x}) - 1}{4}$$</span></p> <p>which meets our criteria for a first-order homogeneous equation.</p> <p>In the case of <strong>second-order equations</strong>, it usually means something of the form</p> <p><span class="math-container">$$\frac{d^2y}{dx^2} + a(x) \frac{dy}{dx} + b(x)y = 0$$</span></p> <p>The zero on the right hand side is what makes this a homogeneous equation. If <span class="math-container">$y_1$</span> and <span class="math-container">$y_2$</span> are solutions, then <span class="math-container">$A y_1(x) + B y_2 (x)$</span> is also a solution, for constants <span class="math-container">$A, B \in \mathbb{R}$</span>.</p> <p>A useful way of thinking about this is to observe that the left hand side defines a linear map on the vector space of differentiable functions, and so the solution space is the set of vectors mapped to the zero vector, i.e. 
the kernel of this linear map. The kernel is in particular a subspace and so is closed under addition and scalar multiplication (in this case by real numbers).</p> <p>An inhomogeneous equation, then, is an equation where the right hand side is not zero. The general form is</p> <p><span class="math-container">$$\frac{d^2y}{dx^2} + a(x) \frac{dy}{dx} + b(x)y = f(x)$$</span></p> <p>These are harder to solve, and do not have the same properties as the homogeneous variant.</p>
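The closure-under-superposition property described above is easy to verify symbolically; a sympy sketch using $y''+3y'+2y=0$ as an example of my choosing:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

L = lambda y: y.diff(x, 2) + 3*y.diff(x) + 2*y   # the operator y -> y'' + 3y' + 2y

# two solutions of the homogeneous equation L y = 0
y1, y2 = sp.exp(-x), sp.exp(-2*x)
assert sp.simplify(L(y1)) == 0 and sp.simplify(L(y2)) == 0

# any linear combination is again a solution (kernel of a linear map)
assert sp.simplify(L(A*y1 + B*y2)) == 0
```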
485
differential equations
Required Practice Problems of Modelling Differential Equations?
https://math.stackexchange.com/questions/3035396/required-practice-problems-of-modelling-differential-equations
<p>Folks,</p> <p>In differential equations there are two things: 1. Modeling (forming a differential equation based on a physical system or real-life problem) 2. Solving the differential equation. I need help with the first part.</p> <p>Could you please provide links to problem sets or practice sets on modeling differential equations?</p> <p>I found some on MIT OCW; however, it would be helpful if you could send links to other practice sets or books. Thanks.</p>
486
differential equations
Advanced book on partial differential equations
https://math.stackexchange.com/questions/1989007/advanced-book-on-partial-differential-equations
<p>I am looking for an advanced book on partial differential equations that makes use of functional analysis as much as possible. All the books I have looked in so far either shy away from functional analysis and try to avoid even basic concepts, or present results from functional analysis I know anyway just to discuss some very basic applications to partial differential equations (say, semigroup theory applied to the heat equation).</p> <p>The book I am looking for should</p> <ul> <li>use functional analysis instead of hard analysis whenever possible (I am well aware of the fact that the theory of partial differential equations is not merely an application of functional analysis),</li> <li>go into some advanced topics that are relevant for research, and</li> <li>not spend too much space on covering the results of functional analysis itself - I have my references for that.</li> </ul> <p>The background is that I am interested in operator equations that are not partial differential equations, yet methods from pde are often helpful. If it is relevant, I am mostly interested in <em>elliptic</em> and <em>parabolic</em> equations, although I don't want to limit the focus.</p>
<p>Here are some suggestions.</p> <ol> <li><em>Functional Analysis, Sobolev Spaces, and Partial Differential Equations</em> by Haim Brezis. This violates your rule of not developing the functional analysis material, but is a very good book. You can skip the stuff you know and jump right to the PDE / operator bits.</li> <li><em>An Introduction to Partial Differential Equations</em> by Michael Renardy and Robert Rogers. Here you want the last part of the book, say after chapter 8. There's a lot of nice stuff in Chapters 10-12 that uses lots of functional analysis to solve nonlinear elliptic problems, etc.</li> <li><em>Monotone Operators in Banach Space and Nonlinear PDE</em> by Ralph Showalter. This is heavy functional / operator theoretic material used to solve some serious nonlinear problems.</li> <li><em>Nonlinear Differential Equations of Monotone Type in Banach Spaces</em> by Viorel Barbu. This covers the same sort of material as the Showalter book.</li> <li><em>Applications of Functional Analysis and Operator Theory</em> by Hutson and Pym. There's a lot more in here than applications in PDE, but you might find it interesting.</li> </ol>
487
differential equations
Differential Equations book for self-learning
https://math.stackexchange.com/questions/4523091/differential-equations-book-for-self-learning
<p>After posting this <a href="https://math.stackexchange.com/questions/4521706/suggestion-of-around-6-books-in-the-topics-abstract-algebra-linear-algebra">question</a>, I have decided to separate the contents into multiple questions.</p> <p>Reasons I am doing this:</p> <ul> <li><p>The right tag for each topic, so people who are interested in that topic and watch that tag can make suggestions, rather than one person suggesting books for all the mentioned topics. So it might be simpler and more efficient.</p> </li> <li><p>I am having difficulty deciding, as everyone has their own opinion. But I believe there is something common to most people who have read on each topic.</p> </li> <li><p>I did not get many answers, so I could not see common suggestions and make a decision.</p> </li> <li><p>So this post might serve as a voting post.</p> </li> </ul> <hr /> <p>I need a book in:</p> <p><span class="math-container">$\big( \bigstar \big)$</span> <span class="math-container">$\text{Differential Equations}$</span>:</p> <p>Some suggestions on this website are:</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Differential Equations with Applications and Historical Notes}$</span>&quot; by &quot;<span class="math-container">$\text{George F. Simmons}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Ordinary Differential Equations}$</span>&quot; by &quot;<span class="math-container">$\text{Morris Tenenbaum and Harry Pollard}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Ordinary Differential Equations}$</span>&quot; by &quot;<span class="math-container">$\text{Vladimir I. Arnold and R. Cooke}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Ordinary Differential Equations}$</span>&quot; by &quot;<span class="math-container">$\text{Wolfgang Walter and R. Thompson}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Partial Differential Equations}$</span>&quot; by &quot;<span class="math-container">$\text{Lawrence C. Evans}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Partial Differential Equations: An Introduction}$</span>&quot; by &quot;<span class="math-container">$\text{Walter A. Strauss}$</span>&quot;.</p> <p><span class="math-container">$\star$</span> &quot;<span class="math-container">$\text{Partial Differential Equations for Scientists and Engineers}$</span>&quot; by &quot;<span class="math-container">$\text{Stanley J. Farlow}$</span>&quot;.</p> <hr /> <p>Even though I have some knowledge of this topic, I want a book that is easy to read for self-learners, as comprehensive as possible (but starting from scratch, so it should not be a second text), contains proofs as much as possible, contains a good number of examples/exercises, and requires the fewest prerequisites.</p> <hr /> <p>No one suggested me</p> <hr /> <p>You can suggest other than those listed above if you think there is a better one; please suggest the best (one book covering ODEs and one book covering PDEs) or (one single book covering both ODEs and PDEs), whichever is better (in your opinion).</p> <p>So if book X contains ODEs, book Y contains PDEs, and book Z contains both ODEs and PDEs, I wish you would suggest either (X and Y) or (Z), whichever you think is best and satisfies the above criteria.</p> <p>Thanks a lot.</p>
488
differential equations
Solving differential equations &quot;$c$&quot; value
https://math.stackexchange.com/questions/2951485/solving-differential-equations-c-value
<p>I have two questions regarding solving differential equations given initial conditions:</p> <p>1) When do you substitute the initial conditions into the equation to calculate the value of the constant "<span class="math-container">$c$</span>". Do you substitute it once you integrate both sides of the differential equations and you get a constant "<span class="math-container">$c$</span>"? Or do you substitute the initial conditions after integrating both sides AS WELL AS rearranging the equations to get <span class="math-container">$y$</span> in terms of <span class="math-container">$x$</span> and <span class="math-container">$c$</span>. Using the second method, sometimes you get two values for "<span class="math-container">$c$</span>" with only one value being correct. </p> <p>2) When you solve certain differential equations, you get one side written with "<span class="math-container">$\pm$</span>" in the front. However, only one equation fits the initial conditions even after you solve for the constant "<span class="math-container">$c$</span>". The one that fits is either the one with the "<span class="math-container">$+$</span>" or the one with the "<span class="math-container">$-$</span>" in the front. How do you justify which one is correct without giving geometric representations of both and then saying "according to graph, this one <em>insert equation</em> is correct". Can you somehow solve without getting the "<span class="math-container">$\pm$</span>" in the front?</p> <p>Thanks. </p>
<p>For your first question: wait until you have a solution that depends on this constant <span class="math-container">$c$</span>, then plug in your initial conditions to find its value. Sometimes you won't have initial conditions, so you can just leave <span class="math-container">$c$</span> in there.</p> <p>With regards to the second part, often your answer won't make sense if you pick the wrong sign: for example, if you have <span class="math-container">$y$</span> defined as being positive, but taking the <span class="math-container">$-$</span> makes it negative. It is often left to you to justify your choice, and if you can't, it can be possible that both hold.</p>
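Both points can be seen in a tiny worked example of my own (not from the question): $y\,y'=x$ with $y(0)=-2$. Integrate first, determine $c$ from the implicit relation, and only then pick the sign branch. A sympy sketch:

```python
import sympy as sp

x, c = sp.symbols('x c')

# integrating y*y' = x gives y**2/2 = x**2/2 + c; apply y(0) = -2 here:
csol = sp.solve(sp.Eq(sp.Integer(-2)**2/2, c), c)[0]   # c = 2
plus = sp.sqrt(x**2 + 2*csol)
minus = -sp.sqrt(x**2 + 2*csol)

# only the minus branch matches the initial condition y(0) = -2
assert minus.subs(x, 0) == -2
assert plus.subs(x, 0) == 2
```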
489
differential equations
Differential equations notations confusion
https://math.stackexchange.com/questions/325545/differential-equations-notations-confusion
<p>Given these differential equations:</p> <p>$\frac{d^2x}{dt^2} = 2\Omega\frac{dy}{dt}\sin(\lambda) - \frac{g}{L}x$</p> <p>$\frac{d^2y}{dt^2} = -2\Omega\frac{dx}{dt}\sin(\lambda) - \frac{g}{L}y$ </p> <p>Now making the following substitutions:</p> <p>$\frac{dx}{dt} = u$ and $\frac{dy}{dt} = v$ we have the following differential equations</p> <p>$\frac{du}{dt} = 2\Omega v\sin(\lambda) - \frac{g}{L}x$</p> <p>$\frac{dv}{dt} = -2\Omega u\sin(\lambda) - \frac{g}{L}y$</p> <p>Now here is where I get confused, note that we can write $\frac{du}{dt} = f(t,u)$ and likewise for the other equation. But if that is so, then what would be my unknown variable for the first equation? Or is it wrong to write $\frac{du}{dt} = f(t,u)$? </p>
<p>I think if you write the system as follows, it gets simpler, although it takes time.</p> <p>$$D^2x=aDy-bx, ~~~\frac{dx}{dt}=Dx\\ D^2y=-aDx-by,~~~\frac{dy}{dt}=Dy$$ wherein $a=2\Omega\sin(\lambda), b=\frac{g}{L}$. So we get: $$(D^2+b)x-aDy=0\\\\ (D^2+b)y+aDx=0$$ By any method you know for solving the above system, one gets: $$[(D^2+b)^2+a^2D^2]y=0$$ and $$[(D^2+b)^2+a^2D^2]x=0$$ Each of the above can be converted to standard form; for example, for $y(t)$: $$[(D^2+b)^2+a^2D^2]y=0\to y^{(4)}+(2b+a^2)y''+b^2y=0$$ which can be solved easily.</p>
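The elimination leading to $y^{(4)}+(2b+a^2)y''+b^2y=0$ can be checked symbolically by substituting the system back into the claimed fourth-order equation; a sympy sketch (assuming sympy is available):

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
x, y = sp.Function('x'), sp.Function('y')

# the system solved for the highest derivatives
xpp = a*y(t).diff(t) - b*x(t)     # x'' = a y' - b x
ypp = -a*x(t).diff(t) - b*y(t)    # y'' = -a x' - b y

# residual of the claimed decoupled equation for y
expr = y(t).diff(t, 4) + (2*b + a**2)*y(t).diff(t, 2) + b**2*y(t)

# rewrite the higher derivatives using the system, highest order first
expr = expr.subs(y(t).diff(t, 4), ypp.diff(t, 2))
expr = expr.subs(x(t).diff(t, 3), xpp.diff(t))
expr = expr.subs(y(t).diff(t, 2), ypp)

assert sp.simplify(expr) == 0
```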
490
differential equations
Differential equations system
https://math.stackexchange.com/questions/1778640/differential-equations-system
<p>I have found the following example in one of my courses, but I don't have any similar worked exercises, so I would like to know how to solve this:</p> <p>The system of differential equations is the following:</p> <p>$$x_1' = 3x_1 - 2x_2 + e^t$$ $$x_2' = 2x_1 - x_2 + 2e^{2t}$$</p> <p>a) Write the system in the matrix form $x' = Ax+b(t)$.</p> <p>b) Determine the solution of the system.</p>
<p>Writing the system in matrix form is nearly identical to writing any linear (non-differential-equation) system in matrix form: $$\begin{cases} ax+by=c\\dx+ey=f \end{cases}\iff\underbrace{\begin{pmatrix}a&amp;b\\d&amp;e\end{pmatrix}}_{\mathbf{A}}\,\underbrace{\begin{pmatrix}x\\y\end{pmatrix}}_{\mathbf{x}}=\underbrace{\begin{pmatrix}c\\f\end{pmatrix}}_{\mathbf{b}}$$ The main components are the coefficient matrix $\mathbf{A}$, what I'll call the solution vector $\mathbf{x}$, and what I'll call (for lack of a better name) the right hand side vector $\mathbf{b}$.</p> <p>In the case of a system of linear ODEs, the same $\mathbf{A}\mathbf{x}=\mathbf{b}$ framework can be used. For example, you can rewrite the following general system as: $$\begin{cases}ax_1+bx_2={x_1}'\\dx_1+ex_2={x_2}'\end{cases}\iff\begin{pmatrix}a&amp;b\\d&amp;e\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}=\begin{pmatrix}{x_1}'\\{x_2}'\end{pmatrix}=\begin{pmatrix}x_1\\x_2\end{pmatrix}'$$ Your system is different in that there are extra terms that do not depend on $x_1$ or $x_2$. These extra terms can be addressed by introducing another vector to represent any instance of a new term. For example: $$\begin{cases}ax_1+bx_2+t={x_1}'\\dx_1+ex_2={x_2}'\end{cases}\iff\begin{pmatrix}a&amp;b\\d&amp;e\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}t=\begin{pmatrix}x_1\\x_2\end{pmatrix}'$$ This extra term(s) is what your question refers to as $b(t)$.</p> <p>For your particular system, the matrix form would simply be $$\begin{pmatrix}3&amp;-2\\2&amp;-1\end{pmatrix}\begin{pmatrix}x_1\\x_2\end{pmatrix}+\begin{pmatrix}1\\0\end{pmatrix}e^t+\begin{pmatrix}0\\2\end{pmatrix}e^{2t}=\begin{pmatrix}x_1\\x_2\end{pmatrix}'$$ Solving this system can be done readily via undetermined coefficients. You can find several examples using this method worked out <a href="http://tutorial.math.lamar.edu/Classes/DE/NonhomogeneousSystems.aspx" rel="nofollow">here</a>. 
If you'd like a walkthrough for this particular problem (or are asked to use a different method for solving), feel free to leave a comment.</p>
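As a sketch (not part of the original answer), the matrix-form system above can be solved and checked symbolically with sympy; the function and variable names here are illustrative:

```python
# Sketch: solving the system x' = Ax + b(t) with A = [[3, -2], [2, -1]]
# and b(t) = (e^t, 2e^{2t})^T using sympy's dsolve, then verifying it.
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.Function('x1'), sp.Function('x2')

eqs = [
    sp.Eq(x1(t).diff(t), 3*x1(t) - 2*x2(t) + sp.exp(t)),
    sp.Eq(x2(t).diff(t), 2*x1(t) - x2(t) + 2*sp.exp(2*t)),
]
sol = sp.dsolve(eqs)
subs = {s.lhs: s.rhs for s in sol}

# Substituting the solution back in, each residual should simplify to 0.
residuals = [sp.simplify(eq.lhs.subs(subs).doit() - eq.rhs.subs(subs))
             for eq in eqs]
print(residuals)
```

This bypasses the by-hand undetermined-coefficients work, but the residual check is the same verification you would do after solving manually.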
491
differential equations
Convolution and differential equations
https://math.stackexchange.com/questions/1483787/convolution-and-differential-equations
<p>Consider the following system of differential equations: \begin{align} x_1'=f_1(x_1,x_2)\\ x_2'=f_2(x_1,x_2) \end{align} Assume that a solution $x(t)$ exists for $t\in (-T,T)$. Let $g:\mathbb{R}^2\to\mathbb{R}$ be a smooth function. Now we consider the following system of differential equations: \begin{align} x_1'=(g*f_1)(x_1,x_2)\\ x_2'=(g*f_2)(x_1,x_2) \end{align} What can we say about the solution of the above system? Can we say anything about the interval of validity? Or the proximity of the solution to the solution of the original equation?</p> <p>In particular, I am interested in the case when $g$ is the Gaussian convolution kernel: \begin{align} g(x)=\frac{1}{4\pi\sigma}\exp\Big({-\frac{||x||^2}{4\sigma}}\Big) \end{align}</p>
492
differential equations
differential equations of second order
https://math.stackexchange.com/questions/1369886/differential-equations-of-second-order
<p>How may I solve this differential equation:</p> <p>$$y''+4y=12x^2-16x\cos(2x)?$$</p>
<p>Hint: Solve the homogeneous part $y''+4y=0$ first by letting $y=e^{\lambda x}$. Then differentiate, substitute, simplify and solve.</p> <p>Then find a particular solution for the rest using the method of undetermined coefficients.</p>
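A sketch of this recipe with sympy (names are illustrative, not part of the hint): solve the homogeneous part to see the characteristic roots, then let `dsolve` handle the full equation and check it.

```python
# Sketch: homogeneous part first, then the full ODE, verified with checkodesol.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Homogeneous part: lambda^2 + 4 = 0 gives lambda = ±2i, so cos(2x), sin(2x).
hom = sp.dsolve(sp.Eq(y(x).diff(x, 2) + 4*y(x), 0))
print(hom)

# Full equation, including the resonant forcing term 16x*cos(2x).
ode = sp.Eq(y(x).diff(x, 2) + 4*y(x), 12*x**2 - 16*x*sp.cos(2*x))
sol = sp.dsolve(ode)
ok, _ = sp.checkodesol(ode, sol)
print(ok)
```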
493
differential equations
associated homogeneous linear differential equations
https://math.stackexchange.com/questions/1971539/associated-homogeneous-linear-differential-equations
<p>Can someone please explain how associated homogeneous linear differential equations work with an example?</p>
<p>Let's say you have the linear differential equation $y'' + y = 3x$.</p> <p>The associated homogeneous equation is $y'' + y = 0$.</p> <p>The set of solutions to the homogeneous equation is $\{\alpha \cos x +\beta \sin x ;\ \alpha, \beta \in \Bbb R\}$.</p> <p>One particular solution to the initial equation is $3x$ (since $(3x)'' + 3x = 3x$).</p> <p>Thus the set of solutions to the initial equation is $\{\alpha \cos x +\beta \sin x + 3x;\ \alpha, \beta \in \Bbb R\}$.</p>
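The same computation can be sketched in sympy (the names below are illustrative): the solver returns the homogeneous combination plus the particular solution $3x$.

```python
# Sketch: homogeneous equation vs. the full equation y'' + y = 3x.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# Homogeneous: a combination of sin(x) and cos(x).
hom = sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), 0))
print(hom)

# Full equation: homogeneous part plus the particular solution 3x.
full_eq = sp.Eq(y(x).diff(x, 2) + y(x), 3*x)
full = sp.dsolve(full_eq)
print(full)
ok, _ = sp.checkodesol(full_eq, full)
```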
494
differential equations
Dependent differential equations
https://math.stackexchange.com/questions/3614288/dependent-differential-equations
<p>So I have been working with these coupled differential equations: <span class="math-container">$$ \begin{aligned} \frac{dy}{dt} &amp;= 0.1y-0.01(10000-a)\\ \frac{da}{dt} &amp;= 0.1a-10y \end{aligned} $$</span> The first equation represents the rate of change of the predatory fish, while the second represents the small fish.</p> <p>My question is: is there a way to find a general solution for these equations, say in the form of an exponential function? And what does the factor 0.01 in the first equation stand for, given that 10000 is the population of the small fish at the beginning?</p>
<p>You can rewrite the system in matrix form, introducing the vector <span class="math-container">$p:=(y,a)^T$</span>. The system now reads</p> <p><span class="math-container">$$p'=Mp+q$$</span> where <span class="math-container">$M$</span> is a constant matrix and <span class="math-container">$q$</span> is a constant vector. We can get rid of <span class="math-container">$q$</span> by trying a constant solution <span class="math-container">$p_p$</span>, such that</p> <p><span class="math-container">$$p_p'=0=Mp_p+q.$$</span> Then by subtraction (renaming <span class="math-container">$p-p_p$</span> as <span class="math-container">$p$</span>),</p> <p><span class="math-container">$$p'=Mp.$$</span></p> <p>Now assume a solution with an exponential form</p> <p><span class="math-container">$$p=ce^{\lambda t}.$$</span></p> <p>We plug it in the equation and get</p> <p><span class="math-container">$$\lambda ce^{\lambda t}=Mce^{\lambda t},$$</span> or</p> <p><span class="math-container">$$Mc=\lambda c.$$</span></p> <p>This shows that <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$M$</span> and <span class="math-container">$c$</span> an eigenvector. The general solution combines the modes corresponding to the two eigenvalues of <span class="math-container">$M$</span>.</p>
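A numeric sketch of this recipe for the system in the question, rewritten as $p' = Mp + q$ with $p = (y, a)$ (numpy; the names are illustrative):

```python
# Sketch: equilibrium and eigenvalues for y' = 0.1y + 0.01a - 100,
#                                         a' = -10y + 0.1a.
import numpy as np

M = np.array([[0.1,   0.01],
              [-10.0, 0.1]])
q = np.array([-100.0, 0.0])

# Particular (constant) solution p_p, from M p_p + q = 0.
p_p = np.linalg.solve(M, -q)
print(p_p)

# Eigenvalues give the exponents lambda of the homogeneous modes c*e^{lambda*t}.
eigvals, eigvecs = np.linalg.eig(M)
print(eigvals)  # a complex-conjugate pair: oscillatory modes
```

Here the eigenvalues have positive real part 0.1, so the general solution oscillates with growing amplitude around the equilibrium.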
495
differential equations
Differential Equations Applications
https://math.stackexchange.com/questions/4681746/differential-equations-applications
<p>An initial amount <span class="math-container">$\alpha$</span> of a tracer (such as a dye or a radioactive isotope) is injected into Compartment 1 of the two-compartment system shown in the Figure. At time <span class="math-container">$t &gt; 0$</span>, let <span class="math-container">$x_1 (t)$</span> and <span class="math-container">$x_2 (t)$</span> denote the amount of tracer in Compartment 1 and Compartment 2, respectively. Thus under the conditions stated, <span class="math-container">$x_1 (0) = \alpha$</span> and <span class="math-container">$x_2 (0) = 0$</span>. The amounts are related to the corresponding concentrations <span class="math-container">$\rho_1 (t)$</span> and <span class="math-container">$\rho_2 (t)$</span> by the equations <span class="math-container">$x_1 = \rho_1 V_1$</span> and <span class="math-container">$x_2 = \rho_2 V_2$</span> (i) where <span class="math-container">$V_1$</span> and <span class="math-container">$V_2$</span> are the constant respective volumes of the compartments. The differential equations that describe the exchange of tracer between the compartments (using the relations in (i)) are</p> <p><span class="math-container">$$\frac{dx_1}{dt} = -L_{21}x_1 + L_{12}x_2, \qquad \frac{dx_2}{dt} = L_{21}x_1 - L_{12}x_2$$</span></p> <p>where <span class="math-container">$L_{21} = k_{21}/V_1$</span> is the fractional turnover rate of Compartment 1 with respect to 2 and <span class="math-container">$L_{12} = k_{12}/V_2$</span> is the fractional turnover rate of Compartment 2 with respect to 1. Solve the system of differential equations using elimination when <span class="math-container">$L_{21} = 2/25$</span>, <span class="math-container">$L_{12} = 1/50$</span> and <span class="math-container">$\alpha = 25$</span>.</p> <p>My solution: I've tried solving and found <span class="math-container">$x_2 = c_1 + c_2 e^{-t/10}$</span>; after using the initial conditions I get <span class="math-container">$x_2 = 96(1-e^{-t/10})$</span> and <span class="math-container">$x_1 = \frac{1}{4}(96-e^{-t/10})+\frac{5}{4}e^{-t/10}$</span>. Please verify my solution and let me know how to improve it.</p>
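One way to sanity-check an attempt like the one above is symbolically with sympy (a sketch; the `ics` keyword and the function names are assumptions, not part of the problem statement). A useful built-in check: adding the two equations gives $(x_1+x_2)'=0$, so the total tracer must stay at $\alpha$ for all $t$.

```python
# Sketch: solve the compartment system with L21 = 2/25, L12 = 1/50, alpha = 25,
# then check that the total tracer x1 + x2 is conserved.
import sympy as sp

t = sp.symbols('t')
x1, x2 = sp.Function('x1'), sp.Function('x2')
L21, L12, alpha = sp.Rational(2, 25), sp.Rational(1, 50), 25

eqs = [
    sp.Eq(x1(t).diff(t), -L21*x1(t) + L12*x2(t)),
    sp.Eq(x2(t).diff(t),  L21*x1(t) - L12*x2(t)),
]
sol = sp.dsolve(eqs, ics={x1(0): alpha, x2(0): 0})
s = {e.lhs: e.rhs for e in sol}
print(s[x1(t)], s[x2(t)])

# Conservation: x1 + x2 should simplify to the constant alpha = 25.
total = sp.simplify(s[x1(t)] + s[x2(t)])
print(total)
```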
496
differential equations
Why aren&#39;t exact differential equations considered PDE?
https://math.stackexchange.com/questions/929973/why-arent-exact-differential-equations-considered-pde
<p>Exact differential equations come from finding the total differential of some multivariable function.</p> <p>In the exact differential equation $M\mathrm{d}x+N\mathrm{d}y=0$,</p> <p>$M$ and $N$ are considered to be partial derivatives of some potential function... So why aren't exact differential equations considered PDEs? After all, you're finding the potential function given its partial derivatives...</p> <p>Thanks.</p>
<p>Because the partial derivatives are just part of a method of solving them; they appear in the intermediate steps of a solution, not in the DE itself from the start. A bad example (can't think of a better one right now) would be considering $x-2=0$ a second-degree polynomial because you can introduce parameters and make it $x^2=4, x&gt;0$.</p> <p>Also, consider being able to solve a D.E. by transforming it into an exact equation by multiplying it with an integrating factor, or by using another method that has nothing to do with partial derivatives. Why would you call that a PDE?</p> <p>A more specialized example would be $$y'=y \iff y'-y=0 \stackrel{\cdot e^{-x}}{\iff}\frac{y'}{e^x}-\frac{y}{e^x}=0$$ Now, would you consider $y'=y$ a PDE?</p>
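The last equivalence can be checked symbolically (a sketch in sympy; the names are illustrative): multiplying $y'-y=0$ by the integrating factor $e^{-x}$ yields exactly the derivative of $y\,e^{-x}$, i.e. an exact form.

```python
# Sketch: (y' - y) * e^{-x} equals d/dx (y * e^{-x}) identically.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

lhs = (y(x).diff(x) - y(x)) * sp.exp(-x)
exact = (y(x) * sp.exp(-x)).diff(x)

diff = sp.simplify(lhs - exact)
print(diff)  # 0
```

So the equation becomes $(y\,e^{-x})' = 0$, which integrates at once to $y = Ce^{x}$, all without any genuine partial differential equation appearing.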
497
differential equations
Solving differential equations on a scheme
https://math.stackexchange.com/questions/4171145/solving-differential-equations-on-a-scheme
<p>Is there a way to define differential equations on a scheme? If so, is there a Galois-like theory for adding solutions to the differential equation to the sections of the Scheme?</p>
<p><strong>Question:</strong> &quot;If so, is there a Galois-like theory for adding solutions to the differential equation to the sections of the Scheme?&quot;</p> <p><strong>Answer:</strong> If <span class="math-container">$A:=k[x]$</span> is a polynomial ring over a field <span class="math-container">$k$</span>, it follows that <span class="math-container">$T:=Der_k(A)\cong k[x]\partial_x$</span>, where <span class="math-container">$\partial_x$</span> is the partial derivative with respect to the <span class="math-container">$x$</span>-variable. Let <span class="math-container">$E:=A\{e_1,..,e_n\}$</span> be the free <span class="math-container">$A$</span>-module of rank <span class="math-container">$n$</span>. A &quot;connection&quot; on <span class="math-container">$E$</span> is a map</p> <p><span class="math-container">$$\nabla: T \rightarrow End_k(E)$$</span></p> <p>defined by (<span class="math-container">$z:=\sum_i a_ie_i$</span> with <span class="math-container">$a_i \in A$</span>)</p> <p><span class="math-container">$$\nabla(\partial)(z):=\sum_i \partial(a_i)e_i + A(\partial) z$$</span></p> <p>where <span class="math-container">$A: T \rightarrow Mat(n,A)$</span> is an <span class="math-container">$A$</span>-linear map and <span class="math-container">$Mat(n,A)$</span> is the ring of <span class="math-container">$n\times n$</span>-matrices with coefficients in <span class="math-container">$A$</span>.
The kernel of the connection</p> <p><span class="math-container">$$E^{\nabla} \subseteq E$$</span></p> <p>is a <span class="math-container">$k$</span>-vector space - the &quot;solution space&quot; of the connection.</p> <p><strong>Note:</strong> The solution space <span class="math-container">$E^{\nabla}$</span> consists of polynomial solutions to the system <span class="math-container">$(E,\nabla)$</span>, and usually polynomial solutions to systems of linear differential equations are not that interesting.</p> <p><strong>Example: Complex manifolds:</strong> One may do something similar for complex manifolds: If <span class="math-container">$X \subseteq \mathbb{P}^n_{\mathbb{C}}$</span> is a complex manifold, <span class="math-container">$E$</span> a holomorphic finite rank vector bundle and <span class="math-container">$\nabla: T_X \rightarrow End(E)$</span> a flat holomorphic connection, it follows that <span class="math-container">$E^{\nabla}$</span> is a local system on <span class="math-container">$X$</span>. There is an equivalence of categories between the category of local systems of complex vector spaces on <span class="math-container">$X$</span>, and complex representations of the topological fundamental group of <span class="math-container">$X$</span>, hence from <span class="math-container">$E^{\nabla}$</span> we get a finite dimensional complex representation</p> <p><span class="math-container">$$\rho_{\nabla}:\pi_1(X) \rightarrow End_{\mathbb{C}}(W)$$</span></p> <p>of the topological fundamental group. This correspondence leads to the famous &quot;Riemann-Hilbert correspondence&quot; - a much studied topic in algebra/topology. Similar constructions exist for schemes. Such questions lead to the study of differential Galois theory, Picard-Vessiot theory and linear algebraic groups.
A &quot;linear algebraic group&quot; is an algebraic subgroup of <span class="math-container">$GL_k(V)$</span> for some vector space <span class="math-container">$V$</span> of finite dimension over a field <span class="math-container">$k$</span>.</p> <p><a href="https://en.wikipedia.org/wiki/Riemann%E2%80%93Hilbert_correspondence#Examples" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Riemann%E2%80%93Hilbert_correspondence#Examples</a></p> <p><a href="https://en.wikipedia.org/wiki/Picard%E2%80%93Vessiot_theory" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Picard%E2%80%93Vessiot_theory</a></p> <p><a href="https://en.wikipedia.org/wiki/Differential_Galois_theory" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Differential_Galois_theory</a></p>
498
differential equations
Geometrical insights on differential equations
https://math.stackexchange.com/questions/4409559/geometrical-insights-on-differential-equations
<p>Hi: I am researching relationships between Differential Geometry and Differential Equations. I am looking for <strong>examples and references of the use of geometric concepts to solve or analyze differential equations</strong>.</p> <p>For example, in <em>Differential Equations With Applications and Historical Notes</em> the author relates the solution of the Brachistochrone problem to Snell's law and the behavior of light, providing very valuable intuition about the form of the solution.</p> <p>I know a lot of geometric concepts are used in this field: orbits, symmetries... I am looking for especially creative or illuminating examples, especially if they had historical significance.</p> <p>For example, in this <a href="https://math.stackexchange.com/questions/1384338/math-intuition-and-natural-motivation-behind-t-student-distributionv">question</a> a non-obvious geometric intuition for an important theorem in statistics is provided. Another example is the counterexample to the Poincaré-Bendixson Theorem on the torus, where the geometry of the torus is used to construct non-periodic yet recurrent orbits.</p> <p>Examples and references of relationships in the other direction are also welcome: for example, the Picard–Lindelöf theorem can be used to prove that every space curve is uniquely determined (up to rigid motion) by its curvature and torsion.</p> <p>Thanks in advance.</p>
<p>As stated in the comment section this is a vast topic. I will just name a few directions that you can explore.</p> <p>-<em>Degree theory</em>: in finite dimension a map <span class="math-container">$f:M^n\to N^n$</span> between closed manifolds of the same dimension has a degree which can be defined as a signed sum of the points in the fiber <span class="math-container">$f^{-1}(y_0)$</span> of a regular value <span class="math-container">$y_0$</span>. This enjoys several properties, for example it does not really depend on <span class="math-container">$y_0$</span> and it depends on <span class="math-container">$f$</span> only up to homotopy. This can be generalized (under suitable assumptions) to PDEs once we realize the solution set of a PDE as the zero locus of a map <span class="math-container">$F:X\to Y$</span> where <span class="math-container">$X, Y$</span> are Banach spaces/manifolds. Then if <span class="math-container">$F$</span> has positive degree it means that the fiber is non-empty; from this one can conclude the existence of solutions of a perturbation of <span class="math-container">$F$</span> or of <span class="math-container">$F$</span> itself if <span class="math-container">$0$</span> is a regular value. A book to read about this is Deimling &quot;Nonlinear functional analysis&quot;.</p> <p>-<em>Morse theory</em>: this was first applied by Marston Morse to prove results about the existence of geodesics on closed manifolds. We have a functional <span class="math-container">$E:X\to \mathbb R$</span> (the energy, e.g. the length of a curve, <span class="math-container">$X$</span> a set of curves) and we are interested in critical points of <span class="math-container">$E$</span>, i.e. solutions to the equation <span class="math-container">$\nabla E (x) = 0 $</span>.
For example the geodesic equation can be formalized in this way (see <a href="https://math.stackexchange.com/questions/2270622/critical-curves-of-the-energy-functional-are-geodesics">Critical Curves of the Energy Functional are Geodesics</a>). Now the idea of Morse is that the flow of the gradient may be used to infer information about the existence of such critical points. In finite dimension, think of a generic function <span class="math-container">$\mathbb S^2\to \mathbb R$</span>: we know that <span class="math-container">$n_m - n_s + n_M = 2$</span>, where <span class="math-container">$n_m, n_s, n_M$</span> are respectively the numbers of local minima, saddles, and local maxima. Consequently, if one knows <span class="math-container">$n_m$</span> and <span class="math-container">$n_s$</span>, one can infer the existence of <span class="math-container">$n_M$</span> further critical points. A famous book about this is Milnor's Morse theory.</p> <p>-<em>h-principle</em> A PDE on a manifold <span class="math-container">$M$</span> prescribes an operator <span class="math-container">$F:J^r(X)\to \mathbb R$</span> on some jet bundle. Solutions of the PDE must lie in the zero locus of <span class="math-container">$F$</span>. <span class="math-container">$J^{r}(X)$</span> is a fibration over <span class="math-container">$M$</span>, and a solution of the PDE gives a section of <span class="math-container">$J^{r}(X)$</span>. Unfortunately a section of <span class="math-container">$J^r(X)$</span> does not necessarily have to be induced from a function (such sections are called non-holonomic). The <span class="math-container">$h$</span>-principle essentially consists in finding sufficient conditions that ensure that you can promote a section of <span class="math-container">$J^r(X)$</span> to a holonomic one (hence a real solution). A book where you can read about this is Eliashberg &amp; Mishachev &quot;introduction to the h-principle&quot;.
Also see the introduction of <a href="https://arxiv.org/pdf/1609.03180.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1609.03180.pdf</a></p> <p>-<em>Index theory</em> The Atiyah-Singer index theorem computes the difference between the dimension of the kernel and the dimension of the cokernel of an elliptic differential operator in purely topological terms. Studying its proof will give you a lot of tools to understand classic elliptic PDEs and geometry. A good book about it is Lawson-Michelsohn's &quot;Spin Geometry&quot;.</p>
499