| category | title | question_link | question_body | answer_html | __index_level_0__ |
|---|---|---|---|---|---|
solve differential equations
|
MATLAB solve Ordinary Differential Equations
|
https://stackoverflow.com/questions/23962549/matlab-solve-ordinary-differential-equations
|
<p>How can I use MATLAB to solve the following ordinary differential equations?</p>
<p><strong>x''/y = y''/x = -( x''y + 2x'y' + xy'')</strong></p>
<p>with two known points, such as t=0: x(0)=x0, y(0)=y0; t=1: x(1)=x1, y(1)=y1?
It doesn't need to be a closed-form solution if that is difficult; a numerical solution is fine, meaning that, given a specific t, I can get the values of x(t) and y(t).</p>
<p>If this is hard to do in MATLAB, Mathematica is also OK, but as I am not familiar with Mathematica, I would prefer MATLAB if possible.</p>
<p>Looking forward to your help, thanks!</p>
<p>I asked the same question on Stack Exchange, but haven't got a good answer yet.
<a href="https://math.stackexchange.com/questions/812985/matlab-or-mathematica-solve-ordinary-differential-equations">https://math.stackexchange.com/questions/812985/matlab-or-mathematica-solve-ordinary-differential-equations</a></p>
<p>Hope I can get the problem solved here!</p>
<p>What I have tried is:</p>
<p>---------MATLAB</p>
<pre><code>>> syms t
</code></pre>
<pre><code>>> [x, y] = dsolve('(D2x)/y = -(y*D2x + 2Dx*Dy + x*D2y)', '(D2y)/x = -(y*D2x + 2Dx*Dy + x*D2y)','t')
Error using sym>convertExpression (line 2246)
Conversion to 'sym' returned the MuPAD error: Error: Unexpected 'identifier'.
[line 1, col 31]
Error in sym>convertChar (line 2157)
s = convertExpression(x);
Error in sym>convertCharWithOption (line 2140)
s = convertChar(x);
Error in sym>tomupad (line 1871)
S = convertCharWithOption(x,a);
Error in sym (line 104)
S.s = tomupad(x,'');
Error in dsolve>mupadDsolve (line 324)
sys = [sys_sym sym(sys_str)];
Error in dsolve (line 186)
sol = mupadDsolve(args, options);
</code></pre>
<p>--------MATLAB</p>
<p>Also, I tried to add conditions, such as x(0)=2, y(0)=8, x(1)=7, y(1)=18, and the errors are still similar. So what I think is that this cannot be solved by the dsolve function.</p>
<p>So, again, the key problem is: given two known points, such as when t=0: x(0)=x0, y(0)=y0; t=1: x(1)=x1, y(1)=y1, how do I get the values of x(t) and y(t)?</p>
<p>Update:
I tried the ode45 functions. First, in order to turn the second-order equations into first-order ones, I set x1 = x, x2 = y, x3 = x', x4 = y'. After some calculation, the equations become:</p>
<pre><code>x(1)' = x(3) (1)
x(2)' = x(4) (2)
x(3)' = x(2)/x(1)*(-2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)) (3)
x(4)' = -2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2) (4)
</code></pre>
<p>So the matlab code I wrote is:</p>
<pre><code>myOdes.m
function xdot = myOdes(t,x)
xdot = [x(3); x(4); x(2)/x(1)*(-2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)); -2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)]
end
main.m
t0 = 0;
tf = 1;
x0 = [2 3 5 7]';
[t,x] = ode45('myOdes',[t0,tf],x0);
plot(t,x)
</code></pre>
<p>It can run. However, this is actually not right, because what I know is the value of x and y (i.e. x(1) and x(2)) at t=0 and at t=1, but the ode functions need the full initial value vector x0. I just wrote the condition x0 = [2 3 5 7]' arbitrarily to make the code run. So how can I solve this problem?</p>
<p>UPDATE:
I tried to use the function bvp4c after I realized that it is a boundary value problem. The following is my code (suppose the two boundary conditions are: when t=0: x=1, y=3; when t=1: x=6, y=9; x is x(1), y is x(2)):</p>
<pre><code>1. bc.m
function res = bc(ya,yb)
res = [ ya(1)-1; ya(2)-3; yb(1) - 6; yb(2)-9];
end
2. ode.m
function dydx = ode(t,x)
dydx = [x(3); x(4); x(2)/x(1)*(-2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)); -2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)];
end
3. mainBVP.m
solinit = bvpinit(linspace(0,6,10),[1 0 -1 0]);
sol = bvp4c(@ode,@bc,solinit);
t = linspace(0,6);
x = deval(sol,t);
plot(t,x(1,:));
hold on
plot(t,x(2,:));
plot(t,x(3,:));
plot(t,x(4,:));
x(1,:)
x(2,:)
</code></pre>
<p>It runs, but I don't know whether it is right. I will check it again to make sure the code is correct.</p>
|
<p>The following is the answer we finally got with @Chriso: use MATLAB's bvp4c function to solve this boundary value problem (suppose the two boundary conditions are: when t=0: x=1, y=3; when t=1: x=6, y=9; x is x(1), y is x(2)):</p>
<pre><code>1. bc.m
function res = bc(ya,yb)
res = [ ya(1)-1; ya(2)-3; yb(1) - 6; yb(2)-9];
end
2. ode.m
function dydx = ode(t,x)
dydx = [x(3); x(4); x(2)/x(1)*(-2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)); -2*x(1)*x(3)*x(4)/(1+x(1)^2+x(2)^2)];
end
3. mainBVP.m
solinit = bvpinit(linspace(0,6,10),[1 0 -1 0]);
sol = bvp4c(@ode,@bc,solinit);
t = linspace(0,6);
x = deval(sol,t);
plot(t,x(1,:));
hold on
plot(t,x(2,:));
plot(t,x(3,:));
plot(t,x(4,:));
x(1,:)
x(2,:)
</code></pre>
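<p>For cross-checking, the same boundary value problem can be set up in Python with SciPy's solve_bvp. Note that the accepted code's mesh spans [0,6] while the stated conditions are at t=0 and t=1, so this sketch (an illustration, not the authors' code) uses a mesh on [0,1] with a linear initial guess between the boundary values:</p>

```python
import numpy as np
from scipy.integrate import solve_bvp

def ode(t, x):
    # x[0]=x, x[1]=y, x[2]=x', x[3]=y' (vectorized over the mesh)
    common = -2*x[0]*x[2]*x[3] / (1 + x[0]**2 + x[1]**2)
    return np.vstack([x[2], x[3], x[1]/x[0]*common, common])

def bc(xa, xb):
    # x(0)=1, y(0)=3, x(1)=6, y(1)=9
    return np.array([xa[0]-1, xa[1]-3, xb[0]-6, xb[1]-9])

t = np.linspace(0, 1, 10)
guess = np.zeros((4, t.size))
guess[0] = 1 + 5*t   # linear guess between the boundary values of x
guess[1] = 3 + 6*t   # linear guess between the boundary values of y
guess[2] = 5.0       # corresponding constant slopes
guess[3] = 6.0
sol = solve_bvp(ode, bc, t, guess)
```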
| 234
|
solve differential equations
|
solving a pair of iterated differential equations
|
https://stackoverflow.com/questions/53525908/solving-a-pair-of-iterated-differential-equations
|
<p>Ive got a pair of differential equations that I've learnt how to solve:</p>
<blockquote>
<pre><code>dc/dt=r*c+c^2-c^3-b*c*u,
du/dt=-g*u+(b*c*u)/2,
</code></pre>
</blockquote>
<p>where r,b,g are constants (no assumption on r but b and g are positive). So my code to solve these are:</p>
<pre><code>from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
b=2
g=0.1
def func(z,t):
    c = z[0]
    u = z[1]
    dcdt = r*c+c**2-c**3-b*c*u**2
    dudt = -g*u+0.5*b*c*u
    return [dcdt,dtaudt]

#initial conditions
z0 = [1,0.2] #u[0] =!0.0
#time points
t = np.linspace(0,20,100)
#solve ODE
z = odeint(func,z0,t)
#seperating answers out
c = z[:,0]
u = z[:,1]
</code></pre>
<p>I hope this is the right way to go about it. If not, please let me know of my mistake. I have two questions.</p>
<ol>
<li>Is there any way we can make my function dependent on (c,u,t) rather than (z,t)? </li>
<li>I need to iterate these equations. So I need a variable <code>n=6</code></li>
</ol>
<p>where we can get python to solve the differential equations for dcdt[i] for i up to 6. I want this because I want to change the equations so that dcdt[i] might depend on dcdt[i-1] or dcdt[i+1] etc.</p>
<p>Can someone help me please? Thanks in advance.</p>
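<p>For what it's worth, here is a minimal runnable sketch of the snippet above with its apparent slips fixed: r must be defined before use (the value 0.5 here is only a placeholder, since the question leaves r unspecified), the return typo dtaudt is corrected to dudt, and c and u are unpacked directly so the function body reads in terms of (c, u, t):</p>

```python
import numpy as np
from scipy.integrate import odeint

b, g = 2, 0.1
r = 0.5  # placeholder value: the question does not specify r

def func(z, t):
    c, u = z  # unpack so the body is written in terms of c and u
    dcdt = r*c + c**2 - c**3 - b*c*u   # b*c*u as in the stated equation
    dudt = -g*u + 0.5*b*c*u
    return [dcdt, dudt]

z0 = [1, 0.2]            # u(0) != 0
t = np.linspace(0, 20, 100)
z = odeint(func, z0, t)
c, u = z[:, 0], z[:, 1]  # separate the answers out
```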
| 235
|
|
solve differential equations
|
Making an AbstractOdeSolver class library to solve differential equations numerically (c++)
|
https://stackoverflow.com/questions/46874293/making-an-abstractodesolver-class-library-to-solve-differential-equations-numeri
|
<p>I am trying to make an ODE solver using C++ and numerical methods (Euler, Heun and Runge-Kutta). First I made an abstract class for the ODE-solving requirements, then I made a separate class for each solver inheriting from the abstract class. There is no problem with the code except that it only works with a single first-order differential equation such as dy/dt = y;
however, I need to expand it to be able to solve a system of first-order differential equations such as dy/dt = x-y and dx/dt = y together.</p>
<p>here is the header file </p>
<pre><code>#ifndef ABSTRACTODESOLVER_H_INCLUDED
#define ABSTRACTODESOLVER_H_INCLUDED

#include <iostream>
using namespace std;

class AbstractOdeSolver
{
private:
    double stepsize;
    double initialTime;
    double finalTime;
    double initialValue;
public:
    double (*RHS)(double, double); // pointer to function
    void SetStepSize(double h);
    void SetIntervalTime(double t0, double t1);
    void SetInitialValue(double y0);
    void setRHS(double (*pRHS)(double, double)); // set pointer function
    double getStepSize(){return stepsize;}
    double getInitialTime(){return initialTime;}
    double getFinalTime(){return finalTime;}
    double getInitialValue(){return initialValue;}
    virtual void SolveEquation() = 0;
};

class EulerSolver : public AbstractOdeSolver
{
public:
    virtual void SolveEquation();
};

class HeunSolver : public AbstractOdeSolver
{
public:
    virtual void SolveEquation();
};

class RungeKuttaSolver : public AbstractOdeSolver
{
public:
    virtual void SolveEquation();
};

#endif // ABSTRACTODESOLVER_H_INCLUDED
</code></pre>
<p>and this is the source code for one solver: </p>
<pre><code>#include <iomanip> // for left/setw used below

void EulerSolver::SolveEquation(){
    double yNew = 0.0;
    double yOld = getInitialValue(); // y0, initial value of y
    double tInit = getInitialTime(); // t0, initial value of t
    double tFinal = getFinalTime();
    double h = getStepSize();
    for (double t = tInit; t <= tFinal; t += h){
        yNew = yOld + h * RHS(t, yOld);
        yOld = yNew;
        cout << left << setw(5) << t + h << " " << setw(5) << yNew << endl;
    }
}
</code></pre>
<p>and this is a program: </p>
<pre><code>double Func(double t, double y){
    return y;
}

// inside main():
EulerSolver euler1;
euler1.SetIntervalTime(0, 0.6);
euler1.SetInitialValue(1);
euler1.SetStepSize(0.1);
euler1.setRHS(Func);
euler1.SolveEquation();
</code></pre>
<p>Here I am using a function pointer RHS so that the solver can call the function at every iteration. However, I need to have more than one function in order to solve higher-degree equations, and the problem is that I don't know a way of making two separate functions with related variables (dy/dt = x-y and dx/dt = y), having a pointer to each, and returning the answer in an array.
Any ideas would be great.</p>
| 236
|
|
solve differential equations
|
Solving differential equations in Matlab
|
https://stackoverflow.com/questions/58895739/solving-differential-equations-in-matlab
|
<p>I need to solve these 2 differential equations simultaneously. </p>
<pre><code>dr^3/dt=(-3*D*Cs)/(ρ*r0^2 )*r*(1-C)
dC/dt=((D*4π*r0*N*(1-C)*r)-(Af*C))/V
</code></pre>
<p>Note: dr^3/dt is the derivative of r^3 with respect to t</p>
<p>The two equations resemble the change in particle radius (r) and concentration (C) with time for a dissolution process of a microsuspension and its simultaneous absorption in the bloodstream. What is expected to happen as the solid dissolves, is that radius, r, will decrease and the concentration, C, will increase and eventually plateau (i.e. reach an equilibrium) as the dissolved solid is being removed into the bloodstream by this Af*C term (where Af is some sort of absorption rate constant). The equations come from this paper which I am trying to replicate: <a href="https://jpharmsci.org/article/S0022-3549(18)30334-4/fulltext#sec3.2.1" rel="nofollow noreferrer">jpharmsci.org/article/S0022-3549(18)30334-4/fulltext#sec3.2.1</a> -- Change in C with t is supposed to be like Figure 3 (DCU example).</p>
<p>I used the simplification d(r^3)/dt = 3r^2*(dr/dt) and divided both sides of the equation by 3r^2. The ODEs become:</p>
<pre><code>function dydt=odefcnNY_v3(t,y,D,Cs,rho,r0,N,V,Af)
dydt=zeros(2,1);
dydt(1)=((-D*Cs)/(rho*r0^2*y(1)))*(1-y(2)); % dr*/dt
dydt(2)=(D*4*pi*N*r0*(1-y(2))*y(1)-(Af*y(2)))/V; % dC*/dt
end
</code></pre>
<pre><code>y(1) = r* and
y(2) = C*
</code></pre>
<p><code>r*</code> and <code>C*</code> are the terminology used in the paper and are the "normalised" radius and concentration, where</p>
<pre><code>r* = r/r0 and C* = C/Cs
</code></pre>
<p>where:</p>
<ul>
<li>r=particle radius (time varying and expressed by dydt(1))</li>
<li>r0=initial particle radius</li>
<li>C=concentration of dissolved solids (time varying and expressed by dydt(2))</li>
<li>Cs=saturated solubility</li>
</ul>
<p>The rest of the code is below. <em>Updated with feedback from authors on values used in paper and to correct initial values to y0=[1 0]</em> </p>
<pre><code>MW=224; % molecular weight
D=9.916e-5*(MW^-0.4569)*60/600000 %m2/s - [D(cm2/min)=9.916e-5*(MW^-0.4569)*60] equation provided by authors, divide by 600,000 to convert to m2/s
rho=1300; %kg/m3
r0=10.1e-6; %m dv50
Cs=1.6*1e6/1e9; %kg/m3 - 1.6 ug/mL converted to kg/m3
V=5*0.3/1e6;%m3 5ml/Kg animal * 0.3Kg animal, divide by 1e6 to convert to m3
W=30*0.3/1000000; %kg; 30mg/Kg animal * 0.3Kg animal, divide by 1e6 to convert to kg
N=W/((4/3)*pi*r0^3*rho); % particle number
Af=0.7e-6/60; %m3/s
tspan=[0 24*3600]; %s in 24 hrs
y0=[1 0];
[t,y]=ode113(@(t,y) odefcnNY_v11(t,y,D,Cs,rho,r0,Af,N,V), tspan, y0);
plot(t/3600,y(:,1),'-o') %plot time in hr, and r*
xlabel('time, hr')
ylabel('r*, (rp/r0)')
legend('DCU')
title ('r*');
plot(t/3600,y(:,1)*r0*1e6); %plot r in microns
xlabel('time, hr');
ylabel('r, microns');
legend('DCU');
title('r');
plot(t/3600,y(:,2),'-') %plot time in hr, and C*
xlabel('time, hr')
ylabel('C* (C/Cs)')
legend('DCU')
title('C*');
plot(t/3600, y(:,2)*Cs) % time in hr, and bulk concentration on y
xlabel('time, hr')
ylabel('C, kg/m3')
legend('Dissolved drug concentration')
title ('C');
</code></pre>
<p>I first tried ode45, but the code was taking a very long time to run and eventually I got some errors. I then tried ode113 and got the below error.</p>
<pre><code>Warning: Failure at t=2.112013e+00. Unable to meet integration tolerances without reducing the step size below the smallest value allowed (7.105427e-15) at time t.
</code></pre>
<p><em>Update: Code for function updated to resolve singularity issue:</em></p>
<pre><code>function dydt=odefcnNY_v10(t,y,D,Cs,rho,r0,N,V,Af)
dydt=zeros(2,1);
dydt(1)=(-D*Cs)/(rho*r0^2)*(1-y(2))*y(1)/(1e-6+y(1)^2); % dr*/dt
dydt(2)=(D*4*pi*N*r0*(1-y(2))*y(1)-Af*y(2))/V; %dC*/dt
end
</code></pre>
<p><strong>Results</strong>
<a href="https://i.sstatic.net/jld4z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jld4z.jpg" alt="enter image description here"></a></p>
<p><strong>Mechanistic background to model</strong></p>
<p><em>Derivation of dr/dt</em></p>
<p><a href="https://i.sstatic.net/rEv50.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rEv50.jpg" alt="Derivation of dydt(1)"></a></p>
<p><em>Derivation of dC/dt</em></p>
<p><a href="https://i.sstatic.net/hKECk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hKECk.jpg" alt="Derivation of dydt(2)"></a></p>
<p><em>Model Assumptions</em></p>
<p>Above you will find slides that show the derivations of these equations. The authors assumed that in the Noyes-Whitney equation for dissolution rate, dM/dt = (D/h)*4*pi*r^2*N*(Cs-C), the film thickness, h, is equal to the particle radius, r. This is a simplification usually done in biopharmaceutics modelling if the Reynolds number is low and particles are <60um (in their case 10um). If we make this assumption, we are left with dM/dt = D*4*pi*r*N*(Cs-C). I am eager to replicate this paper as I want to do this exact same thing, i.e. model drug absorption of a subcutaneous injection of a microsuspension. I have contacted the authors, who seem rather unsure of what they have done, so I am looking at other sources. There is for example this paper: <a href="https://pubs.acs.org/doi/pdf/10.1021/acs.iecr.7b04730" rel="nofollow noreferrer">https://pubs.acs.org/doi/pdf/10.1021/acs.iecr.7b04730</a> where in equation 6, an equation for dC/dt is shown. They embed the change in surface area per unit volume (a) (equation 5) into equation 6. And their mass transfer coefficient kL is a lumped parameter = D/h (diffusivity/film thickness).</p>
|
<p>In the original form of the radius equation</p>
<pre><code>d(r^3)/dt = -3K*(r^3)^(1/3)*(1-C)
</code></pre>
<p>or the power-reduced one</p>
<pre><code>dr/dt = -K/r*(1-C) <==> d(r^2)/dt = -2K*(1-C)
</code></pre>
<p>you reach a singularity at the moment that the radius shrinks to zero like approximately <code>r=sqrt(A-B*t)</code>, that is, the globules will have vanished in finite time. From that point on, obviously, one should have <code>r=0</code>. This can be obtained via a model change from a 2 component system to a scalar equation for <code>C</code> alone, getting the exact time via event mechanism. </p>
<p>Or alternatively, you can modify the radius equation so that <code>r=0</code> is a natural stationary point. The first version below somehow does that, but as all the versions are equivalent, one can still expect numerical difficulties. Possible modifications are</p>
<pre><code>d(r^2)/dt = -2K*sign(r)*(1-C)
</code></pre>
<p>where the signum function can be replaced by continuous approximations like</p>
<pre><code>x/(eps+abs(x)), x/max(eps,abs(x)), tanh(x/eps), ...
</code></pre>
<p>or in the reduced form, the singularity can be mollified as</p>
<pre><code>dr/dt = -K*(1-C) * r/(eps^2+r^2)
</code></pre>
<p>or still in some other variation</p>
<pre><code>dr/dt = -K*(1-C) * 2*r/(max(eps^2,r^2)+r^2)
</code></pre>
<hr>
<p>Applied to the concrete case this gives (using python instead of matlab)</p>
<pre class="lang-py prettyprint-override"><code>from math import pi

def odefcnNY(t,y,D,Cs,rho,r0,N,V,Af):
    r,C = y
    drdt = (-D*Cs)/(rho*r0**2)*(1-C) * r/(1e-6+r**2)  # dr*/dt
    dCdt = (D*4*pi*N*r0*(1-C)*r-(Af*C))/V             # dC*/dt
    return [drdt, dCdt]
</code></pre>
<p>and apply an implicit method in view of the near singularity at r=0</p>
<pre class="lang-py prettyprint-override"><code>from math import pi
from scipy.integrate import solve_ivp

D=8.3658e-10 #m2/s
rho=1300 #kg/m3
r0=10.1e-6 #m dv50
Cs=0.0016 #kg/m3
V=1.5e-6 #m3
W=9e-6 #kg
N=W/(4/3*pi*r0**3*rho)
Af=0.7e-6/60 #m3/s

tspan=[0, 24*3600] #sec in 24 hours
y0=[1.0, 0.0] # relative radius starts at full, 1.0
sol=solve_ivp(lambda t,y: odefcnNY(t,y,D,Cs,rho,r0,N,V,Af), tspan, y0, method="Radau", atol=1e-14)
t = sol.t; r,C = sol.y
</code></pre>
<p>then results in a solution plot</p>
<p><a href="https://i.sstatic.net/OI60k.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OI60k.png" alt="plot of the start of the solution"></a></p>
<p>which now with the corrected parameters looks close to the published graphs.</p>
| 237
|
solve differential equations
|
solving differential equation with step function
|
https://stackoverflow.com/questions/50471374/solving-differential-equation-with-step-function
|
<p>I am trying to solve this differential equation as part of my assignment. I am not able to understand how I can put the condition for u in the code. In the code shown below, I arbitrarily provided <code>u = 5</code>.</p>
<pre><code>2*dx(t)/dt = -x(t) + u(t)
5*dy(t)/dt = -y(t) + x(t)
u = 2*S(t-5)
x(0) = 0
y(0) = 0
</code></pre>
<p>where S(t−5) is a step function that changes from zero to one at t=5. When it is multiplied by two, it changes from zero to two at that same time, t=5.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x,t,u):
    dxdt = (-x+u)/2
    return dxdt

def model2(y,x,t):
    dydt = -(y+x)/5
    return dydt

x0 = 0
y0 = 0
u = 5
t = np.linspace(0,40)
x = odeint(model,x0,t,args=(u,))
y = odeint(model2,y0,t,args=(u,))
plt.plot(t,x,'r-')
plt.plot(t,y,'b*')
plt.show()
</code></pre>
|
<p>I do not know the SciPy Library very well, but regarding the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.odeint.html" rel="nofollow noreferrer">example in the documentation</a> I would try something like this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import odeint

def model(x, t, K, PT):
    """
    The model consists of the state x in R^2, the time in R and the two
    parameters K and PT regarding the input u as step function, where K
    is the height of u and PT is the delay of the step.
    """
    x1, x2 = x               # Split the state into two variables
    u = K if t >= PT else 0  # This is the system input
    # Here comes the differential equation in vectorized form
    dx = [(-x1 + u)/2,
          (-x2 + x1)/5]
    return dx

x0 = [0, 0]
K = 2
PT = 5
t = np.linspace(0, 40)
x = odeint(model, x0, t, args=(K, PT))
plt.plot(t, x[:, 0], 'r-')
plt.plot(t, x[:, 1], 'b*')
plt.show()
</code></pre>
| 238
|
solve differential equations
|
matlab ODE45 solves Differential equations with two variables of same order
|
https://stackoverflow.com/questions/39295156/matlab-ode45-solves-differential-equations-with-two-variables-of-same-order
|
<p>I have a Differential equations such as below in matlab format </p>
<pre><code>syms x y m g r l J
% x,y are variables, the others are constant
1: 0.5*m*(r^2*x^2+l^2*(Dx-Dy)^2+2*r*l*Dx*(Dx-Dy)*cos(y))+0.5*J*(Dx-Dy)^2 = m*g*(l*sin(x-y)-r*(1-cos(x)));
2: J*(D2x-D2y)+l^2*(D2x-D2y)-r*l*(Dx)^2*sin(y)+r*l*D2x*cos(y)-m*g*l*cos(x-y) = 0,
3: x(0)=pi/2, y(0)=pi/2, Dx(0)=0, Dy(0)=0,
</code></pre>
<p>I want to use the ode45 solver, but I don't know how to apply it in such a situation.</p>
|
<p>You need to solve for D2x and D2y to get an explicit ODE. Since there is only one equation for them, what you have is a DAE, differential-algebraic equation (system). </p>
<p>Thus you either employ a DAE solver or you have to compute the derivative of the first equation to get a second equation for the second derivatives, isolate them (if necessary, using a numerical solver) and then solve it as a second order ODE.</p>
<pre><code>% pseudocode
eq3 = diff(eq1, t)                % differentiate the first equation w.r.t. t

deriv(t, w):
    x, y, Dx, Dy = w
    solve eq2, eq3 for D2x, D2y   % two equations, linear in D2x and D2y
    deriv = [ Dx, Dy, D2x, D2y ]
</code></pre>
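<p>A sketch of that "isolate numerically" step (the coefficient rows below are placeholders for illustration, not the actual derivatives of eq1 and eq2, which involve the constants m, g, r, l, J): after differentiating the first equation once, both equations are linear in D2x and D2y, so each evaluation of the ODE right-hand side reduces to a 2x2 linear solve.</p>

```python
import numpy as np

# Hypothetical state w = [x, y, Dx, Dy]. Write both equations as
# A @ [D2x, D2y] = rhs and solve at each state.
def second_derivatives(w):
    x, y, Dx, Dy = w
    A = np.array([[1.0, 1.0],      # placeholder row from eq2
                  [2*Dx, 2*Dy]])   # placeholder row from d(eq1)/dt
    rhs = np.array([-x, Dx + Dy])  # placeholder right-hand sides
    return np.linalg.solve(A, rhs)

def deriv(t, w):
    D2x, D2y = second_derivatives(w)
    return [w[2], w[3], D2x, D2y]  # feed this to ode45 / solve_ivp
```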
| 239
|
solve differential equations
|
How to solve differential equations with silence period (using DifferentialEquations.jl)?
|
https://stackoverflow.com/questions/57578402/how-to-solve-differential-equations-with-silence-period-using-differentialequat
|
<p>I want to solve a differential equation that has some "silence period" (I'm not sure whether it has a formal name or not; it means that during this period the system is static and not governed by the differential equation).</p>
<p>For example (see the figure), when a free-falling ball touches the ground, the callback function is triggered. The system enters this "silence period" (dashed line in the figure); after this period, it keeps following the differential equation (the parameters can be changed or not; in the figure I reset the height of the ball).</p>
<p><img src="https://i.sstatic.net/9YW1D.png" alt="Example trace"></p>
<p>I know it can be done with a for loop, but currently I am trying to use DifferentialEquations.jl, so I wonder if I can set a callback or use other methods to accomplish it?</p>
<p>Thank you in advance!</p>
|
<p>The simplest way to do this is to set a parameter to zero using a <code>DiscreteCallback</code>, and having a second callback un-zero it. <a href="http://diffeq.sciml.ai/latest/features/callback_functions.html" rel="nofollow noreferrer">The callback handling page</a> describes in more detail how to define and use such callbacks.</p>
| 240
|
solve differential equations
|
Python error while solving complex coupled differential equations
|
https://stackoverflow.com/questions/34627594/python-error-while-solving-complex-coupled-differential-equations
|
<p>I'm trying to solve complex coupled differential equations as follows.</p>
<pre><code>import numpy as np
from scipy.integrate import complex_ode

def cae(z, A, params):
    A1, A2, A3 = A
    alpha, gamma, dbeta = params
    dA = [
        -0.5*alpha*A1 + 1j*gamma*(np.abs(A1)**2 + 2*np.abs(A2)**2 + 2*np.abs(A3)**2)*A1 + 2j*gamma*A2*A3*np.conjugate(A1)*np.exp(1j*dbeta*z),
        -0.5*alpha*A2 + 1j*gamma*(2*np.abs(A1)**2 + np.abs(A2)**2 + 2*np.abs(A3)**2)*A2 + 1j*gamma*A1**2*np.conjugate(A3)*np.exp(-1j*dbeta*z),
        -0.5*alpha*A3 + 1j*gamma*(2*np.abs(A1)**2 + 2*np.abs(A2)**2 + np.abs(A3)**2)*A3 + 1j*gamma*A1**2*np.conjugate(A2)*np.exp(-1j*dbeta*z),
    ]
    return dA

A0 = [1, 1e-3, 0]
z0 = 0
params = [0, 2, 0]
L = 2.5
dz = 0.01

sol = complex_ode(cae).set_integrator("dopri5")
sol.set_initial_value(A0, z0).set_f_params(params)
while sol.successful() and sol.t < L:
    sol.integrate(sol.t+dz)
</code></pre>
<p>But, this code gives the following error.</p>
<pre><code>File "C:\Python27\lib\site-packages\scipy\integrate\_ode.py", line 472, in _wrap
f = self.cf(*((t, y[::2] + 1j * y[1::2]) + f_args))
TypeError: can't multiply sequence by non-int of type 'complex'
</code></pre>
<p>Can anyone explain why this is happening and how to fix it?</p>
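<p>One workaround worth trying (assuming the extra-parameter plumbing of complex_ode is what trips up its internal wrapper) is to close over the parameters instead of using set_f_params, so the integrated function has the plain f(t, y) signature that the wrapper expects:</p>

```python
import numpy as np
from scipy.integrate import complex_ode

alpha, gamma, dbeta = 0, 2, 0  # close over the parameters directly

def cae(z, A):
    A1, A2, A3 = A
    return [
        -0.5*alpha*A1 + 1j*gamma*(np.abs(A1)**2 + 2*np.abs(A2)**2 + 2*np.abs(A3)**2)*A1
            + 2j*gamma*A2*A3*np.conjugate(A1)*np.exp(1j*dbeta*z),
        -0.5*alpha*A2 + 1j*gamma*(2*np.abs(A1)**2 + np.abs(A2)**2 + 2*np.abs(A3)**2)*A2
            + 1j*gamma*A1**2*np.conjugate(A3)*np.exp(-1j*dbeta*z),
        -0.5*alpha*A3 + 1j*gamma*(2*np.abs(A1)**2 + 2*np.abs(A2)**2 + np.abs(A3)**2)*A3
            + 1j*gamma*A1**2*np.conjugate(A2)*np.exp(-1j*dbeta*z),
    ]

sol = complex_ode(cae).set_integrator("dopri5")
sol.set_initial_value([1, 1e-3, 0], 0)
while sol.successful() and sol.t < 2.5:
    sol.integrate(sol.t + 0.01)
```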
| 241
|
|
solve differential equations
|
Solving an Involved Set of Coupled Differential Equations
|
https://stackoverflow.com/questions/66877010/solving-an-involved-set-of-coupled-differential-equations
|
<p>I am trying to solve a set of complicated differential equations in Python. These equations contain five functions (defined in the function 'ODEs' in the code below) that are functions of a variable n (greek name is eta--- I used n and eta interchangeably as variable names). These coupled differential equations contain a function (called a) which is a function of a parameter t. n (or eta) is also a function of t and so my first goal was to express, again numerically, the function a as a function of n (eta). Hence I had to solve a less involved pair of differential equations, which I defined in the function 'coupleODE'. I got a plot of a(t) and n(t) and used interpolation to get a model relating function a to function n. This function is a numerical estimation for a(n). I called this interpolation model 'f_quad' --- f, means function, and quad represents quadratic interpolation.</p>
<p>Now, the original five differential equations actually contain a'(n)/a(n), where ' is derivative with respect to n. I numerically found an interpolator for this as well and called it 'deriv_quad'.</p>
<p>Now in 'ODEs' I am modelling the five differential equations, and as you can see in the code, the function body contains segments that use the interpolators 'f_quad' and 'deriv_quad'; these have argument n, which represents the a(n) and a'(n) respectively. I then use odeint python function to numerically solve these differential equations, but I get the error message:</p>
<p>'A value in x_new is above the interpolation range.' My computer says that the error occurred in the line 'f = odeint(ODEs, initials, n_new, atol=1.0e-8, rtol=1.0e-6)'.</p>
<p>I am not sure why this happened; I used the same eta values that I used when finding the interpolators 'f_quad' and 'deriv_quad'. Can anyone help me get rid of this error?</p>
<p>Here is the code:</p>
<pre><code>import numpy as np
from scipy.misc import derivative
from scipy.integrate import odeint
from scipy.interpolate import interp1d
import matplotlib.pyplot as plt
import math
def coupledODE(x,t):
    #define important constants
    m = 0.315
    r = 0.0000926
    d = 0.685
    H_0 = 67.4

    a = x[0]
    eta = x[1]
    dndt = a**(-1)
    dadt = H_0 * (m*a**(-1) + r*a**(-2) + d*a**2)**(1/2)
    return [dadt, dndt]
#initial condtions
x0 = [1e-7, 1e-8]
t = np.linspace(0,0.01778301154,7500)
x = odeint(coupledODE, x0, t, atol=1.0e-8, rtol=1.0e-6) #vector of the functions a(t), n(t)
a = x[:,0]
n = x[:,1] #Eta; n is the greek letter eta
plt.semilogx(t,a)
plt.xlabel('time')
plt.ylabel('a(t)')
plt.show()
plt.semilogx(t,n)
plt.xlabel('time')
plt.ylabel('Eta')
plt.show()
plt.plot(n,a)
plt.xlabel('Eta')
plt.ylabel('a(t)')
plt.show()
##############################################################################
# Calculate the Derivative a' (i.e. da/d(eta) Numerically
##############################################################################
derivative_values = []
for i in range(7499):
    numerator = x[i+1,0] - x[i,0]
    denominator = x[i+1,1] - x[i,1]
    ratio = numerator / denominator
    derivative_values.append(ratio)
x_axis = n
x_axis = np.delete(x_axis, -1)
plt.plot(x_axis,derivative_values)
plt.xlabel('Eta')
plt.ylabel('Derivative of a')
plt.show()
##############################################################################
#Interpolation
##############################################################################
#Using quadratic interpolation
f_quad = interp1d(n, a, kind = 'quadratic')
deriv_quad = interp1d(x_axis, derivative_values, kind = 'quadratic')
n_new = np.linspace(1.0e-8, 0.0504473, num = 20000, endpoint = True)
plt.plot(n_new, f_quad(n_new))
plt.xlabel('Eta')
plt.ylabel('a')
plt.show()
plt.plot(n_new, deriv_quad(n_new))
plt.xlabel('Eta')
plt.ylabel('Derivative of a')
plt.show()
#print(x[0,1])
#print(eta_new[0])
#print(deriv_quad(1e-8))
##############################################################################
# The Main Coupled Equations
##############################################################################
def ODEs(x,n):
    fourPiG_3 = 1
    CDM_0 = 1
    Rad_0 = 1
    k = 0.005

    Phi = x[0]
    Theta1 = x[1]
    iSpeed = x[2]
    DeltaC = x[3]
    Theta0 = x[4]

    dPhi_dn = (fourPiG_3*((CDM_0*DeltaC)/deriv_quad(n) + (4*Rad_0)/(f_quad(n)*deriv_quad(n)))
               - (k**2*Phi*f_quad(n))/(deriv_quad(n)) - (deriv_quad(n)*Phi)/(f_quad(n)))
    dTheta1_dn = (k/3)*(Theta0 - Phi)
    diSpeed_dn = -k*Phi - (deriv_quad(n)/f_quad(n))*iSpeed
    dDeltaC_dn = -k*iSpeed - 3*dPhi_dn
    dTheta0_dn = -k*Theta1 - dPhi_dn
    return [dPhi_dn, dTheta1_dn, diSpeed_dn, dDeltaC_dn, dTheta0_dn]
hub_0 = deriv_quad(1e-8)/f_quad(1e-8)
#Now the initial conditions
Phi_k_0 = 1.0e-5
Theta1_k_0 = -1*Phi_k_0/hub_0 #Ask about the k
iSpeed_k_0 = 3*Theta1_k_0
DeltaC_k_0 = 1.5 * Phi_k_0
Theta0_k_0 = 0.5 * Phi_k_0
initials = [Phi_k_0, Theta1_k_0, iSpeed_k_0, DeltaC_k_0, Theta0_k_0]
####Error Happens Here ####
f = odeint(ODEs, initials, n_new, atol=1.0e-8, rtol=1.0e-6)
Phi = f[:,0]
Theta1 = f[:,1]
iSpeed = f[:,2]
DeltaC = f[:,3]
Theta0 = f[:,4]
</code></pre>
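<p>On the "A value in x_new is above the interpolation range" error: odeint's internal steps can land slightly outside the requested span, and the end of n_new itself can exceed the interpolators' domain (x_axis drops the last sample, so deriv_quad covers a slightly shorter range than f_quad). A hedged fix, illustrated here on stand-in data rather than the arrays from the first solve, is to build the interpolant with bounds_error=False and clamped fill values, or to shrink n_new to the common domain:</p>

```python
import numpy as np
from scipy.interpolate import interp1d

n = np.linspace(1e-8, 0.0504473, 50)
a = n**2  # stand-in data; the real arrays come from the first odeint solve

# clamp to the endpoint values instead of raising outside [n[0], n[-1]]
f_quad = interp1d(n, a, kind='quadratic',
                  bounds_error=False, fill_value=(a[0], a[-1]))

print(float(f_quad(0.06)))  # beyond the data: returns a[-1] instead of raising
```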
| 242
|
|
solve differential equations
|
Solving 2nd order differential equations wrt this code
|
https://stackoverflow.com/questions/56349229/solving-2nd-order-differential-equations-wrt-this-code
|
<p>I cannot write a program that solves a 2nd-order differential equation based on the code I wrote for <strong>y'=y</strong>.</p>
<p>I know that I should write a program that turns a 2nd-order differential equation into two ordinary first-order differential equations, but I don't know how I can do that in Python.</p>
<p>P.S. : I have to use that code below. It's a homework</p>
<p>Please forgive my mistakes, it's my first question. Thanks in advance</p>
<pre class="lang-py prettyprint-override"><code>from pylab import *

xd = []; yd = []

def F(x,y):
    return y

def rk4(x0,y0,h,N):
    xd.append(x0)
    yd.append(y0)
    for i in range(1,N+1):
        k1 = F(x0,y0)
        k2 = F(x0+h/2, y0+h/2*k1)
        k3 = F(x0+h/2, y0+h/2*k2)
        k4 = F(x0+h, y0+h*k3)
        k = 1/6*(k1+2*k2+2*k3+k4)
        y = y0+h*k
        x = x0+h
        yd.append(y)
        xd.append(x)
        y0 = y
        x0 = x
    return xd,yd
x0=0
y0=1
h=0.1
N=10
x,y=rk4(x0,y0,h,N)
print("x=",x)
print("y=",y)
plot(x,y)
show()
</code></pre>
|
<p>You can basically reformulate any scalar ODE (Ordinary Differential Equation) of order n in Cauchy form into an ODE of order 1. The only thing that you "pay" in this operation is that the second ODE's variables will be vectors instead of scalar functions.</p>
<p>Let me give you an example with an ODE of order 2. Suppose your ODE is: y'' = F(x,y, y'). Then you can replace it by [y, y']' = [y', F(x,y,y')], where the derivative of a vector has to be understood component-wise.</p>
<p>Let's take back your code and instead of using Runge-Kutta of order 4 as an approximate solution of your ODE, we will apply a simple Euler scheme.</p>
<pre><code>from pylab import *
import matplotlib.pyplot as plt

# we are approximating the solution of y' = f(x,y) for x in [x_0, x_1]
# satisfying the Cauchy condition y(x_0) = y0
def f(x, y0):
    return y0
# here f defines the equation y' = y

def explicit_euler(x0, x1, y0, N):
    # The following formula relates h and N
    h = (x1 - x0)/(N+1)
    xd = list()
    yd = list()
    xd.append(x0)
    yd.append(y0)
    for i in range(1, N+1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i)
        y = yd[-1] + h * f(xd[-1], yd[-1])
        # you can replace the above scheme by any other (R-K 4 for example!)
        x = xd[-1] + h
        yd.append(y)
        xd.append(x)
    return xd, yd

N = 250
x1 = 5
x0 = 0
y0 = 1
# the only function which satisfies y(0) = 1 and y' = y is y(x) = exp(x).
xd, yd = explicit_euler(x0, x1, y0, N)
plt.plot(xd, yd)
plt.show()
# this plot has the right shape!
</code></pre>
<p><a href="https://i.sstatic.net/49ezC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/49ezC.png" alt="looks like a nice exponential function !"></a></p>
<p>Note that you can replace the Euler scheme by <a href="https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta_methods" rel="nofollow noreferrer">R-K 4</a> which has better stability and convergence properties.</p>
<p>Now, suppose that you want to solve a second order ODE, let's say for example: y'' = -y with initial conditions y(0) = 1 and y'(0) = 0. Then you have to transform your scalar function y into a vector of size 2 as explained above and in the comments in code below.</p>
<pre><code>from pylab import*
import matplotlib.pyplot as plt
import numpy as np
# we are approximating the solution of y'' = f(x,y,y') for x in [x_0, x_1] satisfying the Cauchy condition of order 2:
# y(x_0) = y0 and y'(x_0) = y1
def f(x, y_d_0, y_d_1):
return -y_d_0
# here f defines the equation y'' = -y
def explicit_euler(x0, x1, y0, y1, N):
    # step size: N steps of size h cover [x0, x1]
    h = (x1 - x0)/N
xd = list()
yd = list()
xd.append(x0)
# to allow group operations in R^2, we use the numpy library
yd.append(np.array([y0, y1]))
    for i in range(1, N+1):
        # We use the explicit Euler scheme y_{i+1} = y_i + h * f(x_i, y_i)
# remember that now, yd is a list of vectors
# the equivalent order 1 equation is [y, y']' = [y', f(x,y,y')]
y = yd[-1] + h * np.array([yd[-1][1], f(xd[-1], yd[-1][0], yd[-1][1])]) # vector of dimension 2
print(y)
# you can replace the above scheme by any other (R-K 4 for example !)
x = xd[-1] + h # vector of dimension 1
yd.append(y)
xd.append(x)
return xd, yd
x0 = 0
x1 = 30
y0 = 1
y1 = 0
# the only function satisfying y(0) = 1, y'(0) = 0 and y'' = -y is y(x) = cos(x)
N = 5000
xd, yd =explicit_euler(x0, x1, y0, y1, N)
# I only want the first variable of yd
yd_1 = list(map(lambda y: y[0], yd))
plt.plot(xd,yd_1)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/0jwcm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0jwcm.png" alt="looks like a nice cosine function !"></a></p>
| 243
|
solve differential equations
|
Julia Differential Equations Repositories
|
https://stackoverflow.com/questions/77032222/julia-differential-equations-repositories
|
<p>Is there a repository (or a web page) of all differential equations coded in DifferentialEquations.jl or at least ODE in OrdinaryDiffEq.jl?</p>
<p>If there are no repositories, are there other sources, university classes, etc. where Julia code is used to solve differential equations and is available?</p>
|
<p>There are two ways to read this question. One is either looking for where the examples for the ODE definitions are contained, the other is looking for where the ODE solver codebases are found. I'll split this answer into the two possible interpretations.</p>
<h2>Where the ODE Problem Definition Examples are Found</h2>
<p>The DifferentialEquations.jl documentation (<a href="https://docs.sciml.ai/DiffEqDocs/stable/" rel="nofollow noreferrer">https://docs.sciml.ai/DiffEqDocs/stable/</a>) contains many examples for defining ODEs in its tutorials and examples section. It is highly recommended that one get started with <a href="https://docs.sciml.ai/DiffEqDocs/stable/getting_started/" rel="nofollow noreferrer">Getting Started with Differential Equations in Julia</a>, which uses examples such as the Lorenz equations. Other pages include more examples, such as the <a href="https://docs.sciml.ai/DiffEqDocs/stable/examples/classical_physics/" rel="nofollow noreferrer">Classical Physics</a> page in the documentation, which shows how to implement 5 different classical physics models using standard ODE solvers and symplectic methods. There are also pages on <a href="https://docs.sciml.ai/DiffEqDocs/stable/examples/conditional_dosing/" rel="nofollow noreferrer">dosing models</a>, <a href="https://docs.sciml.ai/DiffEqDocs/stable/examples/kepler_problem/" rel="nofollow noreferrer">the Kepler problem</a>, and much more.</p>
<p>The ODEProblem definition page (<a href="https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/" rel="nofollow noreferrer">https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/</a>) also has an "Example Problems" section (<a href="https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#Example-Problems" rel="nofollow noreferrer">https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#Example-Problems</a>) with more than 10 example problems implemented, including some classic ODEs such as the Brusselator. The source for these can be found in the DiffEqProblemLibrary.jl repository:</p>
<p><a href="https://github.com/SciML/DiffEqProblemLibrary.jl" rel="nofollow noreferrer">https://github.com/SciML/DiffEqProblemLibrary.jl</a></p>
<p>For example, the 20-ODE stiff POLLU pollution model referenced in the documentation at (<a href="https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#ODEProblemLibrary.prob_ode_pollution" rel="nofollow noreferrer">https://docs.sciml.ai/DiffEqDocs/stable/types/ode_types/#ODEProblemLibrary.prob_ode_pollution</a>) can be found at this location in DiffEqProblemLibrary (in the ODEProblemLibrary sublibrary):</p>
<p><a href="https://github.com/SciML/DiffEqProblemLibrary.jl/blob/master/lib/ODEProblemLibrary/src/pollution_prob.jl" rel="nofollow noreferrer">https://github.com/SciML/DiffEqProblemLibrary.jl/blob/master/lib/ODEProblemLibrary/src/pollution_prob.jl</a></p>
<p>All of the example problems can be found in that repository.</p>
<p>Note that additional examples can be found by looking at the benchmark pages as well: <a href="https://docs.sciml.ai/SciMLBenchmarksOutput/stable/" rel="nofollow noreferrer">https://docs.sciml.ai/SciMLBenchmarksOutput/stable/</a></p>
<h2>Where the Solver Code is Found and How it's Organized</h2>
<p>DifferentialEquations.jl is a metapackage which re-exports the solver codes. The solvers are documented in <a href="https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/" rel="nofollow noreferrer">https://docs.sciml.ai/DiffEqDocs/stable/solvers/ode_solve/</a>. In the section "Full List of Methods", the first section is OrdinaryDiffEq.jl which has a few hundred methods. These are all implemented in the OrdinaryDiffEq.jl repository:</p>
<p><a href="https://github.com/SciML/OrdinaryDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/OrdinaryDiffEq.jl</a></p>
<p>However, there are many other solvers available, as documented. A good set to know about for teaching is SimpleDiffEq.jl:</p>
<p><a href="https://github.com/SciML/SimpleDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/SimpleDiffEq.jl</a></p>
<p>This package contains a bunch of self-contained implementations which can be easier to explain. For example, the GPUATsit5 implementation is mathematically equivalent to the standard OrdinaryDiffEq.jl Tsit5, but it's built without all of the extra machinery and options and is instead implemented as a single loop here:</p>
<p><a href="https://github.com/SciML/SimpleDiffEq.jl/blob/v1.10.0/src/tsit5/gpuatsit5.jl#L104" rel="nofollow noreferrer">https://github.com/SciML/SimpleDiffEq.jl/blob/v1.10.0/src/tsit5/gpuatsit5.jl#L104</a></p>
<p>Thus for someone trying to understand or explain the code in a classroom setting, using GPUATsit5 instead of Tsit5 can be helpful.</p>
<p>As another example of a solver library, there are the <code>CVODE_BDF</code> methods from Sundials, which are a wrapper over the SUNDIALS C library; this wrapper code is found in the Sundials.jl repository:</p>
<p><a href="https://github.com/SciML/Sundials.jl" rel="nofollow noreferrer">https://github.com/SciML/Sundials.jl</a></p>
<p>A longer list of solver packages:</p>
<ul>
<li><a href="https://github.com/rveltz/LSODA.jl" rel="nofollow noreferrer">https://github.com/rveltz/LSODA.jl</a></li>
<li><a href="https://github.com/SciML/SimpleDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/SimpleDiffEq.jl</a></li>
<li><a href="https://github.com/SciML/DASKR.jl" rel="nofollow noreferrer">https://github.com/SciML/DASKR.jl</a></li>
<li><a href="https://github.com/SciML/ODEInterfacediffeq.jl" rel="nofollow noreferrer">https://github.com/SciML/ODEInterfacediffeq.jl</a></li>
<li><a href="https://github.com/SciML/SciPyDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/SciPyDiffEq.jl</a></li>
<li><a href="https://github.com/SciML/deSolveDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/deSolveDiffEq.jl</a></li>
<li><a href="https://github.com/SciML/IRKGaussLegendre.jl" rel="nofollow noreferrer">https://github.com/SciML/IRKGaussLegendre.jl</a></li>
<li><a href="https://github.com/SciML/MATLABDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/MATLABDiffEq.jl</a></li>
<li><a href="https://github.com/PerezHz/TaylorIntegration.jl" rel="nofollow noreferrer">https://github.com/PerezHz/TaylorIntegration.jl</a></li>
<li><a href="https://github.com/SciML/NeuralPDE.jl" rel="nofollow noreferrer">https://github.com/SciML/NeuralPDE.jl</a></li>
<li><a href="https://github.com/QuantumBFS/QuDiffEq.jl" rel="nofollow noreferrer">https://github.com/QuantumBFS/QuDiffEq.jl</a></li>
</ul>
<p>And there's more; the list is ever growing. Thus DifferentialEquations.jl is a common interface, where <code>solve(prob, alg)</code> works for any algorithm type that dispatches appropriately, and there are 10+ packages now supplying algorithm types to this interface that all solve ODEs in different ways, some with traditional methods while others use neural networks or generate circuits to run on quantum computers. But all of them take the same input, and choosing the solver of a different library is just a few characters' change.</p>
<p>This interface is kept open by using multiple dispatch on the algorithm choice. If you're curious, more information about that is here: <a href="https://www.sciencedirect.com/science/article/abs/pii/S0965997818310251" rel="nofollow noreferrer">https://www.sciencedirect.com/science/article/abs/pii/S0965997818310251</a>. It's specifically kept open so that researchers can add new methods to the interface without requiring that they contribute to existing libraries. This allows someone to make a self-contained ODE solver, then just add a single dispatch function, and it presents itself as part of the DifferentialEquations.jl interface. This is the reason why there isn't a single canonical repository to point to for DifferentialEquations.jl's solvers: it's intentionally built as an expandable interface.</p>
<h2>Summary</h2>
<p>So to summarize:</p>
<ul>
<li>If you're looking for examples, look at the DifferentialEquations.jl tutorials and examples sections which have many examples.</li>
<li>If you want more examples, look at the DiffEqProblemLibrary.jl which has the code for many more examples that are not included in the documentation.</li>
<li>There are many different repos that implement ODE solvers, some with a single solver while others have hundreds.</li>
<li>DifferentialEquations.jl is a metapackage / interface, so this fact is mostly abstracted away from the user, and user code almost doesn't have to change in order to switch solver packages. But if you are looking for "where is the solver code", you do indeed have to find the specific repo.</li>
<li>The canonical solvers that are developed as part of DifferentialEquations.jl are those of OrdinaryDiffEq.jl that can be found here <a href="https://github.com/SciML/OrdinaryDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/OrdinaryDiffEq.jl</a>.</li>
<li>A good set of solvers to look at which are designed for small-scale performance and teaching is the SimpleDiffEq.jl set found here <a href="https://github.com/SciML/SimpleDiffEq.jl" rel="nofollow noreferrer">https://github.com/SciML/SimpleDiffEq.jl</a></li>
</ul>
| 244
|
solve differential equations
|
Solving system of differential equations
|
https://stackoverflow.com/questions/65201682/solving-system-of-differential-equations
|
<p>I am trying to solve RLC circuits using python.
When there are no Cs and Ls, after calculating i got to a linear system of equations which i solved using <code>numpy.linalg</code>, e.g :</p>
<pre><code>a + b = 1
a + c = 2
c - b = 1
</code></pre>
<p>But now i need to solve this with differential elements in it.<br />
I know that i probably should use <code>scipy</code> and i read the docs and other Qs about solving non-linear system of equations but i couldn't find anything helpful.<br />
Any help would be appreciated.</p>
| 245
|
|
solve differential equations
|
Solving differential equations with discrete values in MATLAB using ode45
|
https://stackoverflow.com/questions/64633010/solving-differential-equations-with-discrete-values-in-matlab-using-ode45
|
<p>I have a differential equation-</p>
<pre><code>L'(x) = F1(x,L(x))
</code></pre>
<p>Using ode45, I have obtained the solution for L(x). I have an array of values for L(x) denoted by L_val. Using this solution, I intend to solve another differential equation.</p>
<pre><code>w'(x)=L(x)/x
</code></pre>
<p>How can I solve for w(x)? Especially since L(x) is not a function of x, but an array of discrete values.</p>
|
<p>Use the cumulative trapezoidal integration function:
<a href="https://fr.mathworks.com/help/matlab/ref/cumtrapz.html" rel="nofollow noreferrer">https://fr.mathworks.com/help/matlab/ref/cumtrapz.html</a></p>
<p>Alternatively, you may use other rules with better accuracy (e.g. Simpson's rule; search MATLAB Central for a function file). Another pragmatic way is to compute a high-order interpolant of your function and integrate it directly.</p>
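<p>If you are doing the same thing in Python rather than MATLAB, a sketch of the analogous approach (assuming the discrete values live in an array <code>L_val</code> on a grid <code>x</code>) uses <code>scipy.integrate.cumulative_trapezoid</code>:</p>

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# hypothetical grid and discrete L(x) values standing in for the ode45 output
x = np.linspace(1.0, 5.0, 200)
L_val = x**2                  # pretend the solver returned L(x) = x^2

# w'(x) = L(x)/x  =>  w(x) = w(x0) + integral from x0 to x of L(s)/s ds
w = cumulative_trapezoid(L_val / x, x, initial=0)

# for L(x) = x^2 the exact answer is w(x) = (x^2 - 1)/2; the trapezoidal
# rule happens to be exact here (up to rounding), as the integrand is linear
print(np.max(np.abs(w - (x**2 - 1) / 2)))
```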
| 246
|
solve differential equations
|
Solve differential equation using finite difference method
|
https://stackoverflow.com/questions/76059170/solve-differential-equation-using-finite-difference-method
|
<p>I am trying to solve a second order differential equation using finite difference method. In the following code I have a function to calculate the first derivative and the second derivative. I am creating a sparse matrix with one column and trying to solve it using <code>np.linalg</code>. However, the <code>f</code> output is not correctly populated, it's all zeros apart from the boundaries. What am I doing wrong?</p>
<pre><code>import numpy as np
from scipy.sparse import csc_matrix
# Define the dimensions of the grid
Lx = 2 # Length of the grid in the x-direction
Nx = 11 # Number of grid points in the x-direction
fa = 2
fb = 0.2
I = np.eye(Nx)
def compute_derivatives(Nx, dx):
# Compute grid spacing
x = np.linspace(0, Lx, Nx)
dx = Lx/(Nx-1)
# Initialize arrays for first and second order derivatives
DX = np.zeros(Nx)
DX2 = np.zeros(Nx)
# Compute first and second order derivatives using central differences
for i in range(1, Nx-1):
DX[i] = (x[i+1] - x[i-1]) / (2*dx)
DX2[i] = (x[i+1] - 2*x[i] + x[i-1]) / (dx**2)
return DX, DX2
DX,DX2 = compute_derivatives(Nx,Lx)
A = DX2 + 5*DX + 6*I
b = csc_matrix((Nx, 1)).toarray()
A[[0, Nx-1], :] = 0
A[0,0]=1
A[Nx-1,Nx-1] = 1
b[0] = fa
b[-1] = fb
f = np.linalg.solve(A, np.asarray(np.array(b), dtype=float))
print(f)
</code></pre>
| 247
|
|
solve differential equations
|
Problems using Python to solve coupled delay differential equations (DDEs)
|
https://stackoverflow.com/questions/19104131/problems-using-python-to-solve-coupled-delay-differential-equations-ddes
|
<p>I am trying to use <code>pydelay</code> library to solve a system of delay differential equations. I have followed <a href="http://pydelay.sourceforge.net/" rel="nofollow">instructions</a> to setup my model. However, I ran into some errors and really appreciate any suggestions.</p>
<p>Here is my code:</p>
<pre><code>import numpy as np
import pylab as pl
from pydelay import dde23
# define the equations
eqns = {
'B' : 'L*((D*D/(D*D+b*b))*(H/(H+v)))- w*B',
'H' : 'w*B(t-tau) - H*(a_min + a_max*(b*b/(b*b+D*D))-s*(F/(F-H)))',
'F' : 'H*(a_min + a_max*(b*b/(b*b+D*D))-s*(F/(F-H))) - m*F',
'D' : 'c*F - r_a*(F+H) - r_b*B'
}
#define the parameters
params = {
'L' : 2000.0, # laying rate
'b' : 500, #
'w' : 1.0/9, # pupation rate of brood
'tau' : 12.0, # time lag
'a_min' : 0.25, #
'a_max' : 0.25,
's' : 0.75,
'm' : 0.3,
'c' : 0.10,
'r_a' : 0.007,
'r_b' : 0.018,
'v' : 5000.0
}
# Initialise the solver
dde = dde23(eqns=eqns, params=params)
# set the simulation parameters
dde.set_sim_params(tfinal=50, dtmax=1.0)
# set the history
n=12
t_hist = np.linspace(0, n, n)
B_hist = np.array([5] * n)
H_hist = np.array([20] * n)
F_hist = np.array([10] * n)
D_hist = np.array([100] * n)
histdic = {
't': t_hist,
'B': B_hist,
'H': H_hist,
'F': F_hist,
'D': D_hist,
}
dde.hist_from_arrays(histdic)
# run the simulator
dde.run()
</code></pre>
<p>The errors I ran into were:</p>
<pre><code>Number of Minimum steps taken: 23090
Error: Scipy could not calculate spline for variable H.
Error: Scipy could not calculate spline for variable B.
Error: Scipy could not calculate spline for variable D.
Error: Scipy could not calculate spline for variable F.
</code></pre>
| 248
|
|
solve differential equations
|
A Python library for solving some integro-differential equations?
|
https://stackoverflow.com/questions/18841528/a-python-library-for-solving-some-integro-differential-equations
|
<p>I have <em>a huge set of <strong>coupled nonlinear integro-partial differential</strong> equations</em>. After a long while trying to simplify the equations and solve them at least semi-analytically, I have come to conclude there is no way left for me but an efficient numerical method. Finite element seems most amenable, as it is based on the Galerkin method which gives a <strong>weak form</strong> solution, so there is great hope that it might finally solve the equations. But at the same time I am too new to this field to write the codes all from scratch:</p>
<p>Is there any Python library already available that can efficiently do a Finite Element analysis?</p>
<p>Also, I was interested whether software like FEniCS/DOLFIN might also solve integro-differential equations?</p>
| 249
|
|
solve differential equations
|
Solve a system of differential equations using Euler's method
|
https://stackoverflow.com/questions/29870245/solve-a-system-of-differential-equations-using-eulers-method
|
<p>I'm trying to solve a system of ordinary differential equations with Euler's method, but when I try to print velocity I get</p>
<pre><code>RuntimeWarning: overflow encountered in double_scalars
</code></pre>
<p>and instead of printing numbers I get <code>nan</code> (not a number). I think the problem might be when defining the acceleration, but I don't know for sure, I would really appreciate if someone could help me. </p>
<pre><code>from numpy import *
from math import pi,exp
d=0.0005*10**-6
a=[]
while d<1.0*10**-6 :
d=d*2
a.append(d)
D=array(a)
def a_particula (D,x, v, t):
cc=((1.00+((2.00*lam)/D))*(1.257+(0.400*exp((-1.10*D)/(2.00*lam)))))
return (g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
def euler (acel,D, x, v, tv, n=15):
nv, xv, vv = tv.size, zeros_like(tv), zeros_like(tv)
xv[0], vv[0] = x, v
for k in range(1, nv):
t, Dt = tv[k-1], (tv[k]-tv[k-1])/float(n)
for i in range(n):
a = acel(D,x, v, t)
t, v = t+Dt, v+a*Dt
x = x+v*Dt
xv[k], vv[k] = x, v
return (xv, vv)
g=(9.80)
densaire= 1.225
lam=0.70*10**-6
densparticula=1000.00
mu=(1.785*10**-5)
tv = linspace(0, 5, 50)
x, v = 0, 0 #initial conditions
for j in range(len(D)):
xx, vv = euler(a_particula, D[j], x, v, tv)
print(D[j],xx,vv)
</code></pre>
|
<p>In future it would be helpful if you included the full warning message in your question - it will contain the line where the problem occurs:</p>
<pre><code>tmp/untitled.py:15: RuntimeWarning: overflow encountered in double_scalars
return (g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
</code></pre>
<p><a href="http://en.wikipedia.org/wiki/Arithmetic_overflow" rel="nofollow">Overflow</a> occurs when the magnitude of a variable exceeds the largest value that can be represented. In this case, <code>double_scalars</code> refers to a 64 bit float, which has a maximum value of:</p>
<pre><code>print(np.finfo(float).max)
# 1.79769313486e+308
</code></pre>
<p>So there is a scalar value in the expression:</p>
<pre><code>(g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
</code></pre>
<p>that is exceeding ~1.79e308. To find out which one, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.errstate.html" rel="nofollow"><code>np.errstate</code></a> to raise a <code>FloatingPointError</code> when this occurs, then catch it and start the <a href="https://docs.python.org/2/library/pdb.html" rel="nofollow">Python debugger</a>:</p>
<pre><code> ...
with errstate(over='raise'):
try:
ret = (g-((densaire*g)/densparticula)-((mu*18.0*v)/(cc*densparticula* (D**2.00))))
except FloatingPointError:
import pdb
pdb.set_trace()
return ret
...
</code></pre>
<p>From within the debugger you can then check the values of the various parts of this expression. The overflow seems to occur in:</p>
<pre><code>(mu*18.0*v)/(cc*densparticula* (D**2.00))
</code></pre>
<p>The first time the warning occurs, <code>cc*densparticula*(D**2.00)</code> evaluates as 2.3210168586496022e-12, whereas <code>mu*18.0*v</code> evaluates as -9.9984582297025182e+299.</p>
<p>Basically you are dividing a very large number by a very small number, and the magnitude of the result is exceeding the maximum value that can be represented. This might be a problem with your math, or it might be that your inputs to the function are not reasonably scaled.</p>
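<p>A minimal toy reproduction of that failure mode (made-up numbers, unrelated to the question's physics) shows how <code>np.errstate</code> turns the overflow into a catchable exception:</p>

```python
import numpy as np

# 1e300 / 1e-12 would be 1e312, beyond the float64 maximum (~1.8e308),
# so with over='raise' the division raises instead of silently warning
with np.errstate(over='raise'):
    try:
        np.array(1e300) / np.array(1e-12)
        caught = False
    except FloatingPointError:
        caught = True
print(caught)  # True
```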
| 250
|
solve differential equations
|
solving system of differential equations with odient
|
https://stackoverflow.com/questions/78783282/solving-system-of-differential-equations-with-odient
|
<p>I tried to solve a system of two differential equations using <code>scipy.integrate.odeint</code>.
The results are far from my expectations, as one can see from the plots I attached below.</p>
<p>This is the system of ODEs:</p>
<p><img src="https://i.sstatic.net/KIuEvcGy.png" alt="" /></p>
<p>where alpha and beth functions are functions of the time t, and repsernting function of the input signal.</p>
<p>My plots for the signals in my code are:</p>
<p><img src="https://i.sstatic.net/maS20JDs.png" alt="" /></p>
<p>The expected plots</p>
<p><img src="https://i.sstatic.net/IYO0A7PW.png" alt="" /></p>
<p>My code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.integrate import odeint ,solve_ivp
import matplotlib.pyplot as plt
def normalize_array(A):
# Convert to NumPy array if it's not already
A = np.array(A)
# Calculate min and max
min_val = np.min(A)
max_val = np.max(A)
# Normalize
if max_val == min_val:
# Avoid division by zero if all values are the same
return np.zeros_like(A)
else:
return (A - min_val) / (max_val - min_val)
def solve_system_of_ODE(init_vec,t,alpha_func,beth_func,gamma,delta):
# Define the system of differential equations
def system(init_vec,t,alpha_func,beth_func,gamma,delta):
x,A = init_vec
dydt = [alpha_func(t)*(A-x)-beth_func(t)*x-gamma*x+delta*(1-A),
delta*(1-A)-gamma*x]
return dydt
# Solve the system of differential equations
solution = odeint(system,init_vec,t,args=(alpha_func,beth_func,gamma,delta))
return solution
# Initial conditions
init_vec = [0,0]
# Time points where the solution is evaluated
t = np.linspace(0,50,500)
signals=[lambda t: 1 / (1 + np.exp(-5 * (t - 10))),
lambda t: 0.3*np.sin(t/3)+ 0.25*np.sin(t/2)+np.sin(t/5)+np.sin(t/10),
lambda t: np.sin(t/3)]
n_signals=len(signals)
delta=0.2
gamma=0.4
X_lst,A_lst=[],[]
for i in range(len(signals)):
alpha=signals[i]
beth=signals[i]
solution=solve_system_of_ODE(init_vec,t,alpha,beth,gamma,delta)
X = solution[:,0]
X=normalize_array(X)
A = solution[:,1]
A=normalize_array(A)
X_lst.append(X)
A_lst.append(A)
fig, axes = plt.subplots(2, n_signals, figsize=(10, 5))
for i in range(len(signals)):
y=signals[i](t)
axes[0,i].plot(t,y)
axes[0, i].set_title(f" signal({i+1})")
axes[0, i].grid(True)
axes[0, i].set_xlabel('t')
axes[0, i].set_ylabel('u(t)')
#plot the results
axes[1,i].plot(t, X_lst[i], label='X(t)')
axes[1,i].plot(t, A_lst[i], label='A(t)')
axes[1,i].set_title('Solutions ')
axes[1,i].set_xlabel('t')
axes[1,i].set_ylabel('Solutions X, A')
axes[1, i].grid(True)
axes[1, i].legend()
plt.tight_layout()
plt.show()
</code></pre>
<p>I tried to change the initial condition of the ODE system - but it is not help me.</p>
| 251
|
|
solve differential equations
|
Solving a large system of coupled differential equations using RK4
|
https://stackoverflow.com/questions/69273973/solving-a-large-system-of-coupled-differential-equations-using-rk4
|
<p>I want to solve a system of differential equations using the RK4 method, like given below. <a href="https://i.sstatic.net/RJhmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RJhmw.png" alt="enter image description here" /></a></p>
<p>So for this case, I've to solve 3 differential equations-</p>
<p>d/dt(x0) = -2*x1</p>
<p>d/dt(x1) = -2* x0 -8* x1 -2√2*x2 and</p>
<p>d/dt(x2) = -2√2*x1 -14 *x2</p>
<p>Initial condition at t=0: x0, x1, x2 are -0.00076896, -0.01033249, -0.06899846 respectively.</p>
<p>Here, I am using M-matrix dimension 3x3 as an example. But for my actual problem, the dimension of the M matrix can be 100x100(M is a tridiagonal matrix). I can solve two coupled differential equations using <a href="https://www.nsc.liu.se/%7Eboein/f77to90/rk.html" rel="nofollow noreferrer">this process</a> but with this many equations, I have no idea how to even define them properly and solve using RK4.</p>
<p>Can someone help me? Thanks</p>
| 252
|
|
solve differential equations
|
Solving differential equations in realtime in a webapp
|
https://stackoverflow.com/questions/16735309/solving-differential-equations-in-realtime-in-a-webapp
|
<p>Is this undoable? Say I have a complex system of differential equations that I want to have a live user-based input to them and then I want to plot the results in real time. It doesn't seem like javascript has the precision or mathematical dexterity. My initial thought was to use python with scipy for ode solving + django for user interaction and displaying, but I assume that the cross-talk between server and client would take way too long. </p>
| 253
|
|
solve differential equations
|
Solving differential equations with ode solvers and finite difference in Matlab
|
https://stackoverflow.com/questions/79252262/solving-differential-equations-with-ode-solvers-and-finite-difference-in-matlab
|
<p>I have implemented the following equations both with Matlab's own ODE solvers and with a very simple finite difference scheme. The latter works properly, while the ODE code does not produce suitable solutions. What am I doing wrong? These are a set of coupled implicit differential equations.</p>
<pre><code> % Define the range of r for the solution
r_span = [r2, r_outlet];
% Initial state vector [cm, cu, rho, p]
y0 = [cm2, cu2, rho2, p2];
% Define the ODE system
syms cm(r) cu(r) rho(r) p(r)
eq2 = cm*r*diff(rho) + rho*r*diff(cm)+cm*rho == 0;
eq3 = diff(r*cu) == -r^2*cu*cf*rho*sqrt(cu^2+cm^2)/(rho2*cm2*b2*r2);
eq4 = diff(p)/rho == -cu*diff(cu)-cm*diff(cm)-(cd*rho*r* (cu^2+cm^2)^1.5)/(rho2*cm2*b2*r2);
eq5 = (gamma/(gamma-1))*diff(p/rho) == -cu*diff(cu)-cm*diff(cm);
% eqns = [eq1; eq2; eq3; eq4; eq5];
eqns = [ eq2; eq3; eq4; eq5];
[V, S] = odeToVectorField(eqns);
odefcn = matlabFunction(V, 'vars', {'r', 'Y'});
opt = odeset('RelTol', 1e-6, 'AbsTol', 1e-8);
% Solve the ODE system using ode45
[r_sol, y_sol] = ode15s(@(r, Y) odefcn(r, Y), r_span, y0,opt);
</code></pre>
<p>Thanks in advance!</p>
<p>As said before, a comparison with a finite difference (FD) scheme showed that this code cannot solve the system of equations. With FD it was also possible to generate physically sound results; for example, conservation of mass holds for FD. In the ODE code this conservation is defined in eq1</p>
| 254
|
|
solve differential equations
|
python: Initial condition in solving differential equation
|
https://stackoverflow.com/questions/49964702/python-initial-condition-in-solving-differential-equation
|
<p>I want to solve this differential equation:
y′′+2y′+2y=cos(2x) with initial conditions:</p>
<ol>
<li><p>y(1)=2,y′(2)=0.5</p></li>
<li><p>y′(1)=1,y′(2)=0.8</p></li>
<li><p>y(1)=0,y(2)=1</p></li>
</ol>
<p>and it's code is:</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
def dU_dx(U, x):
return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]
U0 = [1,0]
xs = np.linspace(0, 10, 200)
Us = odeint(dU_dx, U0, xs)
ys = Us[:,0]
plt.xlabel("x")
plt.ylabel("y")
plt.title("Damped harmonic oscillator")
plt.plot(xs,ys);
</code></pre>
<p>How can I fulfill these conditions?</p>
|
<p>Your "initial conditions" are not actually initial conditions, as they give values at two different points. These are all boundary conditions.</p>
<pre class="lang-py prettyprint-override"><code>def bc1(u1,u2): return [u1[0]-2.0,u2[1]-0.5]
def bc2(u1,u2): return [u1[1]-1.0,u2[1]-0.8]
def bc3(u1,u2): return [u1[0]-0.0,u2[0]-1.0]
</code></pre>
<p>You need a BVP solver to solve these boundary value problems.</p>
<p>You can either make your own solver using the shooting method, in case 1 as</p>
<pre><code>from numpy import linspace
from scipy.integrate import odeint
from scipy.optimize import fsolve
def shoot(b): return odeint(dU_dx,[2,b],[1,2])[-1,1]-0.5
b = fsolve(shoot,0)[0]   # fsolve returns an array, take the scalar
N = 101                  # number of output points
T = linspace(1,2,N)
U = odeint(dU_dx,[2,b],T)
</code></pre>
<p>or use the secant method instead of <code>scipy.optimize.fsolve</code>, as the problem is linear this should converge in 1, at most 2 steps.</p>
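<p>A rough sketch of that secant variant for case 1 (reusing the question's ODE with y(1) = 2, shooting on the slope until y'(2) = 0.5; since the residual is affine in the unknown initial slope, two iterations are plenty):</p>

```python
import numpy as np
from scipy.integrate import odeint

def dU_dx(U, x):             # same ODE as in the question
    return [U[1], -2*U[1] - 2*U[0] + np.cos(2*x)]

def shoot(b):                # residual of y'(2) = 0.5, shooting from y(1) = 2
    return odeint(dU_dx, [2.0, b], [1.0, 2.0])[-1, 1] - 0.5

b0, b1 = 0.0, 1.0            # two arbitrary starting guesses for y'(1)
for _ in range(2):           # secant update; exact in one step for affine residuals
    r0, r1 = shoot(b0), shoot(b1)
    b0, b1 = b1, b1 - r1 * (b1 - b0) / (r1 - r0)
print(abs(shoot(b1)))        # residual is numerically zero
```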
<p>Or you can use the <code>scipy.integrate.solve_bvp</code> solver (which is perhaps newer than the question?). Your task is similar to the documented examples. Note that the argument order in the ODE function is switched in all other solvers, even in <code>odeint</code> you can give the option <code>tfirst=True</code>.</p>
<pre class="lang-py prettyprint-override"><code>def dudx(x,u): return [u[1], np.cos(2*x)-2*(u[1]+u[0])]
</code></pre>
<hr />
<p><em>Solutions generated with <code>solve_bvp</code>, the nodes are the automatically generated subdivision of the integration interval, their density tells how "non-flat" the ODE is in that region.</em>
<a href="https://i.sstatic.net/vjcCj.png" rel="noreferrer"><img src="https://i.sstatic.net/vjcCj.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>xplot=np.linspace(1,2,161)
for k,bc in enumerate([bc1,bc2,bc3]):
res = solve_bvp(dudx, bc, [1.0,2.0], [[0,0],[0,0]], tol=1e-5)
    print(res.message)
l,=plt.plot(res.x,res.y[0],'x')
c = l.get_color()
plt.plot(xplot, res.sol(xplot)[0],c=c, label="%d."%(k+1))
</code></pre>
<hr />
<p><em>Solutions generated using the shooting method using the initial values at <code>x=0</code> as unknown parameters to then obtain the solution trajectories for the interval <code>[0,3]</code>.</em></p>
<p><a href="https://i.sstatic.net/4HBhp.png" rel="noreferrer"><img src="https://i.sstatic.net/4HBhp.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>x = np.linspace(0,3,301)
for k,bc in enumerate([bc1,bc2,bc3]):
def shoot(u0): u = odeint(dudx,u0,[0,1,2],tfirst=True); return bc(u[1],u[2])
u0 = fsolve(shoot,[0,0])
u = odeint(dudx,u0,x,tfirst=True);
l, = plt.plot(x, u[:,0], label="%d."%(k+1))
c = l.get_color()
plt.plot(x[::100],u[::100,0],'x',c=c)
</code></pre>
| 255
|
solve differential equations
|
How to solve this system of IVP differential equations
|
https://stackoverflow.com/questions/61442582/how-to-solve-this-system-of-ivp-differential-equations
|
<p>I have a system of differential equations like this</p>
<p><code>
dy/dt = f(t,y,y1,y2,....,yn,x),
dy1/dt = f(t,y,y1,y2,..yn,x),
.
.
.
dyn/dt = f(t,y,y1,y2,..yn,x),
x(t) = f(t,y1,y2,...yn,x)
</code></p>
<p>And I have the values y_i(0), x(0).
If I had dx/dt then, simply using the scipy.integrate IVP solvers, I could solve this. I can't calculate the derivative of x(t); it's too complex. Can I simulate this system in Python without finding dx/dt?</p>
|
<p>No problem, define </p>
<pre class="lang-py prettyprint-override"><code>def derivatives(t,y):
xt = func_x(t,y);
dy = func_y(t,y,xt)
return dy
</code></pre>
<p>where <code>func_y</code> takes scalar (<code>t</code>,<code>xt</code>) and vector (<code>y</code>) arguments and returns a vector of the same size as <code>y</code>. This then works well with <code>solve_ivp</code>.</p>
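<p>A self-contained sketch of this pattern (the algebraic relation for x and the right-hand side for y below are hypothetical stand-ins for the actual model):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

def func_x(t, y):
    # hypothetical algebraic closure: x(t) computed directly from t and y
    return np.sum(y) + t

def func_y(t, y, xt):
    # hypothetical right-hand side for the y_i; returns a vector like y
    return -y + xt

def derivatives(t, y):
    xt = func_x(t, y)        # evaluate x(t) first; no dx/dt is ever needed
    return func_y(t, y, xt)

sol = solve_ivp(derivatives, (0.0, 1.0), [1.0, 0.0], rtol=1e-8)
print(sol.success)  # True
```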
| 256
|
solve differential equations
|
Solving System of Differential Equations using SciPy
|
https://stackoverflow.com/questions/30566994/solving-system-of-differential-equations-using-scipy
|
<p>I'm trying to solve the following system of differential equations using scipy:</p>
<pre><code>q1''(t) + M/L1 * q2''(t) + R1/L1 * q1'(t) + 1/(C1 * L1) * q1(t) = 0
q2''(t) + M/L2 * q1''(t) + R2/L2 * q2'(t) + 1/(C2 * L2) * q2(t) = 0
</code></pre>
<p>I'm trying to use scipy.integrate.odeint to obtain a numerical solution. With the substitution:</p>
<pre><code>Y[0] = q1
Y[1] = q1'
Y[2] = q1''
Y[3] = q2
Y[4] = q2'
Y[5] = q2''
</code></pre>
<p>I used the following code to get the solution. </p>
<pre><code>def deriv(Y, t):
return np.array([
Y[1],
(-ML1 * Y[5] - R1L1 * Y[1] - Y[0] / C1L1),
??
Y[4],
(-ML2 * Y[2] - R2L2 * Y[4] - Y[3] / C2L2),
??
])
def main():
t = np.arange(0.0, 500.0, 0.01)
initial_cond = [ 1.0, 0.0, 0.0, 0.0, 0.0, 0.0 ]
sol = integrate.odeint(deriv, initial_cond, t)
print(sol)
</code></pre>
<p>My question is what do I put for the derivatives of q1''(t) and q2''(t). Is there a different substitution I can use instead?</p>
|
<p>You have two coupled second order equations. When this system is converted to a system of first order equations, there will be four equations, not six.</p>
<p>Do a little algebra by hand to solve for the vector [q1''(t), q2''(t)] in terms of q1(t), q1'(t), q2(t) and q2'(t). (For example, you can use the fact that the inverse of the matrix [[1, a], [b, 1]] is [[1, -a], [-b, 1]]/(1 - a*b), with a = M/L1 and b = M/L2, provided a*b is not 1.) Define your state vector Y to be [q1(t), q1'(t), q2(t), q2'(t)]. Then the vector that you need to return from <code>deriv</code> is</p>
<pre><code>Y[0]'(t) = q1'(t) = Y[1]
Y[1]'(t) = q1''(t) = (expression you found using algebra, with q1(t), q1'(t), q2(t) and q2'(t) replaced with Y[0], Y[1], Y[2], Y[3], resp.)
Y[2]'(t) = q2'(t) = Y[3]
Y[3]'(t) = q2''(t) = (expression you found using algebra, with q1(t), q1'(t), q2(t) and q2'(t) replaced with Y[0], Y[1], Y[2], Y[3], resp.)
</code></pre>
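<p>Carrying out that algebra, the completed <code>deriv</code> might look like the sketch below (the parameter values are placeholders, since the question does not supply numbers):</p>

```python
import numpy as np
from scipy.integrate import odeint

# Placeholder constants -- the question does not give numerical values
M, L1, L2 = 0.5, 1.0, 1.0
R1, R2, C1, C2 = 0.1, 0.1, 1.0, 1.0
a, b = M/L1, M/L2                  # off-diagonal entries of [[1, a], [b, 1]]

def deriv(Y, t):
    q1, dq1, q2, dq2 = Y
    # right-hand sides after moving the q'' cross terms to the left side
    f1 = -R1/L1*dq1 - q1/(C1*L1)
    f2 = -R2/L2*dq2 - q2/(C2*L2)
    det = 1 - a*b                  # determinant of the 2x2 coefficient matrix
    return [dq1, (f1 - a*f2)/det, dq2, (f2 - b*f1)/det]

t = np.linspace(0.0, 50.0, 5001)
sol = odeint(deriv, [1.0, 0.0, 0.0, 0.0], t)
```

<p>Since the resistances are positive, the stored energy decays, so the amplitudes shrink over time.</p>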
| 257
|
solve differential equations
|
I would like to solve a system of network-based differential equations using python 3.6
|
https://stackoverflow.com/questions/57447343/i-would-like-to-solve-a-system-of-network-based-differential-equations-using-pyt
|
<p>I would like to solve a system of network-based differential equations using python 3.6. The system of equations is as follows:</p>
<pre><code>dx_i/dt = omega_i - epsilon_i * y_i * x_i - mu_i * x_i,
dy_i/dt = epsilon_i * y_i * x_i - zeta_i * y_i - rho_i * y_i * z_i,
dv_i/dt = c_i * y_i - gamma_i * v_i + \sum_{i neq j} beta_{ij} * v_i * x_i,
dz_i/dt = k_i * y_i * z_i - delta_i * z_i,
where beta_{ij} = beta (1 - sigma_{ij}) * exp(- alpha|i-j|)
i = 1,2,3,...,N
</code></pre>
<p>I have written the code below in an attempt to solve the system of differential equations in a spatial network. </p>
<pre><code>from jitcode import jitcode, y
import numpy as np
import sympy
#import symengine
import matplotlib.pyplot as plt
#from scipy.integrate import ode
from numpy.random import uniform
n = 10
alpha = 0.05
#beta = 0.1
beta = uniform(0.01,3.0,n)
beta.sort()
mu = uniform(0.01,3.0,n)
mu.sort()
epsilon = uniform(0.01,3.0,n)
epsilon.sort()
pi = uniform(0.01,3.0,n)
pi.sort()
gamma = uniform(0.01,3.0,n)
gamma.sort()
omega = uniform(0.01,3.0,n)
omega.sort()
zeta = uniform(0.01,3.0,n)
zeta.sort()
rho = uniform(0.01,3.0,n)
rho.sort()
k = uniform(0.01,3.0,n)
k.sort()
c = uniform(0.01,3.0,n)
c.sort()
# Knonecker delta
M = np.einsum('ij-> ij',np.eye(n,n))
print(M)
# Adjacency matrix
A = beta * M * sympy.exp(-alpha)
print(A)
def f():
for i in range(n):
coupling_sum = sum(y(i+2) * y(i) for j in range(n) if A[i, j]
)
yield omega[i] - epsilon[i] * y(i+2) * y(i) - mu[i] * y(i)
yield epsilon[i] * y(i+2) * y(i) - zeta[i] * y(i+1) - rho[i] * y(i+1)* y(i+3)
yield c[i] * y(i+1) - gamma[i] * y(i+2) + coupling_sum
yield k[i]* y(i+1) * y(i+3) - delta[i] *y(i+3)
#integrate
#---------------
initial_state = np.random.random(n)
ODE = jitcode(f,n=n)
ODE.set_integrator("dopri5", atol=1e-6,rtol=0)
initial = np.linspace(0.1,0.4,num=n)
ODE.set_initial_value(initial_state,time=0.0)
#data structure: x[0], w[0], v[0], z[0], ..., x[n], w[n], v[n], z[n]
data = []
data = np.vstack(ODE.integrate(T) for T in range(0, 100, 0.1))
print(data)
fig = plt.figure()
</code></pre>
<p>I am expecting to get solutions for the four differential equations and some simulations to represent the equations. The error message that I am getting says "RuntimeError: Not Implemented"</p>
|
<p>Your translation of the model into code has index problems. You addressed this once in the sum calculation, but then never again. To help with that, define helper functions</p>
<pre class="lang-py prettyprint-override"><code>def X(i): return y(4*i)
def Y(i): return y(4*i+1)
def V(i): return y(4*i+2)
def Z(i): return y(4*i+3)
</code></pre>
<p>Then you can express the generator of the symbolic right sides as</p>
<pre class="lang-py prettyprint-override"><code>def f():
for i in range(n):
coupling_sum = V(i) * sum(beta[i,j]*X(j) for j in range(n) if j!=i )
yield omega[i] - epsilon[i] * Y(i) * X(i) - mu[i] * X(i)
yield epsilon[i] * Y(i) * X(i) - zeta[i] * Y(i) - rho[i] * Y(i)*Z(i)
yield c[i] * Y(i) - gamma[i] * V(i) + coupling_sum
yield k[i] * Y(i) * Z(i) - delta[i] * Z(i)
</code></pre>
<p>with an appropriate definition of <code>beta[i,j]</code>. The code you wrote is strange and does not match the formula. In the formula, <code>beta</code> is first a constant and then a matrix. In the code, <code>beta</code> is an array. This is quite incompatible.</p>
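<p>One possible reading of that formula — taking <code>sigma_ij</code> to be the Kronecker delta, which is what the <code>M</code> in the question suggests — builds <code>beta</code> as an n-by-n matrix up front:</p>

```python
import numpy as np

n = 10
alpha = 0.05
beta0 = 0.1            # the scalar beta in beta_ij = beta*(1-sigma_ij)*exp(-alpha*|i-j|)
idx = np.arange(n)
sigma = np.eye(n)      # assumption: sigma_ij is the Kronecker delta
beta = beta0 * (1 - sigma) * np.exp(-alpha * np.abs(idx[:, None] - idx[None, :]))
```

<p>This gives a symmetric matrix with zero diagonal, so the <code>j != i</code> test in the coupling sum becomes redundant.</p>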
<p>In the call to compile the function, you also should give the correct dimension of the state space; you have <code>n</code> components each of <code>x,y,v,z</code>, making <code>4*n</code> components in total.</p>
<pre class="lang-py prettyprint-override"><code>initial_state = np.random.random(4*n)
ODE = jitcode(f,n=4*n)
</code></pre>
<p>With the correct dimension of the state space arguments, the integration calls probably will go through.</p>
| 258
|
solve differential equations
|
Solving differential equations in two different domains using the bvp solver in python
|
https://stackoverflow.com/questions/62016035/solving-differential-equations-in-two-different-domains-using-the-bvp-solver-in
|
<p>I am trying to solve the following set of coupled differential equations ( variables are p,n and psi) using the bvp solver in python :-</p>
<pre><code>1. d2n/dx2-(d2psi/dx2)n-(dpsi/dx)(dn/dx)=k1 in domain 1
2. d2p/dx2+(d2psi/dx2)p+(dpsi/dx)(dp/dx)=k2 in domain 1
3. d2psi/dx2= k*(p-n) in domain 1
4. d2psi/dx2= 0 in domain 2 where domain 1 is given by x<=L and domain 2 is given by L<x<=(L+t)
</code></pre>
<p>subjected to the boundary conditions :</p>
<pre><code>p(0)= p0, n(0)= n0, psi(0)= 0, p(L)= p0*exp(-psi/Vth), n(L)= n0*exp(psi/Vth), psi(L+t)= 5.
</code></pre>
<p>I can set the equations up. But considering the fact that there are two different domains where the equations are to be solved, I am facing some trouble. Any help regarding the same would be highly appreciated.</p>
| 259
|
|
solve differential equations
|
Sympy: solve a differential equation
|
https://stackoverflow.com/questions/33340217/sympy-solve-a-differential-equation
|
<p>I want to find an elegant way of solving the following differential equation:</p>
<pre><code>from sympy import *
init_printing()
M, m, t, r = symbols('M m t r')
phi = Function('phi')
eq = Eq(-M * phi(t).diff(t), Rational(3, 2) * m * r**2 * phi(t).diff(t) * phi(t).diff(t, t))
</code></pre>
<p><a href="https://i.sstatic.net/0l4ID.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0l4ID.png" alt="enter image description here"></a></p>
<p>I assume that phi(t).diff(t) is not zero. Hence a factor of phi'(t) can be cancelled from both sides. </p>
<p>This is how I get to the solution:</p>
<pre><code># I assume d/dt(phi(t)) != 0
theta = symbols('theta')
eq = eq.subs({phi(t).diff(t, 2): theta}) # remove the second derivative
eq = eq.subs({phi(t).diff(t): 1}) # the first derivative is shortened
eq = eq.subs({theta: phi(t).diff(t, 2)}) # get the second derivative back
</code></pre>
<p><a href="https://i.sstatic.net/vR9hi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vR9hi.png" alt="enter image description here"></a></p>
<pre><code>dsolve(eq, phi(t))
</code></pre>
<p><a href="https://i.sstatic.net/WHGsj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WHGsj.png" alt="enter image description here"></a></p>
<p>How do I solve this more elegantly?</p>
|
<p>Ideally <code>dsolve()</code> would be able to solve the equation directly, but it doesn't know how (it needs to learn that it can factor an equation and solve the factors independently). I opened an <a href="https://github.com/sympy/sympy/issues/10043" rel="nofollow">issue</a> for it. </p>
<p>My only other suggestion is to divide phi' out directly:</p>
<pre><code>eq = Eq(eq.lhs/phi(t).diff(t), eq.rhs/phi(t).diff(t))
</code></pre>
<p>You can also use</p>
<pre><code>eq.xreplace({phi(t).diff(t): 1})
</code></pre>
<p>to replace the first derivative with 1 without modifying the second derivative (unlike <code>subs</code>, <code>xreplace</code> has no mathematical knowledge of what it is replacing; it just replaces expressions exactly).</p>
<p>And don't forget that <code>phi(t) = C1</code> is also a solution (for when phi' does equal 0).</p>
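<p>Put together, a runnable sketch of that route could look like this:</p>

```python
from sympy import symbols, Function, Eq, Rational, dsolve, simplify

m, M, r, t = symbols('m M r t')
phi = Function('phi')
eq = Eq(-M * phi(t).diff(t),
        Rational(3, 2) * m * r**2 * phi(t).diff(t) * phi(t).diff(t, 2))

# xreplace only swaps the exact node Derivative(phi(t), t);
# the second derivative is a different node and stays untouched
reduced = eq.xreplace({phi(t).diff(t): 1})
sol = dsolve(reduced, phi(t))   # a quadratic in t, since phi'' = -2*M/(3*m*r**2)
```

<p>The remaining equation is just phi'' equal to a constant, which <code>dsolve</code> integrates directly.</p>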
| 260
|
solve differential equations
|
Solve differential equation with two variables in Simulink
|
https://stackoverflow.com/questions/42350080/solve-differential-equation-with-two-variables-in-simulink
|
<p>I have a differential equation of the form:</p>
<pre><code>xs'' = rhs * theta
</code></pre>
<p>to solve in Simulink, where <strong>xs</strong> and <strong>theta</strong> are variables and <strong>rhs</strong> is a numerical constant. So far, I've got this: <a href="https://i.sstatic.net/EzVLY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EzVLY.png" alt="enter image description here"></a></p>
<p>but it's incomplete and I feel it is wrong. How can I simulate this equation in Simulink?</p>
<p>Regards</p>
|
<p>This works as expected:</p>
<p><a href="https://i.sstatic.net/mWDcZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mWDcZ.png" alt="enter image description here"></a></p>
<p>Thanks to @Ander Biguri!</p>
| 261
|
solve differential equations
|
How to solve this system of differential equations in matlab?
|
https://stackoverflow.com/questions/57548600/how-to-solve-this-system-of-differential-equations-in-matlab
|
<p>My task is to model a certain physical problem and use matlab to solve it's differential equations. I made the model but it seems far more complex than what I've learned so far so I have no idea how to solve this.</p>
<p>The black color means it's a constant</p>
<p><img src="https://i.sstatic.net/aRzfw.jpg" alt="The equations"></p>
|
<p><strong>I assume that by "solve" you seek a closed form solution of the form x(t) = ..., z(t) = ...</strong> Unfortunately, it's very likely you cannot solve this system of differential equations. Only very specific <em>canonical</em> systems actually have a closed-form solution, and they are the simplest (few terms and dependent variables). See <a href="https://en.wikipedia.org/wiki/Ordinary_differential_equation" rel="nofollow noreferrer">Wikipedia's entry for Ordinary Differential Equations</a>, in particular the section <em>Summary of exact solutions</em>.</p>
<p>Nevertheless, the procedure for attempting to solve with Matlab's Symbolic Math Toolbox is described <a href="https://www.mathworks.com/help/symbolic/solve-a-single-differential-equation.html" rel="nofollow noreferrer">here</a>. </p>
<p><strong>If instead you were asking for numerical integration</strong>, then I will give you some pointers, but you must carry out the math:</p>
<ol>
<li><p>Convert the second order system to a first order system by using a substitution w(t) = dx/dt, allowing you to replace the d2x/dt2 term by dw/dt. <a href="https://www.mathworks.com/help/symbolic/solve-differential-equation-numerically-1.html" rel="nofollow noreferrer">Example here.</a></p></li>
<li><p>Read the documentation for <a href="https://uk.mathworks.com/help/matlab/ref/ode15i.html" rel="nofollow noreferrer">ode15i</a> and implement your transformed model as an implicit differential equation system.</p></li>
</ol>
<p>N.B. You must supply numerical values for your constants. </p>
| 262
|
solve differential equations
|
Solving differential equations in a loop with different parameters each time R
|
https://stackoverflow.com/questions/36941968/solving-differential-equations-in-a-loop-with-different-parameters-each-time-r
|
<p>I think this should be simple.
I'm solving a set of differential equations numerically in R. When I do it once it is fine. However, I need to test the set of differential equations for several groups of parameters, so I am using a loop. The problem is that for all the sets of parameters it shows the same values, which is not right.</p>
<pre><code>contrand<-sample(paste0("p",1:18),
size=nb, replace=TRUE,
prob=c(0, 0, 0.052, 0.013, 0.033, 0.017, 0.002,0.026, 0.005, 0.093,
0.186, 0.006, 0.001, 0.004, 0.0044, 0.4395, 0.005, 0.113))
z.cont <- data.frame(matrix(rep(NA,1*nb),nrow=nb,ncol=1))
for (i in 1:nb){
params <- get(contrand[i])
model<-function(t,x,params){
S<-x[1]
I<-x[2]
R<-x[3]
with(as.list(params),{
N<- S+I+R
dS <- birth*N - beta*S*I- death*S
dI <- beta*S*I - gamma*I - death*I
dR <- gamma*I - death*R
dx <- c(dS,dI,dR)
list(dx)
})
}
out<-as.data.frame(lsoda(xstart,times,model,params))
I<-as.data.frame(out$I)
z.cont[i,1] <- Iv[157,]
}
</code></pre>
| 263
|
|
solve differential equations
|
Solving nonlinear system of differential equations in wolfram mathematica
|
https://stackoverflow.com/questions/37351824/solving-nonlinear-system-of-differential-equations-in-wolfram-mathematica
|
<p>How can I solve nonlinear system of differential equations and get plot for this solution? The system is without initial conditions. For example,</p>
<pre><code>x' = (x + y)^2 - 1
y' = -y^2 - x + 1
</code></pre>
|
<p><a href="https://reference.wolfram.com/language/tutorial/DSolveIntroduction.html" rel="nofollow">Introduction to Differential Equation Solving with DSolve</a></p>
<p>See also <a href="https://reference.wolfram.com/language/tutorial/DSolveSettingUpTheProblem.html" rel="nofollow">Setting Up the Problem</a> scroll down to In(5) to see
how to set up a system of differential equations.</p>
| 264
|
solve differential equations
|
syntax for solving system of differential equations in sympy
|
https://stackoverflow.com/questions/26172733/syntax-for-solving-system-of-differential-equations-in-sympy
|
<p>I am new to sympy and in the process of learning it. I was browsing through the documentation and questions in stack exchange regarding symbolically solving a system of differential equations with initial conditions using sympy. </p>
<p>I have a simple system of ODE-s</p>
<pre><code>( dV/dt ) = -( 1 / RC ) * ( V(t) ) + I(t)/C
( dI/dt ) = -( R1 / L ) * ( I(t) ) - ( 1 / L) * V(t) + Vs/L
</code></pre>
<p>with initial conditions <code>V(0) = V0</code> and <code>I(0) = I0</code></p>
<p>I browsed through a lot of questions on Stack Exchange and was not successful in finding an appropriate answer.
It would be of great help if somebody could show me the syntax to enter a system of coupled differential equations with initial conditions. </p>
|
<p>System of ODEs support is only in the development version of SymPy. It will be added in 0.7.6. The syntax would be</p>
<pre><code>V, I = symbols("V I", cls=Function)
RC, t, C, Vs, L, R1, V0, I0 = symbols("RC t C Vs L R1 V0 I0")
system = [Eq(V(t).diff(t), -1/RC*V(t) + I(t)/C), Eq(I(t).diff(t), -R1/L*I(t) - 1/L*V(t) + Vs/L)]
ics = {V(0): V0, I(0): I0}
dsolve(system, [V(t), I(t)], ics=ics)
</code></pre>
<p>It seems that there is a bug that prevents this from working in the current SymPy master, unless I mistyped something (<a href="https://github.com/sympy/sympy/issues/8193" rel="noreferrer">https://github.com/sympy/sympy/issues/8193</a>).</p>
| 265
|
solve differential equations
|
Solving a system of first order differential equations and second order differential equations (Non-linear)
|
https://stackoverflow.com/questions/64372007/solving-a-system-of-first-order-differential-equations-and-second-order-differen
|
<p><strong>The problem</strong></p>
<p>I currently have a system of four equations. Two are second-order differential equations and two are first-order differential equations:</p>
<p><a href="https://i.sstatic.net/IQtr6.png" rel="nofollow noreferrer">Four equations</a></p>
<p>The initial conditions are:</p>
<pre><code>x = 0 |
y = 0.3 |
f(x) = 2.05 |
f(y) = 0.55 |
</code></pre>
<p>All angles are in degrees.</p>
<p><strong>What I have tried</strong></p>
<p>I have tried to use Google Colabs and worked with SciPy and NumPy. Unfortunately, I cannot figure out how to program it as these equations are non-linear. Could somebody give me some tips on any other modules to use?</p>
|
<p>Scipy has a <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.integrate.RK45.html" rel="nofollow noreferrer">Runge Kutta solver</a>. First, you have to transform your ODEs into a first-order system (you can always do that by setting z = y') and then try the RK solver.</p>
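<p>The substitution in code, for a single second-order equation (the right-hand side here is a made-up pendulum-style example, since the question's system is only given as an image):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduce y'' = f(t, y, y') to first order with the substitution z = y'.
# Illustrative right-hand side: y'' = -sin(y)
def first_order(t, state):
    y, z = state
    return [z, -np.sin(y)]          # [y', z'] = [z, f(t, y, z)]

sol = solve_ivp(first_order, [0, 10], [0.3, 0.0], rtol=1e-8, atol=1e-10)
```

<p><code>solve_ivp</code> uses an explicit Runge-Kutta method (RK45) by default, so this is the same solver family as <code>scipy.integrate.RK45</code>.</p>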
| 266
|
solve differential equations
|
Using a forloop to solve coupled differential equations in python
|
https://stackoverflow.com/questions/66361935/using-a-forloop-to-solve-coupled-differential-equations-in-python
|
<p>I am trying to solve a set of differential equations, but I have been having difficulty making this work. My differential equations contain an "i" subscript that represents numbers from 1 to n. I tried implementing a forloop as follows, but I have been getting this index error (the error message is below). I have tried changing the initial conditions (y0) and other values, but nothing seems to work. In this code, I am using solve_ivp. The code is as follows:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import solve_ivp
def testmodel(t, y):
X = y[0]
Y = y[1]
J = y[2]
Q = y[3]
a = 3
S = 0.4
K = 0.8
L = 2.3
n = 100
for i in range(1,n+1):
dXdt[i] = K**a+(Q[i]**a) - S*X[i]
dYdt[i] = (K*X[i])-(L*Y[i])
dJdt[i] = S*Y[i]-(K*Q[i])
dQdt[i] = K*X[i]/L+J[i]
return dXdt, dYdt, dJdt, dQdt
t_span= np.array([0, 120])
times = np.linspace(t_span[0], t_span[1], 1000)
y0 = 0,0,0,0
soln = solve_ivp(testmodel, t_span, y0, t_eval=times,
vectorized=True)
t = soln.t
X = soln.y[0]
Y = soln.y[1]
J = soln.y[2]
Q = soln.y[3]
plt.plot(t, X,linewidth=2, color='red')
plt.show()
</code></pre>
<p>The error I get is</p>
<pre><code>IndexError Traceback (most recent call last)
<ipython-input-107-3a0cfa6e42ed> in testmodel(t, y)
15 n = 100
16 for i in range(1,n+1):
--> 17 dXdt[i] = K**a+(Q[i]**a) - S*X[i]
IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre>
<p>I have scattered the web for a solution to this, but I have been unable to apply any solution to this problem. I am not sure what I am doing wrong and what to actually change.</p>
<p>I have tried to remove the "vectorized=True" argument, but then I get an error that states I cannot index scalar variables. This is confusing because I do not think these values should be scalar. How do I resolve this problem, my ultimate goal is to plot these differential equations. Thank you in advance.</p>
|
<p>It is nice that you provide the standard solver with a vectorized ODE function for multi-point evaluations. But the default method is the explicit RK45, and explicit methods do not use Jacobian matrices. So there is no need for multi-point evaluations for difference quotients for the partial derivatives.</p>
<p>In essence, the coordinate arrays always have size 1, as the evaluation is at a single point, so for instance <code>Q</code> is an array of length 1 and the only valid index is 0. Remember, in all "true" programming languages, array indices start at 0. It is only some CAS script languages that use the "more mathematical" 1 as index start. (Setting <code>n=100</code> and ignoring the length of the arrays provided by the solver is wrong as well.)</p>
<p>You can avoid all that and shorten your routine by taking into account that the standard arithmetic operations are applied element-wise for numpy arrays, so</p>
<pre><code>def testmodel(t, y):
X,Y,J,Q = y
a = 3; S = 0.4; K = 0.8; L = 2.3
dXdt = K**a + Q**a - S*X
dYdt = K*X - L*Y
dJdt = S*Y - K*Q
dQdt = K*X/L + J
return dXdt, dYdt, dJdt, dQdt
</code></pre>
<h2>Modifying your code for multiple compartments with the same dynamic</h2>
<p>You need to pass the solver a flat vector of the state. The first design decision is how the compartments and their components are arranged in the flat vector. One variant that is most compatible with the existing code is to cluster the same components together. Then in the ODE function the first operation is to separate out these clusters.</p>
<pre><code> X,Y,J,Q = y.reshape([4,-1])
</code></pre>
<p>This splits the input vector into 4 pieces of equal length. At the end you need to reverse this split so that the derivatives are again in a flat vector.</p>
<pre><code> return np.concatenate([dXdt, dYdt, dJdt, dQdt])
</code></pre>
<p>Everything else remains the same. Apart from the initial vector, which needs to have 4 segments of length <code>N</code> containing the data for the compartments. Here that could just be</p>
<pre><code> y0 = np.zeros(4*N)
</code></pre>
<p>If the initial data is from any other source, and given in records per compartment, you might have to transpose the resulting array before flattening it.</p>
<p>Note that this construction is <strong>not</strong> vectorized, so leave that option unset in its default <code>False</code>.</p>
<p>For uniform interaction patterns like in a circle I recommend the use of <code>numpy.roll</code> to continue to avoid the use of explicit loops. For an interaction pattern that looks like a network one can use connectivity matrices and masks like in <a href="https://stackoverflow.com/questions/60367831/using-python-built-in-functions-for-coupled-odes">Using python built-in functions for coupled ODEs</a></p>
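<p>For illustration, the <code>numpy.roll</code> trick replaces a neighbour loop on a ring with shifted copies of the state array (a generic sketch, not the dynamics from the question):</p>

```python
import numpy as np

# Discrete Laplacian on a ring of 5 compartments: each entry couples
# to its left and right neighbour, with wrap-around at the ends.
X = np.arange(5, dtype=float)
laplacian = np.roll(X, 1) + np.roll(X, -1) - 2*X
# for this linear ramp the interior differences cancel; only the
# wrap-around entries at the two ends are nonzero
```

<p>The same expression works for any compartment count, so no explicit Python loop over <code>i</code> is needed inside the ODE function.</p>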
| 267
|
solve differential equations
|
solving a system of 6 differential equations by ODE45 in matlab
|
https://stackoverflow.com/questions/21588348/solving-a-system-of-6-differential-equations-by-ode45-in-matlab
|
<p>I am trying to solve a system of 6 differential equations using matlab. I created a set of 6 differential equations as follows in a function m file named as Untitled.m</p>
<pre><code>function ydot=Untitled(z,y)
ydot = zeros(6,1);
%y(1)=A
%y(2)=B
%y(3)=C
%y(4)=D
%y(5)=P
y(6)=T
A=0.50265
k11=(333/106.7)*1.15*1000*exp(-59660/(8.314*960))
k31=(333/40)*73.6*exp(-47820/(8.314*960))
k32=(333/14.4)*1.79*exp(-30950/(8.314*960))
k21=(106.7/40)*426*exp(-68830/(8.314*960))
k22=(106.7/14.4)*0.000599*exp(-57740/(8.314*960))
Pcat=1450
g=9.81
phi=exp(-al*tc)
al=59100*exp(-67210/(8.314*T))
tc=(0.50265*1450*33)/143.64
H11=393000
H31=795000
H32=1200000
H21=1150000
H22=151000
E=1-((285.765*17.56)/((6.1*1450)+(17.56*285.765)))
Fcat=143.64
Cpcat=1087
Cp=1000*(y(1)*3.3+y(2)*3.3+y(3)*3.3+y(4)*1.087)
F=19.95
ydot(1)= -(A*(1-E)*Pcat*(k11+k31+k32)*phi*y(1)*y(1))/F
ydot(2)= (A*(1-E)*Pcat*(k11*y(1)*y(1)-(k21+k22)*y(2))*phi)/F
ydot(3)= (A*(1-E)*Pcat*(k31*y(1)*y(1)+ k21*y(2))*phi)/F
ydot(4)= (A*(1-E)*Pcat*(k32*y(1)*y(1)+k22*y(2))*phi)/F
ydot(5)= -(Pcat*g*(1-E))
ydot(6) = ((phi*(1-E)*Pcat*A)*(y(1)*y(1)((k11*H11)+(k31*H31)+(k32*H32))+y(2)((k21*H21)+ (k22*H22)))/((F*Cp)+(Fcat*Cpcat)))
%UNTITLED Summary of this function goes here
% Detailed explanation goes here
</code></pre>
<p>then i have created another file for using the ODE45 solver to solve the equations</p>
<pre><code>function main
options = odeset('RelTol',1e-6); %,'AbsTol',[1e-5 1e-5 1e-5 1e-5 1e-5 1e-5 ]);
Y0=[1.0;0.0;0.0;0.0;180000.0;959.81]
zspan=0:0.5:33;
[z,y]= ode45(@Untitled,zspan,Y0,options);
figure
hold on
plot(z,y(:,1));
plot(z,y(:,2),':');
</code></pre>
<p>but I am getting errors </p>
<pre><code>??? Error using ==> feval
Error: File: Untitled.m Line: 41 Column: 37
()-indexing must appear last in an index expression.
Error in ==> odearguments at 110
f0 = feval(ode,t0,y0,args{:}); % ODE15I sets args{1} to yp0.
Error in ==> ode45 at 173
[neq, tspan, ntspan, next, t0, tfinal, tdir, y0, f0, odeArgs, odeFcn, ...
Error in ==> ode at 6
[z,y]= ode45(@Untitled,zspan,Y0,options);
</code></pre>
<p>could anyone please help me with this as it is really important for me and i am new to MATLAB</p>
<p>thanx in advance :)</p>
|
<p>Search for <code>)(</code> in your code, it's syntactically not allowed. Probably you missed a <code>*</code> in between.</p>
| 268
|
solve differential equations
|
Why isn't ode in R solving easy differential equations?
|
https://stackoverflow.com/questions/72897940/why-isnt-ode-in-r-solving-easy-differential-equations
|
<p>I want to use deSolve to solve coupled partial differential equations, and am trying to get used to the package by solving easy differential equations with ode. I know the solution to y' = sqrt(1-y^2) is sin(t + c), so I tested it with the starting conditions y(0) = 0 and y(0) = 1, expecting to get sin(t) and sin(t + pi/2), respectively. But the plots of the solutions are completely different, and very strange.</p>
<pre><code>yini <- c(y = 0)
derivs <- function(t, y)
list(sqrt(1 - y^2))
times <- seq(from = 0, to = 4, by = 0.2)
out <- ode(y = yini, times = times, func = derivs)
head(out, n =3)
yini <- c(y = 1)
out2 <- ode(y = yini, times = times, func = derivs)
plot(out, out2, main = "", lwd = 2)
</code></pre>
<p><a href="https://i.sstatic.net/xNoLk.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Can anyone tell me why?</p>
|
<p>This is as expected: the right side <code>sqrt(1-y^2)</code> is nonnegative on the domain, with value zero at <code>y=1</code>, so solutions increase toward the constant solution <code>y=1</code>. As the right side is not Lipschitz continuous there, numerical solvers may run into problems when they get close to that value and interrupt the integration prematurely.</p>
| 269
|
solve differential equations
|
Fortran or C and f2py to solve differential equations
|
https://stackoverflow.com/questions/33988654/fortran-or-c-and-f2py-to-solve-differential-equations
|
<p>This is more of a design question. I am involved with a project that requires us to solve a bunch of first-order differential equations. I know that python has modules for this and we have been using those functions.</p>
<p>However, we need the integrator to be fast, so we want to use adaptive step sizes and test some other routines not included in the scipy packages. To that end, I had an f2py question, since it seems to make sense to write the ODE solver in fortran or C and wrap it using f2py. Where does the 'slow down' occur in interfacing between fortran, for example, and python? Is it in transferring memory back and forth? I am wondering what I need to consider at the front end. Of course, I could write this directly in python (for starters), but I have heard that looping in python is very slow.</p>
<p>Anyway, just looking for general advice and things to consider.</p>
<p>Thank you.</p>
|
<p>I'm not familiar with f2py, but have you considered trying <a href="http://docs.cython.org/" rel="nofollow">Cython</a> first? If you're not familiar with it, it lets you write code in a superset of the standard Python language, and then it translates that into C code which is then compiled. If your code is mostly loops, this might speed it up quite a bit. Keep in mind, though, that any time you call a pure Python function from within Cython (or from regular C code, for that matter) it will create a bottleneck. (I believe that this is the slowdown you were referring to.)</p>
<p>That being said, I recently threw out a Cython module I'd written and replaced it with a hand-written C++ library using the <a href="https://docs.python.org/3.4/c-api/" rel="nofollow">Python/C API</a>. This was better since it gave me more control over the C++ code's interaction with NumPy, which was a bit challenging with Cython.</p>
| 270
|
solve differential equations
|
Solving coupled nonlinear differential equations
|
https://stackoverflow.com/questions/47231292/solving-coupled-nonlinear-differential-equations
|
<p>I have a differential equation that is as follows:</p>
<pre><code>%d/dt [x;y] = [m11 m12;m11 m12][x;y]
mat = @(t) sin(cos(w*t))
m11 = mat(t) + 5 ;
m12 = 5;
m21 = -m12 ;
m22 = -m11 ;
</code></pre>
<p>So I have that my matrix is specifically dependent on t. For some reason, I am having a super difficult time solving this with ode45. My thoughts were to do as follows ( I want to solve for x,y at a time T that was defined):</p>
<pre><code>t = linspace(0,T,100) ; % Arbitrary 100
x0 = (1 0); %Init cond
[tf,xf] = ode45(@ddt,t,x0)
function xprime = ddt(t,x)
ddt = [m11*x(1)+m12*x(2) ; m12*x(1)+m12*x(2) ]
end
</code></pre>
<p>The first error I get is that </p>
<pre><code>Undefined function or variable 'M11'.
</code></pre>
<p>Is there a cleaner way I could be doing this ?</p>
|
<p>I'm assuming you're running this within a script, which means that your function <code>ddt</code> is a <a href="https://www.mathworks.com/help/matlab/matlab_prog/local-functions-in-scripts.html" rel="nofollow noreferrer">local function</a> instead of a <a href="https://www.mathworks.com/help/matlab/matlab_prog/nested-functions.html" rel="nofollow noreferrer">nested function</a>. That means it doesn't have access to your matrix variables <code>m11</code>, etc. Another issue is that you will want to be evaluating your matrix variables at the specific value of <code>t</code> within <code>ddt</code>, which your current code doesn't do.</p>
<p>Here's an alternative way to set things up that should work for you:</p>
<pre><code>% Define constants:
w = 1;
T = 10;
t = linspace(0, T, 100);
x0 = [1 0];
% Define anonymous functions:
fcn = @(t) sin(cos(w*t));
M = {@(t) fcn(t)+5, 5; -5, @(t) -fcn(t)-5};
ddt = @(t, x) [M{1, 1}(t)*x(1)+M{1, 2}*x(2); M{2, 1}*x(1)+M{2, 2}(t)*x(2)];
% Solve equations:
[tf, xf] = ode45(ddt, t, x0);
</code></pre>
| 271
|
solve differential equations
|
Solve system of differential equation in python
|
https://stackoverflow.com/questions/66354995/solve-system-of-differential-equation-in-python
|
<p>I'm trying to solve a system of differential equations in python.
I have a system composed of two equations with two variables, A and B.
The initial conditions are A0=1e17 and B0=0; they change simultaneously.
I wrote the following code using ODEINT:</p>
<pre><code>import numpy as np
from scipy.integrate import odeint
def dmdt(m,t):
A, B = m
dAdt = A-B
dBdt = (A-B)*A
return [dAdt, dBdt]
# Create time domain
t = np.linspace(0, 100, 1)
# Initial condition
A0=1e17
B0=0
m0=[A0, B0]
solution = odeint(dmdt, m0, t)
</code></pre>
<p>Apparently I obtain an output different from the expected one, but I don't understand the error.
Can someone help me?
Thanks</p>
|
<p>From <code>A*A'-B'=0</code> one concludes</p>
<pre class="lang-none prettyprint-override"><code>B = 0.5*(A^2 - A0^2)
</code></pre>
<p>Inserted into the first equation that gives</p>
<pre class="lang-none prettyprint-override"><code>A' = A - 0.5*A^2 + 0.5*A0^2
= 0.5*(A0^2+1 - (A-1)^2)
</code></pre>
<p>This means that the <code>A</code> dynamic has two fixed points, at about <code>A0+1</code> and <code>-A0+1</code>; it is growing inside that interval, and the upper fixed point is stable. However, in standard floating point numbers there is no difference between <code>1e17</code> and <code>1e17+1</code>. If you want to see the difference, you have to encode it separately.</p>
<p>Also note that the standard error tolerances <code>atol</code> and <code>rtol</code> in the range somewhere between <code>1e-6</code> and <code>1e-9</code> are totally incompatible with the scales of the problem as originally stated, also highlighting the need to rescale and shift the problem into a more appreciable range of values.</p>
<p>Setting <code>A = A0+u</code> with <code>|u|</code> in an expected scale of <code>1..10</code> then gives</p>
<pre class="lang-none prettyprint-override"><code>B = 0.5*u*(2*A0+u)
u' = A0+u - 0.5*u*(2*A0+u) = (1-u)*A0 - 0.5*u^2
</code></pre>
<p>This now suggests that the time scale be reduced by <code>A0</code>, set <code>t=s/A0</code>. Also, <code>B = A0*v</code>. Insert the direct parametrizations into the original system to get</p>
<pre class="lang-none prettyprint-override"><code>du/ds = dA/dt / A0 = (A0+u-A0*v)/A0 = 1 + u/A0 - v
dv/ds = dB/dt / A0^2 = (A0+u-A0*v)*(A0+u)/A0^2 = (1+u/A0-v)*(1+u/A0)
u(0)=v(0)=0
</code></pre>
<p>Now in floating point and the expected range for <code>u</code>, we get <code>1+u/A0 == 1</code>, so effectively <code>u'(s)=v'(s)=1-v</code> which gives</p>
<pre class="lang-none prettyprint-override"><code>u(s) = v(s) = 1 - exp(-s)
A(t) = A0 + 1-exp(-A0*t) + very small corrections
B(t) = A0*(1-exp(-A0*t)) + very small corrections
</code></pre>
<p>The system in <code>s,u,v</code> should be well-computable by any solver in the default tolerances.</p>
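<p>As a quick numerical check, here is a sketch that integrates the effective rescaled system with <code>odeint</code> at default tolerances and compares it against the closed form derived above (the collapse of <code>1 + u/A0</code> to <code>1</code> in double precision is taken as given):</p>

```python
import numpy as np
from scipy.integrate import odeint

A0 = 1e17

# In double precision 1 + u/A0 == 1, so the rescaled system collapses to
# u'(s) = v'(s) = 1 - v, which is what a solver effectively sees.
def rhs(w, s):
    u, v = w
    return [1.0 - v, 1.0 - v]

s = np.linspace(0.0, 10.0, 101)
u, v = odeint(rhs, [0.0, 0.0], s).T

# maximum deviation from the closed form u(s) = v(s) = 1 - exp(-s)
err = np.max(np.abs(u - (1.0 - np.exp(-s))))
```

<p>The deviation stays at the level of the solver tolerances, confirming that the rescaled problem poses no numerical difficulty.</p>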
| 272
|
solve differential equations
|
Using parameters to solve differential equations in GEKKO python
|
https://stackoverflow.com/questions/70939582/using-parameters-to-solve-differential-equations-in-gekko-python
|
<p>I need to integrate over a system of differential equations in GEKKO and want to use parameters to alter a gradient.</p>
<p>Conceptually this works, but the output is not what I expected. I added a code snippet for a sample problem below to illustrate the problem.</p>
<p>The solver will integrate over a specified time horizon, in this case t = [0, 1, 2, 3]</p>
<p>A parameter is defined that represents the gradient at each value in time called p = [0, 1, 2, 3].</p>
<p>My expectation is that the gradient at t=0 is 0, at t=1 is 1 and so forth. Instead GEKKO interprets it as the gradient at t=0 is 1, at t=1 is 2 etc.</p>
<p>Is there a reason why GEKKO does not use the gradient information at t=0?</p>
<pre><code>from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
n = 3
m = GEKKO(remote=False)
m.time = np.linspace(0,n,n+1) # [0, 1, 2 .. n]
y = m.Var(value=5)
p = m.Param(value=list(range(n+1))) # [0, 1, 2 .. n]
m.Equation(y.dt()==1*p)
m.options.IMODE=4
m.solve(disp=False)
plt.plot(m.time, y.value, '-x')
plt.xlabel('time'); plt.ylabel('y')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/3UsaQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3UsaQ.png" alt="enter image description here" /></a></p>
|
<p>Set <code>NODES=3</code> to get the desired output. The default in Gekko is <code>NODES=2</code>, which is fast but does not include interior calculation points for each step. Increasing the number of nodes improves solution accuracy but also adds more variables to solve. For large problems with a long time horizon, improve the speed of simulation by using <code>IMODE=7</code> (sequential solution) instead of <code>IMODE=4</code> (simultaneous solution).</p>
<p><a href="https://i.sstatic.net/nzxie.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nzxie.png" alt="Nodes" /></a></p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
n = 3
t = np.linspace(0,n,n+1)
for nodes in [2,3]:
m = GEKKO(remote=False)
m.time = t
y = m.Var(value=5)
p = m.Param(t)
m.Equation(y.dt()==1*p)
m.options.IMODE=4
m.options.NODES=nodes
m.solve(disp=False)
plt.plot(m.time, y.value, '-x')
plt.legend(['NODES=2','NODES=3'])
plt.xlabel('time'); plt.ylabel('y')
plt.grid()
plt.show()
</code></pre>
<p>Using <code>t = np.linspace(0,n,301)</code> and <code>p = m.Param(np.floor(t))</code> with more time points shows that the solutions converge with either <code>NODES=2</code> or <code>NODES=3</code>.</p>
<p><a href="https://i.sstatic.net/Livv7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Livv7.png" alt="Solutions Converge" /></a></p>
<pre class="lang-py prettyprint-override"><code>from gekko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
n = 3
t = np.linspace(0,n,301)
for nodes in [2,3]:
m = GEKKO(remote=False)
m.time = t
y = m.Var(value=5)
p = m.Param(np.floor(t))
m.Equation(y.dt()==1*p)
m.options.IMODE=4
m.options.NODES=nodes
m.solve(disp=False)
plt.plot(m.time, y.value, '-.')
plt.legend(['NODES=2','NODES=3'])
plt.xlabel('time'); plt.ylabel('y')
plt.grid()
plt.show()
</code></pre>
<p>Additional information on <code>NODES</code> is in the <a href="https://apmonitor.com/do" rel="nofollow noreferrer">Dynamic Optimization course</a> in the section on <a href="https://apmonitor.com/do/index.php/Main/OrthogonalCollocation" rel="nofollow noreferrer">Orthogonal Collocation on Finite Elements</a>.</p>
| 273
|
solve differential equations
|
Solve ordinary differential equations using SciPy
|
https://stackoverflow.com/questions/40832798/solve-ordinary-differential-equations-using-scipy
|
<p>I have a following
<a href="https://i.sstatic.net/7RcuQ.png" rel="nofollow noreferrer">ordinary differential equation</a>
and numeric parameters <a href="https://i.sstatic.net/5Ol8B.png" rel="nofollow noreferrer">Sigma</a>=0.4, x(0) = 4 and dx(0)/dt = 0<br>
My task is to get Cauchy problem solution (Initial value problem solution) of differential equation using <code>ode</code> function<br>
Can someone help me? I don't even know how to write the equation, and especially the numeric parameters, in the correct way for SciPy.
<br> P.S. Sorry for not posting images, I've just registered.</p>
|
<p>Like Warren said, scipy.integrate.odeint is the 'SciPy' way to solve this.</p>
<p>But before you take your problem to SciPy (or whatever solver you end up using) you'll want to convert your 2nd order ODE to a first order ODE using something like: <a href="http://tutorial.math.lamar.edu/Classes/DE/SystemsDE.aspx" rel="nofollow noreferrer">http://tutorial.math.lamar.edu/Classes/DE/SystemsDE.aspx</a></p>
<p>To get things into SciPy you need to get your equation looking like:</p>
<p>y' = f(y)</p>
<p>But right now your equation is written like:</p>
<p>y'' = f(y, y')</p>
<p>The solution is to add more variables to your system, but the link will explain it more thoroughly.</p>
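<p>As a sketch of that reduction: the actual equation is only given as an image in the question, so the right-hand side below (a damped oscillator using the stated <code>sigma = 0.4</code>) is only a placeholder, but the initial conditions <code>x(0) = 4</code>, <code>dx(0)/dt = 0</code> are the ones from the question:</p>

```python
import numpy as np
from scipy.integrate import odeint

sigma = 0.4

def f(y, t):
    x, v = y                    # y[0] = x, y[1] = x'
    return [v, -sigma*v - x]    # [x', x''] -- hypothetical 2nd-order ODE

t = np.linspace(0.0, 10.0, 201)
sol = odeint(f, [4.0, 0.0], t)  # x(0) = 4, x'(0) = 0
x, v = sol.T
```

<p>The pattern is always the same: introduce <code>v = x'</code>, return <code>[x', x'']</code>, and hand the solver the stacked initial conditions.</p>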
| 274
|
solve differential equations
|
Solving 1st order differential equations for matrices
|
https://stackoverflow.com/questions/59953428/solving-1st-order-differential-equations-for-matrices
|
<p>I'd like to code in python a coupled system of differential equations : <code>dF/dt=A(F)</code> where <code>F</code> is a matrix and <code>A(F)</code> is a function of the matrix <code>F</code>.</p>
<p>When <code>F</code> and <code>A(F)</code> are vectors the equation is solved using <code>scipy.integrate.odeint</code>.</p>
<p>However, <code>scipy.integrate.odeint</code> doesn't work for matrices, and I get an error :</p>
<pre><code>tmin, tmax, tstep = (0., 200., 1)
t_test=np.arange(tmin, tmax, tstep) #time vector
dydt_testm=np.array([[0.,1.],[2.,3.]])
Y0_test=np.array([[0,1],[0,1]])
def dydt_test(y,t):
return dydt_testm
result = si.odeint(dydt_test, Y0_test,t_test)
</code></pre>
<blockquote>
<p>ValueError: Initial condition y0 must be one-dimensional.</p>
</blockquote>
|
<p>As commented by Warren Weckesser in the comments, <code>odeintw</code> does the job.</p>
<pre><code>from odeintw import odeintw
import numpy as np
import matplotlib.pyplot as plt
Y0_test=np.array([[0,1],[0,1]])
tmin, tmax, tstep = (0., 200., 1)
t_test=np.arange(tmin, tmax, tstep) #time vector
dydt_testm=np.array([[0.,1.],[2.,3.]])
def dydt_test(y,t):
return dydt_testm
result = odeintw(dydt_test, #Computes the derivative of y at t
Y0_test, #Initial condition on y (can be a vector).
t_test)
plt.plot(t_test,result[:,0,1])
plt.show()
</code></pre>
| 275
|
solve differential equations
|
How to solve this system of differential equations where one differential is part of another?
|
https://stackoverflow.com/questions/57576549/how-to-solve-this-system-of-differential-equations-where-one-differential-is-par
|
<p>I need to solve this system of differential equations. </p>
<p>I tested it by removing <code>rk(3)</code> from the <code>rk(2)</code> equation, and in that case I do get a solution. The code runs without error. However, when I keep <code>rk(3)</code> in the <code>rk(2)</code> equation I get a bunch of errors.</p>
<pre><code>function rk = odes(t,y)
sigma1=sqrt(10e5);
sigma2=0.1;
sigma0=10e5;
m=1;k=2;vb=0.1;mis=0.15;mik=0.1;g=9.81;Fn=m*g;Fs=mis*Fn;Fc=mik*Fn;vs=0.001
rk(1)=y(2);
rk(2)=1/m*(sigma1*rk(3)+sigma0*y(3)+sigma2*vb-y(2)*(k+sigma2));
rk(3)=(vb-y(2))-((sigma0*(vb-y(2)))/(Fc+(Fs-Fc)*exp(-((vb-y(2)/vs)^2))));
rk=rk(:);
end
</code></pre>
<pre><code>clc
close all
clear all
timerange=[0 20]
IC=[0.1;0;0.1]
[t,y]=ode45(@(t,y) odes(t,y),timerange,IC)
figure
plot(t,y(:,1));
</code></pre>
<p><img src="https://i.sstatic.net/hXz4N.png" alt="errors" title="errors"></p>
| 276
|
|
solve differential equations
|
Returning odd result when solving differential equations with solve_ivp
|
https://stackoverflow.com/questions/66756967/returning-odd-result-when-solving-differential-equations-with-solve-ivp
|
<p>I am trying to solve a series of differential equations as shown in the code below, but when I simulate them I get an inaccurate result. Here, I have 4 variables and I first reshaped my vector and then concatenated the values in hopes of having equal vector lengths for each of my variables. With this, I set my initial condition as the number of variables * N (N represents the number of cells I want to simulate). The for loop illustrated in the code is for simulating populations of cells as opposed to single cells, and thus I set my for loop from values 1 to N.</p>
<p>The trouble I am having, however, is that when I return my results and plot the variables (X, Y, Z, V), my X, Z, and V values are at 0, when in reality I should be seeing oscillations. Additionally, my F value, which represents the mean-field, is supposed to also show oscillations but has negative values when I use a print statement (I am not supposed to get any negative values for any of these equations).</p>
<p>I have checked my code many times over and I am not sure why I am not getting any oscillations when all of the equations were listed properly. I think I am not passing the variables correctly into the solver, or I may need a different way of building vectors of equal length. Does anyone have any idea why I am not getting oscillations for X, Z, V, but am getting oscillations for Y? And how can I correct this error? Any help is much appreciated (code and plot are below).</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.integrate import solve_ivp
def testing(t,y):
X,Y,Z,V = y.reshape([4,-1])
N = 1000; v1 = 6.8355; K1 = 2.7266; n = 5.6645; v2 = 8.4297; K2 = 0.2910
k3 = 0.1177; v4 = 1.0841; K4 = 8.1343; k5 = 0.3352; v6 = 4.6645
K6 = 9.9849; k7 = 0.2282; v8 = 3.5216; K8 = 7.4519; vc = 6.7924
Kc = 4.8283; K = 1; L = 0
dXdt = np.zeros(N)
dYdt = np.zeros(N)
dZdt = np.zeros(N)
dVdt = np.zeros(N)
for i in range(1,N+1):
F = 1/N * sum([V[i]])
dXdt[i] = (v1*(K1**n))/((K1**n)+(Z[i]**n))-((v2*X[i])/(K2+X[i])) + ((vc*K*F)/(Kc+(K*F))) + L
dYdt[i] = (k3*X[i]) - ((v4*Y[i])/(K4+Y[i]))
dZdt[i] = (k5*Y[i]) - ((v6*Z[i])/(K6+Z[i]))
dVdt[i] = (k7*X[i]) - ((v8*V[i])/(K8+V[i]))
return np.concatenate([dXdt,dYdt,dZdt,dVdt])
t_span = (0, 96)
t = np.linspace(t_span[0], t_span[1], 3000)
N = 1000
y0 = np.zeros(4*N)
soln = solve_ivp(testing, t_span, y0, t_eval=t)
t = soln.t; X = soln.y[0]; Y = soln.y[1]; Z = soln.y[2]; V = soln.y[3]
</code></pre>
<p>resulting plot:
<a href="https://i.sstatic.net/cp8KI.png" rel="nofollow noreferrer">CLICK TO SEE PLOT</a></p>
|
<p>There are several problems in the ODE function <code>testing</code></p>
<ul>
<li><p>The indentation of the <code>return</code> statement is wrong: it is inside the loop, so the function returns during the first pass through it.</p>
</li>
<li><p><code>F = 1/N * sum([V[i]])</code> is visibly wrong; you want to compute <code>F = 1/N * sum(V)</code> outside the loop, so it is computed once for all <code>i</code>.</p>
</li>
<li><p><code>N</code> should be the actual length of the received arrays, so that the mean calculation actually computes the mean: <code>N = len(V)</code>.</p>
</li>
<li><p>The solver tries to evaluate the fractional power of <code>Z</code> at some negative value, probably at some intermediate stage of the RK45 method. Make it safe via the absolute value; to keep the sign, use <code>Z*abs(Z)**(n-1)</code>.</p>
</li>
</ul>
<p>In your interpretation of the solution, the indices are also not correct.</p>
<ul>
<li><code>X = soln.y[0]; Y = soln.y[1]; Z = soln.y[2]; V = soln.y[3]</code> does not correspond to the state component layout. The components for the first cell are <code>X = soln.y[0]; Y = soln.y[N]; Z = soln.y[2*N]; V = soln.y[3*N]</code>, or in short, <code>X,Y,Z,V = soln.y[0::N]</code>.</li>
</ul>
<p>A corrected version using numpy array operations is</p>
<pre><code>def testing(t,y):
X,Y,Z,V = y.reshape([4,-1])
N = len(X); v1 = 6.8355; K1 = 2.7266; n = 5.6645; v2 = 8.4297; K2 = 0.2910
k3 = 0.1177; v4 = 1.0841; K4 = 8.1343; k5 = 0.3352; v6 = 4.6645
K6 = 9.9849; k7 = 0.2282; v8 = 3.5216; K8 = 7.4519; vc = 6.7924
Kc = 4.8283; K = 1; L = 0
F = 1/N * sum(V)
dXdt = (v1*(K1**n))/((K1**n)+(Z*abs(Z)**(n-1)))-((v2*X)/(K2+X)) + ((vc*K*F)/(Kc+(K*F))) + L
dYdt = (k3*X) - ((v4*Y)/(K4+Y))
dZdt = (k5*Y) - ((v6*Z)/(K6+Z))
dVdt = (k7*X) - ((v8*V)/(K8+V))
return np.concatenate([dXdt,dYdt,dZdt,dVdt])
</code></pre>
<p>and gives, even for <code>N=10</code>, the nice oscillations</p>
<p><a href="https://i.sstatic.net/gtqpc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gtqpc.png" alt="some plots of solutions" /></a></p>
| 277
|
solve differential equations
|
Plot of ND solve differential equation with another parameter
|
https://stackoverflow.com/questions/24537894/plot-of-nd-solve-differential-equation-with-another-parameter
|
<p>I am trying to solve a differential equation numerically but I need to vary y0 for my plot and view result for constant x. I can solve my equation normally as I expected:but I can't get result when I try for my real purpose as you can see</p>
<pre><code>`\[Sigma] = 1;
n = 23.04;
Rop = y[x];
R = 0.5;
sz = R/(Rop + R);
F = -n*\[Sigma]*y[x]*(1 - 2*sz);
s = NDSolve[{y'[x] == F, y[0] == 0.8}, y, {x, 0, 0.07}]
Plot[Evaluate[y[x] /. s], {x, 0, 0.07}, PlotRange -> All,]`
`[Sigma] = 1;
n = 23.04;
Rop = y[x];
R = 0.5;
sz = R/(Rop + R);
F = -n*\[Sigma]*y[x]*(1 - 2*sz);
y0 = 0.8;
\!\(\*
ButtonBox["Array",
BaseStyle->"Link",
ButtonData->"paclet:ref/Array"]\)[s, 140]
i = 1;
For[i < 140,
s = NDSolve[{y'[x] == F, y[0] == y0}, y, {x, 0, 0.07}]
Plot[Evaluate[y[] /. s], x = 0.07, {y0, 0.8, 2.2}] // print
y0 == y0 + i*0.01];`
</code></pre>
|
<p>A variety of typos or misunderstandings</p>
<pre><code>\[Sigma] = 1;
n = 23.04;
Rop = y[x];
R = 0.5;
sz = R/(Rop + R);
F = -n*\[Sigma]*y[x]*(1 - 2*sz);
y0 = 0.8;
For[i = 1, i < 140, i++,
s = NDSolve[{y'[x] == F, y[0] == y0}, y, {x, 0, 0.07}];
Plot[Evaluate[y[x] /. s], {x, 0, 0.07}] // Print;
y0 = y0 + i*0.01
];
</code></pre>
<p>Go through that and compare it a character at a time against your original.
Once you have figured out why each of the changes was made, you can decide whether to put your Button back in or not.</p>
| 278
|
solve differential equations
|
Solving coupled differential equations of quadratic drag
|
https://stackoverflow.com/questions/64410747/solving-coupled-differential-equations-of-quadratic-drag
|
<p><strong>Goal</strong></p>
<p>I have been attempting to solve and plot the following coupled differential equation belonging to quadratic drag:</p>
<p><a href="https://i.sstatic.net/onuqg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/onuqg.png" alt="enter image description here" /></a></p>
<p>The variables from the equations are defined as:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>c = 0.0004
m = 0.0027
starting position of projectile= (0,0.3)
velocity on horizontal component = 2.05
velocity on vertical component = 0.55
g = 9.81</code></pre>
</div>
</div>
</p>
<p><strong>The problem</strong></p>
<p>I cannot seem to solve the equation correctly and have some errors in programming</p>
<p><strong>What I have tried</strong></p>
<p>I have tried using the code from online on MatLab and I have tried on Matematica but none of them is able to program this equation. I have also tried looking at SciPy on Python but it does not seem to work.</p>
<p>Could anybody please set me in the right direction on how to code this properly?</p>
|
<p>You can use a number of MATLAB built-in ODE solvers. <code>ode45</code> is usually a good place to start.</p>
<p>You have two positions and two velocities (4 states total), so you need to pass 4 ODEs to the solver <code>ode45</code> (one derivative for each state).
If <code>x(1)</code> is the x-position, <code>x(2)</code> is the y-position, <code>x(3)</code> is the x-velocity, and <code>x(4)</code> is the y-velocity, then the derivative of <code>x(1)</code> is <code>x(3)</code>, the derivative of <code>x(2)</code> is <code>x(4)</code> and the derivatives of <code>x(3)</code> and <code>x(4)</code> are the ones given by your two drag equations.</p>
<p>In the end, the MATLAB implementation might look like this:</p>
<pre><code>c = 0.0004;
m = 0.0027;
p0 = [0; 0.3]; % starting positions
v0 = [2.05; 0.55]; % starting velocities
g = -9.81; % gravitational acceleration
tspan = [0 5];
x0 = [p0; v0]; % initial states
[t_sol, x_sol] = ode45(@(t,x) drag_ode_fun(t,x,c,m,g), tspan, x0);
function dxdt = drag_ode_fun(t,x,c,m,g)
dxdt = zeros(4,1);
dxdt(1) = x(3);
dxdt(2) = x(4);
dxdt(3) = -c/m*x(3)*sqrt(x(3)^2+x(4)^2);
dxdt(4) = g-c/m*x(4)*sqrt(x(3)^2+x(4)^2);
end
</code></pre>
<p>And you can plot the results as follows:</p>
<pre><code>figure;
subplot(3,1,1); grid on;
plot(x_sol(:,1), x_sol(:,2))
xlabel('x (m)'); ylabel('y (m)')
subplot(3,1,2); grid on;
plot(t_sol, x_sol(:,3))
xlabel('time'); ylabel('v_x (m/s)')
subplot(3,1,3); grid on;
plot(t_sol, x_sol(:,4))
xlabel('time')
ylabel('v_y (m/s)')
</code></pre>
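<p>Since the question also mentions SciPy: the same state layout <code>[x, y, vx, vy]</code> works with <code>solve_ivp</code>. This is only a sketch mirroring the MATLAB code above, with the same parameter values:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

c, m, g = 0.0004, 0.0027, -9.81

def drag(t, s):
    x, y, vx, vy = s
    speed = np.hypot(vx, vy)  # |v| = sqrt(vx^2 + vy^2)
    return [vx, vy, -c/m*vx*speed, g - c/m*vy*speed]

# initial state: x = 0, y = 0.3, vx = 2.05, vy = 0.55
sol = solve_ivp(drag, [0.0, 5.0], [0.0, 0.3, 2.05, 0.55], max_step=0.01)
```

<p><code>sol.y[0]</code> and <code>sol.y[1]</code> then give the trajectory, and <code>sol.y[2]</code>, <code>sol.y[3]</code> the velocity components, just like the columns of <code>x_sol</code> in MATLAB.</p>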
| 279
|
solve differential equations
|
Solving fractional differential equations in Matlab using fde12 function
|
https://stackoverflow.com/questions/16978825/solving-fractional-differential-equations-in-matlab-using-fde12-function
|
<pre><code>function dfdt=myfun(t,x)
dfdt = [...
x(2);
(1.5*((x(2))^2)*(cos(3*(x(1)))))-(((pi/2)^2) * ...
(sin((pi*t)/2)))-(20*((x(1))-(sin((pi*t)/2)))) - ...
((0.5*((x(2))^2)*abs(cos(3*(x(1)))))+0.1) * ...
sat(((x(2)-((pi/2)*cos((pi*t)/2))) + ...
(20*(x(1)-(sin((pi*t)/2)))))/0.1)-(((abs(sin(t)))+1) * ...
(cos(3*x(1)))*((x(2))^2))
];
</code></pre>
<p><code>sat</code> in this equation is defined as follows: </p>
<pre><code> function f = sat(y)
if abs(y) <= 1
f = y;
else
f = sign(y);
end
</code></pre>
<p>I am solving it first as an ODE using ODE45 where I define the differential equations as a vector:</p>
<pre><code> [t, x] = ode45(@myfun, [0 4], [0 pi/2])
</code></pre>
<p>This works fine. But when I try to solve the same set of equations using <a href="http://www.mathworks.com/matlabcentral/fileexchange/32918-predictor-corrector-pece-method-for-fractional-differential-equations" rel="nofollow"><code>fde12</code></a>:</p>
<pre><code>[T,Y] = FDE12(ALPHA,FDEFUN,T0,TFINAL,Y0,h)
</code></pre>
<p>Now I call it:</p>
<pre><code>t0 = 0;
tfinal= 4 ;
h = 0.01;
x0 = [0 pi/2];
[t, x] = fde12(0.95, @myfun, t0,tfinal, x0,h);
</code></pre>
<p>(<code>alpha</code> is the order of fractional differentiation, e.g., <code>0.95</code>)</p>
<p>it gives the following error:</p>
<pre><code>Attempted to access x(2); index out of bounds because numel(x) = 1.
</code></pre>
|
<p>RTFM - or in this case: <a href="http://www.mathworks.com/matlabcentral/fileexchange/32918-predictor-corrector-pece-method-for-fractional-differential-equations" rel="nofollow">the description</a>: </p>
<blockquote>
<p>The set of initial conditions Y0 is a matrix with a number of rows equal to the size of the problem </p>
</blockquote>
<p>Yet, you specify </p>
<pre><code>x0 = [0 pi/2];
</code></pre>
<p>This has two columns. If you change it to two rows: </p>
<pre><code>x0 = [0; pi/2];
</code></pre>
<p>It will work. (I just tried with your example).</p>
| 280
|
solve differential equations
|
How to solve differential equation in matlab
|
https://stackoverflow.com/questions/42324200/how-to-solve-differential-equation-in-matlab
|
<p>How can I show that <code>y(t)=Yo/Yo+(1-Yo)e^-at</code> is the solution of the differential equation <code>dy/dt=ay(1-y)</code> using MATLAB. What function should I use?</p>
|
<p>If you want to simulate the results, use the <code>ode</code> family of solvers:</p>
<p><a href="https://www.mathworks.com/help/matlab/ref/ode45.html" rel="nofollow noreferrer">https://www.mathworks.com/help/matlab/ref/ode45.html</a></p>
<p>Otherwise, you can define your equation with <code>syms</code> and use <code>diff</code> to verify it symbolically:</p>
<p><a href="https://www.mathworks.com/help/symbolic/diff.html" rel="nofollow noreferrer">https://www.mathworks.com/help/symbolic/diff.html</a></p>
<p>Failing that, you can check the solution numerically.</p>
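<p>Since the task is to show that the formula satisfies the ODE rather than to re-derive it, a symbolic check is enough. The MATLAB route is <code>syms</code>/<code>diff</code>/<code>simplify</code>; an equivalent sketch in Python with SymPy (interpreting the question's formula as the logistic solution y(t) = y0/(y0 + (1 - y0)e<sup>-at</sup>)):</p>

```python
import sympy as sp

t, a, y0 = sp.symbols('t a y0', positive=True)

# Candidate solution y(t) = y0 / (y0 + (1 - y0)*exp(-a*t))
y = y0 / (y0 + (1 - y0)*sp.exp(-a*t))

# Residual of dy/dt = a*y*(1 - y); it should simplify to 0
residual = sp.simplify(sp.diff(y, t) - a*y*(1 - y))

# The initial condition y(0) = y0 also holds
ic = sp.simplify(y.subs(t, 0) - y0)
```

<p>Both <code>residual</code> and <code>ic</code> reduce to zero, which is exactly what the MATLAB <code>simplify</code> call would show.</p>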
| 281
|
solve differential equations
|
how to solve this system of differential equations numerically?
|
https://stackoverflow.com/questions/62245231/how-to-solve-this-system-of-differential-equations-numerically
|
<p><a href="https://i.sstatic.net/XO1BZ.png" rel="nofollow noreferrer">system of differential equations</a></p>
<p>Pls look at an image above</p>
<p>How can I build up the derivative function for this system to use scipy.integrate.odeint?
I tried to express the 3 DEs as linear combinations of the second derivatives of the coordinates (q, alpha, beta) and compute them in the derivative function (passed to odeint) together with the first-order and non-derivative terms.
But I found that the coefficient matrix is degenerate and ran into trouble.
How can I solve this kind of system?</p>
|
<p>You have a system</p>
<pre><code>M(z)*D*z'' = F(t,z,z')
</code></pre>
<p>You can implement it using the linear systems solvers of numpy</p>
<pre class="lang-py prettyprint-override"><code>def system(t,u):
    z, Dz = np.reshape(u, [2, -1])
    D2z = np.diag([1, 1/q, 1/b]).dot(np.linalg.solve(M(z), F(t, z, Dz)))
    return np.concatenate([Dz, D2z])
</code></pre>
<p>or similar.</p>
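<p>To make the wiring concrete, here is a runnable toy instance of the <code>M(z)*z'' = F(t,z,z')</code> pattern with <code>solve_ivp</code>. The mass matrix and right-hand side are made up (and the <code>diag</code> scaling from the sketch is dropped); only the mass-matrix-solve structure is the point:</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

def M(z):
    # placeholder symmetric positive-definite "mass" matrix
    return np.array([[2.0, 0.5],
                     [0.5, 1.0]])

def F(t, z, Dz):
    # placeholder forcing: linear spring plus weak damping
    return -z - 0.1*Dz

def system(t, u):
    z, Dz = np.reshape(u, [2, -1])
    D2z = np.linalg.solve(M(z), F(t, z, Dz))  # solve M(z) * z'' = F
    return np.concatenate([Dz, D2z])

# initial state: z = [1, 0], z' = [0, 0]
sol = solve_ivp(system, [0.0, 5.0], [1.0, 0.0, 0.0, 0.0], rtol=1e-8)
```

<p>Because the damping removes energy, the positions stay bounded by their initial amplitude, which is an easy sanity check on the integration.</p>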
| 282
|
solve differential equations
|
Solving a system of coupled differential equations with dsolve_system in python (sympy)
|
https://stackoverflow.com/questions/65393787/solving-a-system-of-coupled-differential-equations-with-dsolve-system-in-python
|
<p>I want to solve a system of 4 coupled differential equations with python (sympy):</p>
<pre><code>eqs = [Eq(cP1(t).diff(t), k1*cE1(t)**3), Eq(cE1(t).diff(t), -k1 * cE1(t)**3 + k6 * cE3(t)**2), Eq(cE2(t).diff(t), -k8 * cE2(t)), Eq(cE3(t).diff(t), k8 * cE2(t) - k6 * cE3(t)**2)]
</code></pre>
<p>When I try to solve the system with "dsolve_system":</p>
<pre><code>solution = dsolve_system(eqs, ics={cP1(0): 0, cE1(0): cE1_0, cE2(0):cE2_0, cE3(0):cE3_0})
</code></pre>
<p>I get the answer: "NotImplementedError: <br>
The system of ODEs passed cannot be solved by dsolve_system."</p>
<p>Does anyone know, whats the problem? Or is there a better way of solving this system of Differential equations in Sympy?<br>
Thanks a lot!</p>
|
<p>It's generally nice and friendly to show the complete code:</p>
<pre class="lang-py prettyprint-override"><code>In [18]: cP1, cE1, cE2, cE3 = symbols('cP1, cE1:4', cls=Function)
In [19]: t, k1, k6, k8 = symbols('t, k1, k6, k8')
In [20]: eqs = [Eq(cP1(t).diff(t), k1*cE1(t)**3), Eq(cE1(t).diff(t), -k1 * cE1(t)**3 + k6 * cE3(t)**2),
...: Eq(cE2(t).diff(t), -k8 * cE2(t)), Eq(cE3(t).diff(t), k8 * cE2(t) - k6 * cE3(t)**2)]
...:
In [21]: for eq in eqs:
...: pprint(eq)
...:
d 3
──(cP₁(t)) = k₁⋅cE₁ (t)
dt
d 3 2
──(cE₁(t)) = - k₁⋅cE₁ (t) + k₆⋅cE₃ (t)
dt
d
──(cE₂(t)) = -k₈⋅cE₂(t)
dt
d 2
──(cE₃(t)) = - k₆⋅cE₃ (t) + k₈⋅cE₂(t)
dt
In [22]: dsolve(eqs)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-22-69ab769a7261> in <module>
----> 1 dsolve(eqs)
~/current/sympy/sympy/sympy/solvers/ode/ode.py in dsolve(eq, func, hint, simplify, ics, xi, eta, x0, n, **kwargs)
609 "number of functions being equal to number of equations")
610 if match['type_of_equation'] is None:
--> 611 raise NotImplementedError
612 else:
613 if match['is_linear'] == True:
NotImplementedError:
</code></pre>
<p>This means that <code>dsolve</code> can not yet handle this particular type of system. Note that in general nonlinear systems of ODEs are very unlikely to have analytic solutions (<code>dsolve</code> is for finding analytic solutions, if you want numerical solutions use something like scipy's <code>odeint</code>).</p>
<p>As nonlinear systems go this one is relatively friendly so it might be possible to solve it. Let's see...</p>
<p>Firstly there is a conserved quantity (the sum of all 4 variables) that we can use to eliminate one equation. Actually that doesn't help as much because the first equation is already isolated from the others: if we knew <code>cE1</code> we could just integrate but the conserved quantity gives it more easily if the other variables are known.</p>
<p>The structure of the system is like:</p>
<pre><code>cE2 ---> cE3 ---> cE1 ---> cP1
</code></pre>
<p>implying that it can be solved as a sequence of ODEs where we solve the 3rd equation for cE2 and then the 4th equation for cE3 and then use that for cE1 and so on. So we can reduce this to a problem involving a sequence of mostly nonlinear single ODEs. That also is very unlikely to have an analytic solution but let's give it a try:</p>
<pre><code>In [24]: dsolve(eqs[2], cE2(t))
Out[24]:
-k₈⋅t
cE₂(t) = C₁⋅ℯ
In [25]: cE2sol = dsolve(eqs[2], cE2(t)).rhs
In [26]: eqs[3].subs(cE2(t), cE2sol)
Out[26]:
d -k₈⋅t 2
──(cE₃(t)) = C₁⋅k₈⋅ℯ - k₆⋅cE₃ (t)
dt
</code></pre>
<p>At this point in principle we could solve for <code>cE3</code> here but sympy doesn't have any way of solving this particular nonlinear ODE so <code>dsolve</code> gives a series solution (I don't think that's what you want) and the only other solver that might handle this is <code>lie_group</code> but that actually fails.</p>
<p>Since we can't get an expression for the solution for <code>cE3</code>, we also can't go on to solve for cE1 and cP1. The ODE that fails there is a Riccati equation, but this particular type of Riccati equation is not yet implemented in <code>dsolve</code>. It looks like Wolfram Alpha gives an answer in terms of Bessel functions:
<a href="https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2</a></p>
<p>Judging from that I guess it's unlikely that we would be able to solve the next equation. At that point Wolfram Alpha also gives up and says "classification: System of nonlinear differential equations":
<a href="https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2%2C+dy%2Fdt+%3D+-y%5E3+%2B+x%5E2" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2%2C+dy%2Fdt+%3D+-y%5E3+%2B+x%5E2</a></p>
<p>I suspect that there is no analytic solution for your system but you could try numerical solutions or otherwise a more qualitative analysis.</p>
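<p>For the numerical route, here is a sketch with <code>solve_ivp</code>. The rate constants and initial concentrations are made up; the run also checks the conserved quantity mentioned above (the sum of all four variables, since the derivatives cancel pairwise):</p>

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k6, k8 = 1.0, 0.5, 0.2   # hypothetical rate constants

def rhs(t, c):
    cP1, cE1, cE2, cE3 = c
    return [ k1*cE1**3,
            -k1*cE1**3 + k6*cE3**2,
            -k8*cE2,
             k8*cE2 - k6*cE3**2]

c0 = [0.0, 1.0, 1.0, 0.5]    # hypothetical initial concentrations
sol = solve_ivp(rhs, [0.0, 50.0], c0, rtol=1e-9, atol=1e-12)

# sum of all four components should stay constant along the solution
total = sol.y.sum(axis=0)
```

<p>With tight tolerances the total is conserved to solver precision, which is a useful consistency check on any numerical treatment of this system.</p>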
| 283
|
solve differential equations
|
How to solve a differential equation in formal power series?
|
https://stackoverflow.com/questions/46427586/how-to-solve-a-differential-equation-in-formal-power-series
|
<p>I would like to have first several coefficients of formal power series defined implicitly by a differential equation.</p>
<blockquote>
<p><strong>Example.</strong></p>
</blockquote>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
sp.init_printing() # math as latex
from IPython.display import display
z = sp.Symbol('z')
F = sp.Function('F')(z)
F_ = sp.Derivative(F, z)
equation = sp.Eq(F**2 + 1 - F_, 0)
display(equation)
</code></pre>
<p><a href="https://i.sstatic.net/kRr7j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kRr7j.png" alt="enter image description here" /></a></p>
<pre><code>solution = sp.dsolve(equation)
display(solution)
</code></pre>
<p><a href="https://i.sstatic.net/446vc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/446vc.png" alt="enter image description here" /></a></p>
<pre><code>sp.series(sp.tan(z), n = 8)
</code></pre>
<p><a href="https://i.sstatic.net/uTT9U.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uTT9U.png" alt="enter image description here" /></a></p>
<blockquote>
<p><strong>Question.</strong> How to compute formal power series solution of ODE without explicitly solving it in terms of elementary functions?</p>
<p><strong>Sidenote.</strong> Some differential equations (even linear) have solutions in divergent power series only, for example an equation <code>L = z^2 + z^2 L + z^4 L'</code>. It is interesting to know whether <code>sympy</code> supports such equations along with usual ones.</p>
</blockquote>
<h2>Related.</h2>
<p>The current question is a sequel of a more easy question.</p>
<p><a href="https://stackoverflow.com/questions/46422538/sympy-how-to-solve-algebraic-equation-in-formal-power-series">Sympy: how to solve algebraic equation in formal power series?</a></p>
|
<p>UPDATE: an answer to a similar question has been posted <a href="https://stackoverflow.com/questions/46422538/how-to-solve-an-algebraic-equation-in-formal-power-series">here</a>. I use this second answer instead of the one presented below. The solution using standard functions from sympy works very slowly.</p>
<hr />
<p>This seems to be (partially) possible in <code>sympy-1.1.1</code>. Be sure to update to the corresponding version first. I refer to official documentation part on ordinary differential equations</p>
<p><a href="http://docs.sympy.org/latest/modules/solvers/ode.html" rel="nofollow noreferrer">http://docs.sympy.org/latest/modules/solvers/ode.html</a></p>
<p>We can use the method <code>dsolve</code> with additional hints asking to represent the solution as formal power series. This is only possible for certain types of equations. For the above equation, we ask for possible hint types.</p>
<pre><code>>>> sp.classify_ode(equation)
('separable',
'1st_exact',
'1st_power_series',
'lie_group',
'separable_Integral',
'1st_exact_Integral')
</code></pre>
<p>Continuing the example in the question, we specify the hint <code>'1st_power_series'</code> and initial conditions <code>F(0) = 0</code>:</p>
<pre class="lang-py prettyprint-override"><code>solution = sp.dsolve(equation, hint='1st_power_series', ics={F.subs(z,0):0})
display(solution)
</code></pre>
<p><a href="https://i.sstatic.net/6mlrL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6mlrL.png" alt="enter image description here" /></a></p>
<blockquote>
<p><strong>Issue 1.</strong> If we want more terms, even say <code>n = 10</code>, the function works for a very long time (normally, I expect 20 coefficients within several seconds). Combinatorially, one can write a very fast recurrence for linear ODE. I don't know whether it is implemented in <code>sympy</code>.</p>
<p><strong>Issue 2.</strong> This technique doesn't seem to apply to Ricatti equations like the one mentioned in the original question <code>L = z^2 + z^2 L + z^4 L'</code>.</p>
</blockquote>
<p>If someone knows a better solution, (s)he is warmly welcome!</p>
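<p>Regarding Issue 1: for this particular equation the coefficients can be generated directly from the recurrence <code>(n+1)*a[n+1] = [n == 0] + sum_k a[k]*a[n-k]</code> implied by <code>F' = 1 + F^2</code> with <code>F(0) = 0</code>, which is far faster than the generic power-series hint. A hand-rolled sketch with exact rationals:</p>

```python
from fractions import Fraction

def series_coeffs(n_terms):
    # a[n] is the coefficient of z**n in F(z); F(0) = 0 gives a[0] = 0
    a = [Fraction(0)]
    for n in range(n_terms - 1):
        conv = sum(a[k] * a[n - k] for k in range(n + 1))  # [z**n] of F^2
        a.append((conv + (1 if n == 0 else 0)) / (n + 1))
    return a

coeffs = series_coeffs(8)
# matches the tan(z) series: 0, 1, 0, 1/3, 0, 2/15, 0, 17/315
```

<p>This reproduces the <code>tan(z)</code> coefficients from the question instantly, and the same convolution trick adapts to other ODEs with polynomial right-hand sides.</p>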
| 284
|
solve differential equations
|
Solving 3 coupled nonlinear differential equations using 4th order Runge Kutta in python
|
https://stackoverflow.com/questions/66664733/solving-3-coupled-nonlinear-differential-equations-using-4th-order-runge-kutta-i
|
<p>I am trying to plot orbits of a charged particle around a Reissner–Nordström blackhole(Charged blackhole).</p>
<p>I have three 2nd order differential equations as well as 3 first order differential equations. Due to the nature of the problem each derivative is in terms of proper time rather than time t. The equations of motion are as follows.</p>
<p><a href="https://i.sstatic.net/WO7dD.png" rel="nofollow noreferrer">2 first order differential equations</a></p>
<p><a href="https://i.sstatic.net/KlHJG.png" rel="nofollow noreferrer">3 second order differential equations</a></p>
<p><a href="https://i.sstatic.net/GSnE3.png" rel="nofollow noreferrer">1 first order differential equation (there should be a negative sign multiplied by everything under the square root)</a></p>
<p>I am using the 4th order Runge-Kutta method to integrate orbits. My confusion, and where I am most likely making a mistake, comes from the fact that usually when you have a second order coupled differential equation you reduce it into 2 first order differential equations. However, in my problem I have been given 3 first order differential equations along with their corresponding second order differential equations. I assumed that since I was given these first order equations I wouldn't have to reduce the second order ones at all. The fact that these equations are nonlinear complicates things further.</p>
<p>I am sure I can use Runge-Kutta to solve such problems; however, I'm not sure about my implementation of the equations of motion. When I run the code I get an error that a negative number is under the square root of F2, although this should not be the case because F2 should equal exactly zero (undoubtedly a precision issue arising from F1). However, even when I take the absolute value of everything under the square roots of F1, F2, F3... my angular momentum L and energy E are not being conserved. I would mainly like someone to comment on the way I am using my differential equations inside my Runge-Kutta loop and tell me how I am supposed to reduce the 2nd order differential equations.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import math as math
#=============================================================================
h=1
M = 1 #Mass of RN blackhole
r = 3*M #initial radius of particle from black hole
Q = 0 #charge of particle
r_s = 2*M #Schwarzschild radius
S = 0 # initial condition for RK4
V = .5 # Initial total velocity of particle
B = np.pi/2 #angle of initial velocity
V_p = V*np.cos(B) #parallel velocity
V_t = V*np.sin(B) #transverse velocity
t = 0
Theta = 0
E = np.sqrt(Q**2-2*r*M+r**2)/(r*np.sqrt(1-V**2))
L = V_t*r/(np.sqrt(1-V**2))
r_dot = V_p*np.sqrt(r**2-2*M+Q**2)/(r*np.sqrt(1-V**2))
Theta_dot = V_t/(r*np.sqrt(1-V**2))
t_dot = E*r**2/(r**2-2*M*r+Q**2)
#=============================================================================
while(r>2*M and r<10*M): #Runge kutta while loop
    A1 = 2*(Q**2-M*r) * r_dot*t_dot / (r**2-2*M*r+Q**2) #defines T double dot for first RK4 step
B1 = -2*Theta_dot*r_dot / r #defines theta double dot for first RK4 step
C1 = (r-2*M*r+Q**2)*(Q**2-M*r)*t_dot**2 / r**5 + (M*r-Q**2)*r_dot**2 / (r**2-2*M*r+Q**2) #defines r double dot for first RK4 step
D1 = E*r**2/(r**2-2*M*r+Q**2) #defines T dot for first RK4 step
E1 = L/r**2 #defines theta dot for first RK4 step
F1 = math.sqrt(-(1-r_s/r+Q**2/r**2) * (1-(1-r_s/r+Q**2/r**2)*D1**2 + r**2*E1**2)) #defines r dot for first RK4 step
t_dot_1 = t_dot + (h/2) * A1
Theta_dot_1 = Theta_dot + (h/2) * B1
r_dot_1 = r_dot + (h/2) * C1
t_1 = t + (h/2) * D1
Theta_1 = Theta + (h/2) * E1
r_1 = r + (h/2) * F1
S_1 = S + (h/2)
A2 = 2*(Q**2-M*r_1) * r_dot_1*t_dot_1 / (r_1**2-2*M*r_1+Q**2)
B2 = -2*Theta_dot_1*r_dot_1 / r_1
C2 = (r_1-2*M*r_1+Q**2)*(Q**2-M*r_1)*t_dot_1**2 / r_1**5 + (M*r_1-Q**2)*r_dot_1**2 / (r_1**2-2*M*r_1+Q**2)
D2 = E*r_1**2/(r_1**2-2*M*r_1+Q**2)
E2 = L/r_1**2
F2 = np.sqrt(-(1-r_s/r_1+Q**2/r_1**2) * (1-(1-r_s/r_1+Q**2/r_1**2)*D2**2 + r_1**2*E2**2))
t_dot_2 = t_dot + (h/2) * A2
Theta_dot_2 = Theta_dot + (h/2) * B2
r_dot_2 = r_dot + (h/2) * C2
t_2 = t + (h/2) * D2
Theta_2 = Theta + (h/2) * E2
r_2 = r + (h/2) * F2
S_2 = S + (h/2)
A3 = 2*(Q**2-M*r_2) * r_dot_2*t_dot_2 / (r_2**2-2*M*r_2+Q**2)
B3 = -2*Theta_dot_2*r_dot_2 / r_2
C3 = (r_2-2*M*r_2+Q**2)*(Q**2-M*r_2)*t_dot_2**2 / r_2**5 + (M*r_2-Q**2)*r_dot_2**2 / (r_2**2-2*M*r_2+Q**2)
D3 = E*r_2**2/(r_2**2-2*M*r_2+Q**2)
E3 = L/r_2**2
F3 = np.sqrt(-(1-r_s/r_2+Q**2/r_2**2) * (1-(1-r_s/r_2+Q**2/r_2**2)*D3**2 + r_2**2*E3**2))
t_dot_3 = t_dot + (h/2) * A3
Theta_dot_3 = Theta_dot + (h/2) * B3
r_dot_3 = r_dot + (h/2) * C3
t_3 = t + (h/2) * D3
Theta_3 = Theta + (h/2) * E3
r_3 = r + (h/2) * F3
S_3 = S + (h/2)
A4 = 2*(Q**2-M*r_3) * r_dot_3*t_dot_3 / (r_3**2-2*M*r_3+Q**2)
B4 = -2*Theta_dot_3*r_dot_3 / r_3
C4 = (r_3-2*M*r_3+Q**2)*(Q**2-M*r_3)*t_dot_3**2 / r_3**5 + (M*r_3-Q**2)*r_dot_3**2 / (r_3**2-2*M*r_3+Q**2)
D4 = E*r_3**2/(r_3**2-2*M*r_3+Q**2)
E4 = L/r_3**2
F4 = np.sqrt(-(1-r_s/r_3+Q**2/r_3**2) * (1-(1-r_s/r_3+Q**2/r_3**2)*D3**2 + r_3**2*E3**2)) #defines r dot for first RK4 step
t_dot = t_dot + (h/6.0) * (A1+(2.*A2)+(2.0*A3) + A4)
Theta_dot = Theta_dot + (h/6.0) * (B1+(2.*B2)+(2.0*B3) + B4)
r_dot = r_dot + (h/6.0) * (C1+(2.*C2)+(2.0*C3) + C4)
t = t + (h/6.0) * (D1+(2.*D2)+(2.0*D3) + D4)
Theta = Theta + (h/6.0) * (E1+(2.*E2)+(2.0*E3) + E4)
r = r + (h/6.0) * (F1+(2.*F2)+(2.0*F3) + F4)
S = S+h
print(L,r**2*Theta_dot)
plt.axes(projection = 'polar')
plt.polar(Theta, r, 'g.')
</code></pre>
|
<p>Take the three second order differential equations you have provided. These are the geodesic equations parametrized by proper time. Your original metric, however, is rotationally invariant (i.e. SO(3) invariant), so it has a set of simple conservation laws, plus the conservation of the metric itself (i.e. conservation of proper time). This means that the second order differential equations for <code>t</code> and <code>theta</code> can be integrated once, leading to a set of two first order differential equations for <code>t</code> and <code>theta</code> and one second order differential equation for <code>r</code>:</p>
<pre><code>dt/ds = c_0 * r**2 / (r**2 - 2*M*r + Q**2)
dtheta/ds = c_1 / r**2
d**2r/ds**2 = ( (r**2-2*M*r + Q**2)*(Q**2 - M*r)/r**5) * (dt/ds)**2
+ ( (M*r - Q**2) /(r**2 - 2*M*r + Q**2) ) * (dr/ds)**2
</code></pre>
<p>You can go different ways here. One of them is to derive a first order differential equation of motion for <code>r</code> by substituting the first two equations above into the condition that the metric evaluated on the trajectory is equal to 1. But you can also proceed directly and plug the right-hand side of the equation for <code>dt/ds</code> into the third equation for <code>r</code>, expressing the system as</p>
<pre><code>dt/ds = c_0 * r**2 / (r**2 - 2*M*r + Q**2)
dtheta/ds = c_1 / r**2
d**2r/ds**2 = ( c_0**2*(Q**2 - M*r)/(r*(r**2-2*M*r + Q**2)))
+ ( (M*r - Q**2) /(r**2 - 2*M*r + Q**2) ) * (dr/ds)**2
</code></pre>
<p>and, to avoid square roots and the complications they bring (square roots are also expensive to compute, while rational functions are simple, fast algebraic operations), define the equivalent system of four first-order differential equations</p>
<pre><code>dt/ds = c_0 * r**2 / (r**2 - 2*M*r + Q**2)
dtheta/ds = c_1 / r**2
dr/ds = u
du/ds = ( c_0**2*(Q**2 - M*r)/(r*(r**2-2*M*r + Q**2)))
+ ( (M*r - Q**2) /(r**2 - 2*M*r + Q**2) ) * u**2
</code></pre>
<p>With the help of the initial conditions for <code>t, theta, r</code> and their proper-time derivatives <code>dt/ds, dtheta/ds, dr/ds</code> you can compute the constants <code>c_0</code> and <code>c_1</code> used in the first and second equations and then obtain the initial condition for <code>u = dr/ds</code>.</p>
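<p>As a sanity check, the four-equation system above can be handed to a standard solver. The sketch below uses SciPy's <code>solve_ivp</code>; the values of <code>M</code>, <code>Q</code>, the conserved constants <code>c0</code>, <code>c1</code>, and the initial state are purely illustrative assumptions, not the ones from the question:</p>

```python
# Integrating dt/ds, dtheta/ds, dr/ds, du/ds from the system above.
# M, Q, c0, c1 and the initial state are assumed illustrative values.
import numpy as np
from scipy.integrate import solve_ivp

M, Q = 1.0, 0.0        # black-hole mass and charge (assumed)
c0, c1 = 1.0, 3.0      # conserved energy/angular-momentum constants (assumed)

def rhs(s, state):
    t, theta, r, u = state
    D = r**2 - 2.0*M*r + Q**2
    dt = c0 * r**2 / D
    dtheta = c1 / r**2
    du = c0**2 * (Q**2 - M*r) / (r * D) + (M*r - Q**2) / D * u**2
    return [dt, dtheta, u, du]

# start at r = 3M with zero radial velocity
sol = solve_ivp(rhs, [0.0, 1.0], [0.0, 0.0, 3.0, 0.0], max_step=0.01)
```

<p>Plotting <code>sol.y[2]</code> against <code>sol.y[1]</code> in polar coordinates then gives the orbit; conservation of energy and angular momentum is built in through <code>c0</code> and <code>c1</code>.</p>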
| 285
|
solve differential equations
|
Solving nonlinear differential equations in python
|
https://stackoverflow.com/questions/75479380/solving-nonlinear-differential-equations-in-python
|
<p>I am trying to solve the differential equation 4(y')^3-y'=1/x^2 in python. I am familiar with the use of odeint to solve coupled ODEs and linear ODEs, but can't find much guidance on nonlinear ODEs such as the one I'm grappling with.</p>
<p>I attempted to use odeint and scipy but can't seem to implement it properly.</p>
<p>Any thoughts are much appreciated</p>
<p>NB: y is a function of x</p>
|
<p>The problem is that you get 3 valid solutions for the direction at each point of the phase space (including double roots). But each selection criterion breaks down at double roots.</p>
<p>One way is to use a DAE solver (which does not exist in scipy) on the system <code>y'=v, 4v^3-v=x^-2</code></p>
<p>The second way is to take the derivative of the equation to get an explicit second-order ODE <code>y''=-2/x^3/(12*y'^2-1)</code>.</p>
<p>Both methods require the selection of the initial direction from the 3 roots of the cubic at the initial point.</p>
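<p>A short sketch of the second approach (my own illustration, with an assumed initial point <code>x0 = 1</code>, <code>y0 = 0</code>): pick the initial slope from the real roots of the cubic <code>4v^3 - v - 1/x0^2 = 0</code>, then integrate the explicit second-order ODE as a first-order system:</p>

```python
# Second approach: differentiate the equation once to get
#   y'' = -2/x^3 / (12*y'^2 - 1),
# with the initial slope chosen among the real roots of
#   4*v^3 - v = 1/x0^2.  The point (x0, y0) is an assumed value.
import numpy as np
from scipy.integrate import solve_ivp

x0, y0 = 1.0, 0.0
roots = np.roots([4.0, 0.0, -1.0, -1.0 / x0**2])
v0 = roots[np.argmin(np.abs(roots.imag))].real   # the (here unique) real root

def rhs(x, state):
    y, v = state
    return [v, -2.0 / x**3 / (12.0 * v**2 - 1.0)]

sol = solve_ivp(rhs, [x0, 5.0], [y0, v0], max_step=0.01)
```

<p>Along the computed branch one can check that the residual <code>4*y'^3 - y' - 1/x^2</code> stays near zero, which is exactly the original first-order equation.</p>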
| 286
|
solve differential equations
|
Using Mathematica to solve this differential equation
|
https://stackoverflow.com/questions/54920753/using-mathematica-to-solve-this-differential-equation
|
<p>I want to describe the kinetics of a chemical reaction and my idea of a reaction model results (simplified) in a differential equation of the following form:</p>
<pre><code>y1'(t)=y1(t)+y2(t)
</code></pre>
<p>where y1 is the from an experiment measured concentration of a reactant and y2 the measured concentration of a product over time. The differential equation has the following boundary conditions:</p>
<pre><code>y1(0) = A
y2(0) = 0
</code></pre>
<p>now I couldn't solve the equation on my own, therefore, I tried to use Mathematica, but I always get an error when applying the second boundary condition:</p>
<pre><code>In: DSolve[{y'[t] == k*y[t] + k2*y2[t], y[0] == A, y2[0] == 0}, y[t], t]
Out: DSolve::deqx: Supplied equations are not differential equations of the given functions.
</code></pre>
<p>Does that mean that this differential equation has no analytical solution? Anyone has an idea?</p>
<p>Thanks in advance! </p>
<p>Best regards
Manuel</p>
|
<p>You only have one equation for two unknown functions, so it cannot be solved as a simultaneous system.</p>
<p>For example, this works:-</p>
<pre><code>vars = {x[t], y[t]};
eqns = {x'[t] == y[t], y'[t] == x[t]};
inits = {x[0] == 1, y[0] == 0};
DSolve[eqns, vars, t] // Simplify
sol = vars /. DSolve[Join[eqns, inits], vars, t][[1]]
Plot[sol, {t, 0, 2}]
</code></pre>
<p>But your simultaneous equation is underdetermined.</p>
<pre><code>vars = {y[t], y2[t]};
eqns = {y'[t] == k*y[t] + k2*y2[t]};
inits = {y[0] == A, y2[0] == 0};
DSolve[eqns, vars, t] // Simplify
</code></pre>
<blockquote>
<p>DSolve: There are more dependent variables than equations, so the system is underdetermined.</p>
</blockquote>
| 287
|
solve differential equations
|
Solve system of complex differential equations in Octave
|
https://stackoverflow.com/questions/13480667/solve-system-of-complex-differential-equations-in-octave
|
<p>Well, I wanna solve a system of complex ODE in the form:</p>
<p>$ i\hbar \dfrac{\partial \rho}{\partial t} = \left[ H, \rho \right] $</p>
<p><a href="http://mathbin.heroku.com/rq65idQ" rel="nofollow" title="Paste Bin of Equation">http://mathbin.heroku.com/rq65idQ</a></p>
<p>where $\rho$ and $H$ are $n x n$ matrices. </p>
<p>I tried to define a general function using:</p>
<pre><code>function xdot = f(x,t)
i*xdot(1)=
i*xdot(2)=
endfunction
x0=[0;0];
t=linspace(0,20,200);
y=lsode("f",x0,t).
</code></pre>
<p>but I got a error message concerning the <em>i</em> . My question is, then: how to solve a complex differential equation in Octave?</p>
|
<p>I think the issue is because i is on the LHS of the equal sign rather than the RHS.</p>
<p>Have you tried?</p>
<pre><code>xdot(1) = <whatever your RHS expression is> / i;
xdot(2) = <whatever your RHS expression is> / i;
</code></pre>
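<p>(Not Octave, but as an illustrative cross-check of the same move — keeping the <code>1/i</code> factor on the right-hand side — here is a toy Python/SciPy version; <code>solve_ivp</code> integrates complex-valued systems directly. The equation <code>i y' = y</code> has the known solution <code>exp(-i t)</code>:)</p>

```python
# Toy cross-check: rewrite i*y' = y as y' = y / i, i.e. move the
# imaginary unit to the right-hand side, exactly as suggested above.
# The exact solution is y(t) = exp(-i*t).
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: y / 1j, [0.0, 1.0], [1.0 + 0.0j],
                rtol=1e-8, atol=1e-8)
```

<p>The computed endpoint agrees with <code>exp(-i)</code>, confirming that dividing the right-hand side by <code>i</code> is all that is needed.</p>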
| 288
|
solve differential equations
|
how to use solve_ivp solve Partial Differential Equations with spectral method?
|
https://stackoverflow.com/questions/60974226/how-to-use-solve-ivp-solve-partial-differential-equations-with-spectral-method
|
<p>I want to use the spectral method to solve partial differential equations. The equations like that, <a href="https://i.sstatic.net/vDAgk.png" rel="nofollow noreferrer">formula</a>,the initial condition is u(t=0,x)=(a^2)*sech(x),u'_t (t=0)=0.</p>
<p>To solve it, I use the python with the spectral method. Following is the code,</p>
<pre><code>import numpy as np
from scipy.integrate import solve_ivp
from scipy.fftpack import diff as psdiff
#RHS of equations
def f(t,u):
uxx= psdiff(u[N:],period=L,order=2)
du1dt=u[:N]
du2dt =a**2*uxx
dudt=np.append(du1dt,du2dt)
return dudt
a=1
amin=-40;bmax=40
L=bmax-amin;N=256
deltax=L/N
x=np.arange(amin,bmax,deltax)
u01 = 2*np.cosh(x)**(-1)
u02=np.zeros(N)
# y0
inital=np.append(u01,u02)
sola1 = solve_ivp(f, t_span=[0,40],y0=inital,args=(a,))
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x,sola1.y[:N,5])
plt.show()
</code></pre>
<p>Following is my expected result,</p>
<p><a href="https://i.sstatic.net/XHwm4.png" rel="nofollow noreferrer">expected result</a>.</p>
<p>My python code runs, but I can't get the expected result and can't find the problem. Following is the result from my python code:
<a href="https://i.sstatic.net/YodoR.png" rel="nofollow noreferrer">my result</a></p>
<p>-----------------------------Update----------------------------------------------
I also tried a new code, but still can't solve it:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from scipy.fftpack import diff as psdiff
from itertools import chain
def lambdifide_odes(y,t,a):
# uxx =- (1j)**2*k**2*u[:N]
u1=y[::2]
u2=y[1::2]
dudt=np.empty_like(y)
du1dt=dudt[::2]
du2dt=dudt[1::2]
du1dt=u2
uxx=psdiff(u1,order=2,period=L)
du2dt=a**2*uxx
return dudt
a=1
amin=-40;bmax=40
L=bmax-amin;N=256
deltax=L/N
x=np.arange(amin,bmax,deltax)
u01 = 2*np.cosh(x)**(-1)
u02=np.zeros(N)
initial=np.array(list(chain.from_iterable(zip(u01,u02))))
t0=np.linspace(0,40,100)
sola1 = odeint(lambdifide_odes,y0=initial,t=t0,args=(a,))
fig, ax = plt.subplots()
ax.plot(x,sola1[20,::2])
plt.show()
</code></pre>
|
<p>You have a slight problem with the design of your state vector and how it is used in the ODE function. The overall intent is that <code>u[:N]</code> is the wave function and <code>u[N:]</code> its time derivative. Now you want the second space derivative of the wave function, thus you need to use</p>
<pre><code>uxx= psdiff(u[:N],period=L,order=2)
</code></pre>
<p>at the moment you use the time derivative, making this a mixed third derivative that does not occur in the equation.</p>
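<p>With that one-line change the right-hand side reads, in full (a sketch using the question's own layout and parameters, where <code>u[:N]</code> is the wave profile and <code>u[N:]</code> its time derivative):</p>

```python
# Corrected RHS for the wave-type equation: differentiate the profile
# u[:N] in space and return [time derivative of profile, a^2 * u_xx].
import numpy as np
from scipy.fftpack import diff as psdiff

L, N, a = 80.0, 256, 1.0   # domain length, grid size, coefficient (from the question)

def f(t, u):
    uxx = psdiff(u[:N], period=L, order=2)   # second space derivative of the profile
    return np.concatenate([u[N:], a**2 * uxx])
```

<p>A quick check: for the profile <code>sin(kx)</code> at rest, the returned acceleration is <code>-a^2 k^2 sin(kx)</code>, as expected.</p>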
| 289
|
solve differential equations
|
How to solve and plot differential equations in R
|
https://stackoverflow.com/questions/25001337/how-to-solve-and-plot-differential-equations-in-r
|
<p>Taken from Sal Khan's lecture <a href="https://www.youtube.com/watch?v=oiDvNs15tkE" rel="nofollow">https://www.youtube.com/watch?v=oiDvNs15tkE</a> of the Khan academy, if I know that dN/dt=rN(1-(N/K)) (the logistic differential equation)</p>
<p>How can I solve for N and plot the N=f(t) with R?</p>
<p>Thanks</p>
|
<p>This logistic equation has an analytical solution (see for example <a href="https://www.youtube.com/watch?v=j_Taf2Tgggo" rel="noreferrer">here</a>), so you can plot it directly. Another option is to solve it numerically using one of the available solvers (see <a href="http://cran.r-project.org/web/views/DifferentialEquations.html" rel="noreferrer">here</a>)</p>
<pre><code>## Using the `deSolve` package
library(deSolve)
## Time
t <- seq(0, 100, 1)
## Initial population
N0 <- 10
## Parameter values
params <- list(r=0.1, K=1000)
## The logistic equation
fn <- function(t, N, params) with(params, list(r * N * (1 - N / K)))
## Solving and plotting the solution numerically
out <- ode(N0, t, fn, params)
plot(out, lwd=2, main="Logistic equation\nr=0.1, K=1000, N0=10")
## Plotting the analytical solution
with(params, lines(t, K * N0 * exp(r * t) / (K + N0 * (exp(r * t) - 1)), col=2, lwd=2))
</code></pre>
<p><img src="https://i.sstatic.net/jtIyE.png" alt="enter image description here"></p>
| 290
|
solve differential equations
|
Can we prioritise SymPy over SciPy while solving well posed differential equations?
|
https://stackoverflow.com/questions/67430010/can-we-prioritise-sympy-over-scipy-while-solving-well-posed-differential-equatio
|
<p>Till now, I haven't faced any issue while solving well-posed differential equations problems using sympy only in python such as</p>
<ol>
<li>Solving a second order linear differential equation:</li>
</ol>
<pre><code>import matplotlib.pyplot as plt
from sympy import dsolve, symbols, Function, Eq
t= symbols('t')
x= symbols('x',cls=Function)
deq =Eq(x(t).diff(t,t)+2*t**2*x(t).diff(t)+x(t),0)
sol =dsolve(deq, x(t))
print(sol)
</code></pre>
<ol start="2">
<li>Solving an initial-valued problem:</li>
</ol>
<pre><code>from sympy import dsolve, symbols, Function, Eq, pprint
t = symbols('t')
x = symbols('x', cls = Function)
deq = Eq(x(t).diff(t)+t*x(t), t**3)
sol = dsolve(deq, n=8, ics={x(0):1})
pprint(sol)
</code></pre>
<ol start="3">
<li>Solving a system of differential equations:</li>
</ol>
<pre><code>from sympy import dsolve, Eq, Function, symbols, sin
t= symbols('t')
x= symbols('x',cls=Function)
y = symbols('y',cls=Function)
deq = (Eq(x(t).diff(t),x(t)*y(t)*sin(t)),Eq(y(t).diff(t),y(t)*2*sin(t)))
sol = dsolve(deq, [x(t),y(t)])
print(sol)
</code></pre>
<ol start="4">
<li>Solving a differential equation and plotting a particular integral curve:</li>
</ol>
<pre><code>from matplotlib import pyplot as plt
from sympy import dsolve, Eq, symbols, Function
import numpy as np
t = symbols('t')
x = symbols('x', cls=Function)
deqn4 = Eq(2*(t-1)*x(t).diff(t),3*t**2+4*t+2)
sol4 = dsolve(deqn4, x(t))
print(sol4)
t = np.arange(-2.2,2.2,.2)
x = np.arange(-2,4.2,.2)
T, X = np.meshgrid(t,x)
Z = X**2 -2*X -T**3 -2*T**2 -2*T
fig, ax = plt.subplots()
CS = ax.contour(T, X,Z, [3])
</code></pre>
<p>I know that scipy is more flexible than sympy, but the results of sympy look more appealing, going by my little experience of solving dynamical system problems using python.</p>
| 291
|
|
solve differential equations
|
Solve system of differential equation with embedded non diferential equations, using Octave/Matlab (see picture)
|
https://stackoverflow.com/questions/60235976/solve-system-of-differential-equation-with-embedded-non-diferential-equations-u
|
<p>I have <a href="https://i.sstatic.net/1txPv.png" rel="nofollow noreferrer">the following equation system (click to see picture)</a>
, and would like to solve for X(t), Y(t), Z(t), hopefully using Octave/Matlab, which I'm familiar with, but I would not mind solving it by any other means necessary. </p>
<p>Now, Fsolve is useful for regular equation systems, and Ode45, Lsode are useful for differential equations. But, what about this particular system? Note that the differential equation at the bottom contains not only Y, but, also, both X and Z, and those are dependent on the two non-differential equations above. </p>
<p>To be honest, I'm not quite sure how to approach a basic code to solve this system, and after spending some time thinking, I decided to ask for help. I really appreciate any guidance to solve this problem somewhat efficiently, if it is even possible. Almost any reply would be of some use to me right now.</p>
|
<p>If you know <code>y</code>, you can solve for <code>x</code>, and this even unconditionally, as the second equation is monotonic in <code>x</code></p>
<pre><code>x = fsolve(@(x) y^2-1413.7*x-1095.2*cos(x)+2169, 0)
</code></pre>
<p>Then once you know <code>x</code>, you can solve for <code>z</code> using the known inverse cosine</p>
<pre><code>z = acos(0.20978-cos(x))
</code></pre>
<p>This may actually fail to give a result if <code>cos(x)</code> comes close to <code>-1</code>. One can artificially cut out that error, introducing a possibly wrong solution</p>
<pre><code>z = acos(min(1,0.20978-cos(x)))
</code></pre>
<p>For simplicity, assemble these operations in a helper function</p>
<pre><code>function [x,z] = solve_xz(y)
x = fsolve(@(x) y^2-1413.7*x-1095.2*cos(x)+2169, 0);
z = acos(min(1,0.20978-cos(x)));
end
</code></pre>
<p>Now use that to get the ODE for <code>y</code></p>
<pre><code>function dy = ode_y(t,y)
[x,z] = solve_xz(y(1));
dy = [ y(2); y(3); 6666.6667*(z-x)-333.3333*y(1)-33.3333*y(2)-5*y(3) ];
end
</code></pre>
<p>and apply the ODE solver of your choice. It is quite likely that the system is stiff, so <code>ode45</code> might not be the best solver.</p>
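<p>The same pattern carries over to Python almost verbatim. The sketch below (my own illustration, with an assumed root bracket and an assumed short time span) uses SciPy's stiff <code>BDF</code> method, in line with the stiffness remark above:</p>

```python
# solve_xz + ode_y pattern transcribed to Python with a stiff solver.
# The bracket [0, 10] for the root and the time span [0, 0.1] are
# assumptions for illustration only.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def solve_xz(y):
    x = brentq(lambda x: y**2 - 1413.7*x - 1095.2*np.cos(x) + 2169, 0.0, 10.0)
    z = np.arccos(min(1.0, 0.20978 - np.cos(x)))
    return x, z

def ode_y(t, y):
    x, z = solve_xz(y[0])
    return [y[1], y[2],
            6666.6667*(z - x) - 333.3333*y[0] - 33.3333*y[1] - 5.0*y[2]]

sol = solve_ivp(ode_y, [0.0, 0.1], [0.0, 0.0, 0.0], method='BDF')
```

<p>Each right-hand-side evaluation performs one scalar root-find, so a solver with few function evaluations per step is preferable here.</p>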
| 292
|
solve differential equations
|
Solving non-linear coupled differential equations in python
|
https://stackoverflow.com/questions/51313086/solving-non-linear-coupled-differential-equations-in-python
|
<p>I am working on simulation of a system that contains coupled differential equations. My main aim is to solve the mass balance in steady condition and feed the solution of steady state as initial guess for the dynamic simulation.
There are basically three state variables Ss,Xs and Xbh. The rate equations look like this:</p>
<blockquote>
<p>r1=µH(Ss/(Ks+Ss))(So/(Koh+So))Xbh+Kh(
(Xs⁄Xbh)/(Xs⁄Xbh+Kx))(So/(Koh+So))Xbh</p>
<p>r2=(1-fp)bH*Xbh-Kh( (Xs⁄Xbh)/(Xs⁄Xbh+Kx))(So/(Koh+So))Xbh </p>
<p>r3=µH(Ss/(Ks+Ss))(So/(Koh+So))Xbh-bH*Xbh</p>
</blockquote>
<p>And the main differential equations derived from mole balance for CSTR are:</p>
<blockquote>
<p>dSs/dt = Q(Ss_in-Ss)+r1*V</p>
<p>dXs/dt= Q(Xs_in-Xs)+r2*V</p>
<p>dXbh/dt= Q(Xbh_in-Xbh)+r3*V</p>
</blockquote>
<p>Here is my code till now:</p>
<pre><code>import numpy as np
from scipy.optimize import fsolve
parameter=dict()
parameter['u_h']=6.0
parameter['k_oh']=0.20
parameter['k_s']=20.0
parameter['k_h']=3.0
parameter['k_x']=0.03
parameter['Y_h']=0.67
parameter['f_p']=0.08
parameter['b_h']=0.62
Bulk_DO=2.0 #mg/L
#influent components:
infcomp=[56.53,182.9,16.625] #mgCOD/l
Q=684000 #L/hr
V=1040000 #l
def steady(z,*args):
Ss=z[0]
Xs=z[1]
Xbh=z[2]
def monod(My_S,My_K):
return My_S/(My_S+My_K)
#Conversion rates
#Conversion of Ss
r1=((-1/parameter['Y_h'])*parameter['u_h']*monod(Ss,parameter['k_s'])\
+parameter['k_h']*monod(Xs/Xbh,parameter['k_x'])*monod(Bulk_DO,parameter['k_oh']))\
*Xbh*monod(Bulk_DO,parameter['k_oh'])
#Conversion of Xs
r2=((1-parameter['f_p'])*parameter['b_h']-parameter['k_h']*monod(Xs/Xbh,parameter['k_x']))*Xbh
#Conversion of Xbh
r3=(parameter['u_h']*monod(Ss,parameter['k_s'])*monod(Bulk_DO,parameter['k_oh'])-parameter['b_h'])*Xbh
f=np.zeros(3)
f[0]=Q*(infcomp[0]-Ss)+r1*V
f[1]=Q*(infcomp[1]-Xs)+r2*V
f[2]=Q*(infcomp[2]-Xbh)+r3*V
return f
initial_guess=(0.1,0.1,0.1)
soln=fsolve(steady,initial_guess,args=parameter)
print (soln)
</code></pre>
<p>How can I plot the steady condition like this?
<a href="https://i.sstatic.net/e5or9.jpg" rel="nofollow noreferrer">steady state plot</a>
The solution is also not what I want, since the equations imply a reduction in Ss and Xs and an increase in Xbh over time. Also, one solution has a negative value, which is practically impossible.
Any suggestions would be highly appreciated. Thanks in advance!!</p>
|
<p>This is a solution to getting negative values for your solution: instead of using fsolve, use least_squares, which allows you to set bounds to the possible values.</p>
<p>In the top, import:</p>
<pre><code>from scipy.optimize import least_squares
</code></pre>
<p>And replace the fsolve statement with:</p>
<pre><code>soln = least_squares(steady, initial_guess, bounds=[(0,0,0),(np.inf,np.inf,np.inf)], args=parameter)
</code></pre>
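<p>As a self-contained illustration of the bounds mechanism (a toy two-variable system, not the question's mass balance):</p>

```python
# least_squares finds a root of a small nonlinear system while keeping
# both unknowns non-negative; the exact root of this toy system is (1, 1).
import numpy as np
from scipy.optimize import least_squares

def residuals(z):
    # z[0]^2 + z[1] = 2  and  z[0] = z[1]
    return [z[0]**2 + z[1] - 2.0, z[0] - z[1]]

sol = least_squares(residuals, x0=[0.5, 0.5],
                    bounds=([0.0, 0.0], [np.inf, np.inf]))
```

<p>The lower bound of zero on every component is what rules out the physically impossible negative concentrations mentioned in the question.</p>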
| 293
|
solve differential equations
|
How to numerically solve this system of arbitrary number of differential equations?
|
https://stackoverflow.com/questions/63354720/how-to-numerically-solve-this-system-of-arbitrary-number-of-differential-equatio
|
<p>How can I solve a system of k differential equations with derivatives appearing in every equation? I am trying to use Scipy's solve_ivp.</p>
<p>All the equations are of the following form:</p>
<p><a href="https://i.sstatic.net/3Gqqn.png" rel="nofollow noreferrer">equations</a></p>
<p>How can this system of equations be numerically solved using any solver? Using solve_ivp, it seems you need to write every equation independently of the others, which seems impossible in this case when we have more than 2 equations.</p>
|
<p>If you set <code>C[i]=B[i,i]</code> then you can transform the equations to the linear system <code>B*z'=A</code>. This can be solved as</p>
<pre><code>zdot = numpy.linalg.solve(B,A)
</code></pre>
<p>so that the derivative is this constant solution of a constant linear system, and the resulting solution for <code>z</code> is linear, <code>z(t)=z(0)+zdot*t</code>.</p>
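<p>A tiny numeric sketch with made-up 2×2 data, just to show the shape of the computation:</p>

```python
# B*z' = A with constant A and B: the derivative zdot is one linear
# solve, and the trajectory is the straight line z(t) = z(0) + zdot*t.
# B, A and z(0) below are illustrative values only.
import numpy as np

B = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A = np.array([1.0, 2.0])

zdot = np.linalg.solve(B, A)
z0 = np.zeros(2)

def z(t):
    return z0 + zdot * t
```

<p>No time stepping is needed at all, since the derivative is constant.</p>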
| 294
|
solve differential equations
|
Mathematica solving differential equations
|
https://stackoverflow.com/questions/4020606/mathematica-solving-differential-equations
|
<p>I would like to numerically find the solution to </p>
<p><code>u_t - u_xx - u_yy = f</code> on the square y ∈ [0,1], x ∈ [0,1]</p>
<p>where f=1 if in the unit circle and f=0 otherwise. The boundary conditions are u=0 on all four edges.</p>
<p>I have been trying to use ND-solve for ages but I keep getting error messages. I'm new to Mathematica so I don't know how to define f before ND-Solve. </p>
<p>Thank you in advance.</p>
|
<p>The Mathematica help files are incredibly complete for stuff like this. I'm going to <a href="http://reference.wolfram.com/mathematica/ref/NDSolve.html" rel="nofollow">reference the online version</a>, but note that, in the help browser in Mathematica, you can <strong>interactively modify and evaluate the help examples</strong>. This was the only way I learned much about Mathematica.</p>
<p>One point before you start trying more things: it's very easy to accidentally define a symbol such as <em>u</em> or <em>f</em> by typing an assignment (<code>=</code>) when you intended a comparison (<code>==</code>). A surefire, but somewhat lazy, way of fixing potential lingering problems from this is to quit the current kernel and then evaluate everything again. Quitting the kernel clears all symbols you may have accidentally defined up to that point. (Starting up a new kernel is also why Mathematica churns a little bit the first time you do any operation, even simple addition.)</p>
<p>Continuing on with the question at hand... In the reference I linked above, a sample is given which solves the one-dimensional heat equation which is very similar to your example. Let's start with that and modify it:</p>
<pre><code>NDSolve[{D[u[t, x], t] == D[u[t, x], x, x], u[0, x] == Q0,
u[t, 0] == Sin[t], u[t, 5] == 0},
u, {t, 0, 10}, {x, 0, 5}]
</code></pre>
<p>Their equations are <em>∂u<sub>t</sub> = ∂u<sub>xx</sub></em> with boundary conditions of the initial energy being <em>Q0</em> at time 0 ∀x, the energy at <em>x = 0</em> being <em>sin(t)</em> and the energy at <em>x = 5</em> being 0 <em>∀t</em>. Hopefully you can see above how those map. Now, let's do the same for your terms:</p>
<pre><code>NDSolve[{D[u[t, x, y], t] - D[u[t, x, y], x, x] - D[u[t, x, y], y, y] == f[x, y],
         f[x, y] == If[x^2 + y^2 < 1, 1, 0],
         u[t, 0, y] == 0, u[t, 1, y] == 0,
         u[t, x, 0] == 0, u[t, x, 1] == 0},
        u, {x, 0, 1}, {y, 0, 1}]
</code></pre>
<p>I think that's about right. However, there's at least one problem here still: <em>t</em> is a free variable. You didn't provide any range or conditions to constrain it, so the system is underspecified.</p>
| 295
|
solve differential equations
|
How to solve a differential equation with time dependent parameters?
|
https://stackoverflow.com/questions/66604009/how-to-solve-a-differential-equation-with-time-dependent-parameters
|
<p>I have a differential equation with time varying function W_at(t). How can I solve it using sympy's <code>dsolve</code> with initial conditions.</p>
<pre><code>from sympy import *
init_printing(use_unicode=True)
theta_vr, theta_vd, theta_vo, t = symbols('theta_vr theta_vd theta_vo t')
W_at = Function('W_at')
S_vt = Function('S_vt')
</code></pre>
<p>My differential equations as follows:</p>
<pre><code>dSvt_dt = S_vt(t).diff(t)
CV = Eq(dSvt_dt, (1 + exp(-1 * theta_vr * theta_vd))/(1+ (exp((-1 * theta_vr * (W_at(t) - theta_vo + theta_vd)))) + ((exp( (theta_vr) * (W_at(t) - theta_vo) - 1))) ))
</code></pre>
<p>I have used the following function from sympy to solve but it involves time varying W_at(t).</p>
<pre><code>dsolve(CV, ics={S_vt(0):0})
</code></pre>
<p>How do I pass values of time varying function <code>W_at(t)</code> from csv file or spline curve? how do I get an array of values of values for S_vt at any given time interval.</p>
| 296
|
|
solve differential equations
|
Using scilab to solve and plot differential equations
|
https://stackoverflow.com/questions/29477235/using-scilab-to-solve-and-plot-differential-equations
|
<p>How can I solve a second order differential equation using the Scilab ode() function?
(For example: y'' + 3y' + 2y = f(x), y(0)=0, y'(0)=0.)
And then plot the resulting function y(x).</p>
<p>I want to use this to model the RLC-circuit signal with step-function input</p>
<p>Here is the code I tried</p>
<pre><code>function y=u(t)
y=(sign(t)+1)/2
endfunction
L=0.001
R=10
C=0.000001
function zdot=f(t,y)
zdot(1)= y(2);
zdot(2)=(u(t)-y(1)-L*y(2)/R)/(L*C);
endfunction
y0=[0,0];
t0=0;
t=0:0.00001:0.001;
out=ode(y0,t0,t,f);
clf();
plot(out);
</code></pre>
<p>Thank you a lot</p>
|
<p>You were nearly there, you only had problems with the shape of the vectors and how that affects the collection of the trajectory, that is, the construction of the return array of <code>ode</code>, as an array of vectors.</p>
<pre><code>function y=u(t)
y=(sign(t)+1)/2
endfunction
L=0.001
R=10
C=0.000001
function zdot=f(t,y)
zdot = [ y(2); (u(t)-y(1)-L*y(2)/R)/(L*C)];
endfunction
y0=[0;0];
t0=0;
t=0:0.00001:0.001;
out=ode(y0,t0,t,f);
clf();
subplot(211)
plot(t,out(1,:),"r.--");
subplot(212)
plot(t,out(2,:),"b-..");
</code></pre>
<p>Note that all vectors are forced to be column vectors. And that the plotting is by components, using the provided time scale as x axis.</p>
<p>Also take note that the two components, function and derivative, differ significantly in their magnitudes.</p>
<p><a href="https://i.sstatic.net/yw1Lt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yw1Lt.png" alt="enter image description here"></a></p>
| 297
|
solve differential equations
|
Matrix differential equation. Solve Ax'=Bx+b
|
https://stackoverflow.com/questions/50396666/matrix-differential-equation-solve-ax-bxb
|
<p>I have the matrix differential equation <code>Ax'=Bx+b</code>, where <code>A</code> and <code>B</code> are matrices of <code>N*N</code>, and <code>b</code> is a vector.</p>
<p>I want to solve it with python. Hope someone could help me.</p>
<p>Cheers!</p>
|
<p>If your matrix <code>A</code> is regular, the function to pass to <code>odeint</code> is </p>
<pre><code>def odefunc(x,t):
    return numpy.linalg.solve(A, B.dot(x)+b)
</code></pre>
<p>You can of course also compute the inverse of <code>A</code> and left-multiply the equation with it.</p>
<pre><code>B = numpy.linalg.solve(A, B)
b = numpy.linalg.solve(A, b)
odefunc = lambda x,t: B.dot(x)+b
</code></pre>
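<p>An end-to-end sketch with small illustrative matrices (diagonal here, so the exact solution is easy to check by hand):</p>

```python
# Solve A x' = B x + b with odeint; A, B, b are made-up diagonal data.
# Component 0 then obeys 2*x' = -x + 1, i.e. x(t) = 1 - exp(-t/2),
# and component 1 obeys x' = -2*x, i.e. x(t) = exp(-2*t).
import numpy as np
from scipy.integrate import odeint

A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[-1.0, 0.0], [0.0, -2.0]])
b = np.array([1.0, 0.0])

def odefunc(x, t):
    return np.linalg.solve(A, B.dot(x) + b)

t = np.linspace(0.0, 5.0, 51)
sol = odeint(odefunc, [0.0, 1.0], t)
```

<p>For a large, time-independent <code>A</code> it is cheaper to factor it once (e.g. with an LU decomposition) instead of calling <code>solve</code> on every evaluation.</p>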
| 298
|
solve differential equations
|
how to specify final value (rather than initial value) for solving differential equations
|
https://stackoverflow.com/questions/40580485/how-to-specify-final-value-rather-than-initial-value-for-solving-differential
|
<p>I would like to solve a differential equation in R (with <code>deSolve</code>?) for which I do not have the initial condition, but only the final condition of the state variable. How can this be done?</p>
<p>The typical code is: <code>ode(times, y, parameters, function ...)</code> where <code>y</code> is the initial condition and <code>function</code> defines the differential equation.</p>
|
<p>Are your equations <a href="https://en.wikipedia.org/wiki/Time_reversibility" rel="nofollow noreferrer">time reversible</a>, that is, can you change your differential equations so they run backward in time? Most typically this will just mean reversing the sign of the gradient. For example, for a simple exponential growth model with rate <code>r</code> (gradient of <code>x</code> = <code>r*x</code>) then flipping the sign makes the gradient <code>-r*x</code> and generates exponential decay rather than exponential growth.</p>
<p>If so, all you have to do is use your final condition(s) as your initial condition(s), change the signs of the gradients, and you're done.</p>
<p>As suggested by @LutzLehmann, there's an even easier answer: <code>ode</code> can handle negative time steps, so just enter your time vector as <code>(t_end, 0)</code>. Here's an example, using <code>f'(x) = r*x</code> (i.e. exponential growth). If <code>f(1) = 3</code>, <code>r=1</code>, and we want the value at <code>t=0</code>, analytically we would say:</p>
<pre><code>x(T) = x(0) * exp(r*T)
x(0) = x(T) * exp(-r*T)
= 3 * exp(-1*1)
= 1.103638
</code></pre>
<p>Now let's try it in R:</p>
<pre class="lang-r prettyprint-override"><code>library(deSolve)
g <- function(t, y, parms) { list(parms*y) }
res <- ode(3, times = c(1, 0), func = g, parms = 1)
print(res)
## time 1
## 1 1 3.000000
## 2 0 1.103639
</code></pre>
<hr />
<p>I initially misread your question as stating that you knew <em>both</em> the initial and final conditions. This type of problem is called a <a href="https://en.wikipedia.org/wiki/Boundary_value_problem" rel="nofollow noreferrer">boundary value problem</a> and requires a separate class of numerical algorithms from standard (more elementary) <a href="https://en.wikipedia.org/wiki/Initial_value_problem" rel="nofollow noreferrer">initial-value problems</a>.</p>
<pre><code>library(sos)
findFn("{boundary value problem}")
</code></pre>
<p>tells us that there are several R packages on CRAN (<code>bvpSolve</code> looks the most promising) for solving these kinds of problems.</p>
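<p>Outside R, the same class of problems is handled by, for example, SciPy's <code>solve_bvp</code>; a toy sketch (the equation and boundary values here are made up for illustration, with known exact solution <code>y = sin(x)</code>):</p>

```python
# Toy boundary value problem: y'' = -y with y(0) = 0 and y(pi/2) = 1.
# The exact solution is y = sin(x), so y(pi/4) should be about 0.7071.
import numpy as np
from scipy.integrate import solve_bvp

def fun(x, y):
    # rewrite y'' = -y as the first-order system (y0' = y1, y1' = -y0)
    return np.vstack([y[1], -y[0]])

def bc(ya, yb):
    # residuals of the two boundary conditions
    return np.array([ya[0], yb[0] - 1.0])

x = np.linspace(0.0, np.pi / 2, 20)
y_guess = np.zeros((2, x.size))   # flat initial guess
sol = solve_bvp(fun, bc, x, y_guess)
print(round(float(sol.sol(np.pi / 4)[0]), 3))  # ~0.707
```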
| 299
|
gradient descent implementation
|
Fast gradient-descent implementation in a C++ library?
|
https://stackoverflow.com/questions/11524657/fast-gradient-descent-implementation-in-a-c-library
|
<blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://stackoverflow.com/questions/11513926/fast-gradient-descent-implementation-in-a-c-library">Fast gradient-descent implementation in a C++ library?</a> </p>
</blockquote>
<p>I'm looking to run a gradient descent optimization to minimize the cost of an instantiation of variables. My program is very computationally expensive, so I'm looking for a popular library with a fast implementation of GD. What is the recommended library/reference?</p>
| 300
|
|
gradient descent implementation
|
Is my implementation of stochastic gradient descent correct?
|
https://stackoverflow.com/questions/21352101/is-my-implementation-of-stochastic-gradient-descent-correct
|
<p>I am trying to develop stochastic gradient descent, but I don't know if it is 100% correct.</p>
<ul>
<li>The cost generated by my stochastic gradient descent algorithm is sometimes very far from the one generated by FMINUC or Batch gradient descent.</li>
<li>while batch gradient descent cost converges when I set a learning rate alpha of 0.2, I am forced to set a learning rate alpha of 0.0001 for my stochastic implementation for it not to diverge. Is this normal?</li>
</ul>
<p>Here are some results I obtained with a training set of 10,000 elements and num_iter = 100 or 500</p>
<pre><code> FMINUC :
Iteration #100 | Cost: 5.147056e-001
 BATCH GRADIENT DESCENT 500 ITER
Iteration #500 - Cost = 5.535241e-001
STOCHASTIC GRADIENT DESCENT 100 ITER
Iteration #100 - Cost = 5.683117e-001 % First time I launched
Iteration #100 - Cost = 7.047196e-001 % Second time I launched
</code></pre>
<p><strong>Gradient descent implementation for logistic regression</strong></p>
<pre><code>J_history = zeros(num_iters, 1);
for iter = 1:num_iters
[J, gradJ] = lrCostFunction(theta, X, y, lambda);
theta = theta - alpha * gradJ;
J_history(iter) = J;
fprintf('Iteration #%d - Cost = %d... \r\n',iter, J_history(iter));
end
</code></pre>
<p><strong>Stochastic gradient descent implementation for logistic regression</strong></p>
<pre><code>% number of training examples
m = length(y);
% STEP1 : we shuffle the data
data = [y, X];
data = data(randperm(size(data,1)),:);
y = data(:,1);
X = data(:,2:end);
for iter = 1:num_iters
for i = 1:m
x = X(i,:); % Select one example
[J, gradJ] = lrCostFunction(theta, x, y(i,:), lambda);
theta = theta - alpha * gradJ;
end
J_history(iter) = J;
fprintf('Iteration #%d - Cost = %d... \r\n',iter, J);
end
</code></pre>
<p><em>For reference, here is the logistic regression cost function used in my example</em></p>
<pre><code>function [J, grad] = lrCostFunction(theta, X, y, lambda)
m = length(y); % number of training examples
% We calculate J
hypothesis = sigmoid(X*theta);
costFun = (-y.*log(hypothesis) - (1-y).*log(1-hypothesis));
J = (1/m) * sum(costFun) + (lambda/(2*m))*sum(theta(2:length(theta)).^2);
% We calculate grad using the partial derivatives
beta = (hypothesis-y);
grad = (1/m)*(X'*beta);
temp = theta;
temp(1) = 0; % because we don't add anything for j = 0
grad = grad + (lambda/m)*temp;
grad = grad(:);
end
</code></pre>
|
<p>This is pretty much ok. If you are worried about choosing the appropriate learning rate <code>alpha</code>, you should think about applying a <strong>line search</strong> method.</p>
<p>Line search is a method that chooses an optimal learning rate for gradient descent at every iteration, which is better than using a fixed learning rate throughout the whole optimization process. The optimal value of the learning rate <code>alpha</code> is the one that locally (starting from the current <code>theta</code>, in the direction of the negative gradient) minimizes the cost function.</p>
<p>At each iteration of gradient descent, start from <code>alpha = 0</code> and gradually increase <code>alpha</code> by a fixed step, e.g. <code>deltaAlpha = 0.01</code>. Recalculate the parameters <code>theta</code> and evaluate the cost function. Since the cost function is convex, as <code>alpha</code> increases (that is, as you move further in the direction of the negative gradient) the cost function will first decrease and then, at some point, start increasing. At that moment stop the line search and take the last <code>alpha</code> before the cost function started increasing, then update the parameters <code>theta</code> with that <code>alpha</code>. If the cost function never starts increasing, stop at <code>alpha = 1</code>.</p>
<p><strong>Note:</strong> For big regularization factors (<code>lambda = 100</code>, <code>lambda = 1000</code>) it is possible that <code>deltaAlpha</code> is too big and that gradient descent diverges. If that is the case, decrease <code>deltaAlpha</code> 10 times (<code>deltaAlpha = 0.001</code>, <code>deltaAlpha = 0.0001</code>) until you get to the appropriate <code>deltaAlpha</code> for which gradient descent converges.</p>
<p>Also, you should think about using some terminating condition other than the number of iterations, e.g. when difference between cost functions in two subsequent iterations becomes small enough (less than some <code>epsilon</code>).</p>
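<p>The line-search loop described above can be sketched as follows (Python; a toy one-parameter quadratic cost stands in for <code>lrCostFunction</code> so the example runs standalone):</p>

```python
# Sketch of the line search described above. A toy quadratic cost J(theta)
# with minimum at 0 stands in for lrCostFunction.
def cost(theta):
    return theta ** 2

def grad(theta):
    return 2.0 * theta

def line_search(theta, delta_alpha=0.01, max_alpha=1.0):
    best_alpha, best_cost = 0.0, cost(theta)
    alpha = delta_alpha
    while alpha <= max_alpha:
        trial = cost(theta - alpha * grad(theta))
        if trial > best_cost:          # cost started increasing: stop
            break
        best_alpha, best_cost = alpha, trial
        alpha += delta_alpha
    return best_alpha

theta = 3.0
for _ in range(5):
    theta -= line_search(theta) * grad(theta)
print(abs(theta) < 1e-9)  # True: converged to the minimum at 0
```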
| 301
|
gradient descent implementation
|
Stochastic gradient descent from gradient descent implementation in R
|
https://stackoverflow.com/questions/37485138/stochastic-gradient-descent-from-gradient-descent-implementation-in-r
|
<p>I have a working implementation of multivariable linear regression using gradient descent in R. I'd like to see if I can use what I have to run a stochastic gradient descent. I'm not sure if this is really inefficient or not. For example, for each value of α I want to perform 500 SGD iterations and be able to specify the number of randomly picked samples in each iteration. It would be nice to do this so I could see how the number of samples influences the results. I'm having trouble with the mini-batching, and I want to be able to easily plot the results.</p>
<p>This is what I have so far:</p>
<pre><code> # Read and process the datasets
# download the files from GitHub
download.file("https://raw.githubusercontent.com/dbouquin/IS_605/master/sgd_ex_data/ex3x.dat", "ex3x.dat", method="curl")
x <- read.table('ex3x.dat')
# we can standardize the x values using scale()
x <- scale(x)
download.file("https://raw.githubusercontent.com/dbouquin/IS_605/master/sgd_ex_data/ex3y.dat", "ex3y.dat", method="curl")
y <- read.table('ex3y.dat')
# combine the datasets
data3 <- cbind(x,y)
colnames(data3) <- c("area_sqft", "bedrooms","price")
str(data3)
head(data3)
################ Regular Gradient Descent
# http://www.r-bloggers.com/linear-regression-by-gradient-descent/
# vector populated with 1s for the intercept coefficient
x1 <- rep(1, length(data3$area_sqft))
# appends to dfs
# create x-matrix of independent variables
x <- as.matrix(cbind(x1,x))
# create y-matrix of dependent variables
y <- as.matrix(y)
L <- length(y)
# cost gradient function: independent variables and values of thetas
cost <- function(x,y,theta){
gradient <- (1/L)* (t(x) %*% ((x%*%t(theta)) - y))
return(t(gradient))
}
# GD simultaneous update algorithm
# https://www.coursera.org/learn/machine-learning/lecture/8SpIM/gradient-descent
GD <- function(x, alpha){
theta <- matrix(c(0,0,0), nrow=1)
for (i in 1:500) {
theta <- theta - alpha*cost(x,y,theta)
theta_r <- rbind(theta_r,theta)
}
return(theta_r)
}
# gradient descent α = (0.001, 0.01, 0.1, 1.0) - defined for 500 iterations
alphas <- c(0.001,0.01,0.1,1.0)
# Plot price, area in square feet, and the number of bedrooms
# create empty vector theta_r
theta_r<-c()
for(i in 1:length(alphas)) {
result <- GD(x, alphas[i])
# red = price
# blue = sq ft
# green = bedrooms
plot(result[,1],ylim=c(min(result),max(result)),col="#CC6666",ylab="Value",lwd=0.35,
xlab=paste("alpha=", alphas[i]),xaxt="n") #suppress auto x-axis title
lines(result[,2],type="b",col="#0072B2",lwd=0.35)
lines(result[,3],type="b",col="#66CC99",lwd=0.35)
}
</code></pre>
<p>Is it more practical to find a way to use <code>sgd()</code>? I can't seem to figure out how to have the level of control I'm looking for with the <code>sgd</code> package</p>
|
<p>Sticking with what you have now</p>
<pre><code>## all of this is the same
download.file("https://raw.githubusercontent.com/dbouquin/IS_605/master/sgd_ex_data/ex3x.dat", "ex3x.dat", method="curl")
x <- read.table('ex3x.dat')
x <- scale(x)
download.file("https://raw.githubusercontent.com/dbouquin/IS_605/master/sgd_ex_data/ex3y.dat", "ex3y.dat", method="curl")
y <- read.table('ex3y.dat')
data3 <- cbind(x,y)
colnames(data3) <- c("area_sqft", "bedrooms","price")
x1 <- rep(1, length(data3$area_sqft))
x <- as.matrix(cbind(x1,x))
y <- as.matrix(y)
L <- length(y)
cost <- function(x,y,theta){
gradient <- (1/L)* (t(x) %*% ((x%*%t(theta)) - y))
return(t(gradient))
}
</code></pre>
<p>I added <code>y</code> to your <code>GD</code> function and created a wrapper function, <code>myGoD</code>, to call yours but first subsetting the data</p>
<pre><code>GD <- function(x, y, alpha){
theta <- matrix(c(0,0,0), nrow=1)
theta_r <- NULL
for (i in 1:500) {
theta <- theta - alpha*cost(x,y,theta)
theta_r <- rbind(theta_r,theta)
}
return(theta_r)
}
myGoD <- function(x, y, alpha, n = nrow(x)) {
idx <- sample(nrow(x), n)
y <- y[idx, , drop = FALSE]
x <- x[idx, , drop = FALSE]
GD(x, y, alpha)
}
</code></pre>
<p>Check to make sure it works and try with different Ns</p>
<pre><code>all.equal(GD(x, y, 0.001), myGoD(x, y, 0.001))
# [1] TRUE
set.seed(1)
head(myGoD(x, y, 0.001, n = 20), 2)
# x1 V1 V2
# V1 147.5978 82.54083 29.26000
# V1 295.1282 165.00924 58.48424
set.seed(1)
head(myGoD(x, y, 0.001, n = 40), 2)
# x1 V1 V2
# V1 290.6041 95.30257 59.66994
# V1 580.9537 190.49142 119.23446
</code></pre>
<p>Here is how you can use it</p>
<pre><code>alphas <- c(0.001,0.01,0.1,1.0)
ns <- c(47, 40, 30, 20, 10)
par(mfrow = n2mfrow(length(alphas)))
for(i in 1:length(alphas)) {
# result <- myGoD(x, y, alphas[i]) ## original
result <- myGoD(x, y, alphas[i], ns[i])
# red = price
# blue = sq ft
# green = bedrooms
plot(result[,1],ylim=c(min(result),max(result)),col="#CC6666",ylab="Value",lwd=0.35,
xlab=paste("alpha=", alphas[i]),xaxt="n") #suppress auto x-axis title
lines(result[,2],type="b",col="#0072B2",lwd=0.35)
lines(result[,3],type="b",col="#66CC99",lwd=0.35)
}
</code></pre>
<p><a href="https://i.sstatic.net/MdjJS.png" rel="noreferrer"><img src="https://i.sstatic.net/MdjJS.png" alt="enter image description here"></a></p>
<p>You don't need the wrapper function--you can just change your <code>GD</code> slightly. It is always good practice to explicitly pass arguments to your functions rather than relying on scoping. Before you were assuming that <code>y</code> would be pulled from your global environment; here <code>y</code> must be given or you will get an error. This will avoid many headaches and mistakes down the road.</p>
<pre><code>GD <- function(x, y, alpha, n = nrow(x)){
idx <- sample(nrow(x), n)
y <- y[idx, , drop = FALSE]
x <- x[idx, , drop = FALSE]
theta <- matrix(c(0,0,0), nrow=1)
theta_r <- NULL
for (i in 1:500) {
theta <- theta - alpha*cost(x,y,theta)
theta_r <- rbind(theta_r,theta)
}
return(theta_r)
}
</code></pre>
| 302
|
gradient descent implementation
|
Gradient Descent implementation in Python
|
https://stackoverflow.com/questions/32792703/gradient-descent-implementation-in-python
|
<p>I got stucked at a point where I am implementing gradient descent in python.</p>
<p>The formula for gradient descent is:</p>
<pre><code>for iter in range(1, num_iters):
hypo_function = np.sum(np.dot(np.dot(theta.T, X)-y, X[:,iter]))
theta_0 = theta[0] - alpha * (1.0 / m) * hypo_function
theta_1 = theta[1] - alpha * (1.0 / m) * hypo_function
</code></pre>
<p>Got an error:</p>
<blockquote>
<p>---> hypo_function = np.sum(np.dot(np.dot(theta.T, X)-y, X[:,iter]))
ValueError: shapes (1,97) and (2,) not aligned: 97 (dim 1) != 2 (dim 0)</p>
</blockquote>
<p>PS: Here my X is (2L, 97L), y is (97L,) theta is (2L,).</p>
|
<p><strong>np.dot(a,b)</strong> takes the inner product of a and b if a and b are vectors (1-D arrays). If a and b are 2-D arrays, <strong>np.dot(a,b)</strong> performs matrix multiplication.</p>
<p>It will throw a ValueError if there is a mismatch between the size of the last dimension of a and the size of the second-to-last dimension of b; they have to match.</p>
<p>In your case, one of your dot products is trying to multiply a (1, 97) array by a (2,) array, so the inner dimensions do not match. You need to fix the arrangement of your input data so that the dot product/matrix multiplication is computable.</p>
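<p>A small demonstration with the shapes from the question (the arrays are placeholders; this shows one consistent way to arrange the products, not necessarily the layout the original code intended):</p>

```python
# The shapes from the question: X is (2, 97), y is (97,), theta is (2,).
# One consistent arrangement of the products (illustrative only):
import numpy as np

X = np.ones((2, 97))   # 2 features x 97 samples
y = np.ones(97)
theta = np.ones(2)

residual = theta.dot(X) - y   # (2,) . (2, 97) -> (97,): one error per sample
grad = X.dot(residual)        # (2, 97) . (97,) -> (2,): one entry per feature

print(residual.shape, grad.shape)  # (97,) (2,)
```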
| 303
|
gradient descent implementation
|
Gradient descent implementation
|
https://stackoverflow.com/questions/9163801/gradient-descent-implementation
|
<p>I've implemented both the batch and stochastic gradient descent. I'm experiencing some issues though. This is the stochastic rule:</p>
<pre><code>1 to m {
theta(j):=theta(j)-step*derivative (for all j)
}
</code></pre>
<p>The issue I have is that even though the cost function is becoming smaller and smaller, the testing says it's not good. If I change the step a bit and change the number of iterations, the cost function is a bit bigger in value but the results are ok. Is this an overfitting "symptom"? How do I know which one is the right one? :) </p>
<p>As I said, even though the cost function is more minimized the testing says it's not good. </p>
|
<p>Gradient descent is a local search method for minimizing a function. When it reaches a local minimum in the parameter space, it won't be able to go any further. This makes gradient descent (and other local methods) prone to getting stuck in local minima, rather than reaching the global minimum. The local minima may or may not be good solutions for what you're trying to achieve. What to expect will depend on the function that you're trying to minimize. </p>
<p>In particular, high-dimensional NP-complete problems can be tricky. They often have exponentially many local optima, with many of them nearly as good as the global optimum in terms of cost, but with parameter values orthogonal to those for the global optimum. These are <em>hard</em> problems: you don't generally expect to be able to find the global optimum, instead just looking for a local minimum that is good enough. These are also <em>relevant</em> problems: many interesting problems have just these properties. </p>
<p>I'd suggest first testing your gradient descent implementation with an easy problem. You might try finding the minimum in a polynomial. Since it's a one-parameter problem, you can plot the progress of the parameter values along the curve of the polynomial. You should be able to see if something is drastically wrong, and can also observe how the search gets stuck in local minima. You should also be able to see that the initial parameter choice can matter quite a lot. </p>
<p>For dealing with harder problems, you might modify your algorithm to help it escape the local minima. A few common approaches:</p>
<ul>
<li><p>Add noise. This reduces the precision of the parameters you've found, which can "blur" out local minima. The search can then jump out of local minima that are small compared to the noise, while still being trapped in deeper minima. A well-known approach for adding noise is <a href="http://mathworld.wolfram.com/SimulatedAnnealing.html">simulated annealing</a>. </p></li>
<li><p>Add momentum. Along with using the current gradient to define the step, also continue in the same direction as the previous step. If you take a fraction of the previous step as the momentum term, there is a tendency to keep going, which can take the search past the local minimum. By using a fraction, the steps decay exponentially, so poor steps aren't a big problem. This was always a popular modification to gradient descent when used to train neural networks, where gradient descent is known as backpropagation. </p></li>
<li><p>Use a hybrid search. First use a global search (e.g., genetic algorithms, various Monte Carlo methods) to find some good starting points, then apply gradient descent to take advantage of the gradient information in the function. </p></li>
</ul>
<p>I won't make a recommendation on which to use. Instead, I'll suggest doing a little research to see what others have done with problems related to what you're working on. If it's purely a learning experience, momentum is probably the easiest to get working. </p>
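<p>The momentum modification can be sketched as follows (a toy convex cost is used here just to show the mechanics of carrying a fraction of the previous step forward; all constants are illustrative):</p>

```python
# Sketch of gradient descent with a momentum term, on the toy cost
# f(x) = (x - 3)^2, whose gradient is 2*(x - 3) and minimum is at x = 3.
def grad(x):
    return 2.0 * (x - 3.0)

x = 0.0          # starting parameter
step = 0.01      # learning rate
beta = 0.9       # fraction of the previous step carried forward

velocity = 0.0
for _ in range(500):
    velocity = beta * velocity - step * grad(x)  # decaying memory of past steps
    x += velocity

print(round(x, 6))  # 3.0
```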
| 304
|
gradient descent implementation
|
Stochastic gradient Descent implementation - MATLAB
|
https://stackoverflow.com/questions/5117337/stochastic-gradient-descent-implementation-matlab
|
<p>I'm trying to implement "<a href="http://en.wikipedia.org/wiki/Stochastic_gradient_descent" rel="noreferrer">Stochastic gradient descent</a>" in MATLAB. I followed the algorithm exactly but I'm getting a VERY VERY large w (coffients) for the prediction/fitting function. Do I have a mistake in the algorithm ?</p>
<p>The Algorithm :
<img src="https://i.sstatic.net/SH3mZ.png" alt="enter image description here"></p>
<pre><code>x = 0:0.1:2*pi // X-axis
n = size(x,2);
r = -0.2+(0.4).*rand(n,1); //generating random noise to be added to the sin(x) function
t=zeros(1,n);
y=zeros(1,n);
for i=1:n
t(i)=sin(x(i))+r(i); // adding the noise
y(i)=sin(x(i)); // the function without noise
end
f = round(1+rand(20,1)*n); //generating random indexes
h = x(f); //choosing random x points
k = t(f); //chossing random y points
m=size(h,2); // length of the h vector
scatter(h,k,'Red'); // drawing the training points (with noise)
%scatter(x,t,2);
hold on;
plot(x,sin(x)); // plotting the Sin function
w = [0.3 1 0.5]; // starting point of w
a=0.05; // learning rate "alpha"
// ---------------- ALGORITHM ---------------------//
for i=1:20
v = [1 h(i) h(i).^2]; // X vector
e = ((w*v') - k(i)).*v; // prediction - observation
w = w - a*e; // updating w
end
hold on;
l = 0:1:6;
g = w(1)+w(2)*l+w(3)*(l.^2);
plot(l,g,'Yellow'); // drawing the prediction function
</code></pre>
|
<p>If you use too big a learning rate, SGD is likely to diverge.<br>
The learning rate should decay to zero over the iterations.</p>
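<p>A sketch of such a decaying learning rate in a toy one-dimensional SGD loop (the data, model and schedule below are made up for illustration):</p>

```python
# Toy 1-D SGD with a step size that decays toward zero: a_t = a0 / (1 + t/100).
# The model is y = w * x with true w = 2; the data is noise-free for simplicity.
import random

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(200)]
ys = [2.0 * x for x in xs]

w, a0 = 0.0, 0.5
for t in range(2000):
    i = random.randrange(len(xs))
    alpha = a0 / (1.0 + t / 100.0)   # decaying learning rate
    err = w * xs[i] - ys[i]          # residual on the sampled example
    w -= alpha * err * xs[i]         # stochastic gradient step

print(round(w, 3))  # recovers the true slope, 2.0
```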
| 305
|
gradient descent implementation
|
Gradient descent implementation - absolute error issue
|
https://stackoverflow.com/questions/47370541/gradient-descent-implementation-absolute-error-issue
|
<p>I am trying to implement gradient descent algorithm in Python. When I plot the history of the cost function it seems to be converging but the mean absolute error I get with my implementation is way worse than the one I get from sklearn's <em>linear_model</em>. I couldn't figure out what is wrong with my implementation. </p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn import linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
def gradient_descent(x, y, theta, alpha, num_iters):
m = len(y)
cost_history = np.zeros(num_iters)
for iter in range(num_iters):
h = np.dot(x, theta)
for i in range(len(theta)):
theta[i] = theta[i] - (alpha/m) * np.sum((h - y) * x[:,i])
#save the cost in every iteration
cost_history[iter] = np.sum(np.square((h - y))) / (2 * m)
return theta, cost_history
attributes = [...]
class_field = [...]
x_df = pd.read_csv('train.csv', usecols = attributes)
y_df = pd.read_csv('train.csv', usecols = class_field)
#normalize
x_df = (x_df - x_df.mean()) / x_df.std()
#gradient descent
alpha = 0.01
num_iters = 1000
err = 0
i = 10
for i in range(i):
x_train, x_test, y_train, y_test = train_test_split(x_df, y_df, test_size=0.2)
x_train = np.array(x_train)
y_train = np.array(y_train).flatten()
theta = np.random.sample(len(x_df.columns))
theta, cost_history = gradient_descent(x_train, y_train, theta, alpha, num_iters)
err = err + mean_absolute_error(y_test, np.dot(x_test, theta))
print(np.dot(x_test, theta))
#plt.plot(cost_history)
#plt.show()
print(err/i)
regr = linear_model.LinearRegression()
regr.fit(x_train, y_train)
y_pred = regr.predict(x_test)
print(mean_absolute_error(y_test, y_pred))
</code></pre>
|
<p>It seems you are missing a bias/intercept column and its coefficient.</p>
<p>The hypothesis for a linear function should look like:</p>
<pre><code>H = theta_0 + theta_1 * x
</code></pre>
<p>whereas in your implementation it looks as follows:</p>
<pre><code>H = theta_1 * x
</code></pre>
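<p>Adding the intercept usually means prepending a column of ones to the design matrix, so that <code>theta_0</code> multiplies a constant 1; a minimal sketch with made-up data:</p>

```python
# Minimal sketch: prepend a column of ones so that theta_0 becomes the
# intercept term of the hypothesis H = theta_0 + theta_1 * x + ...
import numpy as np

X = np.arange(6.0).reshape(3, 2)          # 3 samples, 2 features
X_b = np.c_[np.ones((X.shape[0], 1)), X]  # bias column prepended -> (3, 3)
theta = np.zeros(X_b.shape[1])            # theta now includes theta_0

print(X_b.shape)  # (3, 3)
```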
| 306
|
gradient descent implementation
|
Is there is a gradient descent implementation that uses matrix matrix multiplication?
|
https://stackoverflow.com/questions/43411013/is-there-is-a-gradient-descent-implementation-that-uses-matrix-matrix-multiplica
|
<p>I'm using the below gradient descent implementation in <a href="https://github.com/schneems/Octave/blob/master/mlclass-ex1/gradientDescent.m" rel="nofollow noreferrer">Octave for ML</a>.</p>
<p>I first tried increasing the number of CPU cores and running Octave multithreaded using OpenBLAS, but I still didn't get the results I was looking for, so I tried Nvidia's toolkit and their Tesla K80 GPU.</p>
<p>I'm loading Octave using the drop in nvblas following instructions in this article:</p>
<p><a href="https://devblogs.nvidia.com/parallelforall/drop-in-acceleration-gnu-octave/" rel="nofollow noreferrer">Drop-in Acceleration of GNU Octave</a></p>
<p>When I checked nvidia-smi I found the GPU to be idle, although my testing with a matrix-matrix multiplication yields ~9 teraflops.</p>
<p>Later I came to understand that the matrix-vector multiplication used in the above-mentioned implementation is not supported, as per the nvblas documentation.</p>
<p>So my question is: is there a gradient descent implementation that uses matrix-matrix multiplication, or something equivalent, that could replace the implementation I have?</p>
| 307
|
|
gradient descent implementation
|
Gradient Descent implementation in python?
|
https://stackoverflow.com/questions/55210443/gradient-descent-implementation-in-python
|
<p>I have tried to implement gradient descent and it was working properly when I tested it on sample dataset but it's not working properly for boston dataset.</p>
<p>Can you verify what's wrong with the code. why I'm not getting a correct theta vector?</p>
<pre><code>import numpy as np
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
X = load_boston().data
y = load_boston().target
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
X_train1 = np.c_[np.ones((len(X_train), 1)), X_train]
X_test1 = np.c_[np.ones((len(X_test), 1)), X_test]
eta = 0.0001
n_iterations = 100
m = len(X_train1)
tol = 0.00001
theta = np.random.randn(14, 1)
for i in range(n_iterations):
gradients = 2/m * X_train1.T.dot(X_train1.dot(theta) - y_train)
if np.linalg.norm(X_train1) < tol:
break
theta = theta - (eta * gradients)
</code></pre>
<p>I'm getting my weight vector in the shape of (14, 354). What am I doing wrong here?</p>
|
<p>Consider this (unrolled some statements for better visibility):</p>
<pre><code>for i in range(n_iterations):
y_hat = X_train1.dot(theta)
error = y_hat - y_train[:, None]
gradients = 2/m * X_train1.T.dot(error)
if np.linalg.norm(X_train1) < tol:
break
theta = theta - (eta * gradients)
</code></pre>
<p>Since <code>y_hat</code> is <code>(n_samples, 1)</code> and <code>y_train</code> is <code>(n_samples,)</code> (here <code>n_samples</code> is 354), you need to bring <code>y_train</code> to the same shape with the dummy-axis trick <code>y_train[:, None]</code>; otherwise the subtraction broadcasts to an <code>(n_samples, n_samples)</code> array, which is why theta came out with shape (14, 354).</p>
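<p>A quick shape-only demonstration of why the dummy axis matters (no real data involved):</p>

```python
# Shape-only demo of the dummy-axis trick: (n,) vs (n, 1) broadcast differently.
import numpy as np

y_hat = np.zeros((354, 1))   # prediction column vector
y = np.ones(354)             # 1-D target vector

print((y_hat - y).shape)           # (354, 354): accidental broadcasting
print((y_hat - y[:, None]).shape)  # (354, 1): the intended elementwise error
```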
| 308
|
gradient descent implementation
|
Fast gradient-descent implementation in a C++ library?
|
https://stackoverflow.com/questions/11513926/fast-gradient-descent-implementation-in-a-c-library
|
<p>I'm looking to run a gradient descent optimization to minimize the cost of an instantiation of variables. My program is very computationally expensive, so I'm looking for a popular library with a fast implementation of GD. What is the recommended library/reference?</p>
|
<p><a href="http://www.gnu.org/software/gsl/" rel="noreferrer">GSL</a> is a great (and free) library that already implements common functions of mathematical and scientific interest.</p>
<p>You can peruse through the entire <a href="http://www.gnu.org/software/gsl/manual/html_node/" rel="noreferrer">reference manual online</a>. Poking around, <a href="http://www.gnu.org/software/gsl/manual/html_node/Multimin-Algorithms-with-Derivatives.html" rel="noreferrer">this</a> starts to look interesting, but I think we'd need to know more about the problem.</p>
| 309
|
gradient descent implementation
|
Gradient Descent Implementation in Python returns Nan
|
https://stackoverflow.com/questions/15211715/gradient-descent-implementation-in-python-returns-nan
|
<p>I am trying to implement gradient descent in python; the implementation works when I try it with training_set1, but it returns not-a-number (NaN) when I try it with training_set. Any idea why my code is broken?</p>
<pre><code>from collections import namedtuple
TrainingInstance = namedtuple("TrainingInstance", ['X', 'Y'])
training_set1 = [TrainingInstance(0, 4), TrainingInstance(1, 7),
TrainingInstance(2, 7), TrainingInstance(3, 8),
TrainingInstance(8, 12)]
training_set = [TrainingInstance(60, 3.1), TrainingInstance(61, 3.6),
TrainingInstance(62, 3.8), TrainingInstance(63, 4),
TrainingInstance(65, 4.1)]
def grad_desc(x, x1):
# minimize a cost function of two variables using gradient descent
training_rate = 0.1
iterations = 5000
#while sqrd_error(x, x1) > 0.0000001:
while iterations > 0:
#print sqrd_error(x, x1)
x, x1 = x - (training_rate * deriv(x, x1)), x1 - (training_rate * deriv1(x, x1))
iterations -= 1
return x, x1
def sqrd_error(x, x1):
sum = 0.0
for inst in training_set:
sum += ((x + x1 * inst.X) - inst.Y)**2
return sum / (2.0 * len(training_set))
def deriv(x, x1):
sum = 0.0
for inst in training_set:
sum += ((x + x1 * inst.X) - inst.Y)
return sum / len(training_set)
def deriv1(x, x1):
sum = 0.0
for inst in training_set:
sum += ((x + x1 * inst.X) - inst.Y) * inst.X
return sum / len(training_set)
if __name__ == "__main__":
print grad_desc(2, 2)
</code></pre>
|
<p>Reduce <code>training_rate</code> so that the objective decreases at each iteration.</p>
<p>See Figure 6. in this paper: <a href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf" rel="nofollow">http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf</a></p>
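<p>A minimal sketch of that effect on the data from the question: with x-values around 60, the gradients are large, so a step of 0.1 makes the cost blow up while a much smaller step decreases it (the iteration count here is arbitrary):</p>

```python
# Toy reproduction with the question's data: compare a large and a small
# training rate over a few full-batch gradient steps.
data = [(60, 3.1), (61, 3.6), (62, 3.8), (63, 4.0), (65, 4.1)]

def cost(b, w):
    return sum(((b + w * x) - y) ** 2 for x, y in data) / (2.0 * len(data))

def step(b, w, rate):
    db = sum((b + w * x) - y for x, y in data) / len(data)
    dw = sum(((b + w * x) - y) * x for x, y in data) / len(data)
    return b - rate * db, w - rate * dw

for rate in (0.1, 1e-4):
    b, w = 2.0, 2.0
    for _ in range(10):
        b, w = step(b, w, rate)
    print(rate, cost(b, w) < cost(2.0, 2.0))  # 0.1 diverges, 1e-4 improves
```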
| 310
|
gradient descent implementation
|
Pure Python Implementation of gradient descent
|
https://stackoverflow.com/questions/63094866/pure-python-implementation-of-gradient-descent
|
<p>I have tried to implement gradient descent myself using Python. I know there are similar topics on this, but for my attempt, my guessed slope can always get really close to the real slope, but the guessed intercept never matches or even comes close to the real intercept. Does anyone know why that is happening?</p>
<p>Also, I have read a lot of gradient descent posts and formulas, and they say that at each iteration I need to multiply the gradient by the negative learning rate and repeat until it converges. As you can see in my implementation below, my gradient descent only works when I multiply the gradient by the learning rate and not by -1. Why is that? Did I misunderstand gradient descent, or is my implementation wrong? (The exam_m and exam_b quickly overflow if I multiply the learning rate and gradient by -1.)</p>
<pre><code>intercept = -5
slope = -4
x = []
y = []
for i in range(0, 100):
x.append(i/300)
y.append((i * slope + intercept)/300)
learning_rate = 0.005
# y = mx + b
# m is slope, b is y-intercept
exam_m = 100
exam_b = 100
#iteration
#My error function is sum all (y - guess) ^2
for _ in range(20000):
gradient_m = 0
gradient_b = 0
for i in range(len(x)):
gradient_m += (y[i] - exam_m * x[i] - exam_b) * x[i]
gradient_b += (y[i] - exam_m * x[i] - exam_b)
#why not gradient_m -= (y[i] - exam_m * x[i] - exam_b) * x[i] like what it said in the gradient descent formula
exam_m += learning_rate * gradient_m
exam_b += learning_rate * gradient_b
print(exam_m, exam_b)
</code></pre>
|
<p>The reason for the overflow is the missing factor <code>(2/n)</code>. I have also written out the negative signs explicitly for clarity.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
intercept = -5
slope = -4
# y = mx + b
x = []
y = []
for i in range(0, 100):
x.append(i/300)
y.append((i * slope + intercept)/300)
n = len(x)
x = np.array(x)
y = np.array(y)
learning_rate = 0.05
exam_m = 0
exam_b = 0
epochs = 1000
for _ in range(epochs):
gradient_m = 0
gradient_b = 0
for i in range(n):
gradient_m -= (y[i] - exam_m * x[i] - exam_b) * x[i]
gradient_b -= (y[i] - exam_m * x[i] - exam_b)
exam_m = exam_m - (2/n)*learning_rate * gradient_m
exam_b = exam_b - (2/n)*learning_rate * gradient_b
print('Slope, Intercept: ', exam_m, exam_b)
y_pred = exam_m*x + exam_b
plt.xlabel('x')
plt.ylabel('y')
plt.plot(x, y_pred, '--', color='black', label='predicted_line')
plt.plot(x, y, '--', color='blue', label='orginal_line')
plt.legend()
plt.show()
</code></pre>
<p>Output:
<code>Slope, Intercept: -2.421033215481844 -0.2795651072061604</code></p>
<p><a href="https://i.sstatic.net/jflka.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jflka.png" alt="enter image description here" /></a></p>
| 311
|
gradient descent implementation
|
Gradient descent implementation is not working in Julia
|
https://stackoverflow.com/questions/47189943/gradient-descent-implementation-is-not-working-in-julia
|
<p>I am trying to implement the <strong>gradient descent</strong> algorithm from scratch to find the slope and intercept values for my linear fit line.</p>
<p>Using the package and calculating slope and intercept, I get slope = 0.04 and intercept = 7.2 but when I use my gradient descent algorithm for the same problem, I get slope and intercept both values = (-infinity,-infinity)</p>
<p>Here is my code</p>
<pre><code>x= [1,2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20]
y=[2,3,4,5,6,7,8,9,10,11,12,13,141,5,16,17,18,19,20,21]
function GradientDescent()
m=0
c=0
for i=1:10000
for k=1:length(x)
Yp = m*x[k] + c
E = y[k]-Yp #error in predicted value
dm = 2*E*(-x[k]) # partial derivation of cost function w.r.t slope(m)
dc = 2*E*(-1) # partial derivate of cost function w.r.t. Intercept(c)
m = m + (dm * 0.001)
c = c + (dc * 0.001)
end
end
return m,c
end
Values = GradientDescent() # after running values = (-inf,-inf)
</code></pre>
|
<p>I have not done the math, but instead wrote the tests. It seems you got a sign error when assigning m and c.</p>
<p>Also, writing the tests really helps, and Julia makes it simple :)</p>
<pre><code>function GradientDescent(x, y)
m=0.0
c=0.0
for i=1:10000
for k=1:length(x)
Yp = m*x[k] + c
E = y[k]-Yp
dm = 2*E*(-x[k])
dc = 2*E*(-1)
m = m - (dm * 0.001)
c = c - (dc * 0.001)
end
end
return m,c
end
using Base.Test
@testset "gradient descent" begin
@testset "slope $slope" for slope in [0, 1, 2]
@testset "intercept for $intercept" for intercept in [0, 1, 2]
x = 1:20
y = broadcast(x -> slope * x + intercept, x)
computed_slope, computed_intercept = GradientDescent(x, y)
@test slope ≈ computed_slope atol=1e-8
@test intercept ≈ computed_intercept atol=1e-8
end
end
end
</code></pre>
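<p>For what it's worth, the corrected sign convention is easy to sanity-check in any language; below is a minimal Python sketch of the same per-sample update on clean synthetic data (the slope 2 / intercept 1 line is an assumption for the check, not the asker's data):</p>

```python
def gradient_descent(x, y, lr=0.001, epochs=10000):
    """Per-sample gradient descent for y ~ m*x + c.

    Note the minus signs: parameters move *against* the gradient,
    which is the sign fix applied in the Julia code above.
    """
    m = c = 0.0
    for _ in range(epochs):
        for xi, yi in zip(x, y):
            error = yi - (m * xi + c)
            dm = 2 * error * (-xi)  # d(cost)/dm for this sample
            dc = 2 * error * (-1)   # d(cost)/dc for this sample
            m -= lr * dm            # m = m - lr*dm, not m + lr*dm
            c -= lr * dc
    return m, c

x = list(range(1, 21))
y = [2 * xi + 1 for xi in x]  # exact line: slope 2, intercept 1
m, c = gradient_descent(x, y)
```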
| 312
|
gradient descent implementation
|
mini-batch gradient descent implementation in tensorflow
|
https://stackoverflow.com/questions/40367514/mini-batch-gradient-descent-implementation-in-tensorflow
|
<p>When reading a TensorFlow implementation of a deep learning model, I am trying to understand the following code segment included in the training process.</p>
<pre><code>self.net.gradients_node = tf.gradients(loss, self.variables)
for epoch in range(epochs):
total_loss = 0
for step in range((epoch*training_iters), ((epoch+1)*training_iters)):
batch_x, batch_y = data_provider(self.batch_size)
# Run optimization op (backprop)
_, loss, lr, gradients = sess.run((self.optimizer, self.net.cost, self.learning_rate_node, self.net.gradients_node),
feed_dict={self.net.x: batch_x,
self.net.y: util.crop_to_shape(batch_y, pred_shape),
self.net.keep_prob: dropout})
if avg_gradients is None:
avg_gradients = [np.zeros_like(gradient) for gradient in gradients]
for i in range(len(gradients)):
avg_gradients[i] = (avg_gradients[i] * (1.0 - (1.0 / (step+1)))) + (gradients[i] / (step+1))
norm_gradients = [np.linalg.norm(gradient) for gradient in avg_gradients]
self.norm_gradients_node.assign(norm_gradients).eval()
total_loss += loss
</code></pre>
<p>I think it is related to mini-batch gradient descent, but I cannot understand how it works, and I have some difficulty connecting it to the algorithm shown as follows:</p>
<p><a href="https://i.sstatic.net/h407Z.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h407Z.jpg" alt="enter image description here"></a></p>
|
<p>This is not related to mini batch SGD.</p>
<p>It computes a running average of the gradients over all timesteps. After the first timestep <code>avg_gradients</code> contains the gradient that was just computed; after the second step it is the elementwise mean of the two gradients from the two steps; after <code>n</code> steps it is the elementwise mean of all <code>n</code> gradients computed so far. The norms of these mean gradients are then computed and stored in <code>norm_gradients_node</code> (note the code computes the norms, likely for monitoring; it does not rescale the gradients to unit norm). It is hard to tell why those average gradients are needed without the context in which they were presented.</p>
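<p>The running-mean recurrence used in the snippet can be checked in isolation; a small NumPy sketch with synthetic gradients (not the original model) showing that the recurrence reproduces the plain arithmetic mean:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
grads = [rng.standard_normal(3) for _ in range(50)]  # fake per-step gradients

avg = np.zeros(3)
for step, g in enumerate(grads):
    # same recurrence as the snippet: avg <- avg*(1 - 1/(step+1)) + g/(step+1)
    avg = avg * (1.0 - 1.0 / (step + 1)) + g / (step + 1)

# after n steps the recurrence yields the plain arithmetic mean of all gradients
assert np.allclose(avg, np.mean(grads, axis=0))
```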
| 313
|
gradient descent implementation
|
Implementing a gradient descent
|
https://stackoverflow.com/questions/58833048/implementing-a-gradient-descent
|
<p>I'm trying to implement a gradient descent in Go. My goal is to predict the price of a car from its mileage.
Here is my data set:</p>
<pre><code>km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290
</code></pre>
<p>I've tried various approaches, like normalizing the data set, not normalizing it, leaving thetas as is, denormalizing thetas... But I cannot get the correct result.
My maths must be off somewhere, but I cannot figure out where.
The result I'm trying to get should be approximately t0 = 8500, t1 = -0.02
My implementation is the following:</p>
<pre><code>package main
import (
"encoding/csv"
"fmt"
"log"
"math"
"os"
"strconv"
)
const (
dataFile = "data.csv"
iterations = 20000
learningRate = 0.1
)
type dataSet [][]float64
var minKm, maxKm, minPrice, maxPrice float64
func (d dataSet) getExtremes(column int) (float64, float64) {
min := math.Inf(1)
max := math.Inf(-1)
for _, row := range d {
item := row[column]
if item > max {
max = item
}
if item < min {
min = item
}
}
return min, max
}
func normalizeItem(item, min, max float64) float64 {
return (item - min) / (max - min)
}
func (d *dataSet) normalize() {
minKm, maxKm = d.getExtremes(0)
minPrice, maxPrice = d.getExtremes(1)
for _, row := range *d {
row[0], row[1] = normalizeItem(row[0], minKm, maxKm), normalizeItem(row[1], minPrice, maxPrice)
}
}
func processEntry(entry []string) []float64 {
if len(entry) != 2 {
log.Fatalln("expected two fields")
}
km, err := strconv.ParseFloat(entry[0], 64)
if err != nil {
log.Fatalln(err)
}
price, err := strconv.ParseFloat(entry[1], 64)
if err != nil {
log.Fatalln(err)
}
return []float64{km, price}
}
func getData() dataSet {
file, err := os.Open(dataFile)
if err != nil {
log.Fatalln(err)
}
reader := csv.NewReader(file)
entries, err := reader.ReadAll()
if err != nil {
log.Fatalln(err)
}
entries = entries[1:]
data := make(dataSet, len(entries))
for k, entry := range entries {
data[k] = processEntry(entry)
}
return data
}
func outputResult(theta0, theta1 float64) {
file, err := os.OpenFile("weights.csv", os.O_WRONLY, 0644)
if err != nil {
log.Fatalln(err)
}
defer file.Close()
file.Truncate(0)
file.Seek(0, 0)
file.WriteString(fmt.Sprintf("theta0,%.6f\ntheta1,%.6f\n", theta0, theta1))
}
func estimatePrice(theta0, theta1, mileage float64) float64 {
return theta0 + theta1*mileage
}
func (d dataSet) computeThetas(theta0, theta1 float64) (float64, float64) {
dataSize := float64(len(d))
t0sum, t1sum := 0.0, 0.0
for _, it := range d {
mileage := it[0]
price := it[1]
err := estimatePrice(theta0, theta1, mileage) - price
t0sum += err
t1sum += err * mileage
}
return theta0 - (t0sum / dataSize * learningRate), theta1 - (t1sum / dataSize * learningRate)
}
func denormalize(theta, min, max float64) float64 {
return theta*(max-min) + min
}
func main() {
data := getData()
data.normalize()
theta0, theta1 := 0.0, 0.0
for k := 0; k < iterations; k++ {
theta0, theta1 = data.computeThetas(theta0, theta1)
}
theta0 = denormalize(theta0, minKm, maxKm)
theta1 = denormalize(theta1, minPrice, maxPrice)
outputResult(theta0, theta1)
}
</code></pre>
<p>What should I fix in order to properly implement a gradient descent?</p>
|
<p><a href="https://en.wikipedia.org/wiki/Ordinary_least_squares#Simple_linear_regression_model" rel="nofollow noreferrer">Linear Regression</a> is really simple: </p>
<pre><code>// yi = alpha + beta*xi + ei
func linearRegression(x, y []float64) (float64, float64) {
EX := expected(x)
EY := expected(y)
EXY := expectedXY(x, y)
EXX := expectedXY(x, x)
covariance := EXY - EX*EY
variance := EXX - EX*EX
beta := covariance / variance
alpha := EY - beta*EX
return alpha, beta
}
</code></pre>
<p>Try it <a href="https://play.golang.org/p/dJ-EF9j9-v2" rel="nofollow noreferrer">here</a>, Output:</p>
<pre><code>8499.599649933218 -0.021448963591702314 396270.87871142407
</code></pre>
<p>Code:</p>
<pre><code>package main
import (
"encoding/csv"
"fmt"
"strconv"
"strings"
)
func main() {
x, y := readXY(`data.csv`)
alpha, beta := linearRegression(x, y)
fmt.Println(alpha, beta, -alpha/beta) // 8499.599649933218 -0.021448963591702314 396270.87871142407
}
// https://en.wikipedia.org/wiki/Ordinary_least_squares#Simple_linear_regression_model
// yi = alpha + beta*xi + ei
func linearRegression(x, y []float64) (float64, float64) {
EX := expected(x)
EY := expected(y)
EXY := expectedXY(x, y)
EXX := expectedXY(x, x)
covariance := EXY - EX*EY
variance := EXX - EX*EX
beta := covariance / variance
alpha := EY - beta*EX
return alpha, beta
}
// E[X]
func expected(x []float64) float64 {
sum := 0.0
for _, v := range x {
sum += v
}
return sum / float64(len(x))
}
// E[XY]
func expectedXY(x, y []float64) float64 {
sum := 0.0
for i, v := range x {
sum += v * y[i]
}
return sum / float64(len(x))
}
func readXY(filename string) ([]float64, []float64) {
// file, err := os.Open(filename)
// if err != nil {
// panic(err)
// }
// defer file.Close()
file := strings.NewReader(data)
reader := csv.NewReader(file)
records, err := reader.ReadAll()
if err != nil {
panic(err)
}
records = records[1:]
size := len(records)
x := make([]float64, size)
y := make([]float64, size)
for i, v := range records {
val, err := strconv.ParseFloat(v[0], 64)
if err != nil {
panic(err)
}
x[i] = val
val, err = strconv.ParseFloat(v[1], 64)
if err != nil {
panic(err)
}
y[i] = val
}
return x, y
}
var data = `km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290`
</code></pre>
<hr>
<p><a href="https://en.wikipedia.org/wiki/Gradient_descent" rel="nofollow noreferrer">Gradient descent</a> is based on the observation that if the multi-variable function <code>F(x)</code> is defined and differentiable in a neighborhood of a point <code>a</code> , then F(x) decreases fastest if one goes from <code>a</code> in the direction of the negative gradient of <code>F</code> at <code>a</code>,<code>-∇F(a)</code>, for example: </p>
<pre><code>// F(x)
f := func(x float64) float64 {
return alpha + beta*x // write your target function here
}
</code></pre>
<p>Derivative function: </p>
<pre><code>h := 0.000001
// Derivative function ∇F(x)
df := func(x float64) float64 {
return (f(x+h) - f(x-h)) / (2 * h) // write your target function derivative here
}
</code></pre>
<p>Search:</p>
<pre><code>minimunAt := 1.0 // We start the search here
gamma := 0.01 // Step size multiplier
precision := 0.0000001 // Desired precision of result
max := 100000 // Maximum number of iterations
currentX := 0.0
step := 0.0
for i := 0; i < max; i++ {
currentX = minimunAt
minimunAt = currentX - gamma*df(currentX)
step = minimunAt - currentX
if math.Abs(step) <= precision {
break
}
}
fmt.Printf("Minimum at %.8f value: %v\n", minimunAt, f(minimunAt))
</code></pre>
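<p>For reference, the search loop above translates almost line for line into Python; a minimal sketch using the same central-difference derivative on a function whose minimum is known (the quadratic is an assumption for the demonstration):</p>

```python
def gradient_descent_1d(f, x0, gamma=0.01, precision=1e-7, max_iter=100_000):
    """Minimise f by stepping against a central-difference estimate of f'."""
    h = 1e-6
    df = lambda x: (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative
    x = x0
    for _ in range(max_iter):
        step = -gamma * df(x)  # move against the gradient
        x += step
        if abs(step) <= precision:
            break
    return x

# f(x) = (x - 3)^2 has its minimum at x = 3
x_min = gradient_descent_1d(lambda x: (x - 3) ** 2, x0=0.0)
```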
| 314
|
gradient descent implementation
|
Gradient Descent implementation in octave
|
https://stackoverflow.com/questions/10591343/gradient-descent-implementation-in-octave
|
<p>I've actually been struggling against this for like 2 months now. What is it that makes these different? </p>
<pre><code>hypotheses= X * theta
temp=(hypotheses-y)'
temp=X(:,1) * temp
temp=temp * (1 / m)
temp=temp * alpha
theta(1)=theta(1)-temp
hypotheses= X * theta
temp=(hypotheses-y)'
temp=temp * (1 / m)
temp=temp * alpha
theta(2)=theta(2)-temp
theta(1) = theta(1) - alpha * (1/m) * ((X * theta) - y)' * X(:, 1);
theta(2) = theta(2) - alpha * (1/m) * ((X * theta) - y)' * X(:, 2);
</code></pre>
<p>The latter works. I'm just not sure why... I struggle to understand the need for the matrix inverse.</p>
|
<p>In the first example, the second block has missed out a step, hasn't it? I am assuming you concatenated X with a vector of ones.</p>
<pre><code> temp=X(:,2) * temp
</code></pre>
<p>The last example works, but can be vectorized further to be simpler and more efficient.</p>
<p>I've assumed you only have one feature. It will work the same with multiple features, since all that happens is you add an extra column to your X matrix for each feature. Basically you add a vector of ones to X to vectorize the intercept.</p>
<p>You can update a 2x1 matrix of thetas in one line of code. Concatenate a vector of ones with x, making it an nx2 matrix; then you can calculate h(x) by multiplying by the theta vector (2x1), which is the (X * theta) bit.</p>
<p>The second part of the vectorization is to transpose ((X * theta) - y), which gives you a 1xn matrix which, when multiplied by X (an nx2 matrix), basically aggregates both (h(x)-y)x0 and (h(x)-y)x1. By definition both thetas are updated at the same time. This results in a 1x2 matrix of new thetas, which I just transpose again so the vector has the same dimensions as the theta vector. I can then do a simple scalar multiplication by alpha and a vector subtraction with theta.</p>
<pre><code>X = data(:, 1); y = data(:, 2);
m = length(y);
X = [ones(m, 1), data(:,1)];
theta = zeros(2, 1);
iterations = 2000;
alpha = 0.001;
for iter = 1:iterations
theta = theta -((1/m) * ((X * theta) - y)' * X)' * alpha;
end
</code></pre>
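<p>The same one-line vectorized update is easy to mirror in NumPy; here is a sketch assuming a single feature plus a column of ones (synthetic noiseless data, not the course dataset):</p>

```python
import numpy as np

def fit(X, y, alpha, iterations):
    """theta <- theta - alpha * (1/m) * X'(X theta - y): all thetas at once."""
    m = len(y)
    theta = np.zeros(X.shape[1])
    for _ in range(iterations):
        theta -= alpha * (1.0 / m) * (X @ theta - y) @ X
    return theta

x = np.arange(1, 21, dtype=float)
X = np.column_stack([np.ones_like(x), x])  # prepend the vector of ones
y = 5.0 + 2.0 * x                          # true intercept 5, slope 2
theta = fit(X, y, alpha=0.001, iterations=100_000)
```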
| 315
|
gradient descent implementation
|
Tried implementing Gradient Descent
|
https://stackoverflow.com/questions/61714403/tried-implementing-gradient-descent
|
<p>I am totally new to ML and Python. I read about linear regression and tried to implement gradient descent.</p>
<p>First, could anyone please let me know what I am doing wrong?</p>
<p>Input Data - </p>
<pre><code> x = np.array([1,2,3,4,5,6])
y = (2*x + 5) + np.random.normal(0, 1, len(x))
curve = pd.DataFrame(np.column_stack([x,y]), columns = ['x', 'y'])
</code></pre>
<p>Gradient Descent Code - </p>
<pre><code> learningRate = 0.1
m = 0
c = 0
n = len(x)
j = [0]*300
j[1] = sum((curve.y - (m*curve.x + c))**2)/n
iter = 1
err = 1
while(err > 10**-3):
Dc = (m*curve.x +c) - curve.y
Dm = ((m*curve.x + c) - curve.y)*curve.x
m = m - 0.1 * sum(Dm)/n
c = c - 0.1 * sum(Dc)/n
iter = iter +1
j[iter] = sum((curve.y - (m*curve.x + c))**2)/n
err = abs(j[iter] - j[iter -1])
print('error :',err)
print('iter : ', iter)
print('m : ', m)
print('c : ', c)
</code></pre>
<p>It gives me the correct result as below: the error terms keep decreasing and it comes up with estimated
values of m and c:</p>
<pre><code>error : 97.29992615029744
error : 34.92089545773186
error : 12.579806110060302
error : 4.5766394765497145
error : 1.7080644275745156
error : 0.6783105614574572
error : 0.307139765746657
error : 0.17189857726871516
error : 0.12122915945728607
error : 0.10092634553882229
error : 0.09157601971420037
error : 0.08622305155313681
error : 0.08237404842923546
error : 0.07913349054978847
error : 0.07617816054863757
error : 0.07338987727769242
error : 0.07072397231883842
error : 0.06816218746680436
error : 0.06569580397224817
error : 0.0633195980665846
error : 0.061029673548781194
error : 0.0588226828738998
error : 0.0566955455716478
error : 0.05464534485749262
error : 0.05266928814789651
error : 0.05076469054898958
error : 0.04892896665787916
error : 0.04715962542349139
error : 0.04545426618265713
error : 0.043810575193886425
error : 0.042226322423362106
error : 0.04069935849262318
error : 0.0392276117528354
error : 0.037809085470765336
error : 0.03644185511853815
error : 0.03512406576218341
error : 0.033853929544860106
error : 0.032629723261214494
error : 0.03144978601945159
error : 0.030312516988004745
error : 0.029216373223642567
error : 0.028159867578168907
error : 0.02714156668077994
error : 0.026160088993397057
error : 0.025214102936330196
error : 0.02430232508170649
error : 0.023423518412224897
error : 0.022576490642901792
error : 0.021760092603465342
error : 0.020973216679284867
error : 0.020214795308643785
error : 0.019483799534385726
error : 0.0187792376078999
error : 0.01810015364362627
error : 0.017445626322180052
error : 0.016814767640402017
error : 0.016206721706588878
error : 0.015620663579299254
error : 0.01505579814815139
error : 0.014511359055082274
error : 0.013986607654631555
error : 0.013480832011810495
error : 0.012993345936206158
error : 0.012523488051036091
error : 0.012070620895839435
error : 0.011634130061635828
error : 0.011213423357342878
error : 0.01080793000635838
error : 0.010417099872163771
error : 0.01004040271197626
error : 0.009677327457328522
error : 0.009327381520729094
error : 0.008990090127349415
error : 0.008664995670889075
error : 0.008351657092750653
error : 0.008049649283635052
error : 0.007758562506807065
error : 0.007478001842195603
error : 0.007207586650595843
error : 0.006946950057259871
error : 0.00669573845415905
error : 0.006453611020228234
error : 0.006220239258977411
error : 0.005995306552818436
error : 0.005778507733510185
error : 0.005569548668120872
error : 0.005368145859991458
error : 0.005174026064101778
error : 0.004986925916362628
error : 0.004806591576302477
error : 0.004632778382688496
error : 0.004465250521578756
error : 0.004303780706404803
error : 0.0041481498695978836
error : 0.003998146865395347
error : 0.00385356818335314
error : 0.003714217672262876
error : 0.0035799062740193843
error : 0.0034504517671292145
error : 0.0033256785194886174
error : 0.0032054172500981526
error : 0.0030895047994004
error : 0.002977783907922804
error : 0.002870103002917457
error : 0.0027663159927209247
error : 0.0026662820685543487
error : 0.002569865513485592
error : 0.002476935518296086
error : 0.002387366004018787
error : 0.0023010354508823383
error : 0.002217826733440287
error : 0.0021376269616657506
error : 0.002060327327805034
error : 0.0019858229587441656
error : 0.0019140127737211632
error : 0.0018447993472126
error : 0.0017780887767346876
error : 0.0017137905554454047
error : 0.0016518174493684867
error : 0.0015920853790276634
error : 0.0015345133053750182
error : 0.0014790231198629211
error : 0.0014255395384494829
error : 0.0013739899994744675
error : 0.0013243045652084895
error : 0.0012764158269753523
error : 0.0012302588136838821
error : 0.0011857709036990904
error : 0.001142891739868812
error : 0.0011015631476423149
error : 0.0010617290561396597
error : 0.0010233354220909874
error : 0.0009863301565038451
iter : 134
m : 2.0833620160267663
c : 4.610637626188058
</code></pre>
<p>But when I take input as (just increased one element in my array)</p>
<pre><code>x = np.array([1,2,3,4,5,6,7])
y = (2*x + 5) + np.random.normal(0, 1, len(x))
curve = pd.DataFrame(np.column_stack([x,y]), columns = ['x', 'y'])
</code></pre>
<p>The result comes out like this (why does my error keep increasing in this case?):</p>
<pre><code>error : 29.09815015431613
error : 34.01872638453614
error : 39.76520568567241
error : 46.47644714481737
error : 54.31464731003979
error : 63.46926275846772
error : 74.16159195797525
error : 86.65012723507334
error : 101.23680628672548
error : 118.27431442939576
error : 138.17461419084918
error : 161.41890853364032
error : 188.56927867181366
error : 220.28227794256668
error : 257.32481050201727
error : 300.5926788730269
error : 351.13224891948175
error : 410.16575621676475
error : 479.12086585577663
</code></pre>
<p>Please let me know what I am doing wrong.</p>
<p>I tried implementing gradient descent, but if I take a longer input vector (i.e. more examples) then my error terms keep increasing instead of decreasing.</p>
|
<p>You have two problems here.
First of all, you defined the learning rate but didn't use it:</p>
<pre><code> m = m - learningRate * sum(Dm)/n
c = c - learningRate* sum(Dc)/n
</code></pre>
<p>Secondly, your learning rate is too large. Choose a smaller value like 0.01.</p>
<p>If you change your print statement to
<code>print('error : {} m: {} c: {}'.format(err, m, c))</code></p>
<p>you can see the learned parameters are oscillating:</p>
<pre><code>error : 4.627422172738745 m: 6.2021421523611355 c: 1.3127611648190132
error : 5.407226002504083 m: -0.5251044659276074 c: 0.013389352211670591
error : 6.318019832044391 m: 6.721890877404075 c: 1.53485336818056
</code></pre>
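<p>The step-size effect can also be shown with the batch form of the same update; a hedged NumPy sketch (seven noiseless points standing in for the asker's data) comparing the two learning rates:</p>

```python
import numpy as np

x = np.arange(1, 8, dtype=float)  # seven points, as in the failing case
y = 2 * x + 5

def run(lr, steps):
    """Batch gradient descent on the mean squared error."""
    m = c = 0.0
    for _ in range(steps):
        err = (m * x + c) - y
        m -= lr * np.mean(err * x)  # learning rate actually multiplied in
        c -= lr * np.mean(err)
    return m, c

def cost(m, c):
    return np.mean(((m * x + c) - y) ** 2)

m_ok, c_ok = run(0.01, 20000)  # small step: converges to m=2, c=5
m_bad, c_bad = run(0.1, 50)    # large step: cost grows instead of shrinking
```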
| 316
|
gradient descent implementation
|
Cost function not decreasing in gradient descent implementation
|
https://stackoverflow.com/questions/23576301/cost-function-not-decreasing-in-gradient-descent-implementation
|
<p>I am trying to implement batch gradient descent in the C language. The problem is, my cost function increases dramatically on every iteration and I am not able to understand what is wrong. I checked my code several times and it seems to me that I coded the formulas exactly. Do you have any suggestions or ideas about what might be wrong in the implementation?</p>
<p>My data set is here: <a href="https://archive.ics.uci.edu/ml/datasets/Housing" rel="nofollow">https://archive.ics.uci.edu/ml/datasets/Housing</a>
And I reference these slides for the algorithm (I googled this): <a href="http://asv.informatik.uni-leipzig.de/uploads/document/file_link/527/TMI04.2_linear_regression.pdf" rel="nofollow">http://asv.informatik.uni-leipzig.de/uploads/document/file_link/527/TMI04.2_linear_regression.pdf</a></p>
<p>I read the data set correctly into the main memory. Below part shows how I store the data set information in main memory. It is straight-forward.</p>
<pre><code>//Definitions
#define NUM_OF_ATTRIBUTES 13
#define NUM_OF_SETS 506
#define LEARNING_RATE 0.07
//Data holder
struct data_set_s
{
double x_val[NUM_OF_SETS][NUM_OF_ATTRIBUTES + 1];
double y_val[NUM_OF_SETS];
double teta_val[NUM_OF_ATTRIBUTES + 1];
};
//RAM
struct data_set_s data_set;
</code></pre>
<p>Teta values are initialized to 0 and x0 values are initialized to 1.</p>
<p>Below section is the hypothesis function, which is the standart polynomial function.</p>
<pre><code>double perform_hypothesis_a(unsigned short set_index)
{
double result;
int i;
result = 0;
for(i = 0; i < NUM_OF_ATTRIBUTES + 1; i++)
result += data_set.teta_val[i] * data_set.x_val[set_index][i];
return result;
}
</code></pre>
<p>Below section is the cost function.</p>
<pre><code>double perform_simplified_cost_func(double (*hypothesis_func)(unsigned short))
{
double result, val;
int i;
result = 0;
for(i = 0; i < NUM_OF_SETS; i++)
{
val = hypothesis_func(i) - data_set.y_val[i];
result += pow(val, 2);
}
result = result / (double)(2 * NUM_OF_SETS);
    return result;
}
</code></pre>
<p>Below section is the gradient descent function.</p>
<pre><code>double perform_simplified_gradient_descent(double (*hypothesis_func)(unsigned short))
{
double temp_teta_val[NUM_OF_ATTRIBUTES + 1], summation, val;
int i, j, k;
for(i = 0; i < NUM_OF_ATTRIBUTES + 1; i++)
temp_teta_val[i] = 0;
for(i = 0; i < 10; i++) //assume this is "while not converged"
{
for(j = 0; j < NUM_OF_ATTRIBUTES + 1; j++)
{
summation = 0;
for(k = 0; k < NUM_OF_SETS; k++)
{
summation += (hypothesis_func(k) - data_set.y_val[k]) * data_set.x_val[k][j];
}
            val = ((double)LEARNING_RATE * summation) / NUM_OF_SETS;
temp_teta_val[j] = data_set.teta_val[j] - val;
}
for(j = 0; j < NUM_OF_ATTRIBUTES + 1; j++)
{
data_set.teta_val[j] = temp_teta_val[j];
}
printf("%lg\n ", perform_simplified_cost_func(hypothesis_func));
}
return 1;
}
</code></pre>
<p>While it seems correct to me, when I print the cost function at the end of the every gradient descent, it goes like: 1.09104e+011, 5.234e+019, 2.51262e+028, 1.20621e+037...</p>
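<p>For reference, a cost that multiplies by orders of magnitude every iteration is the classic symptom of a learning rate that is too large for raw, unscaled features; a minimal NumPy sketch (synthetic two-feature data standing in for the housing set) showing standardization restoring convergence at the same rate of 0.07:</p>

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
raw = np.column_stack([rng.uniform(0.0, 100.0, n),   # large-scale feature
                       rng.uniform(0.0, 1.0, n)])    # small-scale feature
y = 3.0 * raw[:, 0] + 7.0 * raw[:, 1] + 1.0

def final_cost(X, y, lr=0.07, iters=500):
    """Run batch gradient descent and return the final mean squared error."""
    X = np.column_stack([np.ones(len(y)), X])        # intercept column x0 = 1
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        theta -= lr / len(y) * (X @ theta - y) @ X
    return np.mean((X @ theta - y) ** 2)

cost_raw = final_cost(raw, y)                        # diverges: overflows to inf/nan
scaled = (raw - raw.mean(axis=0)) / raw.std(axis=0)  # standardize each column
cost_scaled = final_cost(scaled, y)                  # same lr now converges
```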
| 317
|
|
gradient descent implementation
|
Implementation of Gradient Descent (Matlab)
|
https://stackoverflow.com/questions/56115458/implementation-of-gradient-descent-matlab
|
<p>I have a problem with a simple implementation of the gradient descent algorithm -
I wrote the following code, just turning the math of GD into Matlab code, but it does not converge. Due to the simplicity of this implementation, I think I'm missing a very crucial thing about working with Matlab.</p>
<p>The goal of this code is to estimate the coefficients of a linear regression which give us the minimum error. So I loaded some data ('trees.data.txt') and split this data into a train set and a test set. I want to extract the coefficients using the train set and gradient descent, but unfortunately it does not converge.</p>
<p>This is the iterative equation -
theta = theta - (alpha/m) * ((X * theta - y)' * X)';</p>
<p>Can anyone explain please what is the problem?</p>
<p>My implementation - </p>
<pre><code>% Load trees data from file.
data = load('trees.data.txt');
data=data'; % put examples in columns
% Include a row of 1s as an additional intercept feature.
data = [ ones(1,size(data,2)); data ];
% Shuffle examples.
data = data(:, randperm(size(data,2)));
% Split into train and test sets
% The last row of 'data' is the median home price.
train.X = data(1:end-1,1:400);
train.y = data(end,1:400);
test.X = data(1:end-1,401:end);
c = data(end,401:end);
m=size(train.X,2);
n=size(train.X,1);
% Initialize the coefficient vector theta to random values.
theta = rand(n,1);
X = test.X;
y = test.y;
theta_=zeros(size(theta));
delta =0.01; % convergence tolerance
alpha = 0.001; % learning rate
shift = 1000; % big number
iter = 0;
formatSpec = 'iteration: %d, error: %2.4f\n';
while (shift > delta)
iter = iter +1;
grad =zeros(size(theta));
for i = 1:m
grad = grad + (train.X(:,i)'*theta - train.y(i)).*train.X(:,i);
end
%theta_= theta-(alpha*(1/m)*((theta'*X-y)*X')');
theta_= theta - alpha*(1/m)*grad;
shift = norm(theta_ - theta);
fprintf(formatSpec, iter, shift);
theta = theta_;
clear theta_;
end
</code></pre>
<p>Here are the outputs of the first 10 iterations:</p>
<pre><code>iteration: 1, error: 151.0904
iteration: 2, error: 46418.0835
iteration: 3, error: 14260790.8611
iteration: 4, error: 4381270296.1843
iteration: 5, error: 1346035405942.4905
iteration: 6, error: 413535616743995.0000
iteration: 7, error: 127048445799311180.0000
iteration: 8, error: 39032448298191028000.0000
iteration: 9, error: 11991740714070305000000.0000
iteration: 10, error: 3684161553354467700000000.0000
</code></pre>
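<p>For reference, error growth by a roughly constant factor each iteration is the signature of a step size above the stability limit 2/lambda_max, where lambda_max is the largest eigenvalue of the Hessian (1/m)*X'X; a minimal NumPy sketch of that check (random data standing in for the trees set):</p>

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
X = np.column_stack([np.ones(n), rng.uniform(0.0, 30.0, n)])  # intercept + raw feature
y = 4.0 + 0.5 * X[:, 1] + rng.standard_normal(n)

H = X.T @ X / n                        # Hessian of the least-squares cost
lam_max = np.linalg.eigvalsh(H).max()
limit = 2.0 / lam_max                  # stability limit for the step size alpha

def shift_after(alpha, steps=10):
    """Norm of the last parameter update after `steps` iterations."""
    theta = np.zeros(2)
    shift = 0.0
    for _ in range(steps):
        theta_new = theta - alpha / n * (X @ theta - y) @ X
        shift = np.linalg.norm(theta_new - theta)
        theta = theta_new
    return shift

diverging = shift_after(2.0 * limit)   # above the limit: grows every step
converging = shift_after(0.5 * limit)  # below the limit: shrinks toward zero
```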
| 318
|
|
gradient descent implementation
|
Stochastic gradient descent implementation with Python's numpy
|
https://stackoverflow.com/questions/39975050/stochastic-gradient-descent-implementation-with-pythons-numpy
|
<p>I have to implement stochastic gradient descent using python numpy library. For that purpose I'm given the following function definitions:</p>
<pre><code>def compute_stoch_gradient(y, tx, w):
"""Compute a stochastic gradient for batch data."""
def stochastic_gradient_descent(
y, tx, initial_w, batch_size, max_epochs, gamma):
"""Stochastic gradient descent algorithm."""
</code></pre>
<p>I'm also given the following help function:</p>
<pre><code>def batch_iter(y, tx, batch_size, num_batches=1, shuffle=True):
"""
Generate a minibatch iterator for a dataset.
Takes as input two iterables (here the output desired values 'y' and the input data 'tx')
Outputs an iterator which gives mini-batches of `batch_size` matching elements from `y` and `tx`.
Data can be randomly shuffled to avoid ordering in the original data messing with the randomness of the minibatches.
Example of use :
for minibatch_y, minibatch_tx in batch_iter(y, tx, 32):
<DO-SOMETHING>
"""
data_size = len(y)
if shuffle:
shuffle_indices = np.random.permutation(np.arange(data_size))
shuffled_y = y[shuffle_indices]
shuffled_tx = tx[shuffle_indices]
else:
shuffled_y = y
shuffled_tx = tx
for batch_num in range(num_batches):
start_index = batch_num * batch_size
end_index = min((batch_num + 1) * batch_size, data_size)
if start_index != end_index:
yield shuffled_y[start_index:end_index], shuffled_tx[start_index:end_index]
</code></pre>
<p>I implemented the following two functions:</p>
<pre><code>def compute_stoch_gradient(y, tx, w):
"""Compute a stochastic gradient for batch data."""
e = y - tx.dot(w)
return (-1/y.shape[0])*tx.transpose().dot(e)
def stochastic_gradient_descent(y, tx, initial_w, batch_size, max_epochs, gamma):
"""Stochastic gradient descent algorithm."""
ws = [initial_w]
losses = []
w = initial_w
for n_iter in range(max_epochs):
for minibatch_y,minibatch_x in batch_iter(y,tx,batch_size):
w = ws[n_iter] - gamma * compute_stoch_gradient(minibatch_y,minibatch_x,ws[n_iter])
ws.append(np.copy(w))
loss = y - tx.dot(w)
losses.append(loss)
return losses, ws
</code></pre>
<p>I'm not sure whether the iteration should be done in range(max_epochs) or in a larger range. I say this because I read that an epoch is "each time we run through the entire data set". So I think an epoch consists of more than one iteration...</p>
|
<p>In a typical implementation, mini-batch gradient descent with batch size B should pick B data points from the dataset randomly and update the weights based on the gradients computed on this subset. This process continues many times until convergence or until some maximum-iteration threshold is reached. A mini-batch with B=1 is SGD, which can be noisy sometimes.</p>
<p>Along with the above comments, you may want to play with the batch size and the learning rate (step size), since they have a significant impact on the convergence rate of stochastic and mini-batch gradient descent.</p>
<p>The following plots show the impacts of these two parameters on the convergence rate of <code>SGD</code> with <code>logistic regression</code> while doing sentiment analysis on amazon product review dataset, an assignment that appeared in a coursera course on Machine Learning - Classification by the University of Washington:</p>
<p><a href="https://i.sstatic.net/fgmjw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fgmjw.png" alt="enter image description here"></a>
<a href="https://i.sstatic.net/uNdgF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uNdgF.png" alt="enter image description here"></a></p>
<p>For more detailed information on this you may refer to <a href="https://sandipanweb.wordpress.com/2017/03/31/online-learning-sentiment-analysis-with-logistic-regression-via-stochastic-gradient-ascent/?frame-nonce=987e584e16" rel="nofollow noreferrer">https://sandipanweb.wordpress.com/2017/03/31/online-learning-sentiment-analysis-with-logistic-regression-via-stochastic-gradient-ascent/?frame-nonce=987e584e16</a></p>
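<p>To make the epoch/iteration distinction concrete, here is a minimal NumPy sketch (a simplified stand-in for the assignment's <code>batch_iter</code>) in which each epoch is one shuffled pass over all mini-batches:</p>

```python
import numpy as np

def stochastic_gradient_descent(y, tx, initial_w, batch_size, max_epochs, gamma, seed=0):
    """Each epoch is one shuffled pass over *all* mini-batches of the data."""
    rng = np.random.default_rng(seed)
    w = initial_w.copy()
    n = len(y)
    for _ in range(max_epochs):
        idx = rng.permutation(n)               # reshuffle every epoch
        for start in range(0, n, batch_size):  # visit every batch once per epoch
            b = idx[start:start + batch_size]
            e = y[b] - tx[b] @ w               # residuals on the mini-batch
            grad = -tx[b].T @ e / len(b)       # MSE gradient on the batch
            w = w - gamma * grad
    return w

tx = np.column_stack([np.ones(100), np.linspace(0.0, 1.0, 100)])
y = tx @ np.array([1.0, 2.0])                  # noiseless linear targets
w = stochastic_gradient_descent(y, tx, np.zeros(2), batch_size=10,
                                max_epochs=500, gamma=0.1)
```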
| 319
|
gradient descent implementation
|
Having trouble making update in gradient descent implementation?
|
https://stackoverflow.com/questions/66874237/having-trouble-making-update-in-gradient-descent-implementation
|
<p>Hi, I am working on implementing gradient descent with backtracking line search. However, when I try to update f(x0), the value doesn't change. Could it be something going on with the lambda expressions? I am not too familiar with them.</p>
<pre><code>import numpy as np
import math
alpha = 0.1
beta = 0.6
f = lambda x: math.exp(x[0] + 3*x[1] - 0.1) + math.exp(x[0] - 3*x[1] -0.1) + math.exp(-1*x[0] - 0.1)
dfx1 = lambda x: math.exp(x[0] + 3*x[1] - 0.1) + math.exp(x[0] - 3*x[1] -0.1) - math.exp(-x[0] - 0.1)
dfx2 = lambda x: 3*math.exp(x[0] + 3*x[1] - 0.1) - 3*math.exp(x[0] - 3*x[1] -0.1)
t = 1
count = 1
x0 = np.array([1.0,1.0])
dx0 = np.array([1e-3, 1e-3])
x = []
d = np.array([-1*dfx1(x0),-1*dfx2(x0)]);
grad = np.array([1*dfx1(x0),1*dfx2(x0)])
def backtrack(x0, dfx1, dfx2, t, alpha, beta, count):
while (f(x0 + t*d) > f(x0) + alpha*t*np.dot(d,grad) or count < 50 ):
d[0] = -1*dfx1(x0);
d[1] = -1*dfx2(x0);
grad[0] = dfx1(x0);
grad[1] = dfx2(x0);
x0[0] = x0[0] + t*d[0];
x0[1] = x0[1] + t*d[1];
t *= beta;
count += 1
x.append(f(x0));
return t
t = backtrack(x0, dfx1, dfx2, t, alpha, beta,count)
print("\nfinal step size :", t)
print(np.log(x))
print(f(x0))
</code></pre>
|
<p><strong>Update for new code</strong></p>
<p>OK, your numbers now change too much!</p>
<p>When writing these routines stepping through the code with a debugger is really useful to check that the code is doing what you want. In this case you'll see that on the second pass through the loop <code>x0 = [-1.32e+170, 3.96e+170]</code>. Taking the exponential of that is what is causing the problem. If you don't have a debugger, try printing some values.</p>
<p>What can you do to fix it? One fix is to reduce <code>t</code>. Starting with <code>t=1e-3</code> stops the problem.</p>
<p>However, I suspect that you have more issues in store. I don't know the technique you are trying to implement, but I suspect that <code>t</code> shouldn't just reduce by a fixed ratio each step, and the <code>np.dot(...)</code> term with anti-parallel vectors also looks suspect.</p>
<p>If the implementation is not the point of this, is there a library function that does what you want? For example, if you just want to minimise <code>f</code> from the starting point <code>1, 1</code>, see the <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow noreferrer">minimize function in SciPy</a>.</p>
<pre><code>import scipy.optimize
import numpy as np
def get_exponentials(x):
a = np.exp(x[0] + 3 * x[1] - 0.1)
b = np.exp(x[0] - 3 * x[1] - 0.1)
c = np.exp(-x[0] - 0.1)
return a, b, c
def dfdx(x):
a, b, c = get_exponentials(x)
return np.array([a + b - c, 3 * (a - b)])
def f(x):
a, b, c = get_exponentials(x)
return a + b + c
scipy.optimize.minimize(f, np.array([1.0, 1.0]), jac=dfdx)
# x = [-3.46573682e-01, -5.02272755e-08], f = 2.559267
</code></pre>
<p>If line search is particularly important to you, there is <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.line_search.html" rel="nofollow noreferrer">scipy.optimize.line_search</a>:</p>
<pre><code>scipy.optimize.line_search(f, dfdx, np.array([1.0, 1.0]), np.array([-1.0, -1.0]))
# alpha 1.0,
# number of function calls 2,
# number of gradient calls 1,
# new function value 2.7145122541078788,
# old function value 49.85777661748123,
# new gradient [0.90483742, 0.]
</code></pre>
<p>Finally - you said initially that you weren't happy with the lambdas - there is no need to use them at all; they are not normally used to create named functions. See <a href="https://stackoverflow.com/questions/134626/which-is-more-preferable-to-use-lambda-functions-or-nested-functions-def">this question on lambdas vs. named functions</a> for more of a discussion.</p>
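<p>A minimal sketch of that last point (hypothetical names):</p>

```python
# Preferred: a named function via def -- it gets a real __name__,
# supports docstrings, and is easier to debug.
def square(x):
    return x * x

# Works, but style guides (PEP 8) discourage it: binding a lambda to a
# name gives up the only advantage a lambda has (being anonymous).
square_lambda = lambda x: x * x

print(square(4), square_lambda(4))  # 16 16
```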
| 320
|
gradient descent implementation
|
Implementing gradient descent with Scala and Breeze - error : could not find implicit value for parameter op:
|
https://stackoverflow.com/questions/37624288/implementing-gradient-descent-with-scala-and-breeze-error-could-not-find-imp
|
<p>I'm attempting to apply a gradient descent implementation in Scala and breeze based on Octave from : <a href="https://stackoverflow.com/questions/10591343/gradient-descent-implementation-in-octave">Gradient Descent implementation in octave</a> </p>
<p>The octave code I'm attempting to re-write is :</p>
<pre><code>theta = theta -((1/m) * ((X * theta) - y)' * X)' * alpha;
</code></pre>
<p>I've come up with :</p>
<pre><code> val xv = DenseVector[Double](1.0, 1.0)
val yv = DenseVector[Double](1.0, 1.0)
val mymatrix : DenseMatrix[Double] = DenseMatrix( (1.0,2.0) , (3.0,4.0) )
val myvalue = (mymatrix - ((1 / m) * (( (xv * mymatrix - yv).t * xv).t * .0001)
</code></pre>
<p>but im receiving a compile time error : </p>
<pre><code>Multiple markers at this line:
◾could not find implicit value for parameter op: breeze.linalg.operators.OpSub.Impl2[breeze.linalg.DenseMatrix[Double],breeze.linalg.DenseVector[Double],That]
◾not enough arguments for method -: (implicit op: breeze.linalg.operators.OpSub.Impl2[breeze.linalg.DenseMatrix[Double],breeze.linalg.DenseVector[Double],That])That. Unspecified value parameter op.
</code></pre>
<p>Have I implemented gradient descent correctly using Scala and Breeze ?</p>
<p>It seems I need to provide an implicit for <code>-</code> operator ?</p>
|
<pre><code> val myvalue = (mymatrix - ((1 / m) * (( (xv * mymatrix - yv).t * xv).t * .0001)
</code></pre>
<p><code>xv</code> is a <code>DenseVector</code> and <code>mymatrix</code> is a <code>DenseMatrix</code>; subtracting a vector from a matrix like this is <strong>unsupported</strong> in Breeze, which is why the compiler cannot find an implicit for the <code>-</code> operator. That is the error you are encountering.</p>
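<p>For reference, the Octave update line maps onto a vector/matrix formulation like this (a NumPy sketch with hypothetical 2x2 data, not Breeze code; shown only to make the required shapes explicit):</p>

```python
import numpy as np

X = np.array([[1.0, 2.0], [3.0, 4.0]])  # m x n design matrix
y = np.array([1.0, 1.0])                # length-m target vector
theta = np.array([1.0, 1.0])            # length-n parameter vector
alpha, m = 0.0001, len(y)

# Octave: theta = theta - ((1/m) * ((X * theta) - y)' * X)' * alpha
theta = theta - (1.0 / m) * ((X @ theta - y) @ X) * alpha

print(theta.shape)  # the update keeps theta a length-n vector
```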
| 321
|
gradient descent implementation
|
Why I'm getting a huge cost in Stochastic Gradient Descent Implementation?
|
https://stackoverflow.com/questions/64791138/why-im-getting-a-huge-cost-in-stochastic-gradient-descent-implementation
|
<p>I've run into some problems while trying to implement Stochastic Gradient Descent, and basically what is happening is that my cost is growing like crazy and I don't have a clue why.</p>
<p>MSE implementation:</p>
<pre><code>def mse(x,y,w,b):
predictions = x @ w
summed = (np.square(y - predictions - b)).mean(0)
cost = summed / 2
return cost
</code></pre>
<p>Gradients:</p>
<pre><code>def grad_w(y,x,w,b,n_samples):
return -y @ x / n_samples + x.T @ x @ w / n_samples + b * x.mean(0)
def grad_b(y,x,w,b,n_samples):
return -y.mean(0) + x.mean(0) @ w + b
</code></pre>
<p>SGD Implementation:</p>
<pre><code>def stochastic_gradient_descent(X,y,w,b,learning_rate=0.01,iterations=500,batch_size =100):
length = len(y)
cost_history = np.zeros(iterations)
n_batches = int(length/batch_size)
for it in range(iterations):
cost =0
indices = np.random.permutation(length)
X = X[indices]
y = y[indices]
for i in range(0,length,batch_size):
X_i = X[i:i+batch_size]
y_i = y[i:i+batch_size]
w -= learning_rate*grad_w(y_i,X_i,w,b,length)
b -= learning_rate*grad_b(y_i,X_i,w,b,length)
cost = mse(X_i,y_i,w,b)
cost_history[it] = cost
if cost_history[it] <= 0.0052: break
return w, cost_history[:it]
</code></pre>
<p>Random Variables:</p>
<pre><code>w_true = np.array([0.2, 0.5,-0.2])
b_true = -1
first_feature = np.random.normal(0,1,1000)
second_feature = np.random.uniform(size=1000)
third_feature = np.random.normal(1,2,1000)
arrays = [first_feature,second_feature,third_feature]
x = np.stack(arrays,axis=1)
y = x @ w_true + b_true + np.random.normal(0,0.1,1000)
w = np.asarray([0.0,0.0,0.0], dtype='float64')
b = 1.0
</code></pre>
<p>After running this:</p>
<pre><code>theta,cost_history = stochastic_gradient_descent(x,y,w,b)
print('Final cost/MSE: {:0.3f}'.format(cost_history[-1]))
</code></pre>
<p>I Get that:</p>
<pre><code>Final cost/MSE: 3005958172614261248.000
</code></pre>
<p>And here is the <a href="https://i.sstatic.net/EbRVR.png" rel="nofollow noreferrer">plot</a></p>
|
<p>Here are a few suggestions:</p>
<ul>
<li>your learning rate is too big for the training: changing it to something like 1e-3 should be fine.</li>
<li>your update part could be slightly modified as follows:</li>
</ul>
<pre><code>def stochastic_gradient_descent(X,y,w,b,learning_rate=0.01,iterations=500,batch_size =100):
length = len(y)
cost_history = np.zeros(iterations)
n_batches = int(length/batch_size)
for it in range(iterations):
cost =0
indices = np.random.permutation(length)
X = X[indices]
y = y[indices]
for i in range(0,length,batch_size):
X_i = X[i:i+batch_size]
y_i = y[i:i+batch_size]
w -= learning_rate*grad_w(y_i,X_i,w,b,len(X_i)) # the denominator should be the actual batch size
b -= learning_rate*grad_b(y_i,X_i,w,b,len(X_i))
cost += mse(X_i,y_i,w,b)*len(X_i) # add batch loss
cost_history[it] = cost/length # this is a running average of your batch losses, which is statistically more stable
if cost_history[it] <= 0.0052: break
return w, b, cost_history[:it]
</code></pre>
<p>The final results:</p>
<pre><code>w_true = np.array([0.2, 0.5, -0.2])
b_true = -1
first_feature = np.random.normal(0,1,1000)
second_feature = np.random.uniform(size=1000)
third_feature = np.random.normal(1,2,1000)
arrays = [first_feature,second_feature,third_feature]
x = np.stack(arrays,axis=1)
y = x @ w_true + b_true + np.random.normal(0,0.1,1000)
w = np.asarray([0.0,0.0,0.0], dtype='float64')
b = 0.0
theta,bias,cost_history = stochastic_gradient_descent(x,y,w,b,learning_rate=1e-3,iterations=3000)
print("Final epoch cost/MSE: {:0.3f}".format(cost_history[-1]))
print("True final cost/MSE: {:0.3f}".format(mse(x,y,theta,bias)))
print(f"Final coefficients:\n{theta,bias}")
</code></pre>
<p><a href="https://i.sstatic.net/goM6N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/goM6N.png" alt="enter image description here" /></a></p>
| 322
|
gradient descent implementation
|
Issue with gradient descent implementation of linear regression
|
https://stackoverflow.com/questions/41150457/issue-with-gradient-descent-implementation-of-linear-regression
|
<p>I am taking <a href="https://www.coursera.org/learn/ml-regression" rel="nofollow noreferrer">this Coursera class</a> on machine learning / linear regression. Here is how they describe the gradient descent algorithm for solving for the estimated OLS coefficients:</p>
<p><a href="https://i.sstatic.net/QB9TY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QB9TY.png" alt="enter image description here"></a></p>
<p>So they use <code>w</code> for the coefficients, <code>H</code> for the design matrix (or features as they call it), and <code>y</code> for the dependent variable. And their convergence criteria is the usual of the norm of the gradient of RSS being less than tolerance epsilon; that is, their definition of "not converged" is:</p>
<p><a href="https://i.sstatic.net/FW1wH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FW1wH.png" alt="enter image description here"></a></p>
<p>I am having trouble getting this algorithm to converge and was wondering if I was overlooking something in my implementation. Below is the code. Please note that I also ran the sample dataset I use in it (<code>df</code>) through the <a href="http://statsmodels.sourceforge.net/devel/regression.html" rel="nofollow noreferrer">statsmodels regression library</a>, just to see that a regression could converge and to get coefficient values to tie out with. It did and they were:</p>
<pre><code>Intercept 4.344435
x1 4.387702
x2 0.450958
</code></pre>
<p>Here is my implementation. At each iteration, it prints the norm of the gradient of RSS:</p>
<pre><code>import numpy as np
import numpy.linalg as LA
import pandas as pd
from pandas import DataFrame
# First define the grad function: grad(RSS) = -2H'(y-Hw)
def grad_rss(df, var_name_y, var_names_h, w):
# Set up feature matrix H
H = DataFrame({"Intercept" : [1 for i in range(0,len(df))]})
for var_name_h in var_names_h:
H[var_name_h] = df[var_name_h]
# Set up y vector
y = df[var_name_y]
# Calculate the gradient of the RSS: -2H'(y - Hw)
result = -2 * np.transpose(H.values) @ (y.values - H.values @ w)
return result
def ols_gradient_descent(df, var_name_y, var_names_h, epsilon = 0.0001, eta = 0.05):
# Set all initial w values to 0.0001 (not related to our choice of epsilon)
w = np.array([0.0001 for i in range(0, len(var_names_h) + 1)])
# Iteration counter
t = 0
# Basic algorithm: keep subtracting eta * grad(RSS) from w until
# ||grad(RSS)|| < epsilon.
while True:
t = t + 1
grad = grad_rss(df, var_name_y, var_names_h, w)
norm_grad = LA.norm(grad)
if norm_grad < epsilon:
break
else:
print("{} : {}".format(t, norm_grad))
w = w - eta * grad
if t > 10:
raise Exception ("Failed to converge")
return w
# ##########################################
df = DataFrame({
"y" : [20,40,60,80,100] ,
"x1" : [1,5,7,9,11] ,
"x2" : [23,29,60,85,99]
})
# Run
ols_gradient_descent(df, "y", ["x1", "x2"])
</code></pre>
<p>Unfortunately this does not converge, and in fact prints a norm that is exploding with each iteration:</p>
<pre><code>1 : 44114.31506051333
2 : 98203544.03067812
3 : 218612547944.95386
4 : 486657040646682.9
5 : 1.083355358314664e+18
6 : 2.411675439503567e+21
7 : 5.368670935963926e+24
8 : 1.1951287949674022e+28
9 : 2.660496151835357e+31
10 : 5.922574875391406e+34
11 : 1.3184342751414824e+38
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
......
Exception: Failed to converge
</code></pre>
<p>If I increase the maximum number of iterations enough, it doesn't converge, but just blows out to infinity. </p>
<p>Is there an implementation error here, or am I misinterpreting the explanation in the class notes?</p>
<h2>Updated w/ Answer</h2>
<p>As @Kant suggested, the <code>eta</code> needs to updated at each iteration. The course itself had some sample formulas for this but none of them helped in the convergence. <a href="https://en.wikipedia.org/wiki/Gradient_descent#Description" rel="nofollow noreferrer">This section of the Wikipedia page about gradient descent</a> mentions the <a href="http://www.math.ucla.edu/~wotaoyin/math164/slides/wotao_yin_optimization_lec09_Baizilai_Borwein_method.pdf" rel="nofollow noreferrer">Barzilai-Borwein approach</a> as a good way of updating the <code>eta</code>. I implemented it and altered my code to update the <code>eta</code> with it at each iteration, and the regression converged successfully. Below is my translation of the Wikipedia version of the formula to the variables used in regression, as well as code that implements it. Again, this code is called in the loop of my original <code>ols_gradient_descent</code> to update the <code>eta</code>.</p>
<p><a href="https://i.sstatic.net/90N26.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/90N26.png" alt="enter image description here"></a></p>
<pre><code>def eta_t (w_t, w_t_minus_1, grad_t, grad_t_minus_1):
delta_w = w_t - w_t_minus_1
delta_grad = grad_t - grad_t_minus_1
eta_t = (delta_w.T @ delta_grad) / (LA.norm(delta_grad))**2
return eta_t
</code></pre>
|
<p>Try decreasing the value of eta. Gradient descent can diverge if eta is too high.</p>
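<p>The divergence is easy to reproduce on a one-dimensional quadratic (a standalone sketch, unrelated to the dataset above):</p>

```python
def descend(eta, steps=30, x0=10.0):
    # minimize f(x) = x**2, whose gradient is 2*x
    x = x0
    for _ in range(steps):
        x -= eta * 2 * x
    return x

small = descend(0.1)  # per-step factor |1 - 2*eta| = 0.8 < 1 -> converges
large = descend(1.1)  # per-step factor |1 - 2*eta| = 1.2 > 1 -> diverges
print(abs(small) < 0.1, abs(large) > 100)  # True True
```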
| 323
|
gradient descent implementation
|
How to implement stochastic gradient descent
|
https://stackoverflow.com/questions/55880425/how-to-implement-stochastic-gradient-descent
|
<p>In stochastic gradient descent, we often consider the objective function as a sum of a finite number of functions: </p>
<pre><code> f(x)=∑fi(x) where i = 1 : n
</code></pre>
<p>At each iteration, rather than computing the gradient <code>∇f(x)</code>, stochastic gradient descent randomly samples <code>i</code> at uniform and computes <code>∇fi(x)</code> instead. </p>
<p>The insight is that stochastic gradient descent uses <code>∇fi(x)</code> as an unbiased estimator of <code>∇f(x)</code>.</p>
<p>We update <code>x</code> as : <code>x:=x−η∇fi(x)</code> where <code>η</code> is the learning step. </p>
<p>I found difficulties implementing this in R for an optimization problem.</p>
<pre><code>stoc_grad<-function(){
# set up a stepsize
alpha = 0.1
# set up a number of iteration
iter = 30
# define the objective function f(x) = sqrt(2+x)+sqrt(1+x)+sqrt(3+x)
objFun = function(x) return(sqrt(2+x)+sqrt(1+x)+sqrt(3+x))
# define the gradient of f(x) = sqrt(2+x)+sqrt(1+x)+sqrt(3+x)
gradient_1 = function(x) return(1/2*sqrt(2+x))
gradient_2 = function(x) return(1/2*sqrt(3+x))
gradient_3 = function(x) return(1/2*sqrt(1+x))
x = 1
# create a vector to contain all xs for all steps
x.All = numeric(iter)
# gradient descent method to find the minimum
for(i in seq_len(iter)){
x = x - alpha*gradient_1(x)
x = x - alpha*gradient_2(x)
x = x - alpha*gradient_3(x)
x.All[i] = x
print(x)
}
# print result and plot all xs for every iteration
print(paste("The minimum of f(x) is ", objFun(x), " at position x = ", x, sep = ""))
plot(x.All, type = "l")
}
</code></pre>
<p>Algorithm pseudo-code :
<a href="https://i.sstatic.net/4KPmN.png" rel="nofollow noreferrer">Find pseudo-code here</a></p>
<p>In fact , I want to test this algorithm for optimization of a test function like Three-hump camel function. </p>
<p><a href="https://en.wikipedia.org/wiki/Test_functions_for_optimization" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Test_functions_for_optimization</a></p>
<p>Other example : </p>
<p><a href="https://i.sstatic.net/nJm09.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>It seems there is a lot of confusion going on here for you. Here are the two main things I see wrong so far, in order of importance:</p>
<ol>
<li>Stochastic gradient descent is used when you have a lot of data, for which evaluating the objective function for all training observations at each iteration is computationally expensive. That is not the problem you're working on. See a great, short primer <a href="http://deeplearning.stanford.edu/tutorial/supervised/OptimizationStochasticGradientDescent/" rel="nofollow noreferrer">here</a></li>
<li>When your parameter has bounded support, as here where x ≥ -1, you're going to have problems unless you guard against propagation of <code>NaN</code>s.</li>
</ol>
<p>Here is a gradient descent implementation that will work for your problem (I've added code comments on the important changes):</p>
<pre><code># Having the number of iterations, step size, and start value be parameters the
# user can alter (with sane default values) I think is a better approach than
# hard coding them in the body of the function
grad<-function(iter = 30, alpha = 0.1, x_init = 1){
# define the objective function f(x) = sqrt(2+x)+sqrt(1+x)+sqrt(3+x)
objFun = function(x) return(sqrt(2+x)+sqrt(1+x)+sqrt(3+x))
# define the gradient of f(x) = sqrt(2+x)+sqrt(1+x)+sqrt(3+x)
# Note we don't split up the gradient here
gradient <- function(x) {
result <- 1 / (2 * sqrt(2 + x))
result <- result + 1 / (2 * sqrt(1 + x))
result <- result + 1 / (2 * sqrt(3 + x))
return(result)
}
x <- x_init
# create a vector to contain all xs for all steps
x.All = numeric(iter)
# gradient descent method to find the minimum
for(i in seq_len(iter)){
# Guard against NaNs
tmp <- x - alpha * gradient(x)
if ( !is.nan(suppressWarnings(objFun(tmp))) ) {
x <- tmp
}
x.All[i] = x
print(x)
}
# print result and plot all xs for every iteration
print(paste("The minimum of f(x) is ", objFun(x), " at position x = ", x, sep = ""))
plot(x.All, type = "l")
}
</code></pre>
<p>As I said before, we know the analytical solution to your minimization problem: x = -1. So, let's see how it works:</p>
<pre><code>grad()
[1] 0.9107771
[1] 0.8200156
[1] 0.7275966
...
[1] -0.9424109
[1] -0.9424109
[1] "The minimum of f(x) is 2.70279857718352 at position x = -0.942410938107257"
</code></pre>
<p><a href="https://i.sstatic.net/dxitQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/dxitQ.png" alt="enter image description here"></a></p>
| 324
|
gradient descent implementation
|
Gradient descent impementation python - contour lines
|
https://stackoverflow.com/questions/50723432/gradient-descent-impementation-python-contour-lines
|
<p>As a self study exercise I am trying to implement gradient descent on a linear regression problem from scratch and plot the resulting iterations on a contour plot. </p>
<p>My gradient descent implementation gives the correct result (tested with Sklearn); however, the gradient descent plot doesn't seem to be <strong>perpendicular</strong> to the contour lines. Is this expected, or did I get something wrong in my code / understanding? </p>
<h3>Algorithm</h3>
<p><a href="https://i.sstatic.net/lxwSs.png" rel="noreferrer"><img src="https://i.sstatic.net/lxwSs.png" alt="enter image description here"></a></p>
<h3>Cost function and gradient descent</h3>
<pre><code>import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
def costfunction(X,y,theta):
m = np.size(y)
#Cost function in vectorized form
h = X @ theta
J = float((1./(2*m)) * (h - y).T @ (h - y));
return J;
def gradient_descent(X,y,theta,alpha = 0.0005,num_iters=1000):
#Initialisation of useful values
m = np.size(y)
J_history = np.zeros(num_iters)
theta_0_hist, theta_1_hist = [], [] #For plotting afterwards
for i in range(num_iters):
#Grad function in vectorized form
h = X @ theta
theta = theta - alpha * (1/m)* (X.T @ (h-y))
#Cost and intermediate values for each iteration
J_history[i] = costfunction(X,y,theta)
theta_0_hist.append(theta[0,0])
theta_1_hist.append(theta[1,0])
return theta,J_history, theta_0_hist, theta_1_hist
</code></pre>
<h3>Plot</h3>
<pre><code>#Creating the dataset (as previously)
x = np.linspace(0,1,40)
noise = 1*np.random.uniform( size = 40)
y = np.sin(x * 1.5 * np.pi )
y_noise = (y + noise).reshape(-1,1)
X = np.vstack((np.ones(len(x)),x)).T
#Setup of meshgrid of theta values
T0, T1 = np.meshgrid(np.linspace(-1,3,100),np.linspace(-6,2,100))
#Computing the cost function for each theta combination
zs = np.array( [costfunction(X, y_noise.reshape(-1,1),np.array([t0,t1]).reshape(-1,1))
for t0, t1 in zip(np.ravel(T0), np.ravel(T1)) ] )
#Reshaping the cost values
Z = zs.reshape(T0.shape)
#Computing the gradient descent
theta_result,J_history, theta_0, theta_1 = gradient_descent(X,y_noise,np.array([0,-6]).reshape(-1,1),alpha = 0.3,num_iters=1000)
#Angles needed for quiver plot
anglesx = np.array(theta_0)[1:] - np.array(theta_0)[:-1]
anglesy = np.array(theta_1)[1:] - np.array(theta_1)[:-1]
%matplotlib inline
fig = plt.figure(figsize = (16,8))
#Surface plot
ax = fig.add_subplot(1, 2, 1, projection='3d')
ax.plot_surface(T0, T1, Z, rstride = 5, cstride = 5, cmap = 'jet', alpha=0.5)
ax.plot(theta_0,theta_1,J_history, marker = '*', color = 'r', alpha = .4, label = 'Gradient descent')
ax.set_xlabel('theta 0')
ax.set_ylabel('theta 1')
ax.set_zlabel('Cost function')
ax.set_title('Gradient descent: Root at {}'.format(theta_result.ravel()))
ax.view_init(45, 45)
#Contour plot
ax = fig.add_subplot(1, 2, 2)
ax.contour(T0, T1, Z, 70, cmap = 'jet')
ax.quiver(theta_0[:-1], theta_1[:-1], anglesx, anglesy, scale_units = 'xy', angles = 'xy', scale = 1, color = 'r', alpha = .9)
plt.show()
</code></pre>
<h3>Surface and contour plots</h3>
<p><a href="https://i.sstatic.net/9VjDM.png" rel="noreferrer"><img src="https://i.sstatic.net/9VjDM.png" alt="enter image description here"></a></p>
<h3>Comments</h3>
<p>My understanding is that the gradient descent follow contour lines perpendicularly. Is this not the case ? Thanks</p>
|
<p>The problem with the contour graph is that the scales of theta0 and theta1 are different. Just add "plt.axis('equal')" to the contour plot instructions and you will see that the gradient descent is in fact perpendicular to the contour lines.</p>
<p><a href="https://i.sstatic.net/Gwzeq.png" rel="nofollow noreferrer">Contour graph with same scales in both axis</a></p>
| 325
|
gradient descent implementation
|
What is wrong with my gradient descent implementation for Linear regression in Python
|
https://stackoverflow.com/questions/69071325/what-is-wrong-with-my-gradient-descent-implementation-for-linear-regression-in-p
|
<p>I've been trying to refresh my fundamental knowledge of stats, so I tried to implement simple linear regression in python using gradient descent.</p>
<p>The code correctly minimizes the MSE, however it does so by making a straight horizontal line across the dataset
<a href="https://i.sstatic.net/6lR4c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6lR4c.png" alt="enter image description here" /></a></p>
<p>I'm not sure if this is a hyper-parameter issue, if it is I can't find good values for it, or its because an error in code.</p>
<p>I have been suspecting that this is due to the error simply being train_y - pred_y
so the errors cancel each other out on each side.</p>
<p>I tried to use absolute error or mse error however I'm not sure how the derivative would work in this case, or if this is the right way to go.</p>
<p>The first function is the gradient descent implementation the rest is for the animation.</p>
<pre><code>
def gradientDescent(train_X,train_y,lr,epochs,init_slope=1,init_intercept=0.1):
if len(train_X) != len(train_y):
raise Exception("train_X and train_Y must be the same length.")
train_X = np.array(train_X)
train_y = np.array(train_y)
n = len(train_X)
slope = init_slope
intercept = init_intercept
for e in range(epochs):
pred_y = slope * train_X + intercept
errors = (train_y - pred_y)
mse = errors.mean()
slope -= lr * (-2/n) * np.sum(train_X * errors)
intercept -= lr * (-2/n) * np.sum(errors)
return mse, slope, intercept
def generateLine(slope,intercept,start,end):
x = [i for i in range(int(start),int(end))]
y = [slope*xi + intercept for xi in x]
return x, y
import matplotlib.pyplot as plt
import matplotlib.animation as animation
from sklearn import datasets
n_samples = 10
n_outliers = 2
train_X, train_y, coef = datasets.make_regression(n_samples=n_samples, n_features=1,
n_informative=1, noise=20,
coef=True, random_state=0)
lr = 0.03
epochs = 1000
fig, ax = plt.subplots()
line, = ax.plot([0], [0])
plt.plot(train_X,train_y,'o')
slope=0
intercept = 0
def animate(i):
global slope, intercept
if i == 1:
mse, slope, intercept = gradientDescent(train_X,train_y,lr=lr, epochs = 1,init_slope= np.random.random(), init_intercept=np.random.random())
mse, slope, intercept = gradientDescent(train_X,train_y,lr=lr,epochs=1,init_slope = slope, init_intercept = intercept)
line_x, line_y = generateLine(slope, intercept,min(min(train_X),min(train_y)),max(max(train_X),max(train_y)))
line.set_xdata(line_x)
line.set_ydata(line_y)
if i % 10 == 0:
print(f'Epoch: {i}, MSE = {mse:.6f}')
return line,
def init():
line.set_ydata([0])
return line,
ani = animation.FuncAnimation(fig, animate, frames=range(1, epochs), init_func=init, interval=100, blit=True)
plt.show()
</code></pre>
|
<p>I found the answer: the issue was that the shape of <code>train_X</code> was (10, 1), so when I multiplied it with the errors to find the derivative of the slope, broadcasting produced the wrong result.</p>
<p>I fixed it by flattening <code>train_X</code>.</p>
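<p>The broadcasting pitfall behind this can be shown in a few lines (a standalone NumPy sketch with dummy data):</p>

```python
import numpy as np

train_X = np.ones((10, 1))  # 2-D column, as sklearn's make_regression returns
errors = np.ones(10)        # 1-D residuals

# (10, 1) * (10,) broadcasts to a (10, 10) matrix, so the summed
# "gradient" is ten times too large here:
bad = np.sum(train_X * errors)

# Flattening keeps everything 1-D and gives the intended element-wise product:
good = np.sum(train_X.flatten() * errors)

print((train_X * errors).shape, bad, good)  # (10, 10) 100.0 10.0
```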
| 326
|
gradient descent implementation
|
How to implement multivariate linear stochastic gradient descent algorithm in tensorflow?
|
https://stackoverflow.com/questions/36031324/how-to-implement-multivariate-linear-stochastic-gradient-descent-algorithm-in-te
|
<p>I started with simple implementation of single variable linear gradient descent but don't know to extend it to multivariate stochastic gradient descent algorithm ?</p>
<p>Single variable linear regression </p>
<pre><code>import tensorflow as tf
import numpy as np
# create random data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.5
# Find values for W that compute y_data = W * x_data
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
y = W * x_data
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# Before starting, initialize the variables
init = tf.initialize_all_variables()
# Launch the graph.
sess = tf.Session()
sess.run(init)
# Fit the line.
for step in xrange(2001):
sess.run(train)
if step % 200 == 0:
print(step, sess.run(W))
</code></pre>
|
<p>You have two part in your question:</p>
<ul>
<li>How to change this problem to a higher dimension space.</li>
<li>How to change from the batch gradient descent to a stochastic gradient descent.</li>
</ul>
<p>To get a higher dimensional setting, you can define your linear problem <code>y = <x, w></code>. Then, you just need to change the dimension of your Variable <code>W</code> to match the one of <code>w</code> and replace the multiplication <code>W*x_data</code> by a scalar product <code>tf.matmul(x_data, W)</code> and your code should run just fine.</p>
<p>To change the learning method to a stochastic gradient descent, you need to abstract the input of your cost function by using <code>tf.placeholder</code>.<br>
Once you have defined <code>X</code> and <code>y_</code> to hold your input at each step, you can construct the same cost function. Then, you need to call your step by feeding the proper mini-batch of your data.</p>
<p>Here is an example of how you could implement such behavior and it should show that <code>W</code> quickly converges to <code>w</code>.</p>
<pre><code>import tensorflow as tf
import numpy as np
# Define dimensions
d = 10 # Size of the parameter space
N = 1000 # Number of data sample
# create random data
w = .5*np.ones(d)
x_data = np.random.random((N, d)).astype(np.float32)
y_data = x_data.dot(w).reshape((-1, 1))
# Define placeholders to feed mini_batches
X = tf.placeholder(tf.float32, shape=[None, d], name='X')
y_ = tf.placeholder(tf.float32, shape=[None, 1], name='y')
# Find values for W that compute y_data = <x, W>
W = tf.Variable(tf.random_uniform([d, 1], -1.0, 1.0))
y = tf.matmul(X, W, name='y_pred')
# Minimize the mean squared errors.
loss = tf.reduce_mean(tf.square(y_ - y))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)
# Before starting, initialize the variables
init = tf.initialize_all_variables()
# Launch the graph.
sess = tf.Session()
sess.run(init)
# Fit the line.
mini_batch_size = 100
n_batch = N // mini_batch_size + (N % mini_batch_size != 0)
for step in range(2001):
i_batch = (step % n_batch)*mini_batch_size
batch = x_data[i_batch:i_batch+mini_batch_size], y_data[i_batch:i_batch+mini_batch_size]
sess.run(train, feed_dict={X: batch[0], y_: batch[1]})
if step % 200 == 0:
print(step, sess.run(W))
</code></pre>
<p>Two side notes: </p>
<ul>
<li><p>The implementation above is called mini-batch gradient descent, as at each step the gradient is computed using a subset of the data of size <code>mini_batch_size</code>. This is a variant of stochastic gradient descent that is usually used to stabilize the estimation of the gradient at each step. Plain stochastic gradient descent can be obtained by setting <code>mini_batch_size = 1</code>.</p></li>
<li><p>The dataset can be shuffled at every epoch to get an implementation closer to the theoretical setting. Some recent work also considers using only one pass through the dataset, as it prevents over-fitting. For a more mathematical and detailed explanation, see <a href="http://research.microsoft.com/pubs/192769/tricks-2012.pdf" rel="noreferrer">Bottou12</a>. This can easily be changed according to your problem setup and the statistical properties you are looking for.</p></li>
</ul>
| 327
|
gradient descent implementation
|
Implementing a linear regression using gradient descent
|
https://stackoverflow.com/questions/58996556/implementing-a-linear-regression-using-gradient-descent
|
<p>I'm trying to implement a linear regression with gradient descent as explained in this article (<a href="https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931" rel="nofollow noreferrer">https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931</a>).
I've followed to the letter the implementation, yet my results overflow after a few iterations.
I'm trying to get this result approximately: y = -0.02x + 8499.6.</p>
<p>The code:</p>
<pre><code>package main
import (
"encoding/csv"
"fmt"
"strconv"
"strings"
)
const (
iterations = 1000
learningRate = 0.0001
)
func computePrice(m, x, c float64) float64 {
return m * x + c
}
func computeThetas(data [][]float64, m, c float64) (float64, float64) {
N := float64(len(data))
dm, dc := 0.0, 0.0
for _, dataField := range data {
x := dataField[0]
y := dataField[1]
yPred := computePrice(m, x, c)
dm += (y - yPred) * x
dc += y - yPred
}
dm *= -2/N
dc *= -2/N
return m - learningRate * dm, c - learningRate * dc
}
func main() {
data := readXY()
m, c := 0.0, 0.0
for k := 0; k < iterations; k++ {
m, c = computeThetas(data, m, c)
}
fmt.Printf("%.4fx + %.4f\n", m, c)
}
func readXY() ([][]float64) {
file := strings.NewReader(data)
reader := csv.NewReader(file)
records, err := reader.ReadAll()
if err != nil {
panic(err)
}
records = records[1:]
size := len(records)
data := make([][]float64, size)
for i, v := range records {
val1, err := strconv.ParseFloat(v[0], 64)
if err != nil {
panic(err)
}
val2, err := strconv.ParseFloat(v[1], 64)
if err != nil {
panic(err)
}
data[i] = []float64{val1, val2}
}
return data
}
var data = `km,price
240000,3650
139800,3800
150500,4400
185530,4450
176000,5250
114800,5350
166800,5800
89000,5990
144500,5999
84000,6200
82029,6390
63060,6390
74000,6600
97500,6800
67000,6800
76025,6900
48235,6900
93000,6990
60949,7490
65674,7555
54000,7990
68500,7990
22899,7990
61789,8290`
</code></pre>
<p>And here it can be worked on in the GO playground:
<a href="https://play.golang.org/p/2CdNbk9_WeY" rel="nofollow noreferrer">https://play.golang.org/p/2CdNbk9_WeY</a></p>
<p>What do I need to fix to get the correct result ?</p>
|
<blockquote>
<p>Why would a formula work on one data set and not another one?</p>
</blockquote>
<p>In addition to sascha's remarks, here's another way to look at problems of this application of gradient descent: The algorithm offers no guarantee that an iteration yields a better result than the previous, so it doesn't necessarily converge to a result, because:</p>
<ul>
<li>The gradients <code>dm</code> and <code>dc</code> in axes <code>m</code> and <code>c</code> are handled independently of each other; <code>m</code> is updated in the descending direction according to <code>dm</code>, and <code>c</code> at the same time is updated in the descending direction according to <code>dc</code> — but, with certain curved surfaces z = f(m, c), the gradient in a direction between axes <code>m</code> and <code>c</code> can have the opposite sign compared to <code>m</code> and <code>c</code> on their own, so, while updating any one of <code>m</code> or <code>c</code> would converge, updating both moves away from the optimum.</li>
<li>However, the more likely reason for failure in this case of fitting a linear regression to a point cloud is the entirely arbitrary magnitude of the update to <code>m</code> and <code>c</code>, determined by the product of an obscure <em>learning rate</em> and the gradient. It is quite possible that such an update oversteps a minimum of the target function, and even that this repeats with higher amplitude in each iteration.</li>
</ul>
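<p>To make the magnitude point concrete, here is a minimal NumPy sketch (not the poster's Go code; it uses only four of the data points as a stand-in) showing that the raw km values blow the update up, while rescaling the feature lets the very same loop converge:</p>

```python
import numpy as np

# Four of the (km, price) rows from the question:
km = np.array([240000.0, 139800.0, 84000.0, 22899.0])
price = np.array([3650.0, 3800.0, 6200.0, 7990.0])

def fit(x, y, lr, iters):
    m = c = 0.0
    n = float(len(x))
    for _ in range(iters):
        err = y - (m * x + c)                 # residuals under the current line
        m += lr * (2.0 / n) * np.sum(err * x)
        c += lr * (2.0 / n) * np.sum(err)
    return m, c

# Raw km values ~1e5 make the slope gradient ~1e8, so even lr = 1e-4
# overshoots and the iterates explode within a handful of steps:
m_raw, _ = fit(km, price, lr=1e-4, iters=5)

# Rescaling the feature to roughly [0, 1] keeps the updates bounded and
# the same loop converges (slope per km is then m_s / 1e6):
m_s, c_s = fit(km / 1e6, price, lr=0.5, iters=10000)
print(abs(m_raw) > 1e9, m_s / 1e6, c_s)  # diverged vs. a slope near -0.02
```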
| 328
|
gradient descent implementation
|
stochastic gradient descent algorithm implementation in matlab
|
https://stackoverflow.com/questions/43016224/stochastic-gradient-descent-algorithm-implementation-in-matlab
|
<p>I implemented the stochastic gradient descent algorithm in MATLAB but am unable to get a proper result. First, I shuffle the training set and then update each weight using one sample at a time. Is there any mistake?</p>
<pre><code>function [theta, J] = gradientdescent(x, y,theta,alpha,epochs)
m = length(y);
J = zeros(epochs, 1);
for e = 1:epochs
n = randperm(size(x,1));
input=x(n,:);
target=y(n);
htheta = input * theta;
for i=1:m
for j = 1:size(theta, 1)
theta(j) = theta(j) - alpha * (htheta(i) - target(i)) * input(i,j);
end
end
J(e) = costfunction(x, y, theta);
end
end
</code></pre>
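<p>For comparison, here is a minimal per-sample SGD sketch (illustrative Python/NumPy rather than MATLAB) in which the prediction is recomputed from the current theta before every update, instead of once per epoch as <code>htheta</code> is above:</p>

```python
import numpy as np

# Hedged sketch of per-sample SGD for linear regression. The prediction h
# is recomputed from the CURRENT theta before every update; computing all
# predictions once per epoch would feed stale values into every update
# after the first.
def sgd(x, y, theta, alpha, epochs, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):  # reshuffle each epoch
            h = x[i] @ theta               # fresh prediction for sample i
            theta = theta - alpha * (h - y[i]) * x[i]
    return theta

x = np.c_[np.ones(5), np.arange(5.0)]      # bias column + one feature
y = x @ np.array([1.0, 2.0])               # true theta = [1, 2]
theta = sgd(x, y, np.zeros(2), alpha=0.05, epochs=500)
```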
| 329
|
|
gradient descent implementation
|
Issue implementing gradient descent
|
https://stackoverflow.com/questions/46555883/issue-implementing-gradient-descent
|
<p>I'm implementing gradient descent from scratch and I've got a segment of code which is giving me trouble.</p>
<pre><code>temp = theta_new[j]
theta_new[j] = theta_new[j] - alpha*deriv
theta_old[j] = temp
</code></pre>
<p>It's not changing <code>theta_new[j]</code>. If I print <code>theta_new[j]</code> just after the assignment of <code>theta_new[j]</code> then it gets changed, but somehow the third line in which I assign <code>theta_old[j]</code> reverts <code>theta_new[j]</code> back to initial value. I assume this has something to do with how arrays are referenced, but I can't wrap my head around it.</p>
|
<p>You should use a <code>deep copy</code> to copy the object, so that a new array with its own memory is created instead of a second reference to the same one:</p>
<pre><code>import copy
theta_old = copy.deepcopy(theta_new)
theta_new[j] = theta_new[j] - alpha*deriv
...
</code></pre>
<p><a href="https://docs.python.org/3/library/copy.html" rel="nofollow noreferrer">https://docs.python.org/3/library/copy.html</a></p>
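<p>The aliasing behaviour the question describes is easy to reproduce. In the sketch below (hypothetical values, names mirroring the question), <code>theta_old = theta_new</code> makes both names point at the same list, so writing through one is visible through the other; <code>copy.deepcopy</code> breaks the link:</p>

```python
import copy

theta_new = [1.0, 2.0]
theta_old = theta_new                 # alias: BOTH names refer to one list

theta_new[0] = theta_new[0] - 0.5     # theta_new[0] is now 0.5
theta_old[0] = 1.0                    # writes through the alias...
assert theta_new[0] == 1.0            # ...so theta_new[0] is "reverted"

theta_new = [1.0, 2.0]
theta_old = copy.deepcopy(theta_new)  # independent copy with its own memory
theta_new[0] = theta_new[0] - 0.5
assert theta_new[0] == 0.5            # unaffected by theta_old now
assert theta_old == [1.0, 2.0]
```

<p>(For a flat list of numbers, <code>list(theta_new)</code> or a NumPy array's <code>.copy()</code> would do the same; <code>deepcopy</code> matters once the elements are themselves mutable.)</p>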
| 330
|
gradient descent implementation
|
vector implementation of Gradient Descent with multiple theta
|
https://stackoverflow.com/questions/38341783/vector-implementation-of-gradient-descent-with-multiple-theta
|
<p>I am trying to implement gradient descent in Octave for a function having multiple theta values. Do you have any sample?</p>
|
<p>It was already answered in another question 4 years ago:
<a href="https://stackoverflow.com/questions/10591343/gradient-descent-implementation-in-octave">Gradient Descent implementation in octave</a></p>
<p>Use the code from there and probably update it a bit to match your specifics.</p>
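<p>For reference, the vectorized update from the linked answer looks like this in NumPy (a sketch; Octave's <code>theta = theta - alpha/m * X' * (X*theta - y)</code> translates almost symbol for symbol):</p>

```python
import numpy as np

# Vectorized batch gradient descent: theta holds ALL parameters at once,
# so there is no per-parameter loop; one matrix product yields every
# partial derivative simultaneously.
def gradient_descent(X, y, theta, alpha, iterations):
    m = len(y)
    for _ in range(iterations):
        grad = X.T @ (X @ theta - y) / m  # all partial derivatives at once
        theta = theta - alpha * grad      # simultaneous update of every theta_j
    return theta

X = np.c_[np.ones(4), [0.0, 1.0, 2.0, 3.0]]  # intercept column + feature
y = X @ np.array([0.5, 1.5])                 # true theta = [0.5, 1.5]
theta = gradient_descent(X, y, np.zeros(2), alpha=0.1, iterations=2000)
```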
| 331
|
gradient descent implementation
|
How to implement mini-batch gradient descent in python?
|
https://stackoverflow.com/questions/38157972/how-to-implement-mini-batch-gradient-descent-in-python
|
<p>I have just started to learn deep learning, and I found myself stuck when it came to gradient descent. I know how to implement batch gradient descent, and I know in theory how mini-batch and stochastic gradient descent work as well. But I really can't understand how to implement them in code.</p>
<pre><code>import numpy as np
X = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1] ])
y = np.array([[0,1,1,0]]).T
alpha,hidden_dim = (0.5,4)
synapse_0 = 2*np.random.random((3,hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim,1)) - 1
for j in xrange(60000):
layer_1 = 1/(1+np.exp(-(np.dot(X,synapse_0))))
layer_2 = 1/(1+np.exp(-(np.dot(layer_1,synapse_1))))
layer_2_delta = (layer_2 - y)*(layer_2*(1-layer_2))
layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1-layer_1))
synapse_1 -= (alpha * layer_1.T.dot(layer_2_delta))
synapse_0 -= (alpha * X.T.dot(layer_1_delta))
</code></pre>
<p>This is the sample code from Andrew Trask's blog. It's small and easy to understand. This code implements batch gradient descent, but I would like to implement mini-batch and stochastic gradient descent in this sample. How could I do this? What do I have to add/modify in this code in order to implement mini-batch and stochastic gradient descent respectively? Your help will help me a lot. Thanks in advance. (I know this sample has only a few examples, whereas I would need a large dataset to split into mini-batches, but I would still like to know how to implement it.)</p>
|
<p>This function returns the mini-batches given the inputs and targets:</p>
<pre><code>def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
assert inputs.shape[0] == targets.shape[0]
if shuffle:
indices = np.arange(inputs.shape[0])
np.random.shuffle(indices)
for start_idx in range(0, inputs.shape[0] - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield inputs[excerpt], targets[excerpt]
</code></pre>
<p>and this tells you how to use that for training:</p>
<pre><code>for n in xrange(n_epochs):
for batch in iterate_minibatches(X, Y, batch_size, shuffle=True):
x_batch, y_batch = batch
l_train, acc_train = f_train(x_batch, y_batch)
l_val, acc_val = f_val(Xt, Yt)
logging.info('epoch ' + str(n) + ' ,train_loss ' + str(l_train) + ' ,acc ' + str(acc_train) + ' ,val_loss ' + str(l_val) + ' ,acc ' + str(acc_val))
</code></pre>
<p>Obviously you need to define the f_train, f_val and other functions yourself given the optimisation library (e.g. Lasagne, Keras) you are using.</p>
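<p>Applied to the network from the question, the only change is to run the same forward/backward pass on a slice of <code>X</code> and <code>y</code> per step; with <code>batch_size = 1</code> this becomes stochastic gradient descent. A hedged sketch (epoch count reduced from the original 60000 for brevity):</p>

```python
import numpy as np

np.random.seed(1)
X = np.array([[0,0,1],[0,1,1],[1,0,1],[1,1,1]], dtype=float)
y = np.array([[0,1,1,0]], dtype=float).T
alpha, hidden_dim, batch_size = 0.5, 4, 2     # batch_size=1 -> pure SGD

synapse_0 = 2*np.random.random((3, hidden_dim)) - 1
synapse_1 = 2*np.random.random((hidden_dim, 1)) - 1

for j in range(10000):                        # 60000 in the original
    idx = np.random.permutation(len(X))       # reshuffle every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        # identical forward/backward pass, restricted to the mini-batch
        layer_1 = 1/(1 + np.exp(-xb.dot(synapse_0)))
        layer_2 = 1/(1 + np.exp(-layer_1.dot(synapse_1)))
        layer_2_delta = (layer_2 - yb) * (layer_2 * (1 - layer_2))
        layer_1_delta = layer_2_delta.dot(synapse_1.T) * (layer_1 * (1 - layer_1))
        synapse_1 -= alpha * layer_1.T.dot(layer_2_delta)
        synapse_0 -= alpha * xb.T.dot(layer_1_delta)

# network output after training
pred = 1/(1 + np.exp(-(1/(1 + np.exp(-X.dot(synapse_0)))).dot(synapse_1)))
```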
| 332
|
gradient descent implementation
|
How to implement naive batch gradient descent?
|
https://stackoverflow.com/questions/45562063/how-to-implement-naive-batch-gradient-descent
|
<p>I have a question about the implementation of gradient descent. I have found several optimizers, like ada_grad, adam, sgd and so on, and they're perfect. But I'm attempting to implement the naive gradient method, batch gradient descent, with a fixed learning rate that acts on all the examples in each batch. How do I do it? Looking forward to your help. Thanks very much.</p>
|
<p>How about using a large enough batch size with SGD? With the batch size equal to the whole training set, it is equivalent to the plain gradient descent method.</p>
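<p>That equivalence can be shown in a few lines (an illustrative NumPy sketch for least squares, not tied to any particular framework): one SGD step whose batch is the entire training set produces exactly the same update as one naive batch gradient descent step with the same fixed learning rate.</p>

```python
import numpy as np

# Full-batch gradient of 1/2 * MSE for linear least squares.
def grad(X, y, theta):
    return X.T @ (X @ theta - y) / len(y)

# One SGD step on an arbitrary index batch.
def sgd_step(X, y, theta, lr, batch_idx):
    xb, yb = X[batch_idx], y[batch_idx]
    return theta - lr * grad(xb, yb, theta)

X = np.c_[np.ones(6), np.linspace(0.0, 1.0, 6)]
y = X @ np.array([2.0, -1.0])
theta0 = np.zeros(2)
lr = 0.5

full_batch = np.arange(len(y))              # the "mini-batch" is every example
t_sgd = sgd_step(X, y, theta0, lr, full_batch)
t_gd = theta0 - lr * grad(X, y, theta0)     # naive batch gradient descent step
# t_sgd and t_gd are identical updates
```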
| 333
|