The mass matrix should be the same as the trace of the 2nd shape integrals:
(shape_integral_2[0][0] + shape_integral_2[1][1] + shape_integral_2[2][2] - me).expand()
theory/FE element matrices.ipynb
ricklupton/beamfe
mit
Stiffness matrix

Differentiate the shape functions:
B = Matrix(np.zeros((4, 12)))
B[0, :] = S[0, :].diff(xi, 1) / l
B[1, :] = S[1, :].diff(xi, 2) / l**2
B[2, :] = S[2, :].diff(xi, 2) / l**2
B[3, :] = S[3, :].diff(xi, 1) / l
B.simplify()
B[:, :6]
B[:, 6:]

titles = ['x-defl', 'y-defl', 'z-defl', 'torsion']
for i in range(4):
    sympy.plot(*([xx.subs(l, 2) for xx in B[i, :] if xx != 0] + [(xi, 0, 1)]),
               title=titles[i])
Define the stiffness distribution (linear):
EA1, EA2, EIy1, EIy2, EIz1, EIz2, GJ1, GJ2 = symbols(
    'EA_1, EA_2, EIy_1, EIy_2, EIz_1, EIz_2, GJ_1, GJ_2')
EA = (1 - xi)*EA1 + xi*EA2
EIy = (1 - xi)*EIy1 + xi*EIy2
EIz = (1 - xi)*EIz1 + xi*EIz2
GJ = (1 - xi)*GJ1 + xi*GJ2
EA, EIy, EIz, GJ
Note that $EI_y$ refers to the stiffness for bending in the $y$-direction, not about the $y$-axis. Try to simplify the results by substituting the average values where they appear.
Ex, Ey, Ez, Gx = symbols('E_x, E_y, E_z, G_x')

def sym_ke():
    # Note the order -- y deflections depend on EIz etc
    E = Matrix(np.diag([EA, EIz, EIy, GJ]))
    integrand = B.T * E * B
    ke = integrand.applyfunc(
        lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).factor()
        # .subs((EA1+EA2), 2*Ex)
        # .expand().factor()
    )
    return ke

ke = sym_ke()

def simplify_ke(ke, EI=False):
    result = ke.applyfunc(
        lambda xxx: xxx.subs((EA1+EA2), 2*Ex).subs((GJ1+GJ2), 2*Gx))
    if EI:
        result = result.applyfunc(
            lambda xxx: xxx.subs((EIy1+EIy2), 2*Ey).subs((EIz1+EIz2), 2*Ez))
    return result

kem = simplify_ke(ke, True)
kem[:, :6]
kem[:, 6:]
Special case: uniform stiffness.
kem.subs({EA1: Ex, EA2: Ex, EIy1: Ey, EIy2: Ey, EIz1: Ez, EIz2: Ez})
This is the same as Rao2004 p.326.

Stress stiffening

Now derive the stress stiffening (e.g. centrifugal stiffening) matrix. Ref Cook1989. We need the matrix $\boldsymbol{G}$ such that
$$ \begin{bmatrix} u_{,x} \\ v_{,x} \\ w_{,x} \end{bmatrix} = \boldsymbol{G} \boldsymbol{q} $$
# Equivalent row-by-row construction:
# G = Matrix(np.zeros((3, 12)))
# G[0, :] = S[0, :].diff(xi, 1) / l
# G[1, :] = S[1, :].diff(xi, 1) / l
# G[2, :] = S[2, :].diff(xi, 1) / l
G = S[:3, :].diff(xi, 1) / l
G.simplify()
G[:, :6]
G[:, 6:]

def sym_ks_axial_force():
    # Unit axial force (absorbing area from integral), block matrix 3 times
    # smat3 = Matrix(9, 9, lambda i, j: 1 if i == j and i % 3 == 0 else 0)
    # integrand = G.T * smat3 * G
    integrand = G.T * G
    ks = integrand.applyfunc(
        lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).factor()
    )
    return ks

ks = sym_ks_axial_force()
ks * 30*l
This agrees with Cook1989, p.434, for the transverse directions.

Generalised forces

Define the force distribution (linear):
fx1, fx2, fy1, fy2, fz1, fz2 = symbols('f_x1, f_x2, f_y1, f_y2, f_z1, f_z2')
fx = (1 - xi)*fx1 + xi*fx2
fy = (1 - xi)*fy1 + xi*fy2
fz = (1 - xi)*fz1 + xi*fz2
f = Matrix([fx, fy, fz])
f

# Shape functions for applied force -- linear
SF = Matrix(np.zeros((3, 12)))
SF[0, 0] = x2  # x
SF[0, 6] = xi
SF[1, 1] = x2  # y
SF[1, 7] = xi
SF[2, 2] = x2  # z
SF[2, 8] = xi
SF
First shape integral for applied forces:
shape_integral_F1 = SF.applyfunc(
    lambda xxx: l * sympy.integrate(xxx, (xi, 0, 1)).expand().simplify()
)
shape_integral_F1
Second shape integral for applied forces:
shape_integral_F2 = [
    [l * (S[i, :].T * SF[j, :]).applyfunc(
        lambda xxx: sympy.integrate(xxx, (xi, 0, 1)).expand().simplify())
     for j in range(3)]
    for i in range(3)
]
The generalised nodal forces are given by the trace of this (the other parts of it are used to find the moments on the whole body directly...):
F = shape_integral_F2[0][0] + shape_integral_F2[1][1] + shape_integral_F2[2][2]
Special case -- constant force:
F * Matrix([fx1, fy1, fz1, 0, 0, 0, fx1, fy1, fz1, 0, 0, 0])
Another example -- a uniform distributed force in the z direction:
F * Matrix([0, 0, fz1, 0, 0, 0, 0, 0, fz1, 0, 0, 0]) / (l/12*fz1)
Output

Write a Python file with functions to calculate all of these:
def numpy_array_str(expr):
    return str(expr) \
        .replace('Matrix([', 'array([\n') \
        .replace('], [', '],\n[') \
        .replace(']])', ']\n])')

def numpy_array_str_2x2(arr):
    return ',\n'.join(['[{}]'.format(',\n'.join([numpy_array_str(arr[i][j])
                                                 for j in range(3)]))
                       for i in range(3)])

import datetime
code = """
# Automatically generated from SymPy in notebook
# {date}

from __future__ import division
from numpy import array

def mass(l, rho_1, rho_2):
    return {mass}

def S1(l, rho_1, rho_2):
    return {S1}

def S2(l, rho_1, rho_2):
    return [
{S2}
    ]

def K(l, E_x, G_x, EIy_1, EIy_2, EIz_1, EIz_2):
    return {K}

def Ks(l):
    return {Ks}

def F1(l):
    return {F1}

def F2(l):
    return [
{F2}
    ]
""".format(
    date=datetime.datetime.now(),
    mass=mass,
    S1=numpy_array_str(shape_integral_1),
    S2=numpy_array_str_2x2(shape_integral_2),
    K=numpy_array_str(simplify_ke(ke)),
    Ks=numpy_array_str(ks),
    F1=numpy_array_str(shape_integral_F1),
    F2=numpy_array_str_2x2(shape_integral_F2)
)

with open('../beamfe/tapered_beam_element_integrals.py', 'wt') as f:
    f.write(code)
Peak finding

Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
- Properly handle local maxima at the endpoints of the input array.
- Return a NumPy array of integer indices.
- Handle any Python iterable as input.
def find_peaks(a):
    """Find the indices of the local maxima in a sequence."""
    a = np.asarray(a)
    s = np.sign(np.diff(a))
    # An interior peak is a rise followed by a fall: s drops from +1 to -1
    ind = [i + 1 for i, d in enumerate(np.diff(s)) if d == -2]
    if s[0] == -1:   # falling from the start -> peak at index 0
        ind.insert(0, 0)
    if s[-1] == 1:   # rising into the end -> peak at the last index
        ind.append(len(a) - 1)
    return np.array(ind, dtype=int)

p1 = find_peaks([2, 0, 1, 0, 2, 0, 1])
assert np.allclose(p1, np.array([0, 2, 4, 6]))
p2 = find_peaks(np.array([0, 1, 2, 3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3, 2, 1, 0])
assert np.allclose(p3, np.array([0]))
find_peaks([2, 2, 2, 1, 2, 2, 2])
assignments/assignment07/AlgorithmsEx02.ipynb
aschaffn/phys202-2015-work
mit
Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
- Convert that string to a NumPy array of integers.
- Find the indices of the local maxima in the digits of $\pi$.
- Use np.diff to find the distances between consecutive local maxima.
- Visualize that distribution using an appropriately customized histogram.
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
pi_int = np.array(list(pi_digits_str), dtype="int")

pks = find_peaks(pi_int)
pks_diff = np.diff(pks)
plt.hist(pks_diff, bins=range(0, max(pks_diff) + 1))
min(pks_diff), max(pks_diff)
pks
assert True  # use this for grading the pi digits histogram
This is an IPython Notebook. You can write whatever Python code you like here. Output (like the print) is shown below the cell, and the final result is also shown (the result of a + 200). Note - your Python code is running on a server I've set up (which has everything you need), not on your local machine.

Exercise - save the notebook (do this regularly), by pressing Ctrl+s (or the save icon).

Hint - if you are struggling with what to write at any point, try pressing Tab - IPython should offer some sensible completions. If you want to know what a function does, try Shift+Tab to bring up documentation.

1. Numpy

Next we'll import the libraries we need...
%matplotlib inline
import dlt
import numpy as np
import chainer as C
examples/Intro.ipynb
DouglasOrr/DeepLearnTute
mit
Now we'll learn how to use these libraries to create deep learning functions (later, in the full tutorial, we'll use this to train a handwriting recognizer). Here are two ways to create a numpy array:
a = np.array([1, 2, 3, 4, 5], dtype=np.int32)
print("a =", a)
print("a.shape =", a.shape)
print()

b = np.zeros((2, 3), dtype=np.float32)
print("b =", b)
print("b.shape =", b.shape)
A np.array is a multidimensional array - a very flexible thing. It can be:
- 0-dimensional (a number, like 5)
- 1-dimensional (a vector, like a above)
- 2-dimensional (a matrix, like b above)
- N-dimensional (...)

It can also contain either whole numbers (np.int32) or real numbers (np.float32). OK, I've done a bit much now - time for you...

Exercise - create the following numpy arrays, and print out the shape:
# EXERCISE
# 1. an array scalar containing the integer 5
# 2. a (10, 20) array of zeros
# 3. a (3, 3) array of different numbers (hint: use a list-of-lists)
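A possible solution sketch for the exercise above (the concrete values in part 3 are just an illustration):

```python
import numpy as np

# 1. an array scalar containing the integer 5
s = np.array(5, dtype=np.int32)
print("s.shape =", s.shape)  # 0-dimensional: ()

# 2. a (10, 20) array of zeros
z = np.zeros((10, 20), dtype=np.float32)
print("z.shape =", z.shape)

# 3. a (3, 3) array of different numbers, from a list-of-lists
m = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print("m.shape =", m.shape)
```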
Now we just need a few ways of working with these arrays - here are some examples of things that you can do:
x = np.array([[1, 2, 3], [4, 5, 6]], dtype=np.int32)
print("x =\n%s" % x)
print()

# Indexing
print("x[0, 1] =", x[0, 1])  # 0th row, 1st column
print("x[1, 1] =", x[1, 1])  # 1st row, 1st column
print()

# Slicing
print("x[0, :] =", x[0, :])      # 0th row, all columns
print("x[:, 2] =", x[:, 2])      # 2nd column, all rows
print("x[1, :] =", x[1, :])      # 1st row, all columns
print("x[1, 0:2] =", x[1, 0:2])  # 1st row, first two columns
print()

# Other numpy functions (there are very many more...)
print("np.argmax(x[0, :]) =", np.argmax(x[0, :]))  # index of the maximum element in the 0th row
I won't explain all of this in detail, but have a play around with arrays, see what you can do with the above operations. Exercise - try to use your numpy operations to find the following with M:
M = np.arange(900, dtype=np.float32).reshape(45, 20)
print(M.shape)

# EXERCISE
# 1. print out row number 0 (hint, it should be shape (20,))
# 2. print out row number 34
# 3. select column 15, print out the shape
# 4. select rows 30-40 inclusive, columns 5-8 inclusive, print out the shape (hint: should be (11, 4))
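One way the slicing exercise might be solved; note that "rows 30-40 inclusive" needs the half-open slice 30:41:

```python
import numpy as np

M = np.arange(900, dtype=np.float32).reshape(45, 20)

print(M[0, :].shape)        # 1. row 0 -> shape (20,)
print(M[34, :])             # 2. row 34
print(M[:, 15].shape)       # 3. column 15 -> shape (45,)
print(M[30:41, 5:9].shape)  # 4. rows 30-40, columns 5-8 inclusive -> (11, 4)
```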
2. Chainer

We'll use numpy to get data in & out of Chainer, which is our deep learning library, but Chainer will do most of the data processing. Here is how you get some data into Chainer, use a linear operation to change its shape, and get the result back out again:
a = C.Variable(np.zeros((10, 20), dtype=np.float32))
print("a.data.shape =", a.data.shape)

transformation = C.links.Linear(20, 30)
b = transformation(a)
print("b.data.shape =", b.data.shape)

c = C.functions.tanh(b)
print("c.data.shape =", c.data.shape)
This may not seem particularly special, but this is the heart of a deep learning function: take an input array, make various transformations that mess around with the shape, and produce an output array.

Some concepts:
- A Variable holds an array - this is some data going through the function.
- A Link contains some parameters (these start random), which process an input Variable and produce an output Variable.
- A Function is a Link without any parameters (like sin, cos, tan, tanh, max... so many more...).

Exercise - use Chainer to calculate the following:
# EXERCISE
# 1. Create an array, shape (2, 3) of various float numbers, put it in a variable
a = None  # your array here
# 2. Print out tanh(a) (for the whole array)
# 3. Create a linear link of shape (3, 5) - this means it takes (N, 3) and produces (N, 5)
mylink = None  # your link here
# 4. Use your link to transform `a`, then take the tanh, check the shape of the result
# 5. Uncomment the following; what happens when you re-run the code?
# print("W =", mylink.W.data)
If you can do all of this, you're ready to create a deep learning function.

In the last step, you may have noticed something interesting - the parameters inside the link change every time it is re-created. This is because deep learning functions start off random! Random functions don't sound too useful, so later we're going to learn how to "teach" them to be useful functions.

3. Plotting curves

We've provided a very simple log plotting library, dlt.Log, demonstrated below:
log = dlt.Log()
for i in range(100):
    # The first argument "loss" says which plot to put the value on
    # The second argument "train" gives it a name on that plot
    # The third argument is the y-value
    log.add("loss", "train", i)
    log.add("loss", "valid", 2 * i)
log.show()
Fitting Lines to Data

We'll cover very basic line fitting, largely ignoring the subtleties of the statistics in favor of showing you how to perform simple fits of models to data.
# These import commands set up the environment so we have access to numpy and pylab functions
import numpy as np
import pylab as pl

# Data Fitting
# First, we'll generate some fake data to use
x = np.linspace(0, 10, 50)  # 50 x points from 0 to 10
# Remember, you can look at the help for linspace too:
# help(np.linspace)

# y = m x + b
y = 2.5 * x + 1.2

# let's plot that
pl.clf()
pl.plot(x, y)

# looks like a simple line. But we want to see the individual data points
pl.plot(x, y, marker='s')

# We need to add noise first
noise = pl.randn(y.size)
# Like IDL, python has a 'randn' function that is centered at 0 with a standard deviation of 1.
# IDL's 'randomu' is 'pl.rand' instead

# What's y.size?
print(y.size)
print(len(y))
howto/00-intro_ipython.ipynb
vkuznet/rep
apache-2.0
y.size is the number of elements in y, just like len(y) or, in IDL, n_elements(y)
# We can add arrays in python just like in IDL
noisy_flux = y + noise

# We'll plot it too, but this time without any lines
# between the points, and we'll use black dots
# ('k' is a shortcut for 'black', '.' means 'point')
pl.clf()  # clear the figure
pl.plot(x, noisy_flux, 'k.')

# We need labels, of course
pl.xlabel("Time")
pl.ylabel("Flux")
Now we're onto the fitting stage. We're going to fit a function of the form
$$y = mx + b$$
which is the same as
$$f(x) = p[0]x + p[1]$$
to the data (np.polyfit returns its coefficients with the highest power first, so $p[0]$ is the slope). This is called "linear regression", but it is also a special case of a more general concept: this is a first-order polynomial. "First order" means that the highest exponent of $x$ in the equation is 1.
# We'll use polyfit to find the values of the coefficients. The third
# parameter is the "order"
p = np.polyfit(x, noisy_flux, 1)
# help(np.polyfit) if you want to find out more

# print our fit parameters. They are not exact because there's noise in the data!
# note that this is an array!
print(p)
print(type(p))  # you can ask python to tell you what type a variable is

# Great! We've got our fit. Let's overplot the data and the fit now
pl.clf()  # clear the figure
pl.plot(x, noisy_flux, 'k.')     # repeated from above
pl.plot(x, p[0]*x + p[1], 'r-')  # A red solid line
pl.xlabel("Time")                # labels again
pl.ylabel("Flux")

# Cool, but there's another (better) way to do this. We'll use the polyval
# function instead of writing out the m x + b equation ourselves
pl.clf()
pl.plot(x, noisy_flux, 'k.')
pl.plot(x, np.polyval(p, x), 'r-')  # A red solid line
pl.xlabel("Time")
pl.ylabel("Flux")
# help(np.polyval) if you want to find out more
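A quick check of the coefficient ordering, since it trips people up: np.polyfit returns the highest power first, so for a line the slope comes before the intercept. On noiseless data the fit recovers the inputs exactly (up to floating point):

```python
import numpy as np

x = np.linspace(0, 10, 50)
p = np.polyfit(x, 2.5 * x + 1.2, 1)  # fit a noiseless line
print(p)  # roughly [2.5, 1.2]: p[0] is the slope, p[1] the intercept
```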
Let's do the same thing with a noisier data set. I'm going to leave out most of the comments this time.
noisy_flux = y + noise*10
p = np.polyfit(x, noisy_flux, 1)
print(p)

# plot it
pl.clf()  # clear the figure
pl.plot(x, noisy_flux, 'k.')                   # repeated from above
pl.plot(x, np.polyval(p, x), 'r-', label="Best fit")  # A red solid line
pl.plot(x, 2.5*x + 1.2, 'b--', label="Input")  # a blue dashed line showing the REAL line
pl.legend(loc='best')  # make a legend in the best location
pl.xlabel("Time")      # labels again
pl.ylabel("Flux")
Despite the noisy data, our fit is still pretty good! One last plotting trick, then we'll move on.
pl.clf()  # clear the figure
pl.errorbar(x, noisy_flux, yerr=10, marker='.', color='k', linestyle='none')
# errorbar requires some extras to look nice
pl.plot(x, np.polyval(p, x), 'r-', label="Best fit")  # A red solid line
pl.plot(x, 2.5*x + 1.2, 'b--', label="Input")  # a blue dashed line showing the REAL line
pl.legend(loc='best')
pl.xlabel("Time")
pl.ylabel("Flux")
Curve Fitting

We'll now move on to more complicated curves. What if the data looks more like a sine curve? We'll create "fake data" in basically the same way as above.
# this time we want our "independent variable" to be in radians
x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)
pl.clf()
pl.plot(x, y)

# We'll make it noisy again
noise = pl.randn(y.size)
noisy_flux = y + noise
pl.plot(x, noisy_flux, 'k.')  # no clf this time
That looks like kind of a mess. Let's see how well we can fit it. The function we're trying to fit has the form:
$$f(x) = A \sin(x - B)$$
where $A$ is a "scale" parameter and $B$ is the side-to-side offset (or the "delay" if the x-axis is time). For our data, $A=1$ and $B=0$, because we made $y=\sin(x)$.
# curve_fit is the function we need for this, but it's in another package called scipy
from scipy.optimize import curve_fit
# we need to know what it does:
help(curve_fit)
Look at the returns:

Returns
-------
popt : array
    Optimal values for the parameters so that the sum of the squared error
    of ``f(xdata, *popt) - ydata`` is minimized
pcov : 2d array
    The estimated covariance of popt. The diagonals provide the variance
    of the parameter estimate.

So the first return value is the "best-fit parameters", while the second is the "covariance matrix".
def sinfunc(x, a, b):
    return a * np.sin(x - b)

fitpars, covmat = curve_fit(sinfunc, x, noisy_flux)
# The diagonals of the covariance matrix are variances
# variance = standard deviation squared, so we take the square roots to get the standard deviations!
# You can get the diagonals of a 2D array easily:
variances = covmat.diagonal()
std_devs = np.sqrt(variances)
print(fitpars, std_devs)

# Let's plot our best fit, see how well we did
# These two lines are equivalent:
pl.plot(x, sinfunc(x, fitpars[0], fitpars[1]), 'r-')
pl.plot(x, sinfunc(x, *fitpars), 'r-')
Again, this is pretty good despite the noisiness.

Fitting a Power Law

Power laws occur all the time in physics, so it's a good idea to learn how to use them. What's a power law? Any function of the form:
$$f(t) = a t^b$$
where $t$ is your independent variable, $a$ is a scale parameter, and $b$ is the exponent (the power).

When fitting power laws, it's very useful to take advantage of the fact that "a power law is linear in log-space". That means, if you take the log of both sides of the equation (which is allowed) and change variables, you get a linear equation!
$$\ln(f(t)) = \ln(a t^b) = \ln(a) + b \ln(t)$$
We'll use the substitutions $y=\ln(f(t))$, $A=\ln(a)$, and $x=\ln(t)$, so that
$$y = A + bx$$
which looks just like our linear equation from before (albeit with different letters for the fit parameters). We'll now go through the same fitting exercise as before, but using power laws instead of lines.
t = np.linspace(0.1, 10)
a = 1.5
b = 2.5
z = a * t**b
pl.clf()
pl.plot(t, z)

# Change the variables
# np.log is the natural log
y = np.log(z)
x = np.log(t)
pl.clf()
pl.plot(x, y)
pl.ylabel("log(z)")
pl.xlabel("log(t)")
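As a sanity check of the log-space trick (using the same a and b as above), fitting a line to (log t, log z) should recover the exponent b as the slope and ln(a) as the intercept:

```python
import numpy as np

t = np.linspace(0.1, 10)
a, b = 1.5, 2.5
z = a * t**b

# Linear fit in log-space: slope is b, intercept is ln(a)
pars = np.polyfit(np.log(t), np.log(z), 1)
print("slope =", pars[0])                    # should be b = 2.5
print("exp(intercept) =", np.exp(pars[1]))   # should be a = 1.5
```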
It's a straight line. Now, for our "fake data", we'll add the noise before transforming from "linear" to "log" space
noisy_z = z + pl.randn(z.size)*10
pl.clf()
pl.plot(t, z)
pl.plot(t, noisy_z, 'k.')

noisy_y = np.log(noisy_z)
pl.clf()
pl.plot(x, y)
pl.plot(x, noisy_y, 'k.')
pl.ylabel("log(z)")
pl.xlabel("log(t)")
Note how different this looks from the "noisy line" we plotted earlier. Power laws are much more sensitive to noise! In fact, there are some data points that don't even show up on this plot because you can't take the log of a negative number. Any points where the random noise was negative enough that the curve dropped below zero ended up being NaN ("Not a Number"). Luckily, our plotter knows to ignore those numbers, but polyfit doesn't.
print(noisy_y)
# try to polyfit a line
pars = np.polyfit(x, noisy_y, 1)
print(pars)
In order to get around this problem, we need to mask the data. That means we have to tell the code to ignore all the data points where noisy_y is nan. My favorite way to do this is to take advantage of a curious fact: $1=1$, but nan!=nan
print(1 == 1)
print(np.nan == np.nan)
So if we find all the places where noisy_y != noisy_y, we can get rid of them. Or we can just use the places where noisy_y equals itself.
OK = noisy_y == noisy_y
print(OK)
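The self-comparison trick works, but numpy also provides this test directly via np.isnan and np.isfinite. A small sketch (the array values here are made up for illustration):

```python
import numpy as np

noisy = np.array([1.0, np.nan, 3.0, np.nan, 5.0])  # made-up values
ok = np.isfinite(noisy)  # True where the entry is a real, finite number
print(noisy[ok])         # the NaNs are dropped
```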
This OK array is a "boolean mask". We can use it as an "index array", which is pretty neat.
print("There are %i OK values" % (OK.sum()))
masked_noisy_y = noisy_y[OK]
masked_x = x[OK]
print("masked_noisy_y has length", len(masked_noisy_y))

# now polyfit again
pars = np.polyfit(masked_x, masked_noisy_y, 1)
print(pars)

# cool, it worked. But the fit looks a little weird!
fitted_y = np.polyval(pars, x)
pl.plot(x, fitted_y, 'r--')
The noise seems to have affected our fit.
# Convert back to linear space to see what it "really" looks like
fitted_z = np.exp(fitted_y)
pl.clf()
pl.plot(t, z)
pl.plot(t, noisy_z, 'k.')
pl.plot(t, fitted_z, 'r--')
pl.xlabel('t')
pl.ylabel('z')
That's pretty bad. A "least-squares" approach, as with curve_fit, is probably going to be the better choice. However, in the absence of noise (i.e., on your homework), this approach should work
def powerlaw(x, a, b):
    return a * (x**b)

pars, covar = curve_fit(powerlaw, t, noisy_z)
pl.clf()
pl.plot(t, z)
pl.plot(t, noisy_z, 'k.')
pl.plot(t, powerlaw(t, *pars), 'r--')
pl.xlabel('t')
pl.ylabel('z')
Tricks with Arrays

We need to cover a few syntactic things comparing IDL and python.

In IDL, if you wanted the maximum value in an array, you would do:

maxval = max(array, location_of_max)

In python, it's more straightforward:

location_of_max = array.argmax()

or

location_of_max = np.argmax(array)

Now, say we want to determine the location of the maximum of a number of different functions. The functions we'll use are:
- sin(x)
- sin$^2$(x)
- sin$^3$(x)
- sin(x)cos(x)

We'll define these functions, then loop over them.
# sin(x) is already defined
def sin2x(x):
    """ sin^2 of x """
    return np.sin(x)**2

def sin3x(x):
    """ sin^3 of x """
    return np.sin(x)**3

def sincos(x):
    """ sin(x)*cos(x) """
    return np.sin(x)*np.cos(x)

list_of_functions = [np.sin, sin2x, sin3x, sincos]

# we want 0-2pi for these functions
t = np.linspace(0, 2*np.pi)

# this is the cool part: we can make a variable function
for fun in list_of_functions:
    # the functions know their own names (in a "secret hidden variable" called __name__)
    print("The maximum of", fun.__name__, "is", fun(t).max())

# OK, but we wanted the location of the maximum....
for fun in list_of_functions:
    print("The location of the maximum of", fun.__name__, "is", fun(t).argmax())

# well, that's not QUITE what we want, but it's close
# We want to know the value of t, not the index!
for fun in list_of_functions:
    print("The location of the maximum of", fun.__name__, "is", t[fun(t).argmax()])

# Finally, what if we want to store all that in an array?
# Well, here's a cool trick: you can sort of invert the for loop
# This is called a "list comprehension":
maxlocs = [t[fun(t).argmax()] for fun in list_of_functions]
print(maxlocs)

# Confused? OK. Try this one:
print(list(range(6)))
print([ii**2 for ii in range(6)])
Further info on IPython Notebooks | Overview | link | |--------------------------------------|------------------------------------------------------------------------------------| | Blog of IPython creator | http://blog.fperez.org/2012/09/blogging-with-ipython-notebook.html | | Blog of an avid IPython user | http://www.damian.oquanta.info/index.html | | Turning notebook into a presentation | https://www.youtube.com/watch?v=rBS6hmiK-H8 | | Tutorial on IPython & SciPy | https://github.com/esc/scipy2013-tutorial-numpy-ipython | | IPython notebooks gallery | https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks |
from IPython.display import YouTubeVideo
YouTubeVideo("xe_ATRmw0KM", width=600, height=400, theme="light", color="blue")

YouTubeVideo("zG8FYPFU9n4", width=600, height=400, theme="light", color="blue")
1. I want to make sure my Plate ID is a string. Can't lose the leading zeroes!
df.dtypes  # dtype: Data type for data or columns
print("The data type is", type(df['Plate ID'][0]))
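An alternative worth knowing: you can tell read_csv up front to keep the column as strings, which preserves the leading zeroes. A minimal sketch with made-up rows (the real notebook reads its own CSV, and the column name is assumed to match):

```python
import io
import pandas as pd

# Made-up data standing in for the real parking-ticket CSV
csv = io.StringIO("Plate ID,Fine\n0123ABC,115\n0045XYZ,65\n")
df_demo = pd.read_csv(csv, dtype={"Plate ID": str})
print(df_demo["Plate ID"].tolist())  # leading zeroes survive
```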
homework 11/11-homework-data/.ipynb_checkpoints/zhao_shengying_homework 11-checkpoint.ipynb
sz2472/foundations-homework
mit
2. I don't think anyone's car was built in 0AD. Discard the '0's as NaN.
import numpy as np
# Replace the string "0" with a real missing value (np.nan), not the string "NaN"
df['Vehicle Year'] = df['Vehicle Year'].replace("0", np.nan)
df.head()
3. I want the dates to be dates! Read the read_csv documentation to find out how to make pandas automatically parse dates.
# dateutil.parser.parse converts a date string to a datetime instance
type(df['Issue Date'][0])

def to_dates(date):
    return dateutil.parser.parse(date)

# DataFrame.apply(func): apply the function to each value
df['Issue Date Converted'] = df['Issue Date'].apply(to_dates)
df['Issue Date Converted'].head()
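The approach the read_csv documentation points at is the parse_dates argument, which converts the column while loading. A sketch with made-up rows (the real notebook reads its own CSV):

```python
import io
import pandas as pd

# Made-up rows standing in for the real data
csv = io.StringIO("Issue Date,Fine\n06/29/2013,115\n07/04/2013,65\n")
df_demo = pd.read_csv(csv, parse_dates=["Issue Date"])
print(df_demo["Issue Date"].dtype)  # datetime64[ns], no .apply needed
```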
4. "Date first observed" is a pretty weird column, but it seems like it has a date hiding inside. Using a function with .apply, transform the string (e.g. "20140324") into a Python date. Make the 0's show up as NaN.
df['Date First Observed'].tail()

import numpy as np

def pydate(num):
    num = str(num)  # dateutil.parser.parse() needs a string
    print(num)
    if num == "0":
        print("replacing 0")
        return np.nan  # if the value is 0, replace it with NaN
    print("parsing date")
    yourdate = dateutil.parser.parse(num)  # parse the string as a datetime object
    strf = yourdate.strftime("%Y-%B-%d")   # strftime formats a datetime as a string
    print(strf)
    return strf

df['Date First Observed Converted'] = df['Date First Observed'].apply(pydate)
df
5. "Violation time" is... not a time. Make it a time.
df['Violation Time'].head()
type(df['Violation Time'][0])

def str_to_time(time_str):
    s = str(time_str).replace("P", " PM").replace("A", " AM")
    return s[:2] + ":" + s[2:]

str_to_time("1239P")
df['Violation Time Converted'] = df['Violation Time'].apply(str_to_time)
df['Violation Time Converted']
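The function above produces formatted strings, not time objects. To get actual datetime.time values, one option (a sketch, assuming well-formed values like "1239P") is strptime with the 12-hour format codes:

```python
from datetime import datetime

def violation_time_to_time(t):
    t = str(t)  # e.g. "1239P"
    ampm = {"A": "AM", "P": "PM"}[t[4]]
    # %I = 12-hour clock, %M = minutes, %p = AM/PM
    return datetime.strptime(t[:4] + ampm, "%I%M%p").time()

print(violation_time_to_time("1239P"))  # 12:39:00
```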
Load and check data
base = os.path.join('gsc-dsnn-2019-10-11-G-reproduce')
exps = [
    os.path.join(base, exp) for exp in [
        # 'gsc-Static',
        'gsc-SET',
        'gsc-WeightedMag',
    ]
]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)

# replace hebbian prune
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)

display(df.head(5)['hebbian_prune_perc'])
display(df.head(5)['weight_prune_perc'])
df.iloc[200:205]
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
projects/archive/dynamic_sparse/notebooks/mcaporale/2019-10-10-ExperimentAnalysis-GSCSparser.ipynb
mrcslws/nupic.research
agpl-3.0
## Analysis Experiment Details
# Did any trials fail?
df[df["epochs"] < 30]["epochs"].count()

# Remove failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"] >= 30]
df.shape

# Which ones failed - or are still ongoing?
df_origin['failed'] = df_origin["epochs"] < 30
df_origin[df_origin['failed']]['epochs']

# helper functions
def mean_and_std(s):
    return "{:.3f} ± {:.3f}".format(s.mean(), s.std())

def round_mean(s):
    return "{:.0f}".format(round(s.mean()))

stats = ['min', 'max', 'mean', 'std']

def agg(columns, filter=None, round=3):
    if filter is None:
        return (df.groupby(columns)
                .agg({'val_acc_max_epoch': round_mean,
                      'val_acc_max': stats,
                      'model': ['count']})).round(round)
    else:
        return (df[filter].groupby(columns)
                .agg({'val_acc_max_epoch': round_mean,
                      'val_acc_max': stats,
                      'model': ['count']})).round(round)
Does improved weight pruning outperform regular SET?
agg(['model'])
agg(['on_perc'])

def model_name(row):
    if row['model'] == 'DSNNWeightedMag':
        return 'DSNN'
    elif row['model'] == 'SET':
        return 'SET'
    elif row['model'] == 'SparseModel':
        return 'Static'
    assert False, "This should cover all cases. Got {} h - {} w - {}".format(
        row['model'], row['hebbian_prune_perc'], row['weight_prune_perc'])

df['model2'] = df.apply(model_name, axis=1)
fltr = (df['model2'] != 'Sparse') & (df['lr_scheduler'] == "MultiStepLR")
agg(['on_perc', 'model2'], filter=fltr)

# translate model names
rcParams['figure.figsize'] = 16, 8
# d = {
#     'DSNNWeightedMag': 'DSNN',
#     'DSNNMixedHeb': 'SET',
#     'SparseModel': 'Static',
# }
# df_plot = df.copy()
# df_plot['model'] = df_plot['model'].apply(lambda x, i: model_name(x, i))
# sns.scatterplot(data=df_plot, x='on_perc', y='val_acc_max', hue='model')
sns.lineplot(data=df, x='on_perc', y='val_acc_max', hue='model')
plt.errorbar(x=[0.02, 0.04], y=[0.75, 0.85], yerr=[0.1, 0.01],
             color='k', marker='.', lw=0)

rcParams['figure.figsize'] = 16, 8
filter = df['model'] != 'Static'
plt.errorbar(
    x=[0.02, 0.04], y=[85, 95], yerr=[1, 1],
    color='k', marker='*', lw=0, elinewidth=2, capsize=2, markersize=10,
)
sns.lineplot(data=df[filter], x='on_perc', y='val_acc_max_epoch', hue='model2')

plt.errorbar(
    x=[0.02, 0.04], y=[0.85, 0.95], yerr=[0.01, 0.01],
    color='k', marker='.', lw=0, elinewidth=1, capsize=1,
)
sns.lineplot(data=df, x='on_perc', y='val_acc_last', hue='model2')
Product of 4 consecutive numbers is always 1 less than a perfect square

<p><center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center></p>

For every $n \in \mathbb{Z}$, we can write 4 consecutive numbers as:
$$n, \quad n+1, \quad n+2, \quad n+3$$
We can complete the proof if we can show that there exists a $k \in \mathbb{Z}$ such that:
$$n(n+1)(n+2)(n+3) = k^2 - 1$$
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

i_max = 4
nums = np.arange(0, 50) + 1
consecutive_nums = np.stack([
    np.roll(nums, -i)
    for i in range(i_max)
], axis=1)[:-i_max+1]
n_prods = consecutive_nums.prod(axis=1)

df = pd.DataFrame(consecutive_nums, columns=[f"n{i+1}" for i in range(i_max)])
df["prod"] = n_prods
df["k"] = np.sqrt(n_prods+1).astype(int)
df["k^2"] = df["k"]**2
df["k^2 - 1"] = df["k^2"] - 1
df

fig, ax = plt.subplots(1, 3, figsize=(18, 6))
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-")
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.tight_layout()
Slide Notebooks/Product of consecutive numbers.ipynb
napsternxg/ipython-notebooks
apache-2.0
Let us look at the right hand side of the equation first, i.e. $k^2 - 1$. This can be rewritten as $\mathbf{(k-1)(k+1)}$.

Now, this is where a hint lies. The right hand side is a product of two integers ($k-1$ and $k+1$) which differ by 2:

$$(k+1) - (k-1) = k + 1 - k + 1 = 2$$

So, if we can somehow show that the left hand side of the original equation, i.e. $n(n+1)(n+2)(n+3)$, can be represented as a product of two numbers which differ by 2, then we are done, as those numbers can then be mapped to $k-1$ and $k+1$ for some $k \in \mathbb{Z}$.

We can group the numbers $n, n+1, n+2, n+3$ into pairs, with the hope of getting $k-1$ and $k+1$. We can utilize the following facts to choose the two pairs:

The difference of the pair products should be constant, and hence independent of $n$. Since a product of two factors of the form $(n+i)(n+j)$ expands to $n^2 + (i+j)n + ij$, we observe that $i+j$ is the same for numbers which are equidistant from the middle of the four numbers.

Now we can select our pairs. The first pair is $n$ and $(n+3)$; their product is $n(n+3)$, which expands to $\color{red}{n^2 + 3n}$. The second pair is $(n+1)$ and $(n+2)$; their product is $(n+1)(n+2)$, which expands to $\color{red}{n^2 + 3n} + 2$.

Based on the above pairing we can immediately see that the difference of these pair products is:

$$(n+1)(n+2) - n(n+3) = [\color{red}{n^2 + 3n} + 2] - [\color{red}{n^2 + 3n}] = 2$$

Hence, based on the above simplification, we can map $(\color{red}{n^2 + 3n} + 2) \rightarrow (k+1)$ and $(\color{red}{n^2 + 3n}) \rightarrow (k-1)$.
Now, if we choose $\color{blue}{k = n^2 + 3n + 1}$, the following equations hold:

$n^2 + 3n + 2 = \color{blue}{(n^2 + 3n + 1)} + 1 = \color{blue}{k} + 1$

$n^2 + 3n = \color{blue}{(n^2 + 3n + 1)} - 1 = \color{blue}{k} - 1$

Hence, we have proved that for every $n \in \mathbb{Z}$ there exists a $k \in \mathbb{Z}$ such that:

$$
\begin{aligned}
n(n+1)(n+2)(n+3) &= [n(n+3)][(n+1)(n+2)] \\
&= [\color{red}{n^2 + 3n}][\color{red}{n^2 + 3n} + 2] \\
&= [\color{blue}{(n^2 + 3n + 1)} - 1][\color{blue}{(n^2 + 3n + 1)} + 1] \\
&= (\color{blue}{k} - 1)(\color{blue}{k} + 1) \\
&= k^2 - 1
\end{aligned}
$$

And this equation is solved by choosing $\color{blue}{k = n^2 + 3n + 1}$. Hence, proved.
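The closed form for $k$ can also be sanity-checked numerically; here is a quick pure-Python check over a range of integers, including negative ones (independent of the pandas table used elsewhere in this notebook):

```python
# Verify n(n+1)(n+2)(n+3) = (n^2 + 3n + 1)^2 - 1 for many integers n
def prod4(n):
    return n * (n + 1) * (n + 2) * (n + 3)

for n in range(-50, 51):
    k = n * n + 3 * n + 1          # the claimed k
    assert prod4(n) == k * k - 1   # 1 less than a perfect square
print("verified for n in [-50, 50]")
```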
df["k = n^2 + 3n + 1"] = (df["n1"]**2 + 3*df["n1"] + 1)
df

fig, ax = plt.subplots(1, 3, figsize=(12, 6))
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-", label="$k = \sqrt{y + 1}$")
ax[2].plot(df["n1"], df["k = n^2 + 3n + 1"], "r--", label="$k = n^2 + 3n + 1$")
ax[2].legend(fontsize=14)
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.tight_layout()
Slide Notebooks/Product of consecutive numbers.ipynb
napsternxg/ipython-notebooks
apache-2.0
More videos to come <p> <center>Shubhanshu Mishra (<a href="https://shubhanshu.com">shubhanshu.com</a>)</center> ![Twitter Follow](https://img.shields.io/twitter/follow/TheShubhanshu?style=social) ![YouTube Channel Subscribers](https://img.shields.io/youtube/channel/subscribers/UCZpSoW1pm0jk-jUaGwVWzLA?style=social) </p>
fig, ax = plt.subplots(1, 3, figsize=(12, 6))
fig.patch.set_facecolor('white')
ax[0].plot("n1", "prod", "bo-", data=df)
ax[0].set_xlabel("n", fontsize=20)
ax[0].set_ylabel(f"$y = \prod_{{i=0}}^{{i={i_max}}} (n+i)$", fontsize=20)
ax[1].plot(df["k"], df["prod"], "ko-")
ax[1].set_xlabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[1].set_title("$y = k^2 - 1$", fontsize=20)
ax[2].plot(df["n1"], df["k"], "ko-", label="$k = \sqrt{y + 1}$")
ax[2].plot(df["n1"], df["k = n^2 + 3n + 1"], "r--", label="$k = n^2 + 3n + 1$")
ax[2].legend(fontsize=14)
ax[2].set_ylabel("$k = \sqrt{y + 1}$", fontsize=20)
ax[2].set_xlabel("$n$", fontsize=20)
fig.suptitle(f"Product of 4 consecutive integers is 1 less than a perfect square.", fontsize=20)
fig.tight_layout()
Slide Notebooks/Product of consecutive numbers.ipynb
napsternxg/ipython-notebooks
apache-2.0
Related works

P. Erdös, J. L. Selfridge. "The product of consecutive integers is never a power." Illinois J. Math. 19 (2) 292 - 301, June 1975. https://doi.org/10.1215/ijm/1256050816

Visual Proof
nums = np.arange(10, 10+4)
A = np.zeros((nums[0], nums[-1]))
A[:, nums[0]:] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)

nums = np.arange(10, 10+4)
A = np.zeros((nums[1], nums[2]))
A[:, nums[0]:] = 2
A[nums[0]:, :] = 3
A[nums[0]:, nums[0]:] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)

import matplotlib.animation as animation
from IPython.display import HTML

fig, ax = plt.subplots(1, 1)
frames = []
nums = np.arange(10, 10+4)
A = np.zeros((nums[1], nums[-1]))
im = ax.pcolormesh(A, cmap="inferno", vmin=0, vmax=4)
title = ax.set_title(f"Start")
ax.invert_yaxis()
ax.set_xticks(np.arange(A.shape[1]))
ax.set_yticks(np.arange(A.shape[0]))
ax.grid(which="major", color="w", linestyle='-', linewidth=3)

def init():
    im.set_array(A)
    title.set_text("")
    return im, title

def animate(i):
    text = ""
    if i == 0:
        A[:, nums[0]:] = 4
        A[nums[0]:, :] = 4
        text = "$n * n$"
    if i == 1:
        A[:, nums[0]:] = 2
        A[nums[0]:, ] = 4
        text = "$n * (n+3)$"
    if i == 2:
        A[:, nums[0]:] = 2
        A[:, nums[2]:] = 3
        A[nums[0]:, ] = 4
        text = "$n * (n+3)$"
    if i == 3:
        A[:, nums[2]:] = 4
        A[nums[0]:, :] = 3
        A[nums[0]:, nums[0]:] = 4
        A[nums[0]:, nums[0]:nums[2]] = 4
        text = "$(n+1) * (n+2)$"
    if i == 4:
        A[nums[0]:, nums[0]:nums[2]] = 1
        text = "$n * (n+3) = (n+1)*(n+2) - 2$"
    # print(A)
    im.set_array(A)
    title.set_text(f"Step: {i} | {text}")
    return im, title

# ax = sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
fig.tight_layout()
ani = animation.FuncAnimation(fig, animate, frames=5, interval=2000, blit=True, repeat=True)
HTML(ani.to_html5_video())
# frames
# ax.cla()

nums = np.arange(10, 10+4)
A = np.zeros((nums[1], nums[-1]))
A[:, nums[0]:] = 2
A[nums[0]:, ] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)

A[:, nums[2]:] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)

A[nums[0]:, :] = 2
A[nums[0]:, nums[0]:] = 4
A[nums[0]:, nums[0]:nums[2]] = 1
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
plt.show()
# plt.pause(1)

nums = np.arange(10, 10+4)
A = np.zeros((nums[1], nums[-1]))
A[:, nums[0]:] = 2
A[nums[0]:, :] = 3
A[nums[0]:, nums[0]:] = 1
A[:, nums[2]:] = 4
sns.heatmap(A, linewidth=2, cbar=False, vmin=0, vmax=4)
Slide Notebooks/Product of consecutive numbers.ipynb
napsternxg/ipython-notebooks
apache-2.0
Local server: runs the sound playback on the local network. Python program.
# !/usr/bin/python2
import time
import BaseHTTPServer
import os
import random
import string
import requests

from urlparse import parse_qs, urlparse

HOST_NAME = '0.0.0.0'
PORT_NUMBER = 9999

# MP3_DIR is built from the user's HOME directory + Music/Campainha
# (e.g.: /home/usuario/Music/Campainha)
MP3_DIR = os.path.join(os.getenv('HOME'), 'Music', 'Campainha')
VALID_CHARS = set(string.ascii_letters + string.digits + '_.')
CHAVE_THINGSPEAK = 'XYZ11ZYX99XYZ1XX'
# Save the log file in the user's home directory (e.g.: /home/usuario/campainha.log)
ARQUIVO_LOG = os.path.join(os.getenv('HOME'), 'campainha.log')

def filtra(mp3):
    if not mp3.endswith('.mp3'):
        return False
    for c in mp3:
        if not c in VALID_CHARS:
            return False
    return True

def log(msg, output_file=None):
    if output_file is None:
        output_file = open(ARQUIVO_LOG, 'a')
    output_file.write('%s: %s\n' % (time.asctime(), msg))
    output_file.flush()

class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    def do_GET(s):
        query = urlparse(s.path).query
        if not query:
            s.send_response(404)
            s.end_headers()
            s.wfile.write('Not found')
            return
        components = dict(qc.split('=') for qc in query.split('&'))
        if not 'bateria' in components:
            s.send_response(404)
            s.end_headers()
            s.wfile.write('Not found')
            return
        # send_response must be called before send_header/end_headers
        s.send_response(200)
        s.send_header("Content-type", "text/plain")
        s.end_headers()
        s.wfile.write('Tocou')
        s.wfile.flush()
        log("Updating ThingSpeak")
        r = requests.post('https://api.thingspeak.com/update',
                          data={'api_key': CHAVE_THINGSPEAK,
                                'field1': components['bateria']})
        log("ThingSpeak returned: %d" % r.status_code)
        log("Playing MP3")
        mp3s = [f for f in os.listdir(MP3_DIR) if filtra(f)]
        mp3 = random.choice(mp3s)
        os.system("mpv " + os.path.join(MP3_DIR, mp3))

if __name__ == '__main__':
    server_class = BaseHTTPServer.HTTPServer
    httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
    log("Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER))
    try:
        httpd.serve_forever()
    except KeyboardInterrupt:
        pass
    httpd.server_close()
    log("Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER))
dev/checkpoint/2017-05-25-estevesdouglas-compartilhando-notebook.ipynb
EstevesDouglas/UNICAMP-FEEC-IA369Z
gpl-3.0
Export the database from the dashboard of the IoT device (CSV file).
import numpy as np
import csv

with open('database.csv', 'rb') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
    for row in spamreader:
        print ', '.join(row)
dev/checkpoint/2017-05-25-estevesdouglas-compartilhando-notebook.ipynb
EstevesDouglas/UNICAMP-FEEC-IA369Z
gpl-3.0
Method

Conclusion
References
dev/checkpoint/2017-05-25-estevesdouglas-compartilhando-notebook.ipynb
EstevesDouglas/UNICAMP-FEEC-IA369Z
gpl-3.0
Source note: the material covered here was created with reference to the following site: https://github.com/rouseguy/intro2stats

Notes

It is recommended to treat today's material as an introduction to the pandas module and move on. The content below is not hard to understand if you know how to analyze data in an Excel spreadsheet, e.g. how to compute averages. That is, with basic knowledge of numpy arrays and of Excel, you can follow the material. If a more detailed explanation is needed, it is good to read the explanation at the site below first (up to and including section 5.2 is enough). However, it is recommended to first skim the content below while comparing it with Excel's features: http://sinpong.tistory.com/category/Python%20for%20data%20analysis

Computing averages

Today's main example

Analyze data on the wholesale prices of weed (cannabis) traded in the US and compute the averages of the traded wholesale prices:

- Mean
- Median
- Mode

For a more detailed explanation of averages, see the attached lecture note: GongSu21-Averages.pdf

Main modules used

Below are the modules fundamental to statistical analysis:

- pandas: a module specialized for statistical analysis, built on top of numpy; it supports functionality that works like Microsoft Excel
- datetime: a module that helps display dates and times appropriately
- scipy: a module supporting numerical computation, engineering mathematics, etc.

Introduction to pandas

What is pandas? A Python module providing fast and easy data analysis tools, built on numpy.

Features of pandas:

- Various operations such as data sorting
- Powerful indexing and slicing
- Time series support
- Handling of missing data
- SQL/database-style relational operations

Note: the features of the pandas module are explained in more detail next time. Here it is enough to get a feel for how the pandas module is used.
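As a small taste of the features listed above, here is a self-contained example with made-up numbers touching on indexing, sorting, and a SQL-like group-by (the column names here are illustrative only):

```python
import pandas as pd

# A tiny table with made-up numbers
df = pd.DataFrame({'state': ['NY', 'CA', 'NY', 'CA'],
                   'price': [340.2, 248.0, 339.9, 245.5]})

# Powerful indexing: a boolean mask selects rows, a label selects a column
ca_prices = df[df['state'] == 'CA']['price']

# Sorting plus a SQL-like group-by aggregation
mean_by_state = df.sort_values('state').groupby('state')['price'].mean()
print(mean_by_state)
```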
import numpy as np
import pandas as pd

from datetime import datetime as dt
from scipy import stats
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Loading and processing the data

The data used today is the following: wholesale weed (cannabis) prices per US state and their sale dates, for 51 states: Weed_Price.csv

The figure below shows part of the Weed_Price.csv file, which contains the state-level sales data, opened in Excel. The actual data contains 22,899 rows; the figure shows only 5 of them.
* Note: line 1 holds the table's column names.
* Column names: State, HighQ, HighQN, MedQ, MedQN, LowQ, LowQN, date

<p> <table cellspacing="20"> <tr> <td> <img src="img/weed_price.png" style="width:600"> </td> </tr> </table> </p>

Loading a csv file

Use the read_csv function of the pandas module. The return value of read_csv is a special data type called DataFrame — think of it as a spreadsheet shaped like the Excel figure above.

Let us load the csv file mentioned above using pandas' read_csv function.

Note: when loading Weed_Price.csv, a keyword argument named parse_dates is used.
* The parse_dates keyword argument supports various ways of reading dates.
* Here the value [-1] tells pandas to parse the last column as dates.
* As the Excel screenshot above shows, the date format in the last column needs no further modification.
prices_pd = pd.read_csv("data/Weed_Price.csv", parse_dates=[-1])
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The return value of the read_csv function is of a data type called DataFrame.
type(prices_pd)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The DataFrame data type

A detailed explanation will be added next time. For now, just note that you can refer to the site below (up to and including section 5.2 is enough): http://sinpong.tistory.com/category/Python%20for%20data%20analysis

Comparing the DataFrame type with an Excel spreadsheet

If we check the first five rows of the loaded Weed_Price.csv file, they match what we saw in the Excel screenshot earlier; only the row and column labels differ slightly.
* In Excel the column labels are A, B, C, ..., H, and the source file's column names are pushed down to row 1.
* In Excel the row labels are 1, 2, 3, ....

read_csv loads the file a bit differently:
* The column labels are taken from the source file's column names.
* The row labels are 0, 1, 2, ....

To load the first few rows of a data file, use the head method of the DataFrame type. With no argument it shows the first 5 rows.
prices_pd.head()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
If you pass an argument, it shows as many rows as you want.
prices_pd.head(10)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
If the file contains a very large number of rows and you want to inspect the last part, use the tail method. Its usage is the same as that of head. The command below confirms that Weed_Price.csv stores 22,899 rows.
prices_pd.tail()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Presence of missing values

Looking at the result above, the LowQ column contains the symbol NaN. NaN means Not a Number, i.e. the data never existed or was lost.

The dtypes attribute of a DataFrame

Using the dtypes attribute of the DataFrame type, you can check the data type used in each column. It shows the per-column data types of the DataFrame stored in the prices_pd variable that read Weed_Price.csv.

Notes:
* The dtype attribute of a numpy array holds a single data type.
* Each column can only hold values of one data type; that is, each column corresponds to a numpy array.
* The dtype object used for the State column means a pointer to where a string is stored. Since the length of a string cannot be bounded in advance, the string is stored somewhere and a pointer refers to its location; when needed, the pointer is used to look up the stored string.
* The "dtype: object" shown on the last line just means "a data type for complex data".
prices_pd.dtypes
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Sorting and filling missing values

Sorting

Sort the data by state and by date.
prices_pd.sort_values(['State', 'date'], inplace=True)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Filling missing values

To compute averages there must be no missing values. Here we fill them with the value from the previous row (method='ffill').

Note: we sorted first so that, where a value is missing, a price traded in the same state at a nearby date is used whenever possible.
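Before applying this to the full dataset, here is what forward filling does on a small made-up Series:

```python
import pandas as pd
import numpy as np

# Each NaN is replaced by the last valid value that precedes it
s = pd.Series([1.0, np.nan, np.nan, 4.0, np.nan])
filled = s.ffill()  # same behaviour as fillna(method='ffill')
print(filled.tolist())  # [1.0, 1.0, 1.0, 4.0, 4.0]
```

Note that a NaN at the very start of a Series would stay NaN, since there is no earlier value to copy.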
prices_pd.fillna(method='ffill', inplace=True)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The first part of the sorted data shows only Alabama's records, ordered by date, as below.
prices_pd.head()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The last part of the sorted data shows only Wyoming's records, ordered by date, as below. There are no missing values anymore.
prices_pd.tail()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Analyzing the data: averages

Let us compute the averages of the wholesale weed prices for the state of California.

Mean

Mean = the sum of all values divided by the number of values

- $X$: a variable representing the values in the data
- $n$: the number of values in the data
- $\Sigma\, X$: the sum of all values in the data

$$\text{mean}(\mu) = \frac{\Sigma\, X}{n}$$

First, California's data must be extracted using a mask index.
california_pd = prices_pd[prices_pd.State == "California"].copy(True)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Let us check the first few records traded in the state of California.
california_pd.head(20)
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Let us compute the sum of the values in the HighQ column.

Note: remember the use of the sum() method.
ca_sum = california_pd['HighQ'].sum()
ca_sum
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Let us check the number of values in the HighQ column.

Note: remember the use of the count() method.
ca_count = california_pd['HighQ'].count()
ca_count
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Now we can compute the mean of the high-quality (HighQ) weed prices traded in California.
# Mean of the high-quality (HighQ) wholesale weed prices traded in California
ca_mean = ca_sum / ca_count
ca_mean
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Median

Let us compute the median of the high-quality (HighQ) weed prices traded in California.

Median = the value in the exact middle when the data is sorted by size.

- When the size of the data $n$ is odd: the value at position $\frac{n+1}{2}$.
- When the size of the data $n$ is even: the mean of the values at positions $\frac{n}{2}$ and $\frac{n}{2}+1$.

Here the size of the data is 449, an odd number.
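The two cases in the definition can be checked on small made-up samples, comparing the hand computation against numpy's built-in median:

```python
import numpy as np

odd = sorted([7, 1, 5])      # n = 3 (odd): take the middle element
even = sorted([7, 1, 5, 3])  # n = 4 (even): average the two middle elements

n = len(odd)
median_odd = odd[(n + 1) // 2 - 1]   # the (n+1)/2-th value, shifted for 0-based indexing
m = len(even)
median_even = (even[m // 2 - 1] + even[m // 2]) / 2

assert median_odd == np.median(odd) == 5
assert median_even == np.median(even) == 4
```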
ca_count
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Therefore the median is the value at position $\frac{\text{ca_count}-1}{2}$.

Note: indexing starts at 0, which is why the median position is pulled forward by one.
ca_highq_pd = california_pd.sort_values(['HighQ'])
ca_highq_pd.head()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Use the index-location method iloc.

Note: the iloc method uses positional index numbers. The index numbers shown in the table above are the ones assigned when Weed_Price.csv was first loaded; in ca_highq_pd they remain only as a reference, and the index passed to iloc starts counting again from 0. Using the old reference index would therefore not give the correct answer, which is why the code below indexes by position.
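The pitfall can be seen on a tiny made-up example: after sorting, the original labels are kept, but iloc still counts positions from 0:

```python
import pandas as pd

s = pd.Series([30, 10, 20], index=[0, 1, 2]).sort_values()
print(s.index.tolist())  # the original labels, now reordered: [1, 2, 0]

assert s.iloc[0] == 10  # iloc: first *position* in the sorted order
assert s.loc[0] == 30   # loc: the row whose original *label* is 0
```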
# Median of the high-quality (HighQ) wholesale weed prices traded in California
ca_median = ca_highq_pd.HighQ.iloc[int((ca_count-1)/ 2)]
ca_median
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Mode

Let us compute the mode of the high-quality (HighQ) weed prices traded in California.

Mode = the most frequently occurring value.

Note: remember the use of the value_counts() method.
# The most frequently traded high-quality (HighQ) wholesale weed price in California
ca_mode = ca_highq_pd.HighQ.value_counts().index[0]
ca_mode
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Exercises

Exercise

The mean, median, and mode computed above are already implemented as methods of the DataFrame and Series types. Run the code below and check what each statement means.
california_pd.mean()
california_pd.mean().HighQ
california_pd.median()
california_pd.mode()
california_pd.mode().HighQ
california_pd.HighQ.mean()
california_pd.HighQ.median()
california_pd.HighQ.mode()
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Exercise

Compute the means of the high-quality (HighQ) wholesale weed prices traded in California in 2013, 2014, and 2015, respectively.

Hint: california_pd.iloc[0]['date'].year

Sample answer 1

The mean of the wholesale prices traded in 2014 can be computed as below.

- sum variable: accumulates the total of the wholesale prices traded in 2014.
- count variable: counts the number of trades in 2014.
sum = 0
count = 0

for index in np.arange(len(california_pd)):
    if california_pd.iloc[index]['date'].year == 2014:
        sum += california_pd.iloc[index]['HighQ']
        count += 1

sum/count
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Sample answer 2

Alternatively, you can obtain index information as below and use slicing. Computing the yearly means via slicing then follows the same approach as the main text.
years = np.arange(2013, 2016)
year_starts = [0]

for yr in years:
    for index in np.arange(year_starts[-1], len(california_pd)):
        if california_pd.iloc[index]['date'].year == yr:
            continue
        else:
            year_starts.append(index)
            break

year_starts
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
The numbers stored in year_starts mean the following:

- The 2013 trades start at row 0.
- The 2014 trades start at row 5.
- The 2015 trades start at row 369.
california_pd.iloc[4]
california_pd.iloc[5]
california_pd.iloc[368]
california_pd.iloc[369]
previous/y2017/W09-numpy-averages/GongSu21_Statistics_Averages.ipynb
liganega/Gongsu-DataSci
gpl-3.0
1. Importing the dependent time series data In this codeblock a time series of groundwater levels is imported using the read_csv function of pandas. As pastas expects a pandas Series object, the data is squeezed. To check if you have the correct data type (a pandas Series object), you can use type(oseries) as shown below. The following characteristics are important when importing and preparing the observed time series: - The observed time series are stored as a pandas Series object. - The time step can be irregular.
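Besides read_csv, such a Series can also be built directly; here is a minimal sketch with made-up heads on an irregular DatetimeIndex (the values and dates are illustrative only):

```python
import pandas as pd

# Made-up heads on an irregular DatetimeIndex -- the time step of the
# observed series does not have to be regular
dates = pd.to_datetime(['2010-01-01', '2010-01-04', '2010-01-10'])
heads = pd.Series([27.6, 27.5, 27.8], index=dates, name='head')

print(type(heads))  # <class 'pandas.core.series.Series'>
```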
# Import groundwater time series and squeeze to Series object
gwdata = pd.read_csv('../data/head_nb1.csv', parse_dates=['date'],
                     index_col='date', squeeze=True)
print('The data type of the oseries is: %s' % type(gwdata))

# Plot the observed groundwater levels
gwdata.plot(style='.', figsize=(10, 4))
plt.ylabel('Head [m]');
plt.xlabel('Time [years]');
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
2. Import the independent time series Two explanatory series are used: the precipitation and the potential evaporation. These need to be pandas Series objects, as for the observed heads. Important characteristics of these time series are: - All series are stored as pandas Series objects. - The series may have irregular time intervals, but then it will be converted to regular time intervals when creating the time series model later on. - It is preferred to use the same length units as for the observed heads.
# Import observed precipitation series
precip = pd.read_csv('../data/rain_nb1.csv', parse_dates=['date'],
                     index_col='date', squeeze=True)
print('The data type of the precip series is: %s' % type(precip))

# Import observed evaporation series
evap = pd.read_csv('../data/evap_nb1.csv', parse_dates=['date'],
                   index_col='date', squeeze=True)
print('The data type of the evap series is: %s' % type(evap))

# Calculate the recharge to the groundwater
recharge = precip - evap
print('The data type of the recharge series is: %s' % type(recharge))

# Plot the time series of the precipitation and evaporation
plt.figure()
recharge.plot(label='Recharge', figsize=(10, 4))
plt.xlabel('Time [years]')
plt.ylabel('Recharge (m/year)');
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
3. Create the time series model In this code block the actual time series model is created. First, an instance of the Model class is created (named ml here). Second, the different components of the time series model are created and added to the model. The imported time series are automatically checked for missing values and other inconsistencies. The keyword argument fillnan can be used to determine how missing values are handled. If any nan-values are found this will be reported by pastas.
# Create a model object by passing it the observed series
ml = ps.Model(gwdata, name="GWL")

# Add the recharge data as explanatory variable
sm = ps.StressModel(recharge, ps.Gamma, name='recharge', settings="evap")
ml.add_stressmodel(sm)
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
4. Solve the model The next step is to compute the optimal model parameters. The default solver uses a non-linear least squares method for the optimization. The python package scipy is used (info on scipy's least_squares solver can be found here). Some standard optimization statistics are reported along with the optimized parameter values and correlations.
ml.solve()
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
5. Plot the results The solution can be plotted after a solution has been obtained.
ml.plot()
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
6. Advanced plotting There are many ways to further explore the time series model. pastas has some built-in functionalities that will provide the user with a quick overview of the model. The plots subpackage contains all the options. One of these is the method plots.results which provides a plot with more information.
ml.plots.results(figsize=(10, 6))
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
7. Statistics The stats subpackage includes a number of statistical functions that may applied to the model. One of them is the summary method, which gives a summary of the main statistics of the model.
ml.stats.summary()
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
8. Improvement: estimate evaporation factor

In the previous model, the recharge was estimated as precipitation minus potential evaporation. A better model is to estimate the actual evaporation as a factor (called the evaporation factor here) times the potential evaporation.

First, a new model is created (called ml2 here so that the original model ml does not get overwritten). Second, a RechargeModel object with a Linear recharge model is created, which combines the precipitation and evaporation series and adds a parameter for the evaporation factor f. The RechargeModel object is added to the model, the model is solved, and the results and statistics are plotted to the screen.

Note that the new model gives a better fit (lower root mean squared error and higher explained variance), but that the Akaike information criterion indicates that the additional parameter does not improve the model significantly (the Akaike criterion for model ml2 is higher than for model ml).
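Conceptually, the linear recharge model computes R = P − f·E, with f the evaporation factor that gets estimated. A rough numpy sketch under that assumption (the actual pastas implementation handles time steps, settings, and parameter bounds and is more involved; the function name linear_recharge is ours, not pastas'):

```python
import numpy as np

def linear_recharge(prec, evap, f):
    """Recharge as precipitation minus an evaporation factor times evaporation."""
    return np.asarray(prec) - f * np.asarray(evap)

prec = np.array([2.0, 0.0, 5.0])  # made-up fluxes
evap = np.array([1.0, 1.5, 2.0])
print(linear_recharge(prec, evap, f=1.0))  # with f = 1 this reduces to P - E,
                                           # i.e. the recharge used in the first model
```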
# Create a model object by passing it the observed series
ml2 = ps.Model(gwdata)

# Add the recharge data as explanatory variable
ts1 = ps.RechargeModel(precip, evap, ps.Gamma, name='rainevap',
                       recharge=ps.rch.Linear(), settings=("prec", "evap"))
ml2.add_stressmodel(ts1)

# Solve the model
ml2.solve()

# Plot the results
ml2.plot()

# Statistics
ml2.stats.summary()
examples/notebooks/01_basic_model.ipynb
pastas/pastas
mit
First let us look at what the underdamped spectral density looks like:
def plot_spectral_density():
    """ Plot the underdamped spectral density """
    w = np.linspace(0, 5, 1000)
    J = lam**2 * gamma * w / ((w0**2 - w**2)**2 + (gamma**2) * (w**2))

    fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8, 8))
    axes.plot(w, J, 'r', linewidth=2)
    axes.set_xlabel(r'$\omega$', fontsize=28)
    axes.set_ylabel(r'J', fontsize=28)

plot_spectral_density()
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
The correlation functions are now very oscillatory, because of the Lorentzian peak in the spectral density. So next, let us plot the correlation functions themselves:
def Mk(t, k, gamma, w0, beta):
    """ Calculate the Matsubara terms for a given t and k. """
    Om = np.sqrt(w0**2 - (gamma / 2)**2)
    Gamma = gamma / 2.
    ek = 2 * pi * k / beta

    return (
        (-2 * lam**2 * gamma / beta) * ek * exp(-ek * abs(t))
        / (((Om + 1.0j * Gamma)**2 + ek**2) * ((Om - 1.0j * Gamma)**2 + ek**2))
    )

def c(t, Nk, lam, gamma, w0, beta):
    """ Calculate the correlation function for a vector of times, t. """
    Om = np.sqrt(w0**2 - (gamma / 2)**2)
    Gamma = gamma / 2.

    Cr = (
        coth(beta * (Om + 1.0j * Gamma) / 2) * np.exp(1.0j * Om * t)
        + coth(beta * (Om - 1.0j * Gamma) / 2) * np.exp(-1.0j * Om * t)
    )
    Ci = np.exp(-1.0j * Om * t) - np.exp(1.0j * Om * t)

    return (
        (lam**2 / (4 * Om)) * np.exp(-Gamma * np.abs(t)) * (Cr + Ci)
        + np.sum([Mk(t, k, gamma=gamma, w0=w0, beta=beta) for k in range(1, Nk + 1)], 0)
    )

def plot_correlation_function():
    """ Plot the underdamped correlation function. """
    Nk = 3
    t = np.linspace(0, 20, 1000)
    corr = c(t, Nk=3, lam=lam, gamma=gamma, w0=w0, beta=beta)

    fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8, 8))
    axes.plot(t, np.real(corr), '-', color="black", label="Re[C(t)]")
    axes.plot(t, np.imag(corr), '-', color="red", label="Im[C(t)]")
    axes.set_xlabel(r't', fontsize=28)
    axes.set_ylabel(r'C', fontsize=28)
    axes.legend(loc=0, fontsize=12)

plot_correlation_function()
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
It is useful to look at what the Matsubara contributions do to this correlation function. We see that they modify the real part around $t=0$:
def plot_matsubara_correlation_function_contributions():
    """ Plot the Matsubara contributions to the correlation function. """
    t = np.linspace(0, 20, 1000)
    M_Nk3 = np.sum([
        Mk(t, k, gamma=gamma, w0=w0, beta=beta) for k in range(1, 2 + 1)
    ], 0)
    M_Nk5 = np.sum([
        Mk(t, k, gamma=gamma, w0=w0, beta=beta) for k in range(1, 100 + 1)
    ], 0)

    fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8, 8))
    axes.plot(t, np.real(M_Nk3), '-', color="black", label="Re[M(t)] Nk=2")
    axes.plot(t, np.real(M_Nk5), '--', color="red", label="Re[M(t)] Nk=100")
    axes.set_xlabel(r't', fontsize=28)
    axes.set_ylabel(r'M', fontsize=28)
    axes.legend(loc=0, fontsize=12)

plot_matsubara_correlation_function_contributions()
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
Solving for the dynamics as a function of time: Next we calculate the exponents using the Matsubara decompositions. Here we split them into real and imaginary parts. The HEOM code will optimize these, and reduce the number of exponents when real and imaginary parts have the same exponent. This is clearly the case for the first term in the vkAI and vkAR lists.
ckAR, vkAR, ckAI, vkAI = underdamped_matsubara_params(lam=lam, gamma=gamma, T=T, nk=Nk)
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
Having created the lists which specify the bath correlation functions, we create a BosonicBath from them and pass the bath to the HEOMSolver class. The solver constructs the "right hand side" (RHS) determining how the system and auxiliary density operators evolve in time. This can then be used to solve for dynamics or steady-state. Below we create the bath and solver and then solve for the dynamics by calling .run(rho0, tlist).
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)

with timer("RHS construction time"):
    bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)
    HEOMMats = HEOMSolver(Hsys, bath, NC, options=options)

with timer("ODE solver time"):
    resultMats = HEOMMats.run(rho0, tlist)

plot_result_expectations([
    (resultMats, P11p, 'b', "P11 Mats"),
    (resultMats, P12p, 'r', "P12 Mats"),
]);
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
In practice, one would not perform this laborious expansion for the underdamped correlation function, because QuTiP already has a class, UnderDampedBath, that can construct this bath for you. Nevertheless, knowing how to perform this expansion will allow you to construct your own baths for other spectral densities. Below we show how to use this built-in functionality:
# Compare to built-in under-damped bath:
with timer("RHS construction time"):
    bath = UnderDampedBath(Q, lam=lam, gamma=gamma, w0=w0, T=T, Nk=Nk)
    HEOM_udbath = HEOMSolver(Hsys, bath, NC, options=options)

with timer("ODE solver time"):
    result_udbath = HEOM_udbath.run(rho0, tlist)

plot_result_expectations([
    (result_udbath, P11p, 'b', "P11 (UnderDampedBath)"),
    (result_udbath, P12p, 'r', "P12 (UnderDampedBath)"),
]);
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
We can compare these results to those of the Bloch-Redfield solver in QuTiP:
UD = (
    f" 2* {lam}**2 * {gamma} / ( {w0}**4 * {beta}) if (w==0) else"
    f" 2* ({lam}**2 * {gamma} * w / (({w0}**2 - w**2)**2 + {gamma}**2 * w**2)) * ((1 / (exp(w * {beta}) - 1)) + 1)"
)

options = Options(nsteps=15000, store_states=True, rtol=1e-12, atol=1e-12)

with timer("ODE solver time"):
    resultBR = brmesolve(Hsys, rho0, tlist, a_ops=[[sigmaz(), UD]], options=options)

plot_result_expectations([
    (resultMats, P11p, 'b', "P11 Mats"),
    (resultMats, P12p, 'r', "P12 Mats"),
    (resultBR, P11p, 'g--', "P11 Bloch Redfield"),
    (resultBR, P12p, 'g--', "P12 Bloch Redfield"),
]);
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
Lastly, let us calculate the analytical steady-state result and compare all of the results: The thermal state of a reaction coordinate (treating the environment as a single damped mode) should, at high temperatures and small gamma, tell us the steady-state:
dot_energy, dot_state = Hsys.eigenstates()
deltaE = dot_energy[1] - dot_energy[0]

gamma2 = gamma
wa = w0  # reaction coordinate frequency
g = lam / np.sqrt(2 * wa)  # coupling

NRC = 10

Hsys_exp = tensor(qeye(NRC), Hsys)
Q_exp = tensor(qeye(NRC), Q)
a = tensor(destroy(NRC), qeye(2))

H0 = wa * a.dag() * a + Hsys_exp
# interaction
H1 = (g * (a.dag() + a) * Q_exp)

H = H0 + H1

energies, states = H.eigenstates()

rhoss = 0 * states[0] * states[0].dag()
for kk, energ in enumerate(energies):
    rhoss += (states[kk] * states[kk].dag() * exp(-beta * energies[kk]))
rhoss = rhoss / rhoss.norm()

P12RC = tensor(qeye(NRC), basis(2, 0) * basis(2, 1).dag())
P12RC = expect(rhoss, P12RC)

P11RC = tensor(qeye(NRC), basis(2, 0) * basis(2, 0).dag())
P11RC = expect(rhoss, P11RC)

# XXX: Decide what to do with this cell
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False

# XXX: Decide what to do with this cell
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12, 7))

plt.yticks([P11RC, 0.6, 1.0], [0.38, 0.6, 1])

plot_result_expectations([
    (resultBR, P11p, 'y-.', "Bloch-Redfield"),
    (resultMats, P11p, 'b', "Matsubara $N_k=3$"),
], axes=axes)

axes.plot(tlist, [P11RC for t in tlist], color='black', linestyle="-.",
          linewidth=2, label="Thermal state")

axes.set_xlabel(r'$t \Delta$', fontsize=30)
axes.set_ylabel(r'$\rho_{11}$', fontsize=30)

axes.locator_params(axis='y', nbins=4)
axes.locator_params(axis='x', nbins=4)

axes.legend(loc=0)

fig.tight_layout()
# fig.savefig("figures/fig3.pdf")
from qutip.ipynbtools import version_table

version_table()
examples/heom/heom-1c-spin-bath-model-underdamped-sd.ipynb
qutip/qutip-notebooks
lgpl-3.0
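The Gibbs-state construction above — $\rho_{ss} \propto \sum_k e^{-\beta E_k} |k\rangle\langle k|$, normalized by its trace — can be sketched with plain NumPy on a toy Hamiltonian (the matrix and `beta` below are illustrative, not taken from the notebook):

```python
import numpy as np

# Toy 2x2 Hermitian "Hamiltonian"; beta is an arbitrary inverse temperature.
beta = 2.0
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])

energies, states = np.linalg.eigh(H)

# Build rho as the Boltzmann-weighted sum of eigenprojectors, then
# normalize by the trace (the trace norm for a positive matrix).
rho = sum(np.exp(-beta * e) * np.outer(states[:, k], states[:, k])
          for k, e in enumerate(energies))
rho /= np.trace(rho)
```

The QuTiP cell above does the same thing with `Qobj` eigenstates and `rhoss.norm()`; once `rho` is normalized, any population is just an expectation value of the matching projector.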
Hooray, we did it! Now we need to figure out how well it actually did.
def get_test_detector_plane(row):
    # Find location of nans, get the first one
    # Then divide by 6 (6 values per detector plane)
    plane = np.where(np.isnan(row.values))[0][0] / 6
    return int(plane)


def get_vals_at_plane(row, plane):
    cols = [i + str(int(plane)) for i in ['x', 'y', 'px', 'py', 'pz']]
    return row[cols].values


def get_vals_at_eval_plane(X_test, X_pred):
    X = X_pred.copy()
    X['eval_plane'] = X_test.apply(get_test_detector_plane, axis=1)
    retvals = X.loc[X_test.index.values].apply(
        lambda x: get_vals_at_plane(x, x['eval_plane']), axis=1)
    return retvals


eval_planes = X_test.apply(get_test_detector_plane, axis=1)

get_vals_at_plane(X_test.loc[15], 7)
jlab-ml-lunch-2/notebooks/01-Recommender-System.ipynb
BDannowitz/polymath-progression-blog
gpl-2.0
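The score used throughout the following cells is an element-wise mean squared error between predicted and held-out kinematic values. A minimal pure-Python sketch (the function name and inputs are illustrative; the notebook itself uses sklearn's `mean_squared_error`):

```python
# Mean squared error over flat lists of kinematic values (x, y, px, py, pz).
def mse(true_vals, pred_vals):
    assert len(true_vals) == len(pred_vals)
    return sum((t - p) ** 2
               for t, p in zip(true_vals, pred_vals)) / len(true_vals)


print(mse([1.0, 2.0, 3.0], [1.5, 2.0, 2.5]))  # ~0.1667
```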
Make a recommender class, a la sklearn. It should have fit and predict methods.
import logging

from jlab import COLS
from sklearn.preprocessing import StandardScaler


class DetectorRecommender(object):

    def __init__(self, k=20):
        self.logger = logging.getLogger(__name__)
        self.k = k
        self.planes = 27
        self.kinematics = ["x", "y", "px", "py", "pz"]
        self.cols = COLS
        self.X_train = pd.DataFrame(columns=self.cols)
        self.X_test = pd.DataFrame(columns=self.cols)
        self.scaler = StandardScaler()

    def fit(self, df):
        """SVD isn't really 'trained', but... """
        self.X_train = df.copy(deep=True)

    def predict(self, df):
        # Make a copy, index it from 0 to N
        self.logger.debug("Making a copy")
        self.X_test = df.copy(deep=True).reset_index(drop=True)

        # For each track, figure out which detector plane we'll evaluate
        self.logger.debug("Determining evaluation planes")
        eval_planes = self.X_test.apply(self.get_eval_detector_plane, axis=1)

        # Combine with the training set, shuffle it, and fill missing values
        self.logger.debug("Combining train and test sets for SVD")
        X = (pd.concat([self.X_test, self.X_train], axis=0)
               .reset_index(drop=True)
               .sample(replace=False, frac=1.0))

        # Fill with the mean values of each column
        self.logger.debug("Filling with mean values")
        X = X.fillna(X.mean())

        # Normalize the values
        self.logger.debug("Applying StandardScaler")
        X_norm_values = self.scaler.fit_transform(X)
        X_norm = pd.DataFrame(X_norm_values, columns=X.columns, index=X.index)

        # Singular-value decomposition
        self.logger.debug("Making predictions")
        X_pred_norm = self.fit_predict_svds(X_norm)

        # Extract our test tracks
        X_pred_norm = X_pred_norm.loc[self.X_test.index, :].sort_index()

        # Un-normalize them
        X_pred_values = self.scaler.inverse_transform(X_pred_norm)
        X_pred = pd.DataFrame(X_pred_values, columns=X_pred_norm.columns,
                              index=X_pred_norm.index)

        self.logger.debug("De-normalized. Extracting pred values.")
        # Extract just the non-z kinematic values for the eval planes
        det_eval_values = self.extract_values_at_eval_planes(X_pred,
                                                             eval_planes)

        return det_eval_values

    def fit_predict_svds(self, X):
        U, sigma, Vt = svds(X, k=self.k)
        sigma = np.diag(sigma)
        X_pred = pd.DataFrame(np.dot(np.dot(U, sigma), Vt),
                              columns=X.columns, index=X.index)
        return X_pred

    def extract_values_at_eval_planes(self, pred, planes):
        X = pred.copy(deep=True)
        X['eval_plane'] = planes
        retvals = X.apply(
            lambda x: self.get_vals_at_plane(x, x['eval_plane']), axis=1)
        retvals_df = pd.DataFrame(retvals.values.tolist(),
                                  columns=self.kinematics)
        return retvals_df

    def get_vals_at_plane(self, row, plane):
        cols = [i + str(int(plane)) for i in self.kinematics]
        return row[cols].values

    def get_eval_detector_plane(self, row):
        # Find location of nans, get the first one
        # Then divide by 6 (6 values per detector plane)
        plane = np.where(np.isnan(row.values))[0][0] / 6
        return int(plane)


logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)-12s - %(levelname)-8s - %(message)s')

predictor = DetectorRecommender()
predictor.fit(X_train)
X_pred = predictor.predict(X_test)
X_pred.head()

mean_squared_error(X_true, X_pred)
jlab-ml-lunch-2/notebooks/01-Recommender-System.ipynb
BDannowitz/polymath-progression-blog
gpl-2.0
Tune the one hyperparameter we have
for k in range(5, 15):
    predictor = DetectorRecommender(k=k)
    predictor.fit(X_train)
    X_pred = predictor.predict(X_test)
    print(k, mean_squared_error(X_true, X_pred))
jlab-ml-lunch-2/notebooks/01-Recommender-System.ipynb
BDannowitz/polymath-progression-blog
gpl-2.0
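Once the loop above has printed an MSE per `k`, selecting the best rank is a one-liner. The `scores` dict below is hypothetical, standing in for whatever values the grid search actually prints:

```python
# Hypothetical k -> MSE results from the grid search; the real numbers
# come from the loop over DetectorRecommender(k=k) above.
scores = {5: 120.0, 6: 98.5, 7: 91.2, 8: 95.0, 9: 103.4}

# min over keys, ranked by their MSE value
best_k = min(scores, key=scores.get)
print(best_k)  # 7
```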
Optimal performance at k=7
predictor = DetectorRecommender(k=7)
predictor.fit(X_train)
X_pred = predictor.predict(X_test)
print(mean_squared_error(X_true, X_pred))
jlab-ml-lunch-2/notebooks/01-Recommender-System.ipynb
BDannowitz/polymath-progression-blog
gpl-2.0