Line plot of sunspot data

Download the .txt data for the "Yearly mean total sunspot number [1700 - now]" from the SILSO website. Upload the file to the same directory as this notebook.
import os
assert os.path.isfile('yearssn.dat')
assignments/assignment04/MatplotlibEx01.ipynb
sthuggins/phys202-2015-work
mit
Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
import numpy as np

data = np.loadtxt("yearssn.dat")
years = data[:, 0]
ssc = data[:, 1]

assert len(years) == 315
assert years.dtype == np.dtype(float)
assert len(ssc) == 315
assert ssc.dtype == np.dtype(float)
assignments/assignment04/MatplotlibEx01.ipynb
sthuggins/phys202-2015-work
mit
Make a line plot showing the sunspot count as a function of year. Customize your plot to follow Tufte's principles of visualization. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
plt.figure(figsize=(10, 8))
plt.plot(years, ssc)
plt.xlim(1700, 2015)  # plot is scaled from 1700 to 2015 so that the data fill the graph
assert True  # leave for grading
assignments/assignment04/MatplotlibEx01.ipynb
sthuggins/phys202-2015-work
mit
Describe the choices you have made in building this visualization and how they make it effective.

YOUR ANSWER HERE

Now make 4 subplots, one for each century in the data set. This approach works well for this dataset as it allows you to maintain mild slopes while limiting the overall width of the visualization. Perform similar customizations as above: Customize your plot to follow Tufte's principles of visualization. Adjust the aspect ratio/size so that the steepest slope in your plot is approximately 1. Customize the box, grid, spines and ticks to match the requirements of this data.
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
centuries = [(1700, 1800), (1800, 1900), (1900, 2000), (2000, 2015)]
for ax, (start, end) in zip(axes.flat, centuries):
    mask = (years >= start) & (years < end)
    ax.plot(years[mask], ssc[mask])
    ax.set_xlim(start, end)
plt.tight_layout()
assert True  # leave for grading
assignments/assignment04/MatplotlibEx01.ipynb
sthuggins/phys202-2015-work
mit
First we draw M samples randomly from the input space.
M = 1000  # This is the number of data points to use

# Sample the input space according to the distributions in the table above
Rb1 = np.random.uniform(50, 150, (M, 1))
Rb2 = np.random.uniform(25, 70, (M, 1))
Rf = np.random.uniform(.5, 3, (M, 1))
Rc1 = np.random.uniform(1.2, 2.5, (M, 1))
Rc2 = np.random.uniform(.25, 1.2, (M, 1))
beta = np.random.uniform(50, 300, (M, 1))

# the input matrix
x = np.hstack((Rb1, Rb2, Rf, Rc1, Rc2, beta))
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
paulcon/active_subspaces
mit
Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$.
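For reference, the same scaling can be written by hand. This is a sketch of the linear map itself, not the library's implementation; the two sample rows here (the bounds themselves) are chosen only for illustration:

```python
import numpy as np

# Map each column of x from [xl, xu] linearly onto [-1, 1]
xl = np.array([50, 25, .5, 1.2, .25, 50])
xu = np.array([150, 70, 3, 2.5, 1.2, 300])

# Two illustrative rows: the lower and upper bounds themselves
x = np.vstack((xl, xu))

XX = 2 * (x - xl) / (xu - xl) - 1
print(XX)  # first row -> all -1, second row -> all +1
```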
# Upper and lower limits for inputs
xl = np.array([50, 25, .5, 1.2, .25, 50])
xu = np.array([150, 70, 3, 2.5, 1.2, 300])

# XX = normalized input matrix
XX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x)
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
paulcon/active_subspaces
mit
Compute gradients to approximate the matrix on which the active subspace is based.
# output values (f) and gradients (df)
f = circuit(XX)
df = circuit_grad(XX)
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
paulcon/active_subspaces
mit
Now we use our data to compute the active subspace.
# Set up our subspace using the gradient samples
ss = ac.subspaces.Subspaces()
ss.compute(df=df, nboot=500)
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
paulcon/active_subspaces
mit
We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values).
# Component labels
in_labels = ['Rb1', 'Rb2', 'Rf', 'Rc1', 'Rc2', 'beta']

# plot eigenvalues, subspace errors
ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br)
ac.utils.plotters.subspace_errors(ss.sub_br)

# manually make the subspace 2D for the eigenvector and 2D summary plots
ss.partition(2)

# Compute the active variable values
y = XX.dot(ss.W1)

# Plot eigenvectors, sufficient summaries
ac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels)
ac.utils.plotters.sufficient_summary(y, f)
tutorials/test_functions/otl_circuit/otlcircuit_example.ipynb
paulcon/active_subspaces
mit
In the previous chapters we used simulation to predict the effect of an infectious disease in a susceptible population and to design interventions that would minimize the effect. In this chapter we use analysis to investigate the relationship between the parameters, beta and gamma, and the outcome of the simulation.

Nondimensionalization

The figures from the parameter sweep suggest that there is a relationship between the parameters of the SIR model, beta and gamma, that determines the outcome of the simulation, the fraction of students infected. Let's think about what that relationship might be. When beta exceeds gamma, there are more contacts (that is, potential infections) than recoveries during each day (or other unit of time). The difference between beta and gamma might be called the "excess contact rate", in units of contacts per day. As an alternative, we might consider the ratio beta/gamma, which is the number of contacts per recovery. Because the numerator and denominator are in the same units, this ratio is dimensionless, which means it has no units. Describing physical systems using dimensionless parameters is often a useful move in the modeling and simulation game. It is so useful, in fact, that it has a name: nondimensionalization (see http://modsimpy.com/nondim). So we'll try the second option first.

Exploring the results

Suppose we have a SweepFrame with one row for each value of beta and one column for each value of gamma. Each element in the SweepFrame is the fraction of students infected in a simulation with a given pair of parameters. We can print the values in the SweepFrame like this:
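To make the units concrete, here is a tiny example with illustrative values of beta and gamma (these numbers are chosen for illustration, not taken from the sweep):

```python
beta = 0.333   # contacts per day (illustrative value)
gamma = 0.25   # recoveries per day (illustrative value)

excess = beta - gamma   # "excess contact rate": still has units, contacts per day
ratio = beta / gamma    # contacts per recovery: dimensionless

print(excess)   # ~0.083 contacts per day
print(ratio)    # ~1.33, no units
```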
beta_array = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.1]
gamma_array = [0.2, 0.4, 0.6, 0.8]
frame = sweep_parameters(beta_array, gamma_array)
frame.head()

for gamma in frame.columns:
    column = frame[gamma]
    for beta in column.index:
        frac_infected = column[beta]
        print(beta, gamma, frac_infected)
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
This is the first example we've seen with one for loop inside another: Each time the outer loop runs, it selects a value of gamma from the columns of the DataFrame and extracts the corresponding column. Each time the inner loop runs, it selects a value of beta from the column and selects the corresponding element, which is the fraction of students infected. In the example from the previous chapter, frame has 4 columns, one for each value of gamma, and 11 rows, one for each value of beta. So these loops print 44 lines, one for each pair of parameters. The following function encapsulates the previous loop and plots the fraction infected as a function of the ratio beta/gamma:
from matplotlib.pyplot import plot

def plot_sweep_frame(frame):
    for gamma in frame.columns:
        series = frame[gamma]
        for beta in series.index:
            frac_infected = series[beta]
            plot(beta/gamma, frac_infected, 'o', color='C1', alpha=0.4)

plot_sweep_frame(frame)
decorate(xlabel='Contact number (beta/gamma)',
         ylabel='Fraction infected')
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
The results fall on a single curve, at least approximately. That means that we can predict the fraction of students who will be infected based on a single parameter, the ratio beta/gamma. We don't need to know the values of beta and gamma separately.

Contact number

From Section xxx, recall that the number of new infections in a given day is $\beta s i N$, and the number of recoveries is $\gamma i N$. If we divide these quantities, the result is $\beta s / \gamma$, which is the number of new infections per recovery (as a fraction of the population). When a new disease is introduced to a susceptible population, $s$ is approximately 1, so the number of people infected by each sick person is $\beta / \gamma$. This ratio is called the "contact number" or "basic reproduction number" (see http://modsimpy.com/contact). By convention it is usually denoted $R_0$, but in the context of an SIR model, this notation is confusing, so we'll use $c$ instead. The results in the previous section suggest that there is a relationship between $c$ and the total number of infections. We can derive this relationship by analyzing the differential equations from Section xxx:

$$\begin{aligned} \frac{ds}{dt} &= -\beta s i \\ \frac{di}{dt} &= \beta s i - \gamma i \\ \frac{dr}{dt} &= \gamma i \end{aligned}$$

In the same way we divided the contact rate by the recovery rate to get the dimensionless quantity $c$, now we'll divide $di/dt$ by $ds/dt$ to get a ratio of rates:

$$\frac{di}{ds} = -1 + \frac{1}{cs}$$

Dividing one differential equation by another is not an obvious move, but in this case it is useful because it gives us a relationship between $i$, $s$ and $c$ that does not depend on time. From that relationship, we can derive an equation that relates $c$ to the final value of $s$. In theory, this equation makes it possible to infer $c$ by observing the course of an epidemic. Here's how the derivation goes.
We multiply both sides of the previous equation by $ds$: $$di = \left( -1 + \frac{1}{cs} \right) ds$$ And then integrate both sides: $$i = -s + \frac{1}{c} \log s + q$$ where $q$ is a constant of integration. Rearranging terms yields: $$q = i + s - \frac{1}{c} \log s$$

Now let's see if we can figure out what $q$ is. At the beginning of an epidemic, if the fraction infected is small and nearly everyone is susceptible, we can use the approximations $i(0) = 0$ and $s(0) = 1$ to compute $q$: $$q = 0 + 1 - \frac{1}{c} \log 1$$ Since $\log 1 = 0$, we get $q = 1$.

Now, at the end of the epidemic, let's assume that $i(\infty) = 0$, and $s(\infty)$ is an unknown quantity, $s_{\infty}$. Now we have: $$q = 1 = 0 + s_{\infty} - \frac{1}{c} \log s_{\infty}$$ Solving for $c$, we get $$c = \frac{\log s_{\infty}}{s_{\infty} - 1}$$

By relating $c$ and $s_{\infty}$, this equation makes it possible to estimate $c$ based on data, and possibly predict the behavior of future epidemics.

Analysis and simulation

Let's compare this analytic result to the results from simulation. I'll create an array of values for $s_{\infty}$:
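As a quick sanity check of the algebra, we can plug a sample value of $s_{\infty}$ back into the conserved quantity and confirm that $q$ comes out to 1 (the value of $s_{\infty}$ here is arbitrary, chosen just for the check):

```python
import math

# For any s_inf in (0, 1), c = log(s_inf) / (s_inf - 1) should satisfy
# the end-of-epidemic condition q = s_inf - (1/c) * log(s_inf) = 1.
s_inf = 0.3                                # arbitrary final susceptible fraction
c = math.log(s_inf) / (s_inf - 1)          # contact number implied by s_inf
q = s_inf - math.log(s_inf) / c            # conserved quantity at the end

print(c)   # ~1.72
print(q)   # 1.0
```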
from numpy import linspace

s_inf_array = linspace(0.0001, 0.999, 31)
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
And compute the corresponding values of $c$:
from numpy import log

c_array = log(s_inf_array) / (s_inf_array - 1)
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
To get the total infected, we compute the difference between $s(0)$ and $s(\infty)$, then store the results in a Series:
frac_infected = 1 - s_inf_array
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
We can use make_series to put c_array and frac_infected in a Pandas Series.
frac_infected_series = make_series(c_array, frac_infected)
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
Now we can plot the results:
plot_sweep_frame(frame)
frac_infected_series.plot(label='analysis')
decorate(xlabel='Contact number (c)',
         ylabel='Fraction infected')
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
When the contact number exceeds 1, analysis and simulation agree. When the contact number is less than 1, they do not: analysis indicates there should be no infections; in the simulations there are a small number of infections. The reason for the discrepancy is that the simulation divides time into a discrete series of days, whereas the analysis treats time as a continuous quantity. In other words, the two methods are actually based on different models.

So which model is better? Probably neither. When the contact number is small, the early progress of the epidemic depends on details of the scenario. If we are lucky, the original infected person, "patient zero", infects no one and there is no epidemic. If we are unlucky, patient zero might have a large number of close friends, or might work in the dining hall (and fail to observe safe food handling procedures). For contact numbers near or less than 1, we might need a more detailed model. But for higher contact numbers the SIR model might be good enough.

Estimating contact number

Figure xxx shows that if we know the contact number, we can compute the fraction infected. But we can also read the figure the other way; that is, at the end of an epidemic, if we can estimate the fraction of the population that was ever infected, we can use it to estimate the contact number.

Well, in theory we can. In practice, it might not work very well, because of the shape of the curve. When the contact number is near 2, the curve is quite steep, which means that small changes in $c$ yield big changes in the number of infections. If we observe that the total fraction infected is anywhere from 20% to 80%, we would conclude that $c$ is near 2. On the other hand, for larger contact numbers, nearly the entire population is infected, so the curve is nearly flat. In that case we would not be able to estimate $c$ precisely, because any value greater than 3 would yield effectively the same results.
Fortunately, this is unlikely to happen in the real world; very few epidemics affect anything close to 90% of the population. So the SIR model has limitations; nevertheless, it provides insight into the behavior of infectious disease, especially the phenomenon of herd immunity. As we saw in Chapter xxx, if we know the parameters of the model, we can use it to evaluate possible interventions. And as we saw in this chapter, we might be able to use data from earlier outbreaks to estimate the parameters.

Exercises

Exercise: If we didn't know about contact numbers, we might have explored other possibilities, like the difference between beta and gamma, rather than their ratio. Write a version of plot_sweep_frame, called plot_sweep_frame_difference, that plots the fraction infected versus the difference beta-gamma. What do the results look like, and what does that imply?
# Solution

def plot_sweep_frame_difference(frame):
    for gamma in frame.columns:
        column = frame[gamma]
        for beta in column.index:
            frac_infected = column[beta]
            plot(beta - gamma, frac_infected, 'o', color='C1', alpha=0.4)

# Solution

plot_sweep_frame_difference(frame)
decorate(xlabel='Excess infection rate (infections-recoveries per day)',
         ylabel='Fraction infected')

# Solution

# The results don't fall on a single curve, which means that if we
# know the difference between `beta` and `gamma`, but not their
# ratio, that's not enough to predict the fraction infected.
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: Suppose you run a survey at the end of the semester and find that 26% of students had the Freshman Plague at some point. What is your best estimate of c? Hint: if you print frac_infected_series, you can read off the answer.
# Solution

frac_infected_series

# Solution

# It looks like the fraction infected is 0.26 when the contact
# number is about 1.16.
python/soln/chap14.ipynb
AllenDowney/ModSim
gpl-2.0
HW 10.0: Short answer questions

What is Apache Spark and how is it different from Apache Hadoop? Fill in the blanks: Spark API consists of interfaces to develop applications based on it in Java, ...... languages (list languages). Using Spark, resource management can be done either in a single server instance or using a framework such as Mesos or ????? in a distributed manner. What is an RDD? Show a fun example of creating one and bringing the first element back to the driver program. What is lazy evaluation? Give an intuitive example of lazy evaluation and comment on the massive computational savings to be had from lazy evaluation.

Answers

Apache Spark is a framework for parallel computations over big data with optimized general execution graphs over RDDs. It differs from Apache Hadoop by keeping data in memory instead of writing to disk, so it is much faster. Spark programs also typically require 2-5 times less code than their Hadoop equivalents. Spark offers an interactive read-eval-print loop, which Hadoop does not.

Spark API consists of interfaces to develop applications based on it in Java, <font color='green'>Scala, Python, and R</font> languages. Using Spark, resource management can be done either in a single server instance or using a framework such as Mesos or <font color='green'>YARN</font> in a distributed manner.

A Resilient Distributed Dataset (RDD) is a distributed collection of elements, which are automatically distributed across the cluster for parallel computations. RDDs can also be recomputed from the execution graph, providing fault tolerance.

Lazy evaluation means that transformations are not computed immediately, but only when an action is performed on the transformed RDD. An example of lazy evaluation is reading the first line of a file. If creation of an RDD from a text file were not computed lazily, then the entire file would be read when the RDD was created. However, with lazy evaluation, if we then perform an action of examining the first line, only the first line needs to be read. Lazy evaluation means that values are only computed if they are required, potentially resulting in significant computational savings.
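The savings can be illustrated outside Spark with plain Python generators, which are lazy in the same sense: building the pipeline does no work, and demanding one result does only one unit of work. This analogy is an addition, not Spark code:

```python
work_done = 0

def lines():
    """Simulate reading a huge file, one line at a time."""
    global work_done
    for i in range(1_000_000):
        work_done += 1          # count how many lines were actually "read"
        yield "line %d" % i

# "Transformation": builds a lazy pipeline, does no work yet
upper = (s.upper() for s in lines())

# "Action": demand only the first element
first = next(upper)
print(first)       # LINE 0
print(work_done)   # 1 -- only one of the million lines was ever read
```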
'''Example of creating an RDD and bringing the first element back to the driver'''
import numpy as np

dataRDD = sc.parallelize(np.random.random_sample(1000))
data2X = dataRDD.map(lambda x: x * 2)
dataGreaterThan1 = data2X.filter(lambda x: x > 1.0)
print(dataGreaterThan1.take(1))
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
HW 10.1: In Spark, write the code to count how often each word appears in a text document (or set of documents). Please use this homework document as the example document to run an experiment. Report the following: provide a sorted list of tokens in decreasing order of frequency of occurrence.
def hw10_1():
    # create RDD from the text file and split at spaces to get words
    rdd = sc.textFile("HW10-Public/MIDS-MLS-HW-10.txt")
    words = rdd.flatMap(lambda x: x.strip().split(" "))
    # count words and sort in decreasing order of frequency
    sortedcounts = words.map(lambda x: (x, 1)) \
                        .reduceByKey(lambda x, y: x + y) \
                        .map(lambda kv: (kv[1], kv[0])) \
                        .sortByKey(False) \
                        .map(lambda kv: (kv[1], kv[0]))
    for line in sortedcounts.collect():
        print(line)

hw10_1()
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
HW 10.1.1: Modify the above word count code to count words that begin with lower case letters (a-z) and report your findings. Again, sort the output words in decreasing order of frequency.
def hw10_1_1():
    def isloweraz(word):
        '''Check if the word starts with a lower case letter.'''
        lowercase = 'abcdefghijklmnopqrstuvwxyz'
        try:
            return word[0] in lowercase
        except IndexError:
            return False

    # create RDD from the text file
    rdd = sc.textFile("HW10-Public/MIDS-MLS-HW-10.txt")
    # get words and keep those that start with a lowercase letter
    words = rdd.flatMap(lambda x: x.strip().split(" ")) \
               .filter(isloweraz)
    # count words and sort in decreasing order of frequency
    sortedcounts = words.map(lambda x: (x, 1)) \
                        .reduceByKey(lambda x, y: x + y) \
                        .map(lambda kv: (kv[1], kv[0])) \
                        .sortByKey(False) \
                        .map(lambda kv: (kv[1], kv[0]))
    for line in sortedcounts.collect():
        print(line)

hw10_1_1()
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
HW 10.2: KMeans a la MLlib

Using the MLlib-centric KMeans code snippet below (NOTE: kmeans_data.txt is available here: https://www.dropbox.com/s/q85t0ytb9apggnh/kmeans_data.txt?dl=0), run this code snippet, list the clusters that you find, and compute the Within Set Sum of Squared Errors for the found clusters. Comment on your findings.
from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt

# Load and parse the data
# NOTE: kmeans_data.txt is available here:
# https://www.dropbox.com/s/q85t0ytb9apggnh/kmeans_data.txt?dl=0
data = sc.textFile("HW10-Public/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10, runs=10,
                        initializationMode="random")

# Evaluate clustering by computing the Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))

# Save and load model
clusters.save(sc, "myModelPath")
sameModel = KMeansModel.load(sc, "myModelPath")

for i, ctr in enumerate(clusters.centers):
    print("Cluster %i: %.1f, %.1f, %.1f" % (i, ctr[0], ctr[1], ctr[2]))
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
HW 10.3: Download the following KMeans notebook: https://www.dropbox.com/s/3nsthvp8g2rrrdh/EM-Kmeans.ipynb?dl=0 Generate 3 clusters with 100 (one hundred) data points per cluster (using the code provided). Plot the data. Then run MLlib's KMeans implementation on this data and report your results as follows: plot the resulting clusters after 1 iteration, 10 iterations, 20 iterations, and 100 iterations; in each plot, report the Within Set Sum of Squared Errors for the found clusters. Comment on the progress of this measure as the KMeans algorithm runs for more iterations.
%matplotlib inline
import numpy as np
import pylab
import json

size1 = size2 = size3 = 100
samples1 = np.random.multivariate_normal([4, 0], [[1, 0], [0, 1]], size1)
data = samples1
samples2 = np.random.multivariate_normal([6, 6], [[1, 0], [0, 1]], size2)
data = np.append(data, samples2, axis=0)
samples3 = np.random.multivariate_normal([0, 4], [[1, 0], [0, 1]], size3)
data = np.append(data, samples3, axis=0)

# Randomize the order of the data points
data = data[np.random.permutation(size1 + size2 + size3), ]
np.savetxt('data.csv', data, delimiter=',')

pylab.plot(samples1[:, 0], samples1[:, 1], '*', color='red')
pylab.plot(samples2[:, 0], samples2[:, 1], 'o', color='blue')
pylab.plot(samples3[:, 0], samples3[:, 1], '+', color='green')
pylab.show()

'''
Then run MLlib's KMeans implementation on this data and report your results as follows:
-- plot the resulting clusters after 1, 10, 20, and 100 iterations
-- in each plot report the Within Set Sum of Squared Errors for the found clusters.
Comment on the progress of this measure as the KMeans algorithm runs for more iterations.
'''

from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
from math import sqrt

# Load and parse the data
data = sc.textFile("data.csv")
parsedData = data.map(lambda line: array([float(x) for x in line.split(',')]))

# Calculate which cluster each data point belongs to
def nearest_centroid(line):
    x = np.array([float(f) for f in line.split(',')])
    closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()
    return (closest_centroid_idx, (x, 1))

# Plot centroids and data points for each iteration
def plot_iteration(means):
    pylab.plot(samples1[:, 0], samples1[:, 1], '.', color='blue')
    pylab.plot(samples2[:, 0], samples2[:, 1], '.', color='blue')
    pylab.plot(samples3[:, 0], samples3[:, 1], '.', color='blue')
    for mean in means:
        pylab.plot(mean[0], mean[1], '*', markersize=10, color='red')
    pylab.show()

from time import time

numIters = [1, 10, 20, 100]
for i in numIters:
    clusters = KMeans.train(parsedData, k=3, maxIterations=i,
                            initializationMode="random")
    print("Centroids after %d iteration%s:" % (i, "" if i == 1 else "s"))
    for centroid in clusters.centers:
        print(centroid)
    WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))
    plot_iteration(clusters.centers)
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
The WSSSE decreases with the number of iterations from 1 to 20 iterations. After 20 iterations, the centroids converge and the WSSSE is stable.

HW 10.4: Using the homegrown KMeans code provided, repeat the experiments in HW 10.3. Comment on any differences between the results in HW 10.3 and HW 10.4. Explain.
from numpy.random import rand

# Calculate which cluster each data point belongs to
def nearest_centroid(line):
    x = np.array([float(f) for f in line.split(',')])
    closest_centroid_idx = np.sum((x - centroids)**2, axis=1).argmin()
    return (closest_centroid_idx, (x, 1))

def error_p4(line, centroids):
    point = np.array([float(f) for f in line.split(',')])
    closest_centroid_idx = np.sum((point - centroids)**2, axis=1).argmin()
    center = centroids[closest_centroid_idx]
    return sqrt(sum([x**2 for x in (point - center)]))

K = 3
D = sc.textFile("./data.csv").cache()

numIters = [1, 10, 20, 100]
for n in numIters:
    # randomly initialize centroids
    centroids = rand(K, 2) * 5
    iter_num = 0
    for i in range(n):
        res = D.map(nearest_centroid) \
               .reduceByKey(lambda x, y: (x[0] + y[0], x[1] + y[1])) \
               .collect()
        res = sorted(res, key=lambda x: x[0])  # sort based on cluster ID
        centroids_new = np.array([x[1][0] / x[1][1] for x in res])  # divide by cluster size
        if np.sum(np.absolute(centroids_new - centroids)) < 0.01:
            break
        iter_num = iter_num + 1
        centroids = centroids_new
    print("Centroids after %d iteration%s:" % (n, "" if n == 1 else "s"))
    print(centroids)
    WSSSE = D.map(lambda line: error_p4(line, centroids)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))
    plot_iteration(centroids)
week11/MIDS-W261-2015-HW10-Week11-Adams.ipynb
kradams/MIDS-W261-2015-Adams
mit
Beyond drift-bifurcation
ds = velocity(tau=3.8, delta_t=0.05, R=3e-4, seed=0)
v = ds.simulate(1000000, v0=np.zeros(1))
friedrich_method(v, default)
notebooks/advanced/friedrich_coefficients.ipynb
blue-yonder/tsfresh
mit
Before drift-bifurcation
ds = velocity(tau=2./0.3-3.8, delta_t=0.05, R=3e-4, seed=0)
v = ds.simulate(1000000, v0=np.zeros(1))
friedrich_method(v, default)
notebooks/advanced/friedrich_coefficients.ipynb
blue-yonder/tsfresh
mit
Fill in your desired scheme, hostname and catalog number.
scheme = 'https'
hostname = 'synapse-dev.isrd.isi.edu'
catalog_number = 1
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Use DERIVA-Auth to get a credential or use None if your catalog allows anonymous access.
credential = get_credential(hostname)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Get a handle representing your server.
server = DerivaServer(scheme, hostname, credential)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Connect to a catalog (unversioned)

Connect to a catalog, and list its schemas.
catalog = server.connect_ermrest(catalog_number)
pb = catalog.getPathBuilder()
list(pb.schemas)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Get latest snapshot

The current catalog handle can return a handle to the latest snapshot.
latest = catalog.latest_snapshot()
pb = latest.getPathBuilder()
list(pb.schemas)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Print the snaptime of this catalog snapshot.
print(latest.snaptime)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Connect to a catalog snapshot

Here we pass the snaptime parameter explicitly in the connect_ermrest method.
snapshot = server.connect_ermrest('1', '2PM-DGYP-56Z4')
pb = snapshot.getPathBuilder()
list(pb.schemas)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Alternatively, we could pass a "versioned" catalog_id to the connect_ermrest method.
snapshot = server.connect_ermrest('1@2PM-DGYP-56Z4')
pb = snapshot.getPathBuilder()
list(pb.schemas)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Finally, we can poke around at schemas and tables as they existed at the specified snaptime.
subject = pb.schemas['Zebrafish'].tables['Subject']
print(subject.uri)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
Data may be read from the snapshot. Here, we will see how many subjects existed at that point in time.
e = subject.entities()
len(e)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
However, mutation operations on a catalog snapshot are disabled.
try:
    subject.insert([{'foo': 'bar'}])
except ErmrestCatalogMutationError as e:
    print(e)
docs/derivapy-catalog-snapshot.ipynb
informatics-isi-edu/deriva-py
apache-2.0
1D Likelihood

As a simple and straightforward starting example, we begin with a 1D Gaussian likelihood function.
mean = 2.0
cov = 1.0
rv = mvn(mean, cov)
lnlfn = lambda x: rv.logpdf(x)

x = np.linspace(-2, 6, 5000)
lnlike = lnlfn(x)

plt.plot(x, lnlike, '-k')
plt.xlabel(r'$x$')
plt.ylabel(r'$\log \mathcal{L}$')
notebook/intervals.ipynb
kadrlica/destools
mit
For this simple likelihood function, we could analytically compute the maximum likelihood estimate and confidence intervals. However, for more complicated likelihoods an analytic solution may not be possible. As an introduction to these cases it is informative to proceed numerically.
# You can use any complicated optimizer that you want (e.g., scipy.optimize),
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
mle = x[np.argmax(lnlike)]
print("Maximum Likelihood Estimate: %.2f" % mle)
print("Maximum Likelihood Value: %.2f" % maxlike)
notebook/intervals.ipynb
kadrlica/destools
mit
To find the 68% confidence intervals, we can calculate the delta-log-likelihood. The test statistic (TS) is defined as ${\rm TS} = -2\Delta \log \mathcal{L}$ and is $\chi^2$-distributed. Therefore, the confidence intervals on a single parameter can be read off of a $\chi^2$ table with 1 degree of freedom (dof).

| 2-sided Interval | p-value | $\chi^2_{1}$ | Gaussian $\sigma$ |
|------|------|------|------|
| 68% | 32% | 1.000 | 1.00 |
| 90% | 10% | 2.706 | 1.64 |
| 95% | 5% | 3.841 | 1.96 |
| 99% | 1% | 6.635 | 2.58 |
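The table rows can be reproduced with scipy's survival-function inverses (this check is an addition; note the 68% row is conventional shorthand for 68.27%, i.e. exactly 1$\sigma$, so the computed $\chi^2$ for a literal 32% p-value comes out slightly below 1.000):

```python
from scipy import stats

for cl in (68, 90, 95, 99):
    p = (100 - cl) / 100.0              # two-sided p-value
    chi2 = stats.chi2.isf(p, df=1)      # threshold on TS = -2*delta(log L), 1 dof
    sigma = stats.norm.isf(p / 2)       # equivalent Gaussian sigma
    print("%d%%: chi2_1 = %.3f, sigma = %.2f" % (cl, chi2, sigma))
```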
def interval(x, lnlike, delta=1.0):
    maxlike = np.max(lnlike)
    ts = -2 * (lnlike - maxlike)
    lower = x[np.argmax(ts < delta)]
    upper = x[len(ts) - np.argmax((ts < delta)[::-1]) - 1]
    return lower, upper

intervals = [(68, 1.0), (90, 2.706), (95, 3.841)]

plt.plot(x, lnlike, '-k')
plt.xlabel(r'$x$')
plt.ylabel(r'$\log \mathcal{L}$')

kwargs = dict(ls='--', color='k')
plt.axhline(maxlike - intervals[0][1]/2., **kwargs)

print("Confidence Intervals:")
for cl, delta in intervals:
    lower, upper = interval(x, lnlike, delta)
    print("  %i%% CL: x = %.2f [%+.2f,%+.2f]" % (cl, mle, lower - mle, upper - mle))
    plt.axvline(lower, **kwargs)
    plt.axvline(upper, **kwargs)
notebook/intervals.ipynb
kadrlica/destools
mit
These numbers might look familiar. They are the number of standard deviations that you need to go out in the standard normal distribution to contain the requested fraction of the distribution (i.e., 68%, 90%, 95%).
for cl, d in intervals:
    sigma = stats.norm.isf((100. - cl) / 2. / 100.)
    print("  %i%% = %.2f sigma" % (cl, sigma))
notebook/intervals.ipynb
kadrlica/destools
mit
2D Likelihood

Now we extend the example above to a 2D likelihood function. We define the likelihood with the same multivariate_normal function, but now add a second dimension and a covariance between the two dimensions. These parameters are adjustable if you would like to play around with them.
mean = [2.0, 1.0]
cov = [[1, 1], [1, 2]]
rv = stats.multivariate_normal(mean, cov)
lnlfn = lambda x: rv.logpdf(x)
print("Mean:", rv.mean.tolist())
print("Covariance:", rv.cov.tolist())

xx, yy = np.mgrid[-4:6:.01, -4:6:.01]
values = np.dstack((xx, yy))
lnlike = lnlfn(values)

fig2 = plt.figure(figsize=(8, 6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:, :, 0], values[:, :, 1], lnlike, aspect='auto')
plt.colorbar(im, label=r'$\log \mathcal{L}$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.show()

# You can use any complicated optimizer that you want (e.g., scipy.optimize),
# but for this application we just do a simple array operation
maxlike = np.max(lnlike)
maxidx = np.unravel_index(np.argmax(lnlike), lnlike.shape)
mle_x, mle_y = mle = values[maxidx]
print("Maximum Likelihood Estimate:", mle)
print("Maximum Likelihood Value:", maxlike)
notebook/intervals.ipynb
kadrlica/destools
mit
The case now becomes a bit more complicated. If you want to set a confidence interval on a single parameter, you cannot simply project the likelihood onto the dimension of interest. Doing so would ignore the correlation between the two parameters.
lnlike -= maxlike
x = xx[:,maxidx[1]]
delta = 2.706

# This is the loglike projected at y = mle[1] (the value of y that maximizes the likelihood)
plt.plot(x, lnlike[:,maxidx[1]],'-r');
lower,upper = max_lower,max_upper = interval(x,lnlike[:,maxidx[1]],delta)
plt.axvline(lower,ls='--',c='r'); plt.axvline(upper,ls='--',c='r')
y_max = yy[:,maxidx[1]]

# This is the profile likelihood where we maximize over the y-dimension
plt.plot(x, lnlike.max(axis=1),'-k')
lower,upper = profile_lower,profile_upper = interval(x,lnlike.max(axis=1),delta)
plt.axvline(lower,ls='--',c='k'); plt.axvline(upper,ls='--',c='k')
plt.xlabel('$x$'); plt.ylabel('$\log \mathcal{L}$')
y_profile = yy[lnlike.argmax(axis=0),lnlike.argmax(axis=1)]

print "Projected Likelihood (red):\t %.1f [%+.2f,%+.2f]"%(mle[0],max_lower-mle[0],max_upper-mle[0])
print "Profile Likelihood (black):\t %.1f [%+.2f,%+.2f]"%(mle[0],profile_lower-mle[0],profile_upper-mle[0])
notebook/intervals.ipynb
kadrlica/destools
mit
In the plot above we are showing two different 1D projections of the 2D likelihood function. The red curve shows the projected likelihood scanning in values of $x$ and always assuming the value of $y$ that maximized the likelihood. On the other hand, the black curve shows the 1D likelihood derived by scanning in values of $x$ and at each value of $x$ maximizing the value of the likelihood with respect to the $y$-parameter. In other words, the red curve is ignoring the correlation between the two parameters while the black curve is accounting for it. As you can see from the values printed above the plot, the intervals derived from the red curve underestimate the analytically derived values, while the intervals on the black curve properly reproduce the analytic estimate. Just to verify the result quoted above, we derive intervals on $x$ at several different confidence levels. We start with the projected likelihood with $y$ fixed at $y_{\rm max}$.
for cl, d in intervals:
    lower,upper = interval(x,lnlike[:,maxidx[1]],d)
    print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl,mle[0],lower-mle[0],upper-mle[0])
notebook/intervals.ipynb
kadrlica/destools
mit
Below are the confidence intervals in $x$ derived from the profile likelihood technique. As you can see, these values match the analytically derived values.
for cl, d in intervals:
    lower,upper = interval(x,lnlike.max(axis=1),d)
    print " %s CL: x = %.2f [%+.2f,%+.2f]"%(cl,mle[0],lower-mle[0],upper-mle[0])
notebook/intervals.ipynb
kadrlica/destools
mit
By plotting the likelihood contours, it is easy to see why the profile likelihood technique performs correctly while naively slicing through the likelihood plane does not. The profile likelihood is essentially tracing the ridgeline of the 2D likelihood function, thus intersecting the contour of delta-log-likelihood at its most distant point. This can be seen from the black lines in the 2D likelihood plot below.
fig2 = plt.figure(figsize=(8,6))
ax2 = fig2.add_subplot(111)
im = ax2.contourf(values[:,:,0], values[:,:,1], lnlike ,aspect='auto');
plt.colorbar(im,label='$\log \mathcal{L}$')
im = ax2.contour(values[:,:,0], values[:,:,1], lnlike , levels=[-delta/2], colors=['k'], aspect='auto', zorder=10,lw=2);

plt.axvline(mle[0],ls='--',c='k'); plt.axhline(mle[1],ls='--',c='k');
plt.axvline(max_lower,ls='--',c='r'); plt.axvline(max_upper,ls='--',c='r')
plt.axvline(profile_lower,ls='--',c='k'); plt.axvline(profile_upper,ls='--',c='k')

plt.plot(x,y_max,'-r');
plt.plot(x,y_profile,'-k')
plt.xlabel('$x$'); plt.ylabel('$y$');
plt.show()
notebook/intervals.ipynb
kadrlica/destools
mit
MCMC Posterior Sampling One way to explore the posterior distribution is through MCMC sampling. This gives an alternative method for deriving confidence intervals. Now, rather than maximizing the likelihood as a function of the other parameter, we marginalize (integrate) over that parameter. This is more computationally intensive, but is more robust in the case of complex likelihood functions.
# Remember, the posterior probability is the likelihood times the prior
lnprior = lambda x: 0

def lnprob(x):
    return lnlfn(x) + lnprior(x)

if got_emcee:
    ndim, nwalkers = len(mle), 100
    pos0 = [np.random.rand(ndim) for i in range(nwalkers)]
    sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, threads=2)
    # This takes a while...
    sampler.run_mcmc(pos0, 5000)
    samples = sampler.chain[:, 100:, :].reshape((-1, ndim))

x_samples,y_samples = samples.T

for cl in [68,90,95]:
    x_lower,x_mle,x_upper = np.percentile(x_samples,q=[(100-cl)/2.,50,100-(100-cl)/2.])
    print " %i%% CL:"%cl, "x = %.2f [%+.2f,%+.2f]"%(x_mle,x_lower-x_mle,x_upper-x_mle)
notebook/intervals.ipynb
kadrlica/destools
mit
These results aren't perfect since they are subject to random variations in the sampling, but they are pretty close. Plotting the distribution of samples, we see something very similar to the plots we generated for the likelihood alone (which is good since our prior was flat).
if got_corner: fig = corner.corner(samples, labels=["$x$","$y$"],truths=mle,quantiles=[0.05, 0.5, 0.95],range=[[-4,6],[-4,6]])
notebook/intervals.ipynb
kadrlica/destools
mit
Illustration of CRS effect Leaflet is able to handle several CRS (coordinate reference systems). It means that depending on the data you have, you may need to use one or the other. Don't worry; in practice, almost everyone on the web uses EPSG3857 (the default value for folium and Leaflet). But it may be interesting to know the possible values. Let's create a GeoJSON map, and change its CRS.
import json

us_states = os.path.join('data', 'us-states.json')
geo_json_data = json.load(open(us_states))
examples/CRS.ipynb
shankari/folium
mit
EPSG3857: the standard Provided that our tiles are computed with this projection, this map has the expected behavior.
kw = dict(tiles=None, location=[43, -100], zoom_start=3)
m = folium.Map(crs='EPSG3857', **kw)
folium.GeoJson(geo_json_data).add_to(m)
m.save(os.path.join('results', 'CRS_0.html'))
m
examples/CRS.ipynb
shankari/folium
mit
EPSG4326 This projection is a common CRS among GIS enthusiasts according to Leaflet's documentation. And we see it's quite different.
m = folium.Map(crs='EPSG4326', **kw)
folium.GeoJson(geo_json_data).add_to(m)
m.save(os.path.join('results', 'CRS_1.html'))
m
examples/CRS.ipynb
shankari/folium
mit
EPSG3395 This elliptical projection is very close to EPSG3857, though not identical.
m = folium.Map(crs='EPSG3395', **kw)
folium.GeoJson(geo_json_data).add_to(m)
m.save(os.path.join('results', 'CRS_2.html'))
m
examples/CRS.ipynb
shankari/folium
mit
Simple Finally, Leaflet also gives the possibility to use no projection at all. With this, you get flat charts. It can be useful if you want to use folium to draw non-geographical data.
m = folium.Map(crs='Simple', **kw)
folium.GeoJson(geo_json_data).add_to(m)
m.save(os.path.join('results', 'CRS_3.html'))
m
examples/CRS.ipynb
shankari/folium
mit
This little example shows a lot about the Python typing system. The variable a is not statically declared; after all, it can contain only one type of data: a memory address. When we assign the number 5 to it, Python stores in a the address of the number 5 (0x83fe540 in my case, but your result will be different). The type() built-in function is smart enough to understand that we are not asking about the type of a (which is always a reference), but about the type of the content. When you store another value in a, the string 'five', Python shamelessly replaces the previous content of the variable with the new address. So, thanks to the reference system, Python's type system is both strong and dynamic. The exact definition of those two concepts is not universal, so if you are interested be ready to dive into a broad matter. However, in Python, the meaning of those two words is the following: the type system is strong because everything has a well-defined type that you can check with the type() built-in function; the type system is dynamic since the type of a variable is not explicitly declared, but changes with the content. Onward! We just scratched the surface of the whole thing. To explore the subject a little more, try to define the simplest function in Python (apart from an empty function)
def echo(a): return a
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
The function works as expected, just echoes the given parameter
print(echo(5))
print(echo('five'))
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
Pretty straightforward, isn't it? Well, if you come from a statically compiled language such as C or C++ you should be at least puzzled. What is a? I mean: what type of data does it contain? Moreover, how can Python know what it is returning if there is no type specification? Again, if you recall the references stuff everything becomes clear: that function accepts a reference and returns a reference. In other words we just defined a sort of universal function, that does the same thing regardless of the input. This is exactly the problem that polymorphism wants to solve. We want to describe an action regardless of the type of objects, and this is what we do when we talk among humans. When you describe how to move an object by pushing it, you may explain it using a box, but you expect the person you are addressing to be able to repeat the action even if you need to move a pen, or a book, or a bottle. There are two main strategies you can apply to get code that performs the same operation regardless of the input types. The first approach is to cover all cases, and this is a typical approach of procedural languages. If you need to sum two numbers that can be integers, float or complex, you just need to write three sum() functions, one bound to the integer type, the second bound to the float type and the third bound to the complex type, and to have some language feature that takes charge of choosing the correct implementation depending on the input type. This logic can be implemented by a compiler (if the language is statically typed) or by a runtime environment (if the language is dynamically typed) and is the approach chosen by C++. The disadvantage of this solution is that it requires the programmer to forecast all the possible situations: what if I need to sum an integer with a float? What if I need to sum two lists? 
(Please note that C++ is not so poorly designed, and the operator overloading technique allows one to manage such cases, but the base polymorphism strategy of that language is the one exposed here). The second strategy, the one implemented by Python, is simply to require the input objects to solve the problem for you. In other words you ask the data itself to perform the operation, reversing the problem. Instead of writing a bunch of functions that sum all the possible types in every possible combination you just write one function that requires the input data to sum, trusting that they know how to do it. Does it sound complex? It is not. Let's look at the Python implementation of the + operator. When we write c = a + b, Python actually executes c = a.__add__(b). As you can see the sum operation is delegated to the first input variable. So if we write
def sum(a, b): return a + b
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
there is no need to specify the type of the two input variables. The object a (the object contained in the variable a) shall be able to sum with the object b. This is a very beautiful and simple implementation of the polymorphism concept. Python functions are polymorphic simply because they accept everything and trust the input data to be able to perform some actions. Let us consider another simple example before moving on. The built-in len() function returns the length of the input object. For example
l = [1, 2, 3]
print(len(l))

s = "Just a sentence"
print(len(s))
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
As you can see it is perfectly polymorphic: you can feed either a list or a string to it and it just computes its length. Does it work with any type? Let's check
d = {'a': 1, 'b': 2}
print(len(d))

i = 5
try:
    print(len(i))
except TypeError as e:
    print(e)
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
Ouch! Seems that the len() function is smart enough to deal with dictionaries, but not with integers. Well, after all, the length of an integer is not defined. Indeed this is exactly the point of Python polymorphism: the integer type does not define a length operation. While you blame the len() function, the int type is at fault. The len() function just calls the __len__() method of the input object, as you can see from this code
print(l.__len__())
print(s.__len__())
print(d.__len__())

try:
    print(i.__len__())
except AttributeError as e:
    print(e)
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
Very straightforward: the 'int' object does not define any __len__() method. So, to sum up what we discovered until here, I would say that Python polymorphism is based on delegation. In the following sections we will talk about the EAFP Python principle, and you will see that the delegation principle is somehow ubiquitous in this language. Type Hard Another real-life concept that polymorphism wants to bring into a programming language is the ability to walk the class hierarchy, that is to run code on specialized types. This is a complex sentence to say something we are used to doing every day, and an example will clarify the matter. You know how to open a door, it is something you learned in your early years. From an OOP point of view you are an object (sorry, no humiliation intended) which is capable of interacting with a wood rectangle rotating on hinges. When you can open a door, however, you can also open a window, which, after all, is a specialized type of wood-rectangle-with-hinges, hopefully with some glass in it too. You are also able to open the car door, which is also a specialized type (this one is a mix between a standard door and a window). This shows that, once you know how to interact with the most generic type (basic door) you can also interact with specialized types (window, car door) as soon as they act like the ancestor type (e.g. as soon as they rotate on hinges). This directly translates into OOP languages: polymorphism requires that code written for a given type may also be run on derived types. For example, a list (a generic list object, not a Python one) that can contain "numbers" shall be able to accept integers because they are numbers. The list could specify an ordering operation which requires the numbers to be able to compare each other. So, as soon as integers specify a way to compare each other they can be inserted into the list and ordered. 
Statically compiled languages shall provide specific language features to implement this part of the polymorphism concept. In C++, for example, the language needs to introduce the concept of pointer compatibility between parent and child classes. In Python there is no need to provide special language features to implement subtype polymorphism. As we already discovered Python functions accept any variable without checking the type and rely on the variable itself to provide the correct methods. But you already know that a subtype must provide the methods of the parent type, either redefining them or through implicit delegation, so as you can see Python implements subtype polymorphism from the very beginning. I think this is one of the most important things to understand when working with this language. Python is not really interested in the actual type of the variables you are working with. It is interested in how those variables act, that is it just wants the variable to provide the right methods. So, if you come from statically typed languages, you need to make a special effort to think about acting like instead of being. This is what we called "duck typing". Time to do an example. Let us define a Room class
class Room:
    def __init__(self, door):
        self.door = door

    def open(self):
        self.door.open()

    def close(self):
        self.door.close()

    def is_open(self):
        return self.door.is_open()
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
A very simple class, as you can see, just enough to exemplify polymorphism. The Room class accepts a door variable, and the type of this variable is not specified. Duck typing in action: the actual type of door is not declared, there is no "acceptance test" built in the language. Indeed, the incoming variable shall export the following methods that are used in the Room class: open(), close(), is_open(). So we can build the following classes
class Door:
    def __init__(self):
        self.status = "closed"

    def open(self):
        self.status = "open"

    def close(self):
        self.status = "closed"

    def is_open(self):
        return self.status == "open"


class BooleanDoor:
    def __init__(self):
        self.status = True

    def open(self):
        self.status = True

    def close(self):
        self.status = False

    def is_open(self):
        return self.status
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
Both represent a door that can be open or closed, and they implement the concept in two different ways: the first class relies on strings, while the second leverages booleans. Despite being two different types, both act the same way, so both can be used to build a Room object.
door = Door()
bool_door = BooleanDoor()
room = Room(door)
bool_room = Room(bool_door)

room.open()
print(room.is_open())
room.close()
print(room.is_open())

bool_room.open()
print(bool_room.is_open())
bool_room.close()
print(bool_room.is_open())
notebooks/giordani/Python_3_OOP_Part_4__Polymorphism.ipynb
Heroes-Academy/OOP_Spring_2016
mit
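To stress the duck-typing point: Room will happily accept any object that provides open(), close() and is_open(), even one that is not a door at all. The Curtain class below is an extra example of mine, not from the original text (Room is repeated verbatim so the snippet runs on its own):

```python
class Room:
    # Same Room class as above, repeated so this snippet is self-contained.
    def __init__(self, door):
        self.door = door

    def open(self):
        self.door.open()

    def close(self):
        self.door.close()

    def is_open(self):
        return self.door.is_open()


class Curtain:
    # Not a door at all, but it exposes the same three methods, so Room accepts it.
    def __init__(self):
        self.drawn = True

    def open(self):
        self.drawn = False

    def close(self):
        self.drawn = True

    def is_open(self):
        return not self.drawn


curtain_room = Room(Curtain())
curtain_room.open()
print(curtain_room.is_open())  # True
curtain_room.close()
print(curtain_room.is_open())  # False
```

Had Curtain lacked one of the three methods, the failure would surface only at the moment Room actually called it — there is no up-front "acceptance test".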
Part i The linear regression function below implements linear regression using the normal equations. We could also use some form of gradient descent to do this.
import numpy as np
from numpy import linalg
import pandas as pd
import matplotlib.pyplot as plt

def linear_regression(X, y):
    # Normal-equation solution: theta = (X^T X)^{-1} X^T y
    return linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
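The gradient-descent alternative mentioned above can be sketched as follows. This is not part of the assignment; the learning rate and iteration count are arbitrary choices that happen to work for roughly unit-scaled inputs:

```python
import numpy as np

def linear_regression_gd(X, y, lr=0.1, n_iter=10000):
    """Least-squares fit by batch gradient descent instead of the normal equations."""
    theta = np.zeros(X.shape[1])
    m = X.shape[0]
    for _ in range(n_iter):
        # Gradient of the cost J(theta) = (1/2m) * ||X.theta - y||^2
        grad = X.T.dot(X.dot(theta) - y) / m
        theta -= lr * grad
    return theta

# Quick check on synthetic data: y = 2 + 3*x
x = np.linspace(0, 1, 50)
X = np.column_stack([np.ones_like(x), x])
theta = linear_regression_gd(X, 2 + 3 * x)
print(theta)  # converges to approximately [2., 3.]
```

For well-conditioned problems like this one, the normal equations are simpler; gradient descent becomes attractive when the number of features is large and forming and inverting X^T X is expensive.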
Here we just load some data and get it into a form we can use.
# Load the data
data = np.loadtxt('quasar_train.csv', delimiter=',')
wavelengths = data[0]
fluxes = data[1]

ones = np.ones(fluxes.size)
df_ones = pd.DataFrame(ones, columns=['xint'])
df_wavelengths = pd.DataFrame(wavelengths, columns=['wavelength'])
df_fluxes = pd.DataFrame(fluxes, columns=['flux'])
df = pd.concat([df_ones, df_wavelengths, df_fluxes], axis=1)

X = pd.concat([df['xint'], df['wavelength']], axis=1)
y = df['flux']
x = X['wavelength']
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
Performing linear regression on the first training example
theta = linear_regression(X, y)
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
yields the following parameters:
print('theta = {}'.format(theta))
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
Now we wish to display the results for part i. Evaluate the model
p = np.poly1d([theta[1], theta[0]])
z = np.linspace(x[0], x[x.shape[0]-1])
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
at a set of design points. The data set and the results of linear regression appear in the following figure.
fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2)')
plt.xticks(np.linspace(x[0], x[x.shape[0]-1], 10))
plt.yticks(np.linspace(-1, 9, 11))
scatter = plt.scatter(x, y, marker='+', color='purple', label='quasar data')
reg = plt.plot(z, p(z), color='blue', label='regression line')
plt.legend()
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
The following plot displays the results.
plt.show()
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
Part ii For the next part, we perform locally weighted linear regression on the data set with a Gaussian weighting function. We use the parameters that follow.
import homework1_5b as hm1b
import importlib as im

Xtrain = X.as_matrix()
ytrain = y.as_matrix()
tau = 5
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
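The weightM and LWLRModel helpers come from the external homework1_5b module, whose source is not shown in this notebook. As a rough sketch of what locally weighted linear regression with a Gaussian kernel computes (my reconstruction, not the module's actual code):

```python
import numpy as np

def lwlr_predict(x_query, X, y, tau):
    """Predict at one query point by solving a weighted least-squares problem.

    X is the design matrix with an intercept column; X[:, 1] holds the inputs.
    Each training point gets a Gaussian weight centred on the query point.
    """
    w = np.exp(-((X[:, 1] - x_query) ** 2) / (2.0 * tau ** 2))
    W = np.diag(w)
    # Weighted normal equations: theta = (X^T W X)^{-1} X^T W y
    theta = np.linalg.solve(X.T.dot(W).dot(X), X.T.dot(W).dot(y))
    return np.array([1.0, x_query]).dot(theta)
```

For tau much larger than the spread of the data, every weight approaches 1 and the fit reduces to ordinary least squares; a small tau fits only the local neighbourhood, which is why tau = 5 hugs the spectrum so closely below.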
Training the model yields the following results. Here we place the results into the same plot as the data in part i. The figure shows that the weighted linear regression algorithm fits the data better, especially in the region around wavelength ~1225 Angstroms.
W = hm1b.weightM(tau)(Xtrain)
m = hm1b.LWLRModel(W, Xtrain, ytrain)
z = np.linspace(x[0], x[x.shape[0]-1], 200)

fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2)')
plt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))
plt.yticks(np.arange(-1, 9, step=0.5))
plot1 = plt.scatter(x, y, marker='+', color='black', label='quasar data')
plot2 = plt.plot(z, p(z), color='blue', label='regression line')
plot3 = plt.plot(z, m(z), color='red', label='tau = 5')
plt.legend()
plt.show()
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
Part iii Here we perform the same regression for more values of tau and plot the results.
taus = [1,5,10,100,1000]
models = {}
for tau in taus:
    W = hm1b.weightM(tau)(Xtrain)
    models[tau] = hm1b.LWLRModel(W, Xtrain, ytrain)

z = np.linspace(x[0], x[x.shape[0]-1], 200)

fig = plt.figure(1, figsize=(12,10))
plt.xlabel('Wavelength (Angstroms)')
plt.ylabel('Flux (Watts/m^2)')
plt.xticks(np.arange(x[0], x[x.shape[0]-1]+50, step=50))
plt.yticks(np.arange(-2, 9, step=0.5))
plot1 = plt.scatter(x, y, marker='+', color='k', label='quasar data')
plot3 = plt.plot(z, models[1](z), color='red', label='tau = 1')
plot4 = plt.plot(z, models[5](z), color='blue', label='tau = 5')
plot5 = plt.plot(z, models[10](z), color='green', label='tau = 10')
plot6 = plt.plot(z, models[100](z), color='magenta', label='tau = 100')
plot7 = plt.plot(z, models[1000](z), color='cyan', label='tau = 1000')
plt.legend()
plt.show()
src/homework1/homework1_5b.ipynb
stallmanifold/cs229-machine-learning-stanford-fall-2016
apache-2.0
The first column indicates the iteration or cycle, the second the temperature. The remaining columns are energy components and other observables. The temperature is an integer. In a different file one can find the conversion to kelvin. We start with some setup...
import numpy as np
import glob
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Use glob.glob to get a list of all the input rt files.
files = glob.glob('../../data/profasi/n*/rt')
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We now read all the rt files into memory. If they were too big, we should think of a more efficient way to process them (maybe using memmap or pytables). In our case they are small enough. As these are numeric files, we use loadtxt to automatically generate an array. As different files will generate different arrays, we collect them in a list that we finally transform into an array.
all_enes = []
for filein in files:
    print("Reading..... ", filein)
    all_enes.append(np.loadtxt(filein))

all_enes = np.asarray(all_enes)
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
This is the shape of the resulting array:
all_enes.shape
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Now we need to reshape it so that it contains 2 dimensions with all the rows and the 14 columns:
all_enes=all_enes.reshape((-1,all_enes.shape[2]))
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Alternatively, we could have concatenated all the rows into the first loaded array. Here is a way to do that:
all_enes = None
for filein in files:
    print("Reading..... ", filein)
    if all_enes is not None:
        all_enes = np.r_[all_enes, np.loadtxt(filein)]
    else:
        all_enes = np.loadtxt(filein)
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Now we need to get the temperatures. We know that there are as many temperatures as nodes, so this would work: temperatures = np.arange(len(files)) However, imagine that for some purpose we did not process all the ni directories, but only a fraction of them. We can still get all the temperatures from the rt files. They correspond to the 2nd column. A simple way to get the temperatures is with a set.
temperatures = set(all_enes[:,1])
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We can also use np.unique to get the unique values of an array or part of it. It is a little bit more efficient... (check it with %timeit).
temperatures = np.unique(all_enes[:,1])

%timeit set(all_enes[:,1])
%timeit np.unique(all_enes[:,1])
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Now we need to extract the energies from the array based on the temperature value, and create separate sub-arrays. Array elements can be selected with boolean arrays; this is known as boolean (mask) indexing, a form of fancy indexing. We start by defining an empty array and then fill it in:
ene_temp = np.zeros_like(all_enes)
ene_temp = ene_temp.reshape([len(temperatures), -1, all_enes.shape[1]])
for ti in temperatures.astype(int):
    ene_temp[ti] = all_enes[all_enes[:,1]==ti, :]
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
The last step is to keep only those energy values that are beyond the equilibration point. So we only want to keep data from a certain point on. Let's plot the energy vs. iteration to see what it looks like. We'll plot temperature 5 as this is the lowest temperature (Profasi orders from high to low).
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0)

plt.plot(ene_temp[5,:,0], ene_temp[5,:,2])
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Ugly, isn't it? The array is not ordered by iteration. The order can be seen here:
plt.plot(ene_temp[5,:,0],'x')
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We have the structures at the correct temperature but still ordered from the 6 replicas that were running. Let's order them with respect to the first column. We cannot use sort here, because we want to use the order of the first column to reorder all the row elements. We can get that order with argsort and then apply it to the array. The problem is that the sizes of the sorting order do not agree with the sizes of ene_temp. To solve that we need to do a trick which I personally find very cumbersome. The reason is that we need to broadcast the dimensions of order correctly into ene_temp. It's simpler to understand if you see that for the first temperatures we want: ene_temp[0, order[0]] ene_temp[1, order[1]] and so on. It would seem that ene_temp[:, order[:]] should work, but this performs the broadcasting in the wrong axis. Instead, we need to transpose the first axis, because what we are actually doing is: ene_temp[[0], order[0]] ene_temp[[1], order[1]] And this can be done by creating the vector [[0], [1], [2]... which is done with: np.arange(ene_temp.shape[0])[:, np.newaxis].
order = ene_temp[:, :, 0].argsort()
ene_temp = ene_temp[np.arange(ene_temp.shape[0])[:, np.newaxis], order]

plt.subplot(2,1,1)
plt.plot(ene_temp[5,:,0],'x')
plt.subplot(2,1,2)
plt.plot(ene_temp[5,:,0], ene_temp[5,:,2])
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Finally we can select and save the submatrix from iteration 2000 onwards:
np.save('energies_temperatures', ene_temp[ene_temp[:,:,0]>2000, :], )
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Advanced Topic: optimizing with numba In the previous section, if there were $N$ temperatures we cycled through the all_enes array $N$ times, which is not very efficient. We could potentially make it faster by running this in a single pass. We create an empty array and fill it in with the correct values. We first time our initial approach:
%%timeit
for ti in temperatures.astype(int):
    ene_temp[ti] = all_enes[all_enes[:,1]==ti, :]
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Now we loop through all the rows and put each row into its correct position along the first axis. We need to keep an array of the filled positions.
ene_temp2 = np.zeros_like(ene_temp)
filled = np.zeros(len(temperatures), np.int)
for row in all_enes:
    ti = int(row[1])
    ene_temp2[ti, filled[ti]] = row
    filled[ti] += 1
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We check we are still getting the same result, and we time it:
np.all(ene_temp2==ene_temp)

%%timeit
filled = np.zeros(len(temperatures), np.int)
for row in all_enes:
    ti = int(row[1])
    ene_temp2[ti, filled[ti]] = row
    filled[ti] += 1
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Good (ironically)! Two orders of magnitude slower than the first approach... The reason is that in the previous approach we were using numpy's fast looping abilities, whereas now the loops are implemented in pure Python and are therefore much slower. This is the typical case where numba can increase the performance of such loops.
import numba
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
We first wrap our loop in a function. We avoid creating arrays inside that function, as those cannot be optimized with numba. We test our approach and check that the timings are the same.
def get_temperatures(array_in, array_out, filled):
    for r in range(array_in.shape[0]):
        ti = int(array_in[r,1])
        for j in range(array_in.shape[1]):
            array_out[ti, filled[ti], j] = array_in[r,j]
        filled[ti] += 1
    return array_out

%%timeit
num_temp = len(temperatures)
m = all_enes.shape[0]
n = all_enes.shape[1]
m = m // num_temp
ene_temp = np.zeros((num_temp, m, n))
filled = np.zeros(num_temp, np.int)
get_temperatures(all_enes, ene_temp, filled)
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Now we can pass this function to numba. The nopython option tells numba not to create object code, which is as slow as Python code. That is why we created the arrays outside the function. We also check the timings.
numba_get_temperatures = numba.jit(get_temperatures, nopython=True)

%%timeit
num_temp = len(temperatures)
m = all_enes.shape[0]
n = all_enes.shape[1]
m = m // num_temp
ene_temp3 = np.zeros((num_temp, m, n))
filled = np.zeros(num_temp, np.int)
numba_get_temperatures(all_enes, ene_temp3, filled)
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Wow! Three orders of magnitude faster than the pure Python version and one order faster than our original numpy code (with only 6 temperatures!). But having to declare all arrays outside is ugly. Is there a workaround? Yes! Numba is clever enough to separate the loops from the array creation, and optimize the loops. This is called loop-lifting or loop-jitting. We need to remove the nopython option as part of the code will be object-like, but we see that it is as efficient as before. Here we also use a decorator instead of a function call. The results are exactly the same, it just gives a shorter syntax.
@numba.jit
def numba2_get_temperatures(array_in, num_temp):
    m = array_in.shape[0]
    n = array_in.shape[1]
    m = m // num_temp
    array_out = np.zeros((num_temp, m, n))
    filled = np.zeros(num_temp, int)
    for r in range(array_in.shape[0]):
        ti = int(array_in[r, 1])
        for j in range(array_in.shape[1]):
            array_out[ti, filled[ti], j] = array_in[r, j]
        filled[ti] += 1
    return array_out

%%timeit
num_temp = len(temperatures)
ene_temp4 = numba2_get_temperatures(all_enes, num_temp)

np.all(numba2_get_temperatures(all_enes, num_temp) == ene_temp)
notebooks/extras/Numpy arrays. Data manipulation.ipynb
rcrehuet/Python_for_Scientists_2017
gpl-3.0
Defining the embeddings Now that we have integer ids, we can use the Embedding layer to turn those into embeddings. An embedding layer has two dimensions: the first dimension tells us how many distinct categories we can embed; the second tells us how large the vector representing each of them can be. When creating the embedding layer for movie titles, we are going to set the first value to the size of our title vocabulary (or the number of hashing bins). The second is up to us: the larger it is, the higher the capacity of the model, but the slower it is to fit and serve.
# Turns positive integers (indexes) into dense vectors of fixed size.
# TODO
movie_title_embedding = tf.keras.layers.Embedding(
    # Let's use the explicit vocabulary lookup.
    input_dim=movie_title_lookup.vocab_size(),
    output_dim=32
)
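Under the hood, an embedding layer is just a trainable lookup table: row i of a (vocab_size, dim) matrix is the vector for id i. A minimal NumPy sketch, with hypothetical sizes rather than the notebook's:

```python
import numpy as np

vocab_size, dim = 5, 4                        # hypothetical small vocabulary
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(vocab_size, dim))

ids = np.array([1, 3, 1])                     # integer ids from a lookup layer
vectors = embedding_table[ids]                # embedding = row lookup in the table
```

Repeated ids map to identical vectors; during training, Keras updates the looked-up rows by gradient descent.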
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The timestamp feature needs to be processed before we can use it. While there are many ways in which we can do this, discretization and standardization are two common ones. Standardization Standardization rescales features to normalize their range by subtracting the feature's mean and dividing by its standard deviation. It is a common preprocessing transformation. This can be easily accomplished using the tf.keras.layers.experimental.preprocessing.Normalization layer:
# Feature-wise normalization of the data.
# TODO
timestamp_normalization = tf.keras.layers.experimental.preprocessing.Normalization()
timestamp_normalization.adapt(ratings.map(lambda x: x["timestamp"]).batch(1024))

for x in ratings.take(3).as_numpy_iterator():
    print(f"Normalized timestamp: {timestamp_normalization(x['timestamp'])}.")
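The transformation itself is simple arithmetic: subtract the mean and divide by the standard deviation, both of which the layer learns during adapt. A sketch with made-up timestamps:

```python
import numpy as np

timestamps = np.array([100.0, 200.0, 300.0, 400.0])   # made-up raw feature values
normalized = (timestamps - timestamps.mean()) / timestamps.std()
```

After this transformation, the feature has zero mean and unit standard deviation.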
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Processing text features We may also want to add text features to our model. Usually, things like product descriptions are free form text, and we can hope that our model can learn to use the information they contain to make better recommendations, especially in a cold-start or long tail scenario. While the MovieLens dataset does not give us rich textual features, we can still use movie titles. This may help us capture the fact that movies with very similar titles are likely to belong to the same series. The first transformation we need to apply to text is tokenization (splitting into constituent words or word-pieces), followed by vocabulary learning, followed by an embedding. The Keras tf.keras.layers.experimental.preprocessing.TextVectorization layer can do the first two steps for us:
# Text vectorization layer.
# TODO
title_text = tf.keras.layers.experimental.preprocessing.TextVectorization()
title_text.adapt(ratings.map(lambda x: x["movie_title"]))
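To make the tokenize-then-lookup idea concrete, here is a pure-Python sketch of what such a layer does. Reserving ids 0 and 1 for padding and out-of-vocabulary tokens matches TextVectorization's defaults, but the rest is illustrative only:

```python
titles = ["Toy Story", "Toy Story 2"]

# Build a vocabulary; ids 0 and 1 are reserved (padding, out-of-vocabulary).
vocab = {}
for title in titles:
    for word in title.lower().split():
        vocab.setdefault(word, len(vocab) + 2)

# Map each title to its sequence of token ids; unknown words map to 1.
ids = [[vocab.get(word, 1) for word in title.lower().split()] for title in titles]
```

"Toy Story" and "Toy Story 2" share the ids for "toy" and "story", which is exactly the overlap that lets a model relate movies in the same series.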
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's try it out:
# TODO
user_model = UserModel()
user_model.normalized_timestamp.adapt(
    ratings.map(lambda x: x["timestamp"]).batch(128))

for row in ratings.batch(1).take(1):
    print(f"Computed representations: {user_model(row)[0, :3]}")
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's try it out:
# TODO
movie_model = MovieModel()
movie_model.title_text_embedding.layers[0].adapt(
    ratings.map(lambda x: x["movie_title"]))

for row in ratings.batch(1).take(1):
    print(f"Computed representations: {movie_model(row)[0, :3]}")
courses/machine_learning/deepdive2/recommendation_systems/solutions/featurization.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Read in the files
filename1 = 'C:/econdata/GDP.xls'
filename2 = 'C:/econdata/PAYEMS.xls'
filename3 = 'C:/econdata/CPIAUCSL.xls'

gdp = IEtools.FREDxlsRead(filename1)
lab = IEtools.FREDxlsRead(filename2)
cpi = IEtools.FREDxlsRead(filename3)
IEtools Demo.ipynb
infotranecon/IEtools
mit
Here's a plot of nominal GDP
pl.plot(gdp['interp'].x, gdp['interp'](gdp['interp'].x))
pl.ylabel(gdp['name'] + ' [G$]')
pl.yscale('log')
pl.show()
IEtools Demo.ipynb
infotranecon/IEtools
mit
And here is nominal GDP growth
pl.plot(gdp['growth'].x, gdp['growth'](gdp['growth'].x))
pl.ylabel(gdp['name'] + ' growth [%]')
pl.show()
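IEtools computes the growth series internally; purely as a point of comparison, a continuously-compounded growth rate can be sketched from a level series as the log difference. The numbers below are made up, and IEtools' exact definition may differ:

```python
import numpy as np

gdp_levels = np.array([19.5, 20.5, 21.4, 22.0])   # made-up annual nominal GDP
growth = 100 * np.diff(np.log(gdp_levels))        # % per year, log differences
```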
IEtools Demo.ipynb
infotranecon/IEtools
mit