The complete data set can be described using the traditional statistical descriptors:
```python
# Calculate some useful statistics showing how the data is distributed
data.describe()
```
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Exercise 1

Based on the descriptive statistics above, how would you summarize this data for the Board in a few sentences?

Step 2: Define the Task You Want to Accomplish

The tasks that are possible to accomplish in a machine learning problem depend on how you slice up the dataset into features (inputs) and the target (output).

Step 2a: Identify the Inputs

For this data, we have a single input or feature -- town population. We have 97 different town populations in our dataset. That's 97 different values for our single input variable. Keep in mind that each value is in 10,000s. So multiply the value you see by 10,000 to get the actual value of the population.
```python
# Here are the input values
# Number of columns in our dataset
cols = data.shape[1]
# Inputs are in the first column - indexed as 0
X = data.iloc[:, 0:cols-1]  # Alternatively, X = data['Population']
print("Number of columns in the dataset {}".format(cols))
print("First few inputs\n {}".format(X.head()))
# The last few values of X
X.tail()
```
Step 2b: Identify the Output

The output is annual restaurant profit. For each value of the input we have a value for the output. Keep in mind that each value is in \$10,000s. So multiply the value you see by \$10,000 to get the actual annual profit for the restaurant. Let's look at some of these output values.
```python
# Here are the output values
# Outputs are in the second column - indexed as 1
y = data.iloc[:, cols-1:cols]  # Alternatively, y = data['Profits']
# See a sample of the outputs
y.head()
# Last few items of the output
y.tail()
```
Once we've identified the inputs and the output, the task is easy to define: given the inputs, predict the output. So in this case, given a town's population, predict the profit a restaurant would generate.

Step 3: Define the Model

As we saw in the Nuts and Bolts session, a model is a way of transforming the inputs into the output. For every row of the dataset, we use the same weights $w_{0}$ and $w_{1}$ to multiply each corresponding feature value in that row and sum the results like so: $$(w_{0} * x_{0}) + (w_{1} * x_{1})$$ Depending on the $w$ values and the features, this gives us a value for the output -- $\hat{y}$ -- in that row of the dataset.

Step 3a: Define the Features

In this case the single feature is the population size (also our only input). Here the features and the inputs are one and the same. We'll find later that this doesn't always have to be so -- in some cases there can be fewer or more features than inputs.

Step 3b: Transform the Features Into An Output

Once again, as we saw in the ML-Nuts-and-Bolts notebook, a model is a way of transforming the inputs into the output. For every row of the dataset, we use the same weights $w_{0}$ and $w_{1}$ to multiply each corresponding feature value in that row and sum the results like so: $$(w_{0} * x_{0}) + (w_{1} * x_{1})$$ Remember from the Nuts and Bolts notebook that $w_{0}$ is called the intercept and the value of $x_{0}$ is 1 for all rows. Depending on the values of the $w$s and the features, this gives us a value for the output -- $\hat{y}$ -- in that row of the dataset.

SIDE BAR - Matrix Notation

Because our datasets typically have lots of rows, applying a model to a dataset means hundreds, if not millions, of equations like the one above -- in fact, one for each row of the dataset. The beautiful thing is that matrix notation can express these millions of equations compactly in a single line.
To see how matrix notation does this neat trick, first separate the dataset into the input features matrix $X$ and the output values matrix $Y$. Specifically, let $$X = \begin{bmatrix} 1 & 6.1101 \\ 1 & 5.5277 \\ \vdots & \vdots \\ 1 & 13.394 \\ 1 & 5.4369 \end{bmatrix}$$ and $$Y = \begin{bmatrix} 17.5920 \\ 9.1302 \\ \vdots \\ 9.05510 \\ 0.61705 \end{bmatrix}$$ In this simple dataset both $X$ and $Y$ are matrices with 97 rows; $X$ has 2 columns while $Y$ has only 1 column. In matrix notation, their dimensions are written as (97 x 2) and (97 x 1) respectively. The 'x' sign here is not multiplication as we usually take it but a way of saying we have a matrix dimension of (m rows x n columns).

Three Rules of Matrices to Keep in Mind (the only ones we'll need)

1. When you add, subtract, or multiply matrices element-wise, each matrix MUST have the SAME dimensions. This is called "element-wise" addition, subtraction, or multiplication. Otherwise the operation doesn't make sense.
2. When you "multiply" two matrices together (this is more correctly called the "dot product"), say $X$ and $W$, the column dimension of the first MUST equal the row dimension of the second. Otherwise the operation of multiplying two matrices doesn't make sense.
3. The transpose of matrix $M$ is written as $M^{T}$. If the dimensions of $M$ are (m x n), then the dimensions of $M^{T}$ are (n x m).
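These rules are easy to check numerically. A minimal NumPy sketch, using a two-row slice of the $X$ matrix above and hypothetical weight values (not the course's own code):

```python
import numpy as np

X = np.array([[1.0, 6.1101],
              [1.0, 5.5277]])      # a (2 x 2) slice of the X matrix above
W = np.array([[-10.0, 1.0]])       # a (1 x 2) matrix of weights (hypothetical values)

# Rule 3: transposing flips the dimensions -- (1 x 2) becomes (2 x 1).
print(W.T.shape)

# Rule 2: X (2 x 2) dot W.T (2 x 1) is allowed because the inner dimensions
# match (2 == 2); the result is a (2 x 1) matrix -- one number per row of X.
Y_hat = X.dot(W.T)
print(Y_hat)
```

Each entry of `Y_hat` is exactly $(w_{0} * x_{0}) + (w_{1} * x_{1})$ for that row, which is the point of the rules.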
The equivalent of writing $$\begin{aligned} (w_{0} * x_{0}^{(1)}) + (w_{1} * x_{1}^{(1)}) &= y^{(1)} \\ (w_{0} * x_{0}^{(2)}) + (w_{1} * x_{1}^{(2)}) &= y^{(2)} \\ (w_{0} * x_{0}^{(3)}) + (w_{1} * x_{1}^{(3)}) &= y^{(3)} \\ (w_{0} * x_{0}^{(4)}) + (w_{1} * x_{1}^{(4)}) &= y^{(4)} \\ &\vdots \\ (w_{0} * x_{0}^{(96)}) + (w_{1} * x_{1}^{(96)}) &= y^{(96)} \\ (w_{0} * x_{0}^{(97)}) + (w_{1} * x_{1}^{(97)}) &= y^{(97)} \end{aligned}$$ is to write instead: $$X \cdot W^{T} = Y$$ where $X$ and $Y$ are the matrices above, $\cdot$ is the symbol for "multiplying" or taking the dot product of $X$ and $W^{T}$, and $W$ is a matrix that looks like $$W = \begin{bmatrix} w_{0} & w_{1} \end{bmatrix}$$ It's as simple as that to represent in one line something that would otherwise take hundreds of lines! And that's why matrix notation is handy.

Step 3c: Clarify the Parameters

The parameters are $w_{0}$ and $w_{1}$.

EXERCISE 2

Using the matrix rules above, what are the dimensions of the matrix $Y$?

Step 4: Define the Penalty for Getting it Wrong

As we saw in the description of the model, $w_{0}$ and $w_{1}$ are the parameters of our model. Let's pick a row from the dataset, assume values of -10 and 1 respectively for $w_{0}$ and $w_{1}$, and see what we get for the value of $\hat{y}$. We'll also subtract this from the actual output value of the row, $y$, and define our penalty as a function of $\hat{y} - y$. [We'll use the power of matrix multiplication to do these calculations without any fuss. If you're interested in the details, have a look at the computePenalty function in the Shared-Functions notebook.]

Terminology

The penalty always operates on a row of the dataset. When the penalties for every row of the dataset get summed, that quantity is called the cost. So there's a penalty function that is applied to every row; sum the outputs of the penalty function over the rows of the dataset and you arrive at the cost. Sometimes people refer to the penalty function as the cost function. That's OK; there's not much harm done.
```python
# A Handful of Penalty Functions
# Generate the error range
x = np.linspace(-10, 10, 100)
[penaltyPlot(x, pen) for pen in penaltyFunctions.keys()];
```
Exercise 3 Does the penalty we've chosen make sense? Convince yourself of this and write a paragraph explaining why it makes sense.
```python
penalty(X, y, [-10, 1], VPenalty)
penalty(X, y, [-10, 1], invertedVPenalty)
```
SIDEBAR - How the Penalty is Usually Written

The cost of getting it wrong is defined as a function $J(W)$: $$J(W) = \frac{1}{2m} \sum_{i=1}^{m} (h_{W}(x^{(i)}) - y^{(i)})^2$$ What we're saying here: For each input, transform it using $w_{0}$ and $w_{1}$. This gives us a number. Subtract from this number the actual value of the output for that input. This gives us another number. Take this number and square it. This gives us our final result for that particular input. Add up these final results -- one for each input in our dataset -- and divide the sum by $2m$ -- that is, twice the number of data points in our data set. This last division step makes the cost of getting it wrong relative to the size of the dataset -- think of it simply as a mathematical convenience. This compact notation expresses exactly the same thing that writing out a series of equations would -- it keeps things short and sweet.

How the Penalty Varies as the $w_{0}$ and $w_{1}$ Values Change
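As a sanity check, $J(W)$ can be computed directly with NumPy. This is a minimal sketch -- not the course's computePenalty function -- using two sample rows taken from the matrices shown earlier and the assumed weights $w_{0} = -10$, $w_{1} = 1$:

```python
import numpy as np

def cost_J(X, y, W):
    # J(W) = (1/2m) * sum_i (h_W(x^(i)) - y^(i))^2, with h_W computed as X . W^T
    m = len(y)
    residuals = X.dot(W.T) - y           # y_hat - y for every row at once
    return float((residuals ** 2).sum()) / (2 * m)

X = np.array([[1.0, 6.1101], [1.0, 5.5277]])
y = np.array([[17.5920], [9.1302]])
W = np.array([[-10.0, 1.0]])             # [w0, w1]
print(cost_J(X, y, W))
```

Note the penalty here is the squared error; swapping in a different penalty function would change only the `residuals ** 2` line.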
```python
# Visualize what np.meshgrid does when used with plot
w0 = np.linspace(1, 5, 5)
w1 = np.linspace(1, 5, 5)
W0, W1 = np.meshgrid(w0, w1)
plt.plot(W0, W1, marker='*', color='g', linestyle='none');

# Plot the cost surface
# From https://stackoverflow.com/questions/9170838
# See Also: Helpful matplotlib tutorial at
# http://jeffskinnerbox.me/notebooks/matplotlib-2d-and-3d-plotting-in-ipython.html

# Set up a grid over w0,w1 values
w0 = np.linspace(-10, 10, 50)
w1 = np.linspace(-10, 10, 50)
W0, W1 = np.meshgrid(w0, w1)

# Get the penalty value for each point on the grid
# See the Shared-Functions.ipynb notebook for the list of defined penalty functions
# List of penalty functions in dict penaltyFunctions
penalties = np.array([penalty(X, y, [w_0, w_1], squaredPenalty)
                      for w_0, w_1 in zip(np.ravel(W0), np.ravel(W1))])
Z = penalties.reshape(W0.shape)

# Create the plot
from mpl_toolkits.mplot3d import Axes3D
fig, ax = plt.subplots(figsize=(12, 8))
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.set_title("Cost Surface for a " + penaltyFunctions[squaredPenalty] + " Function")
ax.set_xlabel('w0')
ax.set_ylabel('w1')
ax.set_zlabel('Cost')
p = ax.plot_surface(W0, W1, Z, rstride=4, cstride=4)

# Contour Lines
fig, ax = plt.subplots(figsize=(12, 8))
plt.contour(Z, cmap=cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max(),
            extent=[-10, 10, -10, 10])

# Heatmap or Colormap
fig, ax = plt.subplots(figsize=(12, 8))
p = ax.pcolor(W0, W1, Z, cmap=cm.RdBu, vmin=abs(Z).min(), vmax=abs(Z).max())
cb = fig.colorbar(p)
```
Step 5: Find the Parameter Values that Minimize the Cost

The cost function might have a minimum, but how can we possibly find it? We can't use the brute-force method of trying every possible combination of values for $w_{0}$ and $w_{1}$ -- there are an infinite number of combinations and we'd never finish. This is where the concept of gradient descent comes in. Imagine starting anywhere on the surface, say at $w_{0} = -10$ and $w_{1} = -10$. That's at the left front edge of the surface plot you've just seen. If we took a step in the direction where the slope under our feet is steepest, we would be one step closer to the "bottom" of the surface. So let's take that step, and then take the next step in the direction where the slope under our feet is steepest. That gets us even lower, still headed toward the bottom. Eventually, after a number of these steps, you'll get to the bottom.

That's the idea. To make it work, we have to write out an expression for the next set of parameter values to try. It turns out that for the cost function $J(W)$, there is a well-worked-out way to write these values for $w_{0}$ and $w_{1}$ based on the direction of the steepest slope.

How To Choose the Next Set of Values for $W$

$$w_{j} := w_{j} - \frac{\alpha}{m} \sum_{i=1}^{m} (h_{W}(x^{(i)}) - y^{(i)})\, x_{j}^{(i)}$$

Implement the Iterative Method of Gradient Descent

Here's what we'll do:
- First pick initial values for $w_0$ and $w_1$. Pick anything within reason -- let's say 1 and -1 respectively.
- Choose a penalty function. We'll choose the squaredPenalty.
- For each row of the dataset find $\hat{y}$ for the values of $w_0$ and $w_1$. Find the value of the penalty for $y - \hat{y}$ using the penalty function.
- Add up the penalty values for each row. This total is the cost.
- Use gradient descent to find the next set of $w_0$ and $w_1$ values that lower the cost.
- Repeat until the specified number of iterations is complete.
- The final values of $w_0$ and $w_1$ are the optimal values. Use these optimal parameter values to make predictions.
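The steps above can be sketched in a few lines of NumPy. This is a hedged illustration of the update rule, not the course's gradientDescent function (which lives in the Shared-Functions notebook and has a richer return signature); the tiny dataset is made up for the example:

```python
import numpy as np

def gradient_descent_sketch(X, y, W, alpha=0.01, num_iters=100):
    # w_j := w_j - (alpha/m) * sum_i (h_W(x^(i)) - y^(i)) * x_j^(i),
    # applied to both w0 and w1 at once via the matrix form X.T . residuals
    m = len(y)
    costs = []
    for _ in range(num_iters):
        residuals = X.dot(W) - y                  # y_hat - y for every row
        W = W - (alpha / m) * X.T.dot(residuals)  # simultaneous update of w0 and w1
        costs.append(float((residuals ** 2).sum()) / (2 * m))
    return W, costs

# Tiny made-up dataset in the notebook's shape: a bias column of 1s plus one feature.
X = np.array([[1.0, 6.1], [1.0, 5.5], [1.0, 8.5]])
y = np.array([[17.6], [9.1], [13.7]])
W0 = np.array([[1.0], [-1.0]])                    # initial guess [w0, w1]
W_opt, costs = gradient_descent_sketch(X, y, W0)
print(costs[0], costs[-1])                        # the cost falls as iterations proceed
```

For this convex squared-error cost, each iteration lowers the cost as long as the learning rate is small enough; a too-large `alpha` would make the steps overshoot.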
```python
# Initialize the parameter values W and pick the penalty function
W_init = [1, -1.0]
penalty_function = squaredPenalty

# Test out the penalty function in the Shared-Functions notebook
penalty(X, y, W_init, penalty_function)

# Test out the gradientDescent function in the Shared-Functions notebook
gradientDescent(X, y, W_init, num_iterations=5)
```
Run the iterative gradient descent method to determine the optimal parameter values.
```python
# Set hyper-parameters
num_iters = 50          # number of iterations
learning_rate = 0.0005  # the learning rate

# Run gradient descent and capture the progression
# of cost values and the ultimate optimal W values
%time W_opt, final_penalty, running_w, running_penalty = gradientDescent(X, y, W_init, num_iters, learning_rate)

# Get the optimal W values and the last few cost values
W_opt, final_penalty, running_w[-5:], running_penalty[-5:]
```
We can see that the Ws are still changing even after 5000 iterations -- but only in the 4th decimal place. Similarly, the penalty is still changing (decreasing) in the hundreds place.

How cost changes as the number of iterations increases
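One common way to act on this observation is to stop when the update becomes negligible instead of running a fixed number of iterations. A hypothetical helper (not part of the Shared-Functions notebook) sketching that stopping test:

```python
import numpy as np

def has_converged(W_prev, W_next, tol=1e-4):
    # Declare convergence when no parameter moved by more than tol this iteration.
    return np.max(np.abs(np.asarray(W_next) - np.asarray(W_prev))) < tol

# Parameters still moving in the 4th decimal place: not yet converged at tol=1e-5
print(has_converged([1.1234, -3.5678], [1.1239, -3.5674], tol=1e-5))
print(has_converged([1.1234, -3.5678], [1.1234, -3.5678], tol=1e-5))
```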
```python
# How the penalty changes as the number of iterations increases
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(np.arange(num_iters), running_penalty, 'g')
ax.set_xlabel('Number of Iterations')
ax.set_ylabel('Cost')
ax.set_title('Cost vs. Iterations Over the Dataset for a Specific Learning Rate');

np.array(running_w).flatten()
w0 = np.array([param[0].flatten() for param in running_w][1:]).flatten()
w1 = np.array([param[1] for param in running_w][1:]).flatten()
len(w0), len(w1), len(np.arange(num_iters))

# How the Ws change as the number of iterations increases
fig, (ax1, ax2) = plt.subplots(figsize=(14, 6), nrows=1, ncols=2, sharey=False)
ax1.plot(np.arange(num_iters), w0, 'g')
ax1.set_xlabel('Number of Iterations')
ax1.set_ylabel(r'$w_{0}$')
ax1.set_title(r'$w_{0}$ vs. Iterations Over the Dataset')
ax2.plot(np.arange(num_iters), w1, 'y')
ax2.set_xlabel('Number of Iterations')
ax2.set_ylabel(r'$w_{1}$')
ax2.set_title(r'$w_{1}$ vs. Iterations Over the Dataset')

fig, ax = plt.subplots(figsize=(14, 6))
ax.plot(np.arange(num_iters), w0, 'g', label="w0")
ax.plot(np.arange(num_iters), w1, 'y', label="w1")
plt.legend()
```
Exercise 4

Experiment with different values of alpha, W, and iters. Write down your observations.

Step 6: Use the Model and Optimal Parameter Values to Make Predictions

Let's see how our optimal parameter values can be used to make predictions.
```python
W_opt[0, 0], W_opt[1, 0]

# Create 100 equally spaced values going from the minimum value of population
# to the maximum value of the population in the dataset.
x = np.linspace(data.Population.min(), data.Population.max(), 100)
f = (W_opt[0, 0] * 1) + (W_opt[1, 0] * x)

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(x, f, 'g', label='Prediction')
ax.scatter(data.Population, data.Profit, label='Training Data')
ax.legend(loc='upper left')
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size');

# First 5 population values in the dataset
X[0:5].values.flatten()

# Prediction of profit for the first 5 populations in the dataset
#populations = [5, 6, 12, 14, 15]
populations = X[0:5].values.flatten()
profits = [W_opt[0, 0] + (W_opt[1, 0] * pop * 10000) for pop in populations]
#print(profits)
print(['${:5,.0f}'.format(profit) for profit in profits])
```
Experimenting with Hyperparameters

Balancing Learning Rate with Number of Iterations

Learning Rate - The Intuition

<img src="../Images/gradient-descent-intuition.png" alt="Under- and Overshoot in Learning Size" style="width: 600px;"/>
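The under/overshoot picture above can be reproduced numerically. A minimal sketch on the 1-D quadratic $J(w) = w^2$ -- a stand-in for the real cost surface, chosen because its gradient $2w$ is easy to read off:

```python
def descend(alpha, steps=20, w=5.0):
    # Plain gradient descent on J(w) = w^2; the gradient is 2w.
    for _ in range(steps):
        w = w - alpha * 2 * w
    return w

print(descend(0.01))  # too small: 20 steps barely move w toward the minimum at 0
print(descend(0.45))  # well chosen: converges to ~0 quickly
print(descend(1.10))  # too large: every step overshoots and |w| grows -- divergence
```

The same trade-off drives the learning-rate/iteration combinations explored in the next cell: a small rate needs many iterations, and a rate that is too large never settles at all.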
```python
# How predictions change as the learning rate and the
# number of iterations are changed
learning_rates = [0.001, 0.009]
epochs = [10, 500]  # epoch is another way of saying num_iters

# All combinations of learning rates and epochs
from itertools import permutations
combos = [list(zip(epochs, p)) for p in permutations(learning_rates)]
combos

# Get it into the right format to plug into the gradient descent function
combos_list = []
for i in range(len(combos)):
    for j in range(len(combos[i])):
        combos_list.append([combos[i][j][0], combos[i][j][1]])
combos_list

gdResults = [gradientDescent(X, y, W_init, combos_list[i][0], combos_list[i][1])
             for i in range(len(combos_list))]
W_values = [gdResults[i][0] for i in range(len(gdResults))]
len(gdResults), len(W_values)

# Test it out
# From https://stackoverflow.com/questions/31883097/
cmap = plt.get_cmap('jet')
plot_colors = cmap(np.linspace(0, 1, len(combos_list)))
for i, (combo, color) in enumerate(zip(combos_list, plot_colors), 1):
    plt.plot(x, np.sin(x)/i, label=combo, c=color)

# Create 100 equally spaced values going from the minimum value of population
# to the maximum value of the population in the dataset.
x = np.linspace(data.Population.min(), data.Population.max(), 100)
f_list = [(W_values[i][0] * 1) + (W_values[i][1] * x).T for i in range(len(W_values))]

fig, ax = plt.subplots(figsize=(12, 8))
#[ax.plot(x, f_list[i], 'r', label=combos_list[i]) for i in range(len(f_list))]
for i, (combo, color) in enumerate(zip(combos_list, plot_colors), 1):
    ax.plot(x, f_list[i-1], label=combo, c=color)
ax.scatter(data.Population, data.Profit, label='Dataset')
ax.legend(loc='upper left')
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit as Number of Iterations and Learning Rate Change');
```
Exercise 5

In the plot above, what do you observe about the way in which the learning rate and the number of iterations determine the "prediction line" that is learned?

Exercise 6

Now you can make predictions of profit based on your data. What are the predicted profits for populations of 50,000, 100,000, 160,000, and 180,000? Are the predictions reasonable? Explain why or why not.
```python
# We're using the optimal W values obtained when the learning rate = 0.001
# and the number of iterations = 500
predictions = [(W_values[3][0] * 1) + (W_values[3][1] * pop)
               for pop in [50000, 100000, 160000, 180000]]
# Get into the right form for printing
preds = np.array(predictions).squeeze()
print(['${:5,.0f}'.format(pred) for pred in preds])
```
Learning from Experience

Exercise 7

What happens to the optimal values of W if we use just a quarter of the dataset? What happens if we now use half of the training set? How does this relate to Tom Mitchell's definition of machine learning?
```python
# We'll use the values of num_iters and learning_rate defined above
print("num_iters: {}".format(num_iters))
print("learning rate: {}".format(learning_rate))

# Vary the size of the dataset
dataset_sizes = [2, 5, 10, 25, 50, len(X)]
gdResults = [gradientDescent(X[0:dataset_sizes[i]], y[0:dataset_sizes[i]],
                             W_init, num_iters, learning_rate)
             for i in range(len(dataset_sizes))]

# This allows us to color our plot lines differently without explicitly specifying colors
cmap2 = plt.get_cmap('jet')
plot_colors2 = cmap2(np.linspace(0, 1, len(dataset_sizes)))

W_values = [gdResults[i][0] for i in range(len(gdResults))]
W_values[0]

# Create 100 equally spaced values going from the minimum value of population
# to the maximum value of the population in the dataset.
x = np.linspace(data.Population.min(), data.Population.max(), 100)
f_list = [(W_values[i][0] * 1) + (W_values[i][1] * x).T for i in range(len(W_values))]

fig, ax = plt.subplots(figsize=(12, 8))
#[ax.plot(x, f_list[i], 'y', label=dataset_sizes[i]) for i in range(len(f_list))]
for i, (dataset, color) in enumerate(zip(dataset_sizes, plot_colors2), 1):
    ax.plot(x, f_list[i-1], label=dataset, c=color)
ax.scatter(data.Population, data.Profit, label='Dataset')
ax.legend(loc='upper left')
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit as Dataset Size Changes')
```
OpenStreetMap OSM file object classifier and extractor, by openthings@163.com, 2016-03-21.

Purpose: Output three JSON files stored one object per line, usable in Spark and readable directly via Spark's sc.read.json(). The tool quickly classifies an OSM file by tag and converts it into three line-oriented JSON files: node, way, and relation.

Background: Spark processes input line by line by default, so XML -- especially multi-line XML such as the OpenStreetMap OSM format -- is awkward to handle. When an XML file is too large to load fully into memory, it must be pre-processed into a line-oriented form and then loaded into Spark in a distributed way.

Follow-up work:
1. Map the coordinates of each way's nd nodes to build the coordinate string of each geometry object.
2. Convert each geometry object to WKT format and store it in the JSON geometry field.
3. After splitting the tables by region, convert to GeoPandas and then further to a shapefile.

The fast XML-to-JSON conversion uses lxml for stream processing. Streaming, recursive reading of XML data (low resource usage): http://www.ibm.com/developerworks/xml/library/x-hiperfparse/ Library for converting XML strings to JSON objects: https://github.com/martinblech/xmltodict

xmltodict.parse() prefixes field names with @ and #, which causes problems in Spark queries. They can be removed like so: xmltodict.parse(elem_data, attr_prefix="", cdata_key="")

Handling encoding issues and recovering malformed XML files: magical_parser = lxml.etree.XMLParser(encoding='utf-8', recover=True) followed by tree = etree.parse(StringIO(your_xml_string), magical_parser) #or pass in an open file object

First convert the element to a string, then build a dict, then use json.dumps() to produce the JSON string: elem_data = etree.tostring(elem); elem_dict = xmltodict.parse(elem_data); elem_jsonStr = json.dumps(elem_dict). You can then use json.loads(elem_jsonStr) to create a programmable JSON object.
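The element-to-line-JSON step can be illustrated with the standard library alone. A minimal sketch of the same idea (the notebook itself uses lxml and xmltodict; this substitutes xml.etree and a plain attribute dict, and the node element is hypothetical sample data):

```python
import json
import xml.etree.ElementTree as ET

# One OSM node element, as it would appear in the parsed stream (made-up values).
elem = ET.fromstring('<node id="1" lat="48.137" lon="11.575"/>')

# Same effect as xmltodict.parse(..., attr_prefix="", cdata_key=""): attribute
# names become plain JSON keys with no '@' prefix, keeping Spark column names clean.
line = json.dumps(dict(elem.attrib))
print(line)   # one JSON object per line, ready for sc.read.json()
```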
```python
import os
import time
import json
from pprint import *
import lxml
from lxml import etree
import xmltodict, sys, gc
from pymongo import MongoClient

gc.enable()  # Enable garbage collection

ISOTIMEFORMAT = "%Y-%m-%d %X"  # Defined globally so fast_iter() can use it too

# Extract objects with the given tags and write them to the JSON files.
def process_element(elem):
    elem_data = etree.tostring(elem)
    elem_dict = xmltodict.parse(elem_data, attr_prefix="", cdata_key="")
    #print(elem_dict)
    if (elem.tag == "node"):
        elem_jsonStr = json.dumps(elem_dict["node"])
        fnode.write(elem_jsonStr + "\n")
    elif (elem.tag == "way"):
        elem_jsonStr = json.dumps(elem_dict["way"])
        fway.write(elem_jsonStr + "\n")
    elif (elem.tag == "relation"):
        elem_jsonStr = json.dumps(elem_dict["relation"])
        frelation.write(elem_jsonStr + "\n")

# Iterate over all objects, calling func_element on each one.
# maxline caps the number of converted objects; useful for sampling when debugging big data.
def fast_iter(context, func_element, maxline):
    placement = 0
    try:
        for event, elem in context:
            placement += 1
            if (maxline > 0):
                print(etree.tostring(elem))
                if (placement >= maxline):
                    break
            func_element(elem)  # Process each element via process_element
            elem.clear()
            while elem.getprevious() is not None:
                del elem.getparent()[0]
    except Exception as ex:
        print(time.strftime(ISOTIMEFORMAT), ", Error:", ex)
    del context
```
geospatial/openstreetmap/osm-extract2json.ipynb
supergis/git_notebook
gpl-3.0
Run the OSM XML-to-JSON conversion: a single scan extracts all three files. The tag argument in context = etree.iterparse(osmfile, tag=["node","way"]) must be given a value; otherwise every sub-element retrieved is None. Three open global files are used: fnode, fway, and frelation.
```python
#maxline = 0  # For sampling/debugging: the maximum number of objects to convert; 0 converts the whole file.
def transform(osmfile, maxline=0):
    ISOTIMEFORMAT = "%Y-%m-%d %X"
    print(time.strftime(ISOTIMEFORMAT), ", Process osm XML...", osmfile, " =>MaxLine:", maxline)
    global fnode
    global fway
    global frelation
    fnode = open(osmfile + "_node.json", "w+")
    fway = open(osmfile + "_way.json", "w+")
    frelation = open(osmfile + "_relation.json", "w+")
    context = etree.iterparse(osmfile, tag=["node", "way", "relation"])
    fast_iter(context, process_element, maxline)
    fnode.close()
    fway.close()
    frelation.close()
    print(time.strftime(ISOTIMEFORMAT), ", OSM to JSON, Finished.")
```
Run the conversion.
```python
# Name of the OSM file to process; change as needed.
osmfile = '../data/osm/muenchen.osm'
transform(osmfile, 0)
```
Data preparation

Load raw data
```python
filename = 'FIWT_Exp050_20150612160930.dat.npz'

def loadData():
    # Read and parse raw data
    global exp_data
    exp_data = np.load(filename)
    # Select columns
    global T_cmp, da1_cmp, da2_cmp, da3_cmp, da4_cmp
    T_cmp = exp_data['data33'][:, 0]
    da1_cmp = exp_data['data33'][:, 3]
    da2_cmp = exp_data['data33'][:, 5]
    da3_cmp = exp_data['data33'][:, 7]
    da4_cmp = exp_data['data33'][:, 9]
    global T_rig, phi_rig
    T_rig = exp_data['data44'][:, 0]
    phi_rig = exp_data['data44'][:, 2]

loadData()
text_loadData()
```
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
Input $\delta_T$ and focused time ranges
```python
# Pick up focused time ranges
time_marks = [
    [28.4202657857, 88.3682684612, "ramp cmp1 u"],
    [90.4775063395, 150.121612502, "ramp cmp1 d"],
    [221.84785848, 280.642700604, "ramp cmp2 u"],
    [283.960460465, 343.514106772, "ramp cmp2 d"],
    [345.430891556, 405.5707005, "ramp cmp3 u"],
    [408.588176529, 468.175897568, "ramp cmp3 d"],
    [541.713519906, 600.757388552, "ramp cmp4 u"],
    [604.040332533, 663.913079475, "ramp cmp4 d"],
    [677.817484542, 778.026124493, "ramp cmp1/3 d"],
    [780.09665253, 879.865190551, "ramp cmp1/3 u"],
    [888.075513916, 987.309008866, "ramp cmp2/4 d"],
    [989.75879072, 1088.99649253, "ramp cmp2/4 u"],
    [1101.9320102, 1221.45089264, "ramp cmpall u"],
    [1240.32935658, 1360.72513252, "ramp cmpall d"],
]

# Decide DT, U, Z and their processing method
DT = 0.2
process_set = {
    'U': [(T_cmp, da1_cmp, 0),
          (T_cmp, da2_cmp, 0),
          (T_cmp, da3_cmp, 0),
          (T_cmp, da4_cmp, 0)],
    'Z': [(T_rig, phi_rig, 1)],
    'cutoff_freq': 1  # Hz
}
U_names = ['$\delta_{a1,cmp} \, / \, ^o$',
           '$\delta_{a2,cmp} \, / \, ^o$',
           '$\delta_{a3,cmp} \, / \, ^o$',
           '$\delta_{a4,cmp} \, / \, ^o$']
Y_names = Z_names = ['$\phi_{a,rig} \, / \, ^o$']

display_data_prepare()
```
Define dynamic model to be estimated

$$\left\{\begin{aligned} M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\ M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\delta_{a,cmp} \\ M_{x,f} &= -F_c \, \mathrm{sign}(\dot{\phi}_{rig}) \\ M_{x,cg} &= -m_T g l_{zT} \sin \left ( \phi - \phi_0 \right ) \end{aligned}\right.$$
```python
%%px --local
# Update common constant parameters in all engines
angles = range(-40, 41, 5)
angles[0] -= 1
angles[-1] += 1
del angles[angles.index(0)]
angles_num = len(angles)

# Problem size
Nx = 0
Nu = 4
Ny = 1
Npar = 4*angles_num + 1

# Reference
S_c = 0.1254  # S_c(m2)
b_c = 0.7     # b_c(m)
g = 9.81      # g(m/s2)

# Static measurement
m_T = 9.585     # m_T(kg)
l_z_T = 0.0416  # l_z_T(m)
V = 25          # V(m/s)

# Previous estimations
F_c = 0.06          # F_c(N*m)
Clda_cmp = -0.2966  # Clda_cmp(1/rad)
kFbrk = 1.01
shuffle = 0.01

# Other parameters
qbarSb = 0.5*1.225*V*V*S_c*b_c
_m_T_l_z_T_g = -(m_T*l_z_T)*g

def obs(Z, T, U, params):
    s = T.size
    k1 = np.array(params[0:angles_num])
    k2 = np.array(params[angles_num:angles_num*2])
    k3 = np.array(params[angles_num*2:angles_num*3])
    k4 = np.array(params[angles_num*3:angles_num*4])
    phi0 = params[-1]
    Clda1 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k1*angles, assume_sorted=True)
    Clda2 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k2*angles, assume_sorted=True)
    Clda3 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k3*angles, assume_sorted=True)
    Clda4 = scipy.interpolate.interp1d(angles, Clda_cmp*0.00436332313*k4*angles, assume_sorted=True)
    moments_a = qbarSb*(Clda1(U[1:s, 0]) + Clda2(U[1:s, 1])
                        + Clda3(U[1:s, 2]) + Clda4(U[1:s, 3]))
    phi = Z[0, 0]*0.0174533
    phi_rslt = [phi]
    for t, m_a in itertools.izip(T[1:], moments_a):
        moments_cg = _m_T_l_z_T_g*math.sin(phi - phi0)
        if moments_cg + m_a > F_c*kFbrk:
            k = (F_c - m_a)/_m_T_l_z_T_g
            if k > 1:
                k = 1
            elif k < -1:
                k = -1
            phi = phi0 + math.asin(k)
        elif moments_cg + m_a < -F_c*kFbrk:
            k = (-F_c - m_a)/_m_T_l_z_T_g
            if k > 1:
                k = 1
            elif k < -1:
                k = -1
            phi = phi0 + math.asin(k)
        else:
            phi += (moments_cg + m_a)/(F_c*kFbrk)*shuffle/57.3
        phi_rslt.append(phi)
    return (np.array(phi_rslt)*57.3).reshape((-1, 1))

display(HTML('<b>Constant Parameters</b>'))
table = ListTable()
table.append(['Name', 'Value', 'unit'])
table.append(['$S_c$', S_c, '$m^2$'])
table.append(['$b_c$', b_c, '$m$'])
table.append(['$g$', g, '$m/s^2$'])
table.append(['$m_T$', m_T, '$kg$'])
table.append(['$l_{zT}$', l_z_T, '$m$'])
table.append(['$V$', V, '$m/s$'])
table.append(['$F_c$', F_c, '$Nm$'])
table.append(['$C_{l \delta a,cmp}$', Clda_cmp, '$rad^{-1}$'])
display(table)
```
Initial guess

Input default values and ranges for parameters. Select sections for training. Adjust parameters based on simulation results. Decide start values of parameters for optimization.
```python
# Initial guess
param0 = [1]*(4*angles_num) + [0]
param_name = ['k_{}_{}'.format(i/angles_num + 1, angles[i % angles_num])
              for i in range(4*angles_num)] + ['$phi_0$']
param_unit = ['1']*(4*angles_num) + ['$rad$']
NparID = Npar
opt_idx = range(Npar)
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.001]*(4*angles_num) + [0.0001]
bounds = [(0, 1.5)]*(4*angles_num) + [(-0.1, 0.1)]
display_default_params()

# Select sections for training
section_idx = range(8)
del section_idx[3]
display_data_for_train()

# Push parameters to engines
push_opt_param()

# Select 4 sections from the training data
#idx = random.sample(section_idx, 4)
idx = section_idx[:]
interact_guess();
```
Show and test results
```python
display_opt_params()

# Show result
idx = range(len(time_marks))
display_data_for_test();
update_guess();

res_params = res['x']
params = param0[:]
for i, j in enumerate(opt_idx):
    params[j] = res_params[i]

k1 = np.array(params[0:angles_num])
k2 = np.array(params[angles_num:angles_num*2])
k3 = np.array(params[angles_num*2:angles_num*3])
k4 = np.array(params[angles_num*3:angles_num*4])
Clda_cmp1 = Clda_cmp*0.00436332313*k1*angles
Clda_cmp2 = Clda_cmp*0.00436332313*k2*angles
Clda_cmp3 = Clda_cmp*0.00436332313*k3*angles
Clda_cmp4 = Clda_cmp*0.00436332313*k4*angles
print('angles = ')
print(angles)
print('Clda_cmpx = ')
print(np.vstack((Clda_cmp1, Clda_cmp2, Clda_cmp3, Clda_cmp4)))

%matplotlib inline
plt.figure(figsize=(12, 8), dpi=300)
plt.plot(angles, Clda_cmp1, 'r')
plt.plot(angles, Clda_cmp2, 'g')
plt.plot(angles, Clda_cmp3, 'b')
plt.plot(angles, Clda_cmp4, 'm')
plt.xlabel('$\delta_{a,cmp}$')
plt.ylabel('$C_{l \delta a,cmp}$')
plt.show()

toggle_inputs()
button_qtconsole()
```
When you re-run the cell above, you will see the output tensorflow==2.5.0 -- that is the installed version of TensorFlow.
```python
import tensorflow as tf
import numpy as np

print(tf.__version__)
```
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The equivalent code in TensorFlow consists of two steps: <p> <h3> Step 1: Build the graph </h3>
```python
tf.compat.v1.disable_eager_execution()  # Need to disable eager in TF2.x
a = tf.compat.v1.constant([5, 3, 8])
b = tf.compat.v1.constant([3, -1, 2])
c = tf.add(a, b, name='c')
print(c)
```
c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph. Try the following in the cell above: <ol> <li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li> <li> Add an extra number to a, but leave b at the original (3,) shape. What happens when you run this cell? </li> <li> Change the code back to a version that works </li> </ol> <p/> <h3> Step 2: Launch the graph in a session.</h3>
```python
sess = tf.compat.v1.Session()
# Evaluate the tensor c.
print(sess.run(c))
```
<h2> Using a feed_dict </h2> Same graph, but without hardcoding inputs at build stage
```python
a = tf.compat.v1.placeholder(tf.int32, name='a')
b = tf.compat.v1.placeholder(tf.int32, name='b')
c = tf.add(a, b, name='c')

sess = tf.compat.v1.Session()
print(sess.run(c, feed_dict={
    a: [3, 4, 5],
    b: [-1, 2, 3]
}))
```
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Heron's Formula in TensorFlow </h2>
The area of a triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$
Look up the available operations at https://www.tensorflow.org/api_docs/python/tf
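Before the TensorFlow version below, here is a quick plain-Python sanity check of the formula (the helper name is ours, not part of the notebook):

```python
import math

def heron_area(a, b, c):
    # semi-perimeter, then Heron's formula
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3.0, 4.0, 5.0))  # a 3-4-5 right triangle has area 6.0
```

The TensorFlow implementation that follows is the same arithmetic, only vectorized over a batch of triangles.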
def compute_area(sides):
    # slice the input to get the sides
    a = sides[:,0]  # 5.0, 2.3
    b = sides[:,1]  # 3.0, 4.1
    c = sides[:,2]  # 7.1, 4.8

    # Heron's formula
    s = (a + b + c) * 0.5  # (a + b) is a short-cut to tf.add(a, b)
    areasq = s * (s - a) * (s - b) * (s - c)  # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
    return tf.sqrt(areasq)

sess = tf.compat.v1.Session()
# pass in two triangles
area = compute_area(tf.constant([
    [5.0, 3.0, 7.1],
    [2.3, 4.1, 4.8]
]))
print(sess.run(area))
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Placeholder and feed_dict </h2> More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through which inputs will be passed in at run-time.
sess = tf.compat.v1.Session()
sides = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))  # batchsize number of triangles, 3 sides
area = compute_area(sides)
print(sess.run(area, feed_dict = {
    sides: [
        [5.0, 3.0, 7.1],
        [2.3, 4.1, 4.8]
    ]
}))
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
tf.eager
tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm, because lazy evaluation is what allows for multi-device support and distribution.
<p>
One thing you could do is to develop using tf.eager and then comment out the eager execution and add in the session management code.
<b>You may need to restart the session to try this out.</b>
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

def compute_area(sides):
    # slice the input to get the sides
    a = sides[:,0]  # 5.0, 2.3
    b = sides[:,1]  # 3.0, 4.1
    c = sides[:,2]  # 7.1, 4.8

    # Heron's formula
    s = (a + b + c) * 0.5  # (a + b) is a short-cut to tf.add(a, b)
    areasq = s * (s - a) * (s - b) * (s - c)  # (a * b) is a short-cut to tf.multiply(a, b), not tf.matmul(a, b)
    return tf.sqrt(areasq)

area = compute_area(tf.constant([
    [5.0, 3.0, 7.1],
    [2.3, 4.1, 4.8]
]))
print(area)
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Visualize Data
View a sample from the dataset.
You do not need to modify this section.
import random
import matplotlib.pyplot as plt
%matplotlib inline

# randint is inclusive on both ends, so subtract 1 to stay in range
index = random.randint(0, len(X_train) - 1)
image = X_train[index].squeeze()

plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Setup TensorFlow
The EPOCHS and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
import tensorflow as tf

EPOCHS = 10
BATCH_SIZE = 64
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels.
You do not need to modify this section.
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
with tf.device('/cpu:0'):
    one_hot_y = tf.one_hot(y, 10)
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Model Evaluation
Evaluate the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples

print("Complete")
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
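The batch slicing pattern used in the training loop below can be sketched independently of TensorFlow (the helper name is ours, for illustration only):

```python
def batch_ranges(n_examples, batch_size):
    # yield (offset, end) index pairs covering the dataset; the last batch may be smaller
    for offset in range(0, n_examples, batch_size):
        yield offset, min(offset + batch_size, n_examples)

print(list(batch_ranges(10, 4)))  # [(0, 4), (4, 8), (8, 10)]
```

This is exactly what the `X_train[offset:end]` slices in the loop below compute, with Python's slicing silently truncating the last partial batch.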
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH_SIZE
            batch_x, batch_y = X_train[offset:end], y_train[offset:end]
            sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})

        validation_accuracy = evaluate(X_validation, y_validation)
        print("EPOCH {} ...".format(i+1))
        print("Validation Accuracy = {:.3f}".format(validation_accuracy))
        print()

    saver.save(sess, './lenet')
    print("Model saved")
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Basic knowledge
We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of running a simulation.
How to execute something
To execute something you need a description of the task to be done: the Task object. Once you have this, you can use it in a Scheduler, which will interpret the Task into some code that the computer understands. It handles all the little things you expect from the task, like registering generated files, etc. To do so, the Scheduler needs your Resource description, which acts like a config for the scheduler.
When you have a Scheduler (with a Resource) you let it execute Task objects. If you know how to build these, you are done. That is all you need.
What are Generators?
Building a Task can be cumbersome and often repetitive, so a factory for Task objects is extremely useful. These are called Generators (maybe TaskFactory would be a better name?). In your final scheme, where you observe all generated objects and want to build new tasks accordingly, you will (almost) never build a Task yourself. You use a generator.
A typical example is an Engine. It will generate tasks that simulate new trajectories, extend existing ones, etc. Basic stuff. The second big class is Analysis. It will use trajectories to generate models or properties of interest to guide your decisions about new trajectories.
In this example we will build a simple generator for a task that uses the mdtraj package to compute some features and store these in the database and in a file.
The MDTrajFeaturizer generator
First, we think about how this featurizer would work if we did not use adaptivemd. The reason is that we have basically two choices for designing a Task (see example 4 about Task objects).
1. A task that calls bash commands for you
2. A task that calls a python function for you

Since we want to call mdtraj functions, we use the 2nd and start with a skeleton for this type, which we store under my_generator.py
%%file my_generator.py
# This is an example for building your own generator
# This file must be added to the project so that it is loaded
# when you import `adaptivemd`. Otherwise your workers don't know
# about the class!

from adaptivemd import Generator

class MDTrajFeaturizer(Generator):
    def __init__(self, {things we always need}):
        super(MDTrajFeaturizer, self).__init__()

        # stage file you want to reuse (optional)
        # self['pdb_file'] = pdb_file
        # stage = pdb_file.transfer('staging:///')
        # self['pdb_file_stage'] = stage.target
        # self.initial_staging.append(stage)

    @staticmethod
    def then_func(project, task, data, inputs):
        # add the output for later reference
        project.data.add(data)

    def execute(self, {options per task}):
        t = PythonTask(self)

        # get your staged files (optional)
        # input_pdb = t.link(self['pdb_file_stage'], 'input.pdb')

        # add the python function call to your script (there can be only one!)
        t.call(
            my_script,
            param1,
            param2, ...
        )

        return t

def my_script(param1, param2, ...):
    return {"whatever you want to return"}
examples/tutorial/5_example_advanced_generators.ipynb
jrossyra/adaptivemd
lgpl-2.1
<H2>Load CA3 matrices</H2>
mydataset = DataLoader('../data/CA3/') # 1102 experiments

print(mydataset.motif) # number of connections tested and found for every type

# number of interneurons and principal cells
print('{:4d} principal cells recorded'.format(mydataset.nPC))
print('{:4d} interneurons recorded'.format(mydataset.nIN))

# mydataset.configuration # number of recording configurations

PEE = mydataset.motif.ee_chem_found/mydataset.motif.ee_chem_tested
print('Connection probability between CA3 cells = {:4.4f} %'.format(PEE))
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
The element of the list at IN[0] contains the recordings with zero interneurons (all the rest are principal neurons)
mydataset.IN[0] # this is the whole data set
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
<H2> Descriptive statistics </H2>
The stats attribute will return basic statistics of the whole dataset
y = mydataset.stats()
print(AsciiTable(y).table)

mymotifs = mydataset.motif

info = [
    ['Connection type', 'Value'],
    ['CA3-CA3 chemical synapses', mymotifs.ee_chem_found],
    ['CA3-CA3 electrical synapses', mymotifs.ee_elec_found],
    [' ', ' '],
    ['CA3-CA3 bidirectional motifs', mymotifs.ee_c2_found],
    ['CA3-CA3 divergent motifs', mymotifs.ee_div_found],
    ['CA3-CA3 convergent motifs', mymotifs.ee_con_found],
    ['CA3-CA3 linear chains', mymotifs.ee_lin_found],
    [' ', ' '],
    ['P(CA3-CA3) unidirectional motifs', mymotifs.ee_chem_found/mymotifs.ee_chem_tested],
    ['P(CA3-CA3) bidirectional motifs', mymotifs.ee_c2_found/mymotifs.ee_c2_tested],
    ['P(CA3-CA3) convergent motifs', mymotifs.ee_con_found/mymotifs.ee_con_tested],
    ['P(CA3-CA3) divergent motifs', mymotifs.ee_div_found/mymotifs.ee_div_tested],
    ['P(CA3-CA3) chain motifs', mymotifs.ee_lin_found/mymotifs.ee_lin_tested],
]

table = AsciiTable(info)
print(table.table)
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
Getting data that matters
In this example, we are only interested in Java source code files that still exist in the software project. We can retrieve the existing Java source code files by using Git's <tt>ls-files</tt> combined with a filter for the Java source code file extension. The command will return a plain text string that we split by the line endings to get a list of files. Because we want to combine this information with the other above, we put it into a <tt>DataFrame</tt> with the column name <tt>path</tt>.
existing_files = pd.DataFrame(
    git_bin.execute('git ls-files -- *.java').split("\n"),
    columns=['path'])
existing_files.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
The next step is to combine the <tt>commit_data</tt> with the <tt>existing_files</tt> information by using Pandas' <tt>merge</tt> function. By default, <tt>merge</tt> will
- combine the data by the columns with the same name in each <tt>DataFrame</tt>
- only leave those entries that have the same value (using an "inner join").

In plain English, <tt>merge</tt> will only leave the still existing Java source code files in the <tt>DataFrame</tt>. This is exactly what we need.
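To illustrate the inner-join behavior with tiny made-up data (the file names below are hypothetical, not from the repository):

```python
import pandas as pd

# three commits' worth of additions, but B.java no longer exists
commits = pd.DataFrame({'path': ['A.java', 'B.java', 'C.java'],
                        'additions': [10, 20, 30]})
existing = pd.DataFrame({'path': ['A.java', 'C.java']})

# default merge: inner join on the shared 'path' column
merged = pd.merge(commits, existing)
print(merged['path'].tolist())  # ['A.java', 'C.java']
```

The row for the deleted file silently drops out, which is exactly the filtering effect used below.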
contributions = pd.merge(commit_data, existing_files)
contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
We can now convert some columns to their correct data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code as numbers. We have to convert those accordingly.
contributions['additions'] = pd.to_numeric(contributions['additions'])
contributions['deletions'] = pd.to_numeric(contributions['deletions'])
contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Calculating the knowledge about code
We want to estimate the knowledge about code as the proportion of additions to the whole source code file. This means we need to calculate the relative amount of added lines for each developer. To be able to do this, we have to know the sum of all additions for a file. Additionally, we calculate it for deletions as well to easily get the number of lines of code later on. We use an additional <tt>DataFrame</tt> to do these calculations.
contributions_sum = contributions.groupby('path').sum()[['additions', 'deletions']].reset_index()
contributions_sum.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
We also want to have an indicator of the quantity of the knowledge. This can be achieved if we calculate the lines of code for each file, which is a simple subtraction of the deletions from the additions (be warned: this only works for simple use cases where there are no heavy renames of files, as in our case).
contributions_sum['lines'] = contributions_sum['additions'] - contributions_sum['deletions']
contributions_sum.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
We combine both <tt>DataFrame</tt>s with a <tt>merge</tt> analogous to the one above.
contributions_all = pd.merge(
    contributions,
    contributions_sum,
    left_on='path',
    right_on='path',
    suffixes=['', '_sum'])
contributions_all.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Identify knowledge hotspots
OK, here comes the key: We group all additions by the file paths and the authors. This gives us all the additions to a file per author. Additionally, we want to keep the sum of all additions as well as the information about the lines of code. Because those are contained in the <tt>DataFrame</tt> multiple times, we just get the first entry for each.
grouped_contributions = contributions_all.groupby(
    ['path', 'author']).agg(
    {'additions' : 'sum',
     'additions_sum' : 'first',
     'lines' : 'first'})
grouped_contributions.head(10)
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Now we are ready to calculate the knowledge "ownership". The ownership is the relative amount of additions to all additions of one file per author.
grouped_contributions['ownership'] = grouped_contributions['additions'] / grouped_contributions['additions_sum']
grouped_contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Having this data, we can now extract the author with the highest ownership value for each file. This gives us a list with the knowledge "holder" for each file.
ownerships = grouped_contributions.reset_index().groupby(['path']).max()
ownerships.head(5)
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Preparing the visualization
Reading tables is not as much fun as a good visualization. I find Adam Tornhill's suggestion of an enclosure or bubble chart very good:
<img src="https://pbs.twimg.com/media/C-fYgvCWsAAB1y8.jpg" style="width: 500px;"/>
Source: Thorsten Brunzendorf (@thbrunzendorf)
The visualization is written in D3 and just needs data in a specific format called "flare". So let's prepare some data for this!
First, we calculate the <tt>responsible</tt> author. We say that an author who contributed more than 70% of the source code is the responsible person that we have to ask if we want to know something about the code. For all the other code parts, we assume that the knowledge is distributed among different heads.
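The 70% rule can be expressed as a tiny helper (an illustration only; the notebook implements the same logic vectorized with pandas below, and the author names here are made up):

```python
def responsible_author(author, ownership, threshold=0.7):
    # a single author above the threshold is the person to ask about this file;
    # at or below it, knowledge counts as distributed ("None")
    return author if ownership > threshold else "None"

print(responsible_author("dev1", 0.85))  # dev1
print(responsible_author("dev2", 0.40))  # None
```

Note that the threshold comparison is strict, matching the `<= 0.7` filter in the pandas version.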
plot_data = ownerships.reset_index()
plot_data['responsible'] = plot_data['author']
plot_data.loc[plot_data['ownership'] <= 0.7, 'responsible'] = "None"
plot_data.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Next, we need some colors per author to be able to distinguish them in our visualization. We use the two classic data analysis libraries for this. We just draw some colors from a color map here for each author.
import numpy as np
from matplotlib import cm
from matplotlib.colors import rgb2hex

authors = plot_data[['author']].drop_duplicates()
rgb_colors = [rgb2hex(x) for x in cm.RdYlGn_r(np.linspace(0, 1, len(authors)))]
authors['color'] = rgb_colors
authors.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Then we combine the colors with the plot data and whiten the minor ownerships, i.e. all the <tt>None</tt> responsibilities.
colored_plot_data = pd.merge(
    plot_data,
    authors,
    left_on='responsible',
    right_on='author',
    how='left',
    suffixes=['', '_color'])
colored_plot_data.loc[colored_plot_data['responsible'] == 'None', 'color'] = "white"
colored_plot_data.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Visualizing
The bubble chart needs D3's flare format for displaying. We just dump the <tt>DataFrame</tt> data into this hierarchical format. As for the hierarchy, we use the Java source files' structure given by their directories.
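A hand-written miniature of the flare structure the conversion below produces (file names, sizes and colors here are invented): directories become inner nodes with a name and children, files become leaves with a size and a color.

```python
import json

flare = {
    "name": "flare",
    "children": [
        {"name": "src",
         "children": [
             {"name": "A.java [dev1,   0.85]", "size": 120, "color": "#1a9850"},
             {"name": "B.java [None,   0.60]", "size": 40, "color": "white"}
         ]}
    ]
}

# the D3 chart reads exactly this kind of nested JSON
print(json.dumps(flare, indent=3)[:30])
```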
import os
import json

json_data = {}
json_data['name'] = 'flare'
json_data['children'] = []

for row in colored_plot_data.iterrows():
    series = row[1]
    path, filename = os.path.split(series['path'])

    last_children = None
    children = json_data['children']

    for path_part in path.split("/"):
        entry = None
        for child in children:
            if "name" in child and child["name"] == path_part:
                entry = child
        if not entry:
            entry = {}
            children.append(entry)
        entry['name'] = path_part
        if not 'children' in entry:
            entry['children'] = []
        children = entry['children']
        last_children = children

    last_children.append({
        'name' : filename + " [" + series['responsible'] + ", " + "{:6.2f}".format(series['ownership']) + "]",
        'size' : series['lines'],
        'color' : series['color']})

with open("vis/flare.json", mode='w', encoding='utf-8') as json_file:
    json_file.write(json.dumps(json_data, indent=3))
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
These are additional imports we imagine you might like.
import scipy.stats as st
import emcee
import incredible as cr
from pygtc import plotGTC
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
1. Data
Let's arbitrarily use the first galaxy for this exercise - it's somewhere in the middle of the pack in terms of how many measured cepheids it contains. Even though we're only looking at one galaxy so far, let's try to write code that can later be re-used to handle any galaxy (so that we can fit all galaxies simultaneously). To that end, most functions will have an argument, g, which is a key into the data dictionary.
g = ngc_numbers[0]
g
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Print the number of cepheids in this galaxy:
data[g]['Ngal']
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
2. Model specification
Note: it isn't especially onerous to keep all the galaxies around for the "Model" and "Strategy" sections, but feel free to specialize to the single galaxy case if it helps.
Before charging forward, let's finish specifying the model. We previously said we would allow an intrinsic scatter about the overall period-luminosity relation - let's take that to be Gaussian, such that the linear relation sets the mean of the scatter distribution, and there is an additional parameter for the width (in magnitudes), $\sigma_i$, for the $i$th galaxy. (Note that normal scatter in magnitudes, which are log-luminosity, could also be called log-normal scatter in luminosity; these are completely equivalent.) We'll hold off on specifying hyperpriors for the distributions of intercepts, slopes and intrinsic scatters among galaxies.
Your previous PGM should need minimal if any modification, but make sure that all of the model parameters are represented:
- The observed apparent magnitude of the $j^{th}$ cepheid in the $i^{th}$ galaxy, $m^{\rm obs}_{ij}$
- The "true" apparent magnitude of the $j^{th}$ cepheid in the $i^{th}$ galaxy, $m_{ij}$
- The known observational uncertainty on the apparent magnitude of the $j^{th}$ cepheid in the $i^{th}$ galaxy, $\varepsilon_{ij}$
- The true absolute magnitude of the $j^{th}$ cepheid in the $i^{th}$ galaxy, $M_{ij}$
- The log period for the $j^{th}$ cepheid in the $i^{th}$ galaxy, $\log_{10}P_{ij}$
- The luminosity distance to the $i^{th}$ galaxy, $d_{L,i}$
- The intercept parameter of the period-luminosity relation in the $i^{th}$ galaxy, $a_{i}$
- The slope parameter of the period-luminosity relation in the $i^{th}$ galaxy, $b_{i}$
- The intrinsic scatter parameter about the period-luminosity relation in the $i^{th}$ galaxy, $\sigma_{i}$

TBC: new PGM
Also write down the probabilistic expressions represented in the PGM, with the exception of those for $a_i$, $b_i$ and $\sigma_i$, for which we still haven't chosen priors.
TBC: probabilistic relationships
For the remainder of this notebook, we will assume wide, uniform priors for $a_i$, $b_i$ and $\sigma_i$, but it's useful (for later) to have the model sketched out above without those assumptions.
3. Strategy
The hierarchical nature of this problem has left us with a large number of nuisance parameters, namely a true absolute magnitude for every one of the cepheids. The question now is: how are we going to deal with them? There are a few possibilities:
Sampling: We could take a brute force approach - just apply one of the general-purpose algorithms we've looked at and hope it works. Alternatively, while it might not be obvious, this problem (a linear model with normal distributions everywhere) is fully conjugate, given the right choice of prior. We could therefore use a conjugate Gibbs sampling code specific to the linear/Gaussian case (it's common enough that they exist) or a more general code that works out and takes advantage of any conjugate relations, given a model. (You could also work out and code up the conjugacies yourself, if you're into that kind of thing.) These are all still "brute-force" in the sense that they are sampling all the nuisance parameters, but we might hope for faster convergence than with a more generic algorithm.
Direct integration: If some parameters truly are nuisance parameters, in the sense that we don't care what their posteriors are, then we'll ultimately marginalize over them anyway. Rather than sampling the full-dimensional parameter space and then looking only at the marginal distributions we care about, we always have the option of sampling only the parameters we care about and, while evaluating their posterior, doing integrals over the nuisance parameters in some other way. In other words, we should remember that obtaining samples of a parameter is only one method of integrating over it.
Whether it makes sense to go this route depends on the structure of the model (and how sophisticated you care to make your sampler). Sometimes, sampling the nuisance parameters just like the parameters of interest turns out to be the best option. Other times, direct integration is much more efficient. And, of course, "direct integration" could take many forms, depending on the integrand: an integral might be analytic, or it might be best accomplished by quadrature or by monte carlo integration. The dimensionality of the integration (in particular, whether it factors into one- or at least low-dimensional integrals) is something to consider.
So, for this model, try to write down the posterior for $a_i$, $b_i$ and $\sigma_i$, marginalized over the $M_{ij}$ parameters. If you're persistent, you should find that the integral is analytic, meaning that we can reduce the sampling problem to a computationally efficient posterior distribution over just $a_i$, $b_i$ and $\sigma_i$, at the expense of having to use our brains. If you get super stuck, note that working this out is not a requirement for the notebook (see below), but I suspect it provides the most efficient solution overall. Hint: the gaussians notebook is helpful here.
TBC math
4. Obtain the posterior
Sample the posterior of $a_i$, $b_i$ and $\sigma_i$ for the one galaxy chosen above (i.e. a single $i$), and do the usual sanity checks and visualizations. Use "wide uniform" priors on $a$, $b$ and $\sigma$. In the subsections below, you'll get to do this 3 different ways! First you'll apply a generic sampler to the brute-force and analytic integration methods. Then we'll walk through using a Gibbs sampling package.
Hint: a common trick to reduce the posterior correlation between the intercept and slope parameters of a line is to reparametrize the model as $a+bx \rightarrow a' + b(x-x_0)$, where the "pivot" $x_0$ is roughly the mean of $x$ in the data.
You don't have to do this, but smaller correlations usually mean faster convergence. If you do, don't forget about the redefinition when visualizing/interpreting the results!
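As a self-contained numerical illustration of the "analytic integral" claim from the Strategy section above (this is generic Gaussian algebra, not the full worked answer): marginalizing a Gaussian measurement over a Gaussian latent value simply adds the two variances. All values below are arbitrary.

```python
import math

def norm_pdf(x, mu, sigma):
    # standard Gaussian density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def marginal_pdf(m, mu, sig, eps, n=4000, span=10.0):
    # numerically integrate N(m|M,eps) * N(M|mu,sig) over the latent M (trapezoid rule)
    lo, hi = mu - span * sig, mu + span * sig
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        M = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * norm_pdf(m, M, eps) * norm_pdf(M, mu, sig)
    return total * h

# the analytic result is another Gaussian whose variances simply add
analytic = norm_pdf(1.3, 0.5, math.sqrt(0.2**2 + 0.4**2))
numeric = marginal_pdf(1.3, 0.5, 0.4, 0.2)
print(analytic, numeric)
```

The two numbers agree to many digits, which is why the true-magnitude parameters can be integrated out of the likelihood exactly in this model.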
# find pivots (nb different for every galaxy, which is not what we'd want in a simultaneous analysis)
for i in ngc_numbers:
    data[i]['pivot'] = data[i]['logP'].mean()

# to avoid confusion later, reset all pivots to the same value
global_pivot = np.mean([data[i]['logP'].mean() for i in ngc_numbers])
for i in ngc_numbers:
    data[i]['pivot'] = global_pivot
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Here's a function to evaluate the mean relation, with an extra argument for the pivot point:
def meanfunc(x, xpivot, a, b):
    '''
    x is log10(period/days)
    returns an absolute magnitude
    '''
    return a + b*(x - xpivot)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
4a. Brute force sampling of all parameters
Attempt to simply sample all the parameters of the model. Let's... not include all the individual magnitudes in these lists of named parameters, though.
param_names = ['a', 'b', 'sigma']
param_labels = [r'$a$', r'$b$', r'$\sigma$']
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
I suggest starting by finding decent guesses of $a$, $b$, $\sigma$ by trial and error/inspection. For extra fun, choose values such that the model goes through the points, but isn't a great fit. This will let us see how well the sampler used below performs when it needs to find its own way to the best fit.
TBC(1) # guess = {'a': ...

guessvec = [guess[p] for p in param_names] # it will be useful to have `guess` as a vector also

plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], guess['a'], guess['b']))
plt.gca().invert_yaxis();
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
We'll provide the familiar skeleton of function prototypes below, with a couple of small changes. One is that we added an optional argument Mtrue to the log-prior - this allows the same prior function to be used in all parts of this exercise, even when the true magnitudes are not being explicitly sampled (the function calls in later sections would simply not pass anything for Mtrue). The log-posterior function is also generic, in the sense that it takes as an argument the log-likelihood function it should use.
Another difference is that we provide a function called logpost_vecarg_A ("A" referring to this part of the notebook) that takes a vector of parameters as input, ordered $a$, $b$, $\sigma$, $M_1$, $M_2$, ..., instead of a dictionary. This is for compatibility with the emcee sampler which is used below. (If you would like to use a different but still generic method instead, like HMC, go for it.)
# prior, likelihood, posterior functions for a SINGLE galaxy

# generic prior for use in all parts of the notebook
def log_prior(a, b, sigma, Mtrue=None):
    TBC()

# likelihood specifically for part A
def log_likelihood_A(gal, a, b, sigma, Mtrue):
    '''
    `gal` is an entry in the `data` dictionary; `a`, `b`, and `sigma` are scalars; `Mtrue` is an array
    '''
    TBC()

# generic posterior, again for all parts of the problem
def log_posterior(gal, loglike, **params):
    lnp = log_prior(**params)
    if lnp != -np.inf:
        lnp += loglike(gal, **params)
    return lnp

# posterior for part A, taking a parameter array argument for compatibility with emcee
def logpost_vecarg_A(pvec):
    params = {name:pvec[i] for i,name in enumerate(param_names)}
    params['Mtrue'] = pvec[len(param_names):]
    return log_posterior(data[g], log_likelihood_A, **params)

TBC_above()
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Here's a quick sanity check, which you can refine if needed:
guess_A = np.concatenate((guessvec, data[g]['M']))
logpost_vecarg_A(guess_A)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
The cell below will set up and run emcee using the functions defined above. We've made some generic choices, such as using twice as many "walkers" as free parameters, and starting them distributed according to a Gaussian around guess_A with a width of 1%.

IMPORTANT
You do not need to run this version long enough to get what we would normally consider acceptable results, in terms of convergence and number of independent samples. Just convince yourself that it's functioning, and get a sense of how it performs. Please do not turn in a notebook where the sampling cell below takes longer than $\sim30$ seconds to evaluate.
%%time

nsteps = 1000 # or whatever
npars = len(guess_A)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_A)
start = np.array([np.array(guess_A)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!')
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Let's look at the usual trace plots, including only one of the magnitudes since there are so many.
npars = len(guess)+1
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels+[r'$M_1$']);
npars = len(guess_A)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Chances are this is not very impressive. But we carry on, to have it as a point of comparison. The cell below will print out the usual quantitative diagnostics.
TBC() # burn = ...
# maxlag = ...

tmp_samples = [sampler.chain[i,burn:,:4] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
print("Plus, there's a good chance that the results in this section are garbage...")
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Finally, we'll look at a triangle plot.
samples_A = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_A[:,:4]], paramNames=param_labels+[r'$M_1$'], chainLabels=['emcee/brute'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
We should also probably look at how well the fitted model matches the data, qualitatively.
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samples_A[:,1].mean()), label='emcee/brute')
plt.gca().invert_yaxis();
plt.legend();
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
4b. Sampling with analytic marginalization

Next, implement sampling of $a$, $b$, $\sigma$ using your analytic marginalization over the true magnitudes. Again, the machinery to do the sampling is below; you only need to provide the log-posterior function.
def log_likelihood_B(gal, a, b, sigma):
    '''
    `gal` is an entry in the `data` dictionary; `a`, `b`, and `sigma` are scalars
    '''
    TBC()

def logpost_vecarg_B(pvec):
    params = {name:pvec[i] for i,name in enumerate(param_names)}
    return log_posterior(data[g], log_likelihood_B, **params)

TBC_above()
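For orientation (this is a generic sketch, not necessarily the notebook's intended solution): when both the intrinsic scatter about the mean relation and the measurement errors are Gaussian, marginalizing each true magnitude analytically simply adds the two variances. A Python 3 sketch with illustrative argument names — not the notebook's `gal` data structure:

```python
import numpy as np

def log_like_marginalized(logP, pivot, M, merr, a, b, sigma):
    """Gaussian likelihood with the true magnitudes integrated out.

    Assumes the mean relation a + b*(logP - pivot), Gaussian intrinsic
    scatter sigma, and per-point Gaussian measurement errors merr; the
    marginalization inflates each point's variance to sigma^2 + merr^2.
    """
    mean = a + b * (logP - pivot)
    var = sigma**2 + merr**2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (M - mean)**2 / var)

# toy evaluation with made-up numbers
logP = np.array([1.0, 1.5])
M = np.array([-4.0, -5.0])
merr = np.array([0.1, 0.2])
print(log_like_marginalized(logP, pivot=1.2, M=M, merr=merr, a=-4.5, b=-2.0, sigma=0.3))
```

Because the sum has one term per data point rather than one free parameter per data point, the sampler only needs to explore a 3-dimensional space.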
Check for NaNs:
logpost_vecarg_B(guessvec)
Again, we run emcee below. Anticipating an improvement in efficiency, we've increased the default number of steps. Unlike last time, you should run long enough to have useful samples in the end.
%%time
nsteps = 10000
npars = len(param_names)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_B)
start = np.array([np.array(guessvec)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!')
Again, trace plots. Note that we no longer get a trace of the magnitude parameters. If we really wanted a posterior for them, we would now need to do extra calculations.
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:], ax, labels=param_labels);
Again, $R$ and $n_\mathrm{eff}$.
TBC()
# burn = ...
# maxlag = ...
tmp_samples = [sampler.chain[i,burn:,:] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
Now, let's compare the posterior from this analysis to the one we got before:
samples_B = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_A[:,:3], samples_B], paramNames=param_labels, chainLabels=['emcee/brute', 'emcee/analytic'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Checkpoint: Your posterior is compared with our solution by the cell below. Note that we used the `global_pivot` defined above. If you did not, your constraints on $a$ will differ due to this difference in definition, even if everything else is correct.
sol = np.loadtxt('solutions/ceph1.dat.gz')
plotGTC([sol, samples_B], paramNames=param_labels, chainLabels=['solution', 'my emcee/analytic'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Moving on, look at how the two fits you've done compare visually:
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samples_A[:,1].mean()), label='emcee/brute')
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_B[:,0].mean(), samples_B[:,1].mean()), label='emcee/analytic')
plt.gca().invert_yaxis();
plt.legend();
Comment on things like the efficiency, accuracy, and/or utility of the two approaches.

TBC commentary

4c. Conjugate Gibbs sampling

Finally, we'll step through using a specialized Gibbs sampler to solve this problem. We'll use the LRGS package, not because it's the best option (it isn't), but because it's written in pure Python. The industry-standard (and far less specialized) alternative goes by the name JAGS, and requires a separate installation (though one can add a Python interface on top of that). Let me stress that LRGS is in no fashion optimized for speed; JAGS is presumably faster, not to mention applicable to more than just fitting lines. Even so, LRGS seems to be comparable in speed with our analytically supercharged emcee in this case, when one considers that the samples it returns are less correlated.
import lrgs
LRGS is a "general" linear model fitter, meaning that $x$ and $y$ can be multidimensional. So the input data are formatted as matrices with one row for each data point. In this case, they're column vectors ($n\times1$ matrices). Measurement uncertainties are given as a list of covariance matrices. The code handles errors on both $x$ and $y$, so these are $2\times2$ for us. Since our $x$'s are given precisely, we just put in a dummy value here and use a different option to fix the values of $x$ below.
x = np.asmatrix(data[g]['logP'] - data[g]['pivot']).T
y = np.asmatrix(data[g]['M']).T
M = [np.matrix([[1e-6, 0], [0, err**2]]) for err in data[g]['merr']]
Conjugate Gibbs sampling can be parallelized in the simplest possible way - you just run multiple chains from different starting points or even just with different random seeds in parallel. (emcee is parallelized internally, since walkers need to talk to each other.) Therefore...
import multiprocessing
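To make the "embarrassingly parallel" pattern concrete before we wire it up to LRGS, here is a minimal, self-contained sketch (Python 3). It uses a thread pool so the snippet runs anywhere without a `__main__` guard; the actual cell below uses `multiprocessing.Pool` for true parallelism. `run_chain` is a made-up stand-in for one independent sampler:

```python
from multiprocessing.pool import ThreadPool  # stand-in for multiprocessing.Pool in this sketch
import random

def run_chain(seed):
    """Stand-in for one independent MCMC chain; each worker gets its own seed."""
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(100)]

# map the chain-running function over a list of seeds, one per worker
with ThreadPool(2) as pool:
    chains = pool.map(run_chain, range(2))
print(len(chains), len(chains[0]))  # 2 100
```

The key point, echoed in `do_gibbs` below, is that each worker must get its own random seed; otherwise forked processes can produce identical "independent" chains.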
This function sets things up and does the actual sampling, returning a Numpy array in the usual format. The default priors are equivalent to the ones we chose above, helpfully.
nsteps = 2000 # some arbitrary number of steps to run

def do_gibbs(i):
    # every parallel process will have the same random seed if we don't reset them here
    if i > 0:
        np.random.seed(i)
    # lrgs.Parameters sets up a sampler that assumes the x's are known precisely.
    # Other classes would correspond to different possible priors on x.
    par = lrgs.Parameters(x, y, M)
    chain = lrgs.Chain(par, nsteps)
    chain.run(fix='x') # fix='x' isn't necessary here, but it shows how one would fix other parameters if we wanted to
    # Extract the chain as a dictionary. Note that we have the option of hanging onto the samples of the magnitude
    # parameters in addition to the intercept, slope and scatter, though this is not the default.
    dchain = chain.to_dict(["B", "Sigma"])
    # since $\sigma^2$ is sampled rather than $\sigma$, take the square root here
    return np.array([dchain['B_0_0'], dchain['B_1_0'], np.sqrt(dchain['Sigma_0_0'])]).T
Go!
%%time
with multiprocessing.Pool() as pool:
    gibbs_samples = pool.map(do_gibbs, range(2)) # 2 parallel processes - change if you want
Show!
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(gibbs_samples, ax, labels=param_labels);

burn = 50
maxlag = 1000
tmp_samples = [x[burn:,:] for x in gibbs_samples]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
Here are the posteriors:
samples_C = np.concatenate(tmp_samples, axis=0)
plotGTC([samples_A[:,:3], samples_B, samples_C], paramNames=param_labels,
        chainLabels=['emcee/brute', 'emcee/analytic', 'LRGS/Gibbs'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Again, look at the fit compared with the other methods:
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samples_A[:,1].mean()), label='emcee/brute')
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_B[:,0].mean(), samples_B[:,1].mean()), label='emcee/analytic')
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_C[:,0].mean(), samples_C[:,1].mean()), '--', label='LRGS/Gibbs')
plt.gca().invert_yaxis();
plt.legend();
Finishing up

There's yet more fun to be had in the next tutorial, so let's again save the definitions and data from this session.
del pool # cannot be pickled
TBC() # change path below if desired
# dill.dump_session('../ignore/cepheids_one.db')
5. Functionals

Lambda

lambda creates a small, anonymous function (a function without a name). Lambdas are disposable: they are used only where they are created.

Map

map() applies a function to each item in a sequence (list, tuple, set). The syntax is map(function, sequence-of-items). A great use case for lambda.

Filter

filter() returns the items of a sequence (list, tuple, set) that satisfy a given condition. The syntax is filter(condition, sequence-of-items). A great use case for lambda.

Note: comprehensions can often be used in place of map() and filter().

Reduce

reduce() repeatedly applies a function to a sequence of items (list, tuple, set), collapsing them down to a single value. The syntax is reduce(function, sequence-of-items). A great use case for lambda.
celsius = [39.2, 36.5, 37.3, 37.8]
fahrenheit = map(lambda x: (float(9)/5)*x + 32, celsius)
fahrenheit

fib = [0,1,1,2,3,5,8,13,21,34,55]
filter(lambda x: x % 2 == 0, fib)

f = lambda x,y: x if (x > y) else y
reduce(f, [47,11,42,102,13])
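The cell above is written for Python 2, where map and filter return lists and reduce is a builtin. In Python 3, map and filter return lazy iterators and reduce lives in functools; the same pipeline looks like this:

```python
from functools import reduce  # reduce is no longer a builtin in Python 3

celsius = [39.2, 36.5, 37.3, 37.8]
fahrenheit = list(map(lambda c: (9 / 5) * c + 32, celsius))  # materialize the iterator
print(fahrenheit)

fib = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
evens = list(filter(lambda x: x % 2 == 0, fib))
print(evens)  # [0, 2, 8, 34]

largest = reduce(lambda x, y: x if x > y else y, [47, 11, 42, 102, 13])
print(largest)  # 102
```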
python-review.ipynb
dsnair/data_science
gpl-3.0
6. Unpacking Lists, Tuples & Sets

With an iterable, such as a list or tuple: `args = (1, 2, 3)` makes `f(*args)` equivalent to `f(1, 2, 3)`.

```python
print range(*(0, 5))
print range(*[0, 5, 2])  # range(start, end, step)
```

With a mapping, such as a dict: `kwargs = {'a':1, 'b':2, 'c':3}` makes `f(**kwargs)` equivalent to `f(a=1, b=2, c=3)`.

```python
dic = {'a':1, 'b':2}

def myFunc(a=0, b=0, c=0):
    print(a, b, c)

myFunc(**dic)
```
a, b = [1, 2]
c, d = (3, 4)
e, f = {5, 6}
print a, b, c, d, e, f
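The same unpacking ideas in Python 3 syntax, plus the extended `first, *middle, last` form that Python 3 adds; `area` here is a made-up helper purely for illustration:

```python
def area(width, height, scale=1):
    """Hypothetical helper just to demonstrate argument unpacking."""
    return width * height * scale

dims = (3, 4)            # positional arguments, unpacked with *
opts = {'scale': 2}      # keyword arguments, unpacked with **
print(area(*dims, **opts))  # 24

# extended unpacking: the starred name soaks up the middle of the sequence
first, *middle, last = [1, 2, 3, 4, 5]
print(first, middle, last)  # 1 [2, 3, 4] 5
```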
B. Common Functions

1. List

- `[]` creates an empty list; `[[1, 2], [3, 4]]` creates a nested list
- `list[start:stop:step]` slices
- `list1 + list2` concatenates
- `item in list` checks membership
- `.append(item)` appends item to the end of the list
- `.remove(item)` removes the first occurrence of item from the list
- `.insert(index, item)` inserts item at the specified index
- `.pop(index)` inverse of insert: removes and returns the item at index
- `.sort()` sorts in place
- `.reverse()` reverses in place

2. Tuple

- `()` creates an empty tuple; `1, 2` creates a tuple; `((1, 2), (3, 4))` creates a nested tuple
- `tup[start:stop:step]` slices
- `tup1 + tup2` concatenates
- `item in tup` checks membership

3. Set

- `set()` creates an empty set (sets cannot be nested, since sets themselves are unhashable; use `frozenset` for that)
- `1 in {1, 2}` checks membership
- `set1.issubset(set2)` $A \subseteq B$
- `set1.issuperset(set2)` $B \subseteq A$
- `set1 == set2` tests equality
- `set1 | set2` $A \cup B$
- `set1 & set2` $A \cap B$
- `set1 - set2` $A - B$: elements in set1, but NOT in set2
- `set1 ^ set2` $(A \cup B) - (A \cap B)$: symmetric difference (xor)
- `.add(item)` adds item to the set
- `.remove(item)` removes item from the set
- `set1.update(set2)` appends set2 items to set1

4. Dictionary

- `{}` creates an empty dictionary; `{'a': {1: 'x'}, 'b': {2: 'y'}}` creates a nested dictionary
- `dict(zip(keys, values))` creates a dict from iterable (list, tuple, set) keys and values; `zip` creates a list of $(k, v)$ tuples
- `d[key]` returns the value for key
- `d[key] = value` inserts/updates the value for key
- `.keys()` produces the list of keys
- `.values()` produces the list of values
- `key in d` checks if key exists
- `del d[key]` deletes the key-value pair from the dictionary
- `d1.update(d2)` appends d2 key-value pairs to d1

5. String

- `''` or `""` or `''' '''` creates an empty string; triple quotes allow line breaks
- `string[start:stop:step]` slices
- `str1 + str2` concatenates
- `'s' in 'string'` checks membership
- `.lower()` converts to lowercase
- `.upper()` converts to uppercase
- `str.split(delimiter=" ")` splits str by delimiter, defaulting to a space
- `separator.join(iterable)` concatenates the strings in iterable (list, tuple, string, dictionary, set) with the separator string
- `str.replace(old_string, new_string)` replaces all occurrences of old_string with new_string
- `str.strip(chars)` removes any of the characters in chars from the beginning and the end of str; `str.lstrip(chars)` removes them only from the beginning; `str.rstrip(chars)` only from the end
- `str.startswith(string)` and `str.endswith(string)` check whether str starts or ends with string
- `str.swapcase()` swaps uppercase with lowercase and vice-versa
- `str.count(substring)` counts how many times substring occurs in str

C. Exercises

Given a list of integers and an integer $d$, build a multiplication table. Example: [[1, 2, 3], [2, 4, 6], [3, 6, 9]]
li = range(1, 4)
d = 3
[[l*i for l in li] for i in range(1, d+1)]
Given a list of integers, return a list that only contains even integers from even indices. Example: If list[2]=2, include. If list[3]=2, exclude
li = [1,3,5,8,10,13,18,36,78]
[x for x in li[::2] if x%2 == 0]
Given a sequence of integers and an integer $d$, perform d-left rotations on the list. Example: If d=2 and list=[1,2,3,4,5], then 2-left rotation is [3,4,5,1,2]
li = range(1, 6)
d = 2
li[d:] + li[:d]
Given three integers $x$, $y$ and $d$, create ordered pairs $(i,j)$ such that $0 \le i \le x$, $0 \le j \le y$, and $i+j \ne d$.
x = 2
y = 3
d = 4
list1 = range(0, x+1)
list2 = range(0, y+1)
[(i,j) for i in list1 for j in list2 if i+j != d]
Capitalize the first letter in first and last names. Example: Convert "divya nair" to "Divya Nair"
"divya nair".title() #"divya nair".capitalize() name = "divya nair" li = name.split(" ") li = [li[i].capitalize() for i in range(0, len(li))] " ".join(li)
Given a string and a natural number $d$, wrap the string into a paragraph of width $d$.
s = "ABCDEFGHIJKLIMNOQRSTUVWXYZ" d = 4 i = 0 while (i < len(s)): print s[i:i+d] i = i+d
Find all the vowels in a string.
string = "divya nair" vowels = ["a", "e", "i", "o", "u"] [char for char in string if char in vowels]
Calculate $n!$
n = 3
reduce(lambda x,y: x*y, range(1, n+1))
Given a sequence of $n$ integers, calculate the sum of its elements.
import random

size = 3
# Generate 3 random numbers between [1, 10]
numList = [random.randint(1,10) for i in range(0, size)]
print numList
"The sum is {}".format(reduce(lambda x,y: x+y, numList))

# Shuffle item position in numList
random.shuffle(numList)
numList

# Randomly sample 2 items from numList
print random.sample(numList, 2)
Given a sequence of $n$ integers, $a_0, a_1, \cdots, a_{n-1}$, and a natural number $d$, print all pairs of $(i, j)$ such that $a_i + a_j$ is divisible by $d$.
n = 6
d = 3
a = [1, 3, 2, 6, 1, 2]
# use j > i so that each pair is counted once and an element is not paired with itself
sums = [(a[i] + a[j], i, j) for i in range(0, len(a)) for j in range(i+1, len(a))]
[(s[1], s[2]) for s in sums if s[0]%d==0]
Return $n$ numbers in the Fibonacci sequence.
fib = [0,1]
n = 6
# the seed list already holds 2 numbers, so append n-2 more
for i in range(0, n-2):
    fib.append(fib[i] + fib[i+1])
fib
Given 4 coin denominations (1c, 5c, 10c, 25c) and a dollar amount, express that amount using the fewest coins. Example: $2.12 = (25c, 8), (10c, 1), (1c, 2)
amount = 0.50
dollar, cent = str(amount).split(".")
cent = cent+'0' if len(cent)==1 else cent # ternary operation
amount = int(dollar+cent)
deno = [25, 10, 5, 1]
coins = []
for d in deno:
    coins.append(amount/d)
    amount = amount%d
print '{} quarters, {} dimes, {} nickels, {} pennies'.format(*coins) # unpacking list
Output the first $d$ rows of Pascal's triangle, where $2 \le d \le 10$.
row = [1]
print row
d = 4
# the first row is already printed, so build and print d-1 more
while (len(row) < d):
    row = [1] + [row[i] + row[i+1] for i in range(0, len(row)-1)] + [1]
    print row
At bus stop #1, 10 passengers got on the bus and 6 passengers got off the bus. Then, at stop #2, 5 passengers got on and 4 passengers got off. How many passengers are remaining in the bus?
stops = [[10, 6], [5, 4]]
remaining = [x[0]-x[1] for x in stops]
reduce(lambda x,y: x+y, remaining)
Find the 2nd largest number in the list.
num = [2, 3, 6, 6, -5]
noRep = list(set(num))
noRep.sort()
noRep.pop(-2)
Draw a rectangle of a given length and width.
length = 2
width = 3
for l in range(0, length):
    print '* ' * width
Calculate the sum of all the odd numbers in a list.
num = [0, 1, 2, -3]
print num
odd = filter(lambda x: x%2 != 0, num)
print odd
reduce(lambda x,y: x+y, odd)