The complete data set can be described using the traditional statistical descriptors:
# Calculate some useful statistics showing how the data is distributed
data.describe()
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Exercise 1 Based on the descriptive statistics above, how would you summarize this data for the Board in a few sentences? Step 2: Define the Task You Want to Accomplish The tasks that are possible to accomplish in a machine learning problem depend on how you slice up the dataset into features (inputs) and the target (o...
# Here are the input values
# Number of columns in our dataset
cols = data.shape[1]
# Inputs are in the first column - indexed as 0
X = data.iloc[:, 0:cols-1]
# Alternatively, X = data['Population']
print("Number of columns in the dataset {}".format(cols))
print("First few inputs\n {}".format(X.head()))
# The last few...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Step 2b: Identify the Output The output is annual restaurant profit. For each value of the input we have a value for the output. Keep in mind that each value is in \$10,000s. So multiply the value you see by \$10,000 to get the actual annual profit for the restaurant. Let's look at some of these output values.
# Here are the output values
# Outputs are in the second column - indexed as 1
y = data.iloc[:, cols-1:cols]
# Alternatively, y = data['Profits']
# See a sample of the outputs
y.head()
# Last few items of the output
y.tail()
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Once we've identified the inputs and the output, the task is easy to define: given the inputs, predict the output. So in this case, given a town's population, predict the profit a restaurant would generate. Step 3: Define the Model As we saw in the Nuts and Bolts session, a model is a way of transforming the inputs i...
# A Handful of Penalty Functions
# Generate the error range
x = np.linspace(-10,10,100)
[penaltyPlot(x, pen) for pen in penaltyFunctions.keys()];
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Exercise 3 Does the penalty we've chosen make sense? Convince yourself of this and write a paragraph explaining why it makes sense.
penalty(X,y,[-10, 1], VPenalty)
penalty(X,y,[-10, 1], invertedVPenalty)
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
SIDEBAR - How the Penalty is Usually Written The cost of getting it wrong is defined as a function $J(W)$: $$J(W) = \frac{1}{2m} \sum_{i=1}^{m} \left(h_{W}(x^{(i)}) - y^{(i)}\right)^2$$ What we're saying here: For each input, transform it using $w_{0}$ and $w_{1}$. This will give us a number. Subtract from this number the actual ...
# Visualize what np.meshgrid does when used with plot
w0 = np.linspace(1,5,5)
w1 = np.linspace(1,5,5)
W0, W1 = np.meshgrid(w0,w1)
plt.plot(W0, W1, marker='*', color='g', linestyle='none');

# Plot the cost surface
# From https://stackoverflow.com/questions/9170838
# See Also: Helpful matplotlib tutorial at
# http://j...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
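To make the sidebar concrete, here is a minimal sketch of that squared-error cost; `cost`, `X_vals`, and `y_vals` are hypothetical names standing in for the notebook's own `penalty` helper:

```python
import numpy as np

def cost(X_vals, y_vals, w0, w1):
    """Squared-error cost J(W) = 1/(2m) * sum((h_W(x_i) - y_i)^2)."""
    m = len(X_vals)
    predictions = w0 + w1 * np.asarray(X_vals)   # h_W(x) for each input
    errors = predictions - np.asarray(y_vals)    # predicted minus actual
    return np.sum(errors**2) / (2 * m)

# A perfect fit (y = 1 + 2x) has zero cost
print(cost([1, 2, 3], [3, 5, 7], w0=1, w1=2))  # 0.0
```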
Step 5: Find the Parameter Values that Minimize the Cost The cost function might have a minimum but how can we possibly find it? We can't use the brute force method of choosing every possible combination of values for $w_{0}$ and $w_{1}$ -- there are an infinite number of combinations and we'll never finish our task. T...
# Initialize the parameter values W and pick the penalty function
W_init = [1,-1.0]
penalty_function = squaredPenalty

# Test out the penalty function in the Shared-Functions notebook
penalty(X, y, W_init, penalty_function)

# Test out the gradientDescent function in the Shared-Functions notebook
gradientDescent(X, y, ...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
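For reference, a minimal standalone sketch of the batch gradient descent update rule for this two-parameter model; the notebook's real `gradientDescent` lives in the Shared-Functions notebook, so all names below are hypothetical:

```python
import numpy as np

def gradient_descent(X_vals, y_vals, w, num_iters, learning_rate):
    """Repeatedly step (w0, w1) against the gradient of the squared-error cost."""
    X_vals = np.asarray(X_vals, dtype=float)
    y_vals = np.asarray(y_vals, dtype=float)
    m = len(X_vals)
    w0, w1 = w
    for _ in range(num_iters):
        errors = (w0 + w1 * X_vals) - y_vals               # h_W(x_i) - y_i
        w0 -= learning_rate * errors.sum() / m             # dJ/dw0
        w1 -= learning_rate * (errors * X_vals).sum() / m  # dJ/dw1
    return w0, w1

# Converges toward (1, 2) for data generated by y = 1 + 2x
print(gradient_descent([1, 2, 3], [3, 5, 7], [0.0, 0.0], 5000, 0.05))
```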
Run the iterative gradient descent method to determine the optimal parameter values.
# Set hyper-parameters
num_iters = 50  # number of iterations
learning_rate = 0.0005  # the learning rate

# Run gradient descent and capture the progression
# of cost values and the ultimate optimal W values
%time W_opt, final_penalty, running_w, running_penalty = gradientDescent(X, y, W_init, num_iters, learning_rate...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
We can see that the Ws are changing even after 5000 iterations...but at the 4th decimal place. Similarly, the penalty is changing (decreasing) in the 100s place. How cost changes as the number of iterations increases
# How the penalty changes as the number of iterations increase
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(np.arange(num_iters), running_penalty, 'g')
ax.set_xlabel('Number of Iterations')
ax.set_ylabel('Cost')
ax.set_title('Cost vs. Iterations Over the Dataset for a Specific Learning Rate');

np.array(running_w).fla...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Exercise 4 Experiment with different values of alpha, W, and iters. Write down your observations. Step 6: Use the Model and Optimal Parameter Values to Make Predictions Let's see how our optimal parameter values can be used to make predictions.
W_opt[0,0], W_opt[1,0]

# Create 100 equally spaced values going from the minimum value of population
# to the maximum value of the population in the dataset.
x = np.linspace(data.Population.min(), data.Population.max(), 100)
f = (W_opt[0, 0] * 1) + (W_opt[1, 0] * x)

fig, ax = plt.subplots(figsize=(8,5))
ax.plot(x, f...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Experimenting with Hyperparameters Balancing Learning Rate with Number of Iterations Learning Rate - The Intuition <img src="../Images/gradient-descent-intuition.png" alt="Under- and Overshoot in Learning Size" style="width: 600px;"/>
# How predictions change as the learning rate and the
# number of iterations are changed
learning_rates = [0.001, 0.009]
epochs = [10, 500]  # epoch is another way of saying num_iters

# All combinations of learning rates and epochs
from itertools import permutations
combos = [list(zip(epochs, p)) for p in permutation...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Exercise 5 In the plot above what do you observe about the way in which the learning rate and the number of iterations determine the "prediction line" that is learned? Exercise 6 Now you can make predictions of profit based on your data. What are the predicted profits for populations of 50,000, 100,000, 160,000, and...
# We're using the optimal W values obtained when the learning rate = 0.001
# and the number of iterations = 500
predictions = [(W_values[3][0] * 1) + (W_values[3][1] * pop) for pop in [50000, 100000, 160000, 180000]]

# Get into the right form for printing
preds = np.array(predictions).squeeze()
print(['${:5,.0f}'.fo...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
Learning from Experience Exercise 7 What happens to the optimal values of W if we use just a quarter of the dataset? What happens if we now use half of the training set? How does this relate to Tom Mitchell's definition of machine learning?
# We'll use the values of num_iters and learning_rate defined above
print("num_iters: {}".format(num_iters))
print("learning rate: {}".format(learning_rate))

# Vary the size of the dataset
dataset_sizes = [2, 5, 10, 25, 50, len(X)]
gdResults = [gradientDescent(X[0:dataset_sizes[i]], y[0:dataset_sizes[i]], \
    ...
Notebooks/Regression-with-a-Single-Feature.ipynb
jsub10/MLCourse
mit
OpenStreetMap OSM file object classifier and extractor, by openthings@163.com, 2016-03-21. What it does: writes three JSON files with one record per line, ready for use in Spark, where they can be read directly via spark's sc.read.json(). The tool classifies the contents of an .osm file by tag in a single pass, converting them directly into three line-oriented JSON files for node/way/relation. Notes: Spark processes input line by line by default, which makes XML, and especially multi-line XML such as the OpenStreetMap OSM format, awkward to handle. For XML files too large to load fully into memory, a preprocessing step into a line-per-record format is needed before distributed loading in Spark. Future work: 1. Map the coordinates of a way's nd nodes, ...
import os
import time
import json
from pprint import *
import lxml
from lxml import etree
import xmltodict, sys, gc
from pymongo import MongoClient

gc.enable()  # Enable garbage collection

# Extract each element with the given tag and write it to a JSON file.
def process_element(elem):
    elem_data = etree.tostring(elem)
    elem_dict = xmltodict.parse(elem_dat...
geospatial/openstreetmap/osm-extract2json.ipynb
supergis/git_notebook
gpl-3.0
Run the osm XML-to-JSON conversion; a single scan extracts all three files. The tag parameter in context = etree.iterparse(osmfile, tag=["node","way"]) must be given a value, otherwise every sub-element that comes back is None. Three open global files are used: fnode, fway, frelation.
# maxline = 0  # For sampling/debugging: the maximum number of objects to convert; set to 0 to convert the entire file.
def transform(osmfile, maxline=0):
    ISOTIMEFORMAT = "%Y-%m-%d %X"
    print(time.strftime(ISOTIMEFORMAT), ", Process osm XML...", osmfile, " =>MaxLine:", maxline)
    global fnode
    global fway
    global frelation
    fnode = open(osmfile + "_node.json", "w+")
    fway ...
geospatial/openstreetmap/osm-extract2json.ipynb
supergis/git_notebook
gpl-3.0
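As a minimal sketch of the streaming pattern described above (the file name is hypothetical; only the iterparse mechanics are shown), passing tag= and clearing processed elements is what keeps memory bounded:

```python
from lxml import etree

# Stream an OSM file element by element instead of loading it whole.
# The tag filter must be supplied, as the notebook warns; without it the
# matched sub-elements come back empty.
for event, elem in etree.iterparse("sample.osm", tag=("node", "way", "relation")):
    print(elem.tag, elem.get("id"))
    # Free the element (and already-processed siblings) to keep memory flat
    elem.clear()
    while elem.getprevious() is not None:
        del elem.getparent()[0]
```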
Run the conversion.
# Name of the osm file to process; change as needed.
osmfile = '../data/osm/muenchen.osm'
transform(osmfile, 0)
geospatial/openstreetmap/osm-extract2json.ipynb
supergis/git_notebook
gpl-3.0
Data preparation Load raw data
filename = 'FIWT_Exp050_20150612160930.dat.npz'

def loadData():
    # Read and parse raw data
    global exp_data
    exp_data = np.load(filename)
    # Select columns
    global T_cmp, da1_cmp, da2_cmp, da3_cmp, da4_cmp
    T_cmp = exp_data['data33'][:,0]
    da1_cmp = exp_data['data33'][:,3]
    da2_cmp = exp_data[...
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
Input $\delta_T$ and focused time ranges
# Pick up focused time ranges
time_marks = [
    [28.4202657857, 88.3682684612, "ramp cmp1 u"],
    [90.4775063395, 150.121612502, "ramp cmp1 d"],
    [221.84785848, 280.642700604, "ramp cmp2 u"],
    [283.960460465, 343.514106772, "ramp cmp2 d"],
    [345.430891556, 405.5707005, "ramp cmp3 u"],
    [408.588176529, 468.175897568, "ramp cmp3 d"],
    [541.713...
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
Define dynamic model to be estimated $$\left\{\begin{aligned} M_{x,rig} &= M_{x,a} + M_{x,f} + M_{x,cg} = 0 \\ M_{x,a} &= \frac{1}{2} \rho V^2 S_c b_c C_{la,cmp}\,\delta_{a,cmp} \\ M_{x,f} &= -F_c \,\mathrm{sign}(\dot{\phi}_{rig}) \\ M_{x,cg} &= -m_T g l_{zT} \sin\left( \phi - \phi_0 \right) \end{aligned}\right.$$ ...
%%px --local
# update common const parameters in all engines
angles = range(-40,41,5)
angles[0] -= 1
angles[-1] += 1
del angles[angles.index(0)]
angles_num = len(angles)

# problem size
Nx = 0
Nu = 4
Ny = 1
Npar = 4*angles_num+1

# reference
S_c = 0.1254  # S_c(m2)
b_c = 0.7  # b_c(m)
g = 9.81  # g(m/s2)

# static...
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
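To see how the terms in this balance combine, here is a small sketch evaluating the three moments; every numerical value passed in below (air density, speed, lift-slope coefficient, friction force, mass and offset) is a hypothetical placeholder, not an estimate from the notebook:

```python
import numpy as np

S_c, b_c, g = 0.1254, 0.7, 9.81  # reference area (m2), span (m), gravity (m/s2)

def rig_roll_moments(rho, V, C_la, delta_a, F_c, phi_dot, m_T, l_zT, phi, phi0):
    """Return (M_x_a, M_x_f, M_x_cg); at rig equilibrium their sum is zero."""
    M_x_a = 0.5 * rho * V**2 * S_c * b_c * C_la * delta_a  # aerodynamic moment
    M_x_f = -F_c * np.sign(phi_dot)                        # friction moment
    M_x_cg = -m_T * g * l_zT * np.sin(phi - phi0)          # gravity offset moment
    return M_x_a, M_x_f, M_x_cg

# Hypothetical values, purely to exercise the function
print(rig_roll_moments(1.225, 30.0, 0.05, 0.1, 0.02, 0.5, 1.2, 0.01, 0.05, 0.0))
```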
Initial guess Input default values and ranges for parameters Select sections for training Adjust parameters based on simulation results Decide start values of parameters for optimization
# initial guess
param0 = [1]*(4*angles_num) + [0]
param_name = ['k_{}_{}'.format(i/angles_num+1, angles[i%angles_num]) for i in range(4*angles_num)] + ['$phi_0$']
param_unit = ['1']*(4*angles_num) + ['$rad$']
NparID = Npar
opt_idx = range(Npar)
opt_param0 = [param0[i] for i in opt_idx]
par_del = [0.001]*(4*angles_num) +...
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
Show and test results
display_opt_params()

# show result
idx = range(len(time_marks))
display_data_for_test();
update_guess();

res_params = res['x']
params = param0[:]
for i,j in enumerate(opt_idx):
    params[j] = res_params[i]
k1 = np.array(params[0:angles_num])
k2 = np.array(params[angles_num:angles_num*2])
k3 = np.array(params[angle...
workspace_py/RigStaticRollId-Exp50-2.ipynb
matthewzhenggong/fiwt
lgpl-3.0
When re-running the above cell you will see the output tensorflow==2.5.0, which is the installed version of TensorFlow.
import tensorflow as tf
import numpy as np

print(tf.__version__)
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The equivalent code in TensorFlow consists of two steps: <p> <h3> Step 1: Build the graph </h3>
tf.compat.v1.disable_eager_execution()  # Need to disable eager in TF2.x
a = tf.compat.v1.constant([5, 3, 8])
b = tf.compat.v1.constant([3, -1, 2])
c = tf.add(a, b, name='c')
print(c)
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
c is an Op ("Add") that returns a tensor of shape (3,) and holds int32. The shape is inferred from the computation graph. Try the following in the cell above: <ol> <li> Change the 5 to 5.0, and similarly the other five numbers. What happens when you run this cell? </li> <li> Add an extra number to a, but leave b at the...
sess = tf.compat.v1.Session()
# Evaluate the tensor c.
print(sess.run(c))
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Using a feed_dict </h2> Same graph, but without hardcoding inputs at build stage
a = tf.compat.v1.placeholder(tf.int32, name='a')
b = tf.compat.v1.placeholder(tf.int32, name='b')
c = tf.add(a, b, name='c')
sess = tf.compat.v1.Session()
print(sess.run(c, feed_dict={
    a: [3, 4, 5],
    b: [-1, 2, 3]
}))
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Heron's Formula in TensorFlow </h2> The area of a triangle whose three sides are $(a, b, c)$ is $\sqrt{s(s-a)(s-b)(s-c)}$ where $s=\frac{a+b+c}{2}$ Look up the available operations at https://www.tensorflow.org/api_docs/python/tf
def compute_area(sides):
    # slice the input to get the sides
    a = sides[:,0]  # 5.0, 2.3
    b = sides[:,1]  # 3.0, 4.1
    c = sides[:,2]  # 7.1, 4.8

    # Heron's formula
    s = (a + b + c) * 0.5  # (a + b) is a short-cut to tf.add(a, b)
    areasq = s * (s - a) * (s - b) * (s - c)  # (a * b) is a short-cut to tf.multipl...
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
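Since the compute_area cell above is truncated, here is a self-contained NumPy sketch of the same batched Heron computation as a cross-check; `compute_area_np` is a hypothetical name, and its output should match what the TensorFlow graph returns for the same rows:

```python
import numpy as np

def compute_area_np(sides):
    """Heron's formula for a batch of triangles, one (a, b, c) row each."""
    a, b, c = sides[:, 0], sides[:, 1], sides[:, 2]
    s = (a + b + c) * 0.5
    return np.sqrt(s * (s - a) * (s - b) * (s - c))

print(compute_area_np(np.array([[5.0, 3.0, 7.1],
                                [2.3, 4.1, 4.8]])))
```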
<h2> Placeholder and feed_dict </h2> More common is to define the input to a program as a placeholder and then to feed in the inputs. The difference between the code below and the code above is whether the "area" graph is coded up with the input values or whether the "area" graph is coded up with a placeholder through...
sess = tf.compat.v1.Session()
sides = tf.compat.v1.placeholder(tf.float32, shape=(None, 3))  # batchsize number of triangles, 3 sides
area = compute_area(sides)
print(sess.run(area, feed_dict={
    sides: [
        [5.0, 3.0, 7.1],
        [2.3, 4.1, 4.8]
    ]
}))
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
tf.eager tf.eager allows you to avoid the build-then-run stages. However, most production code will follow the lazy evaluation paradigm because the lazy evaluation paradigm is what allows for multi-device support and distribution. <p> One thing you could do is to develop using tf.eager and then comment out the eager e...
import tensorflow as tf

tf.compat.v1.enable_eager_execution()

def compute_area(sides):
    # slice the input to get the sides
    a = sides[:,0]  # 5.0, 2.3
    b = sides[:,1]  # 3.0, 4.1
    c = sides[:,2]  # 7.1, 4.8

    # Heron's formula
    s = (a + b + c) * 0.5  # (a + b) is a short-cut to tf.add(a, b)
    areasq = s * (s...
courses/machine_learning/tensorflow/a_tfstart.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Visualize Data View a sample from the dataset. You do not need to modify this section.
import random
import matplotlib.pyplot as plt
%matplotlib inline

index = random.randint(0, len(X_train))
image = X_train[index].squeeze()

plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Setup TensorFlow The EPOCHS and BATCH_SIZE values affect the training speed and model accuracy. You do not need to modify this section.
import tensorflow as tf

EPOCHS = 10
BATCH_SIZE = 64
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Features and Labels Train LeNet to classify MNIST data. x is a placeholder for a batch of input images. y is a placeholder for a batch of output labels. You do not need to modify this section.
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
with tf.device('/cpu:0'):
    one_hot_y = tf.one_hot(y, 10)
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Model Evaluation Evaluate the loss and accuracy of the model for a given dataset. You do not need to modify this section.
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()

def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    sess = tf.get_default_session()
    for offset in ra...
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Train the Model Run the training data through the training pipeline to train the model. Before each epoch, shuffle the training set. After each epoch, measure the loss and accuracy of the validation set. Save the model after training. You do not need to modify this section.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    num_examples = len(X_train)

    print("Training...")
    print()
    for i in range(EPOCHS):
        X_train, y_train = shuffle(X_train, y_train)
        for offset in range(0, num_examples, BATCH_SIZE):
            end = offset + BATCH...
CarND-LetNet/LeNet-Lab-Solution.ipynb
swirlingsand/self-driving-car-nanodegree-nd013
mit
Basic knowledge We assume that you have completed at least some of the previous examples and have a general idea of how adaptiveMD works. Still, let's recapitulate what we think is the typical way of a simulation. How to execute something To execute something you need a description of the task to be done. This is the ...
%%file my_generator.py

# This is an example for building your own generator
# This file must be added to the project so that it is loaded
# when you import `adaptivemd`. Otherwise your workers don't know
# about the class!

from adaptivemd import Generator

class MDTrajFeaturizer(Generator):
    def __init__(self, {thi...
examples/tutorial/5_example_advanced_generators.ipynb
jrossyra/adaptivemd
lgpl-2.1
<H2>Load CA3 matrices</H2>
mydataset = DataLoader('../data/CA3/')  # 1102 experiments

print(mydataset.motif)  # number of connections tested and found for every type

# number of interneurons and principal cells
print('{:4d} principal cells recorded'.format(mydataset.nPC))
print('{:4d} interneurons recorded'.format(mydataset.nIN))

# mydataset.con...
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
The element in the list at IN[0] contains zero interneurons (all the rest are principal neurons).
mydataset.IN[0] # this is the whole data set
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
<H2> Descriptive statistics </H2> The stats attribute will return basic statistics of the whole dataset
y = mydataset.stats()
print AsciiTable(y).table

mymotifs = mydataset.motif
info = [
    ['Connection type', 'Value'],
    ['CA3-CA3 chemical synapses', mymotifs.ee_chem_found],
    ['CA3-CA3 electrical synapses', mymotifs.ee_elec_found],
    [' ', ' '],
    ['CA3-CA3 bidirectional motifs', mymotif...
Analysis/misc/Counting CA3 synapses.ipynb
ClaudiaEsp/inet
gpl-2.0
Getting data that matters In this example, we are only interested in Java source code files that still exist in the software project. We can retrieve the existing Java source code files by using Git's <tt>ls-files</tt> combined with a filter for the Java source code file extension. The command will return a plain text ...
existing_files = pd.DataFrame(
    git_bin.execute('git ls-files -- *.java').split("\n"),
    columns=['path'])
existing_files.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
The next step is to combine the <tt>commit_data</tt> with the <tt>existing_files</tt> information by using Pandas' <tt>merge</tt> function. By default, <tt>merge</tt> will
- combine the data by the columns with the same name in each <tt>DataFrame</tt>
- only leave those entries that have the same value (using an "inn...
contributions = pd.merge(commit_data, existing_files)
contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
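To see the default inner-join behavior in isolation, here is a tiny self-contained sketch with toy data (not the notebook's Git output):

```python
import pandas as pd

commits = pd.DataFrame({'path': ['A.java', 'B.java', 'Old.java'],
                        'additions': [10, 5, 7]})
existing = pd.DataFrame({'path': ['A.java', 'B.java']})

# merge joins on the shared 'path' column and, as an inner join,
# silently drops 'Old.java' because it no longer exists
print(pd.merge(commits, existing))
```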
We can now convert some columns to their correct data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added or deleted lines of code as numbers. We have to convert those accordingly.
contributions['additions'] = pd.to_numeric(contributions['additions'])
contributions['deletions'] = pd.to_numeric(contributions['deletions'])
contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Calculating the knowledge about code We want to estimate the knowledge about code as the proportion of additions to the whole source code file. This means we need to calculate the relative amount of added lines for each developer. To be able to do this, we have to know the sum of all additions for a file. Additionally,...
contributions_sum = contributions.groupby('path').sum()[['additions', 'deletions']].reset_index()
contributions_sum.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
We also want to have an indicator of the quantity of the knowledge. This can be achieved if we calculate the lines of code for each file, which is a simple subtraction of the deletions from the additions (be warned: this only works for simple cases where there are no heavy renames of files, as in our case).
contributions_sum['lines'] = contributions_sum['additions'] - contributions_sum['deletions']
contributions_sum.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
We combine both <tt>DataFrame</tt>s with a <tt>merge</tt> analogous to the one above.
contributions_all = pd.merge(
    contributions,
    contributions_sum,
    left_on='path',
    right_on='path',
    suffixes=['', '_sum'])
contributions_all.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Identify knowledge hotspots OK, here comes the key: We group all additions by the file paths and the authors. This gives us all the additions to a file per author. Additionally, we want to keep the sum of all additions as well as the information about the lines of code. Because those are contained in the <tt>DataFrame<...
grouped_contributions = contributions_all.groupby(
    ['path', 'author']).agg(
    {'additions': 'sum',
     'additions_sum': 'first',
     'lines': 'first'})
grouped_contributions.head(10)
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Now we are ready to calculate the knowledge "ownership". The ownership is the relative amount of additions to all additions of one file per author.
grouped_contributions['ownership'] = grouped_contributions['additions'] / grouped_contributions['additions_sum']
grouped_contributions.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Having this data, we can now extract the author with the highest ownership value for each file. This gives us a list with the knowledge "holder" for each file.
# Sort by ownership so that, per path, the last row belongs to the author
# with the highest ownership value (a plain .max() would take the maximum
# of each column independently and could pair the wrong author with the value)
ownerships = grouped_contributions.reset_index().sort_values('ownership').groupby(['path']).last()
ownerships.head(5)
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Preparing the visualization Reading tables is not as much fun as a good visualization. I find Adam Tornhill's suggestion of an enclosure or bubble chart very good: <img src="https://pbs.twimg.com/media/C-fYgvCWsAAB1y8.jpg" style="width: 500px;"/> Source: Thorsten Brunzendorf (@thbrunzendorf) The visualization is wri...
plot_data = ownerships.reset_index()
plot_data['responsible'] = plot_data['author']
plot_data.loc[plot_data['ownership'] <= 0.7, 'responsible'] = "None"
plot_data.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Next, we need some colors per author to be able to distinguish them in our visualization. We use the two classic data analysis libraries for this. We just draw some colors from a color map here for each author.
import numpy as np
from matplotlib import cm
from matplotlib.colors import rgb2hex

authors = plot_data[['author']].drop_duplicates()
rgb_colors = [rgb2hex(x) for x in cm.RdYlGn_r(np.linspace(0, 1, len(authors)))]
authors['color'] = rgb_colors
authors.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Then we combine the colors with the plot data and whiten the minor ownerships, i.e. all the <tt>None</tt> responsibilities.
colored_plot_data = pd.merge(
    plot_data,
    authors,
    left_on='responsible',
    right_on='author',
    how='left',
    suffixes=['', '_color'])
colored_plot_data.loc[colored_plot_data['responsible'] == 'None', 'color'] = "white"
colored_plot_data.head()
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
Visualizing The bubble chart needs D3's flare format for displaying. We just dump the <tt>DataFrame</tt> data into this hierarchical format. As for hierarchy, we use the Java source files that are structured via directories.
import os
import json

json_data = {}
json_data['name'] = 'flare'
json_data['children'] = []

for row in colored_plot_data.iterrows():
    series = row[1]
    path, filename = os.path.split(series['path'])

    last_children = None
    children = json_data['children']

    for path_part in path.split("/"):
        ...
notebooks/Knowledge Islands.ipynb
feststelltaste/software-analytics
gpl-3.0
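Since the cell above is cut off, here is a compact, self-contained sketch of the same folding idea, turning slash-separated paths into a flare-style tree; the toy rows and the `to_flare` helper are hypothetical, not the notebook's code:

```python
import os

def to_flare(rows):
    """Fold ('dir/sub/File.java', color) pairs into a flare-style tree."""
    root = {'name': 'flare', 'children': []}
    for full_path, color in rows:
        path, filename = os.path.split(full_path)
        children = root['children']
        for part in path.split('/'):
            # reuse the directory node if we created it for an earlier row
            node = next((c for c in children if c['name'] == part), None)
            if node is None:
                node = {'name': part, 'children': []}
                children.append(node)
            children = node['children']
        children.append({'name': filename, 'color': color})
    return root

print(to_flare([('src/main/A.java', 'white'), ('src/main/B.java', '#a50026')]))
```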
These are additional imports we imagine you might like.
import scipy.stats as st
import emcee
import incredible as cr
from pygtc import plotGTC
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
1. Data Let's arbitrarily use the first galaxy for this exercise - it's somewhere in the middle of the pack in terms of how many measured cepheids it contains. Even though we're only looking at one galaxy so far, let's try to write code that can later be re-used to handle any galaxy (so that we can fit all galaxies simu...
g = ngc_numbers[0]
g
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Print the number of cepheids in this galaxy:
data[g]['Ngal']
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
2. Model specification Note: it isn't especially onerous to keep all the galaxies around for the "Model" and "Strategy" sections, but feel free to specialize to the single galaxy case if it helps. Before charging forward, let's finish specifying the model. We previously said we would allow an intrinsic scatter about th...
# find pivots (nb different for every galaxy, which is not what we'd want in a simultaneous analysis)
for i in ngc_numbers:
    data[i]['pivot'] = data[i]['logP'].mean()

# to avoid confusion later, reset all pivots to the same value
global_pivot = np.mean([data[i]['logP'].mean() for i in ngc_numbers])
for i in ngc_numb...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Here's a function to evaluate the mean relation, with an extra argument for the pivot point:
def meanfunc(x, xpivot, a, b):
    '''
    x is log10(period/days)
    returns an absolute magnitude
    '''
    return a + b*(x - xpivot)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
4a. Brute force sampling of all parameters Attempt to simply sample all the parameters of the model. Let's... not include all the individual magnitudes in these lists of named parameters, though.
param_names = ['a', 'b', 'sigma']
param_labels = [r'$a$', r'$b$', r'$\sigma$']
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
I suggest starting by finding decent guesses of $a$, $b$, $\sigma$ by trial and error/inspection. For extra fun, choose values such that the model goes through the points, but isn't a great fit. This will let us see how well the sampler used below performs when it needs to find its own way to the best fit.
TBC(1)
# guess = {'a': ...

guessvec = [guess[p] for p in param_names]  # it will be useful to have `guess` as a vector also

plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magni...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
We'll provide the familiar skeleton of function prototypes below, with a couple of small changes. One is that we added an optional argument Mtrue to the log-prior - this allows the same prior function to be used in all parts of this exercise, even when the true magnitudes are not being explicitly sampled (the function ca...
# prior, likelihood, posterior functions for a SINGLE galaxy

# generic prior for use in all parts of the notebook
def log_prior(a, b, sigma, Mtrue=None):
    TBC()

# likelihood specifically for part A
def log_likelihood_A(gal, a, b, sigma, Mtrue):
    '''
    `gal` is an entry in the `data` dictionary; `a`, `b`, ...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Here's a quick sanity check, which you can refine if needed:
guess_A = np.concatenate((guessvec, data[g]['M']))
logpost_vecarg_A(guess_A)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
The cell below will set up and run emcee using the functions defined above. We've made some generic choices, such as using twice as many "walkers" as free parameters, and starting them distributed according to a Gaussian around guess_A with a width of 1%. IMPORTANT You do not need to run this version long enough to get...
%%time
nsteps = 1000  # or whatever
npars = len(guess_A)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_A)
start = np.array([np.array(guess_A)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!')
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Let's look at the usual trace plots, including only one of the magnitudes since there are so many.
npars = len(guess)+1
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels+[r'$M_1$']);
npars = len(guess_A)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Chances are this is not very impressive. But we carry on, to have it as a point of comparison. The cell below will print out the usual quantitative diagnostics.
TBC()
# burn = ...
# maxlag = ...

tmp_samples = [sampler.chain[i,burn:,:4] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
print("Plus, there's a good chance ...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Finally, we'll look at a triangle plot.
samples_A = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_A[:,:4]], paramNames=param_labels+[r'$M_1$'], chainLabels=['emcee/brute'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
We should also probably look at how well the fitted model matches the data, qualitatively.
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samp...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
4b. Sampling with analytic marginalization Next, implement sampling of $a$, $b$, $\sigma$ using your analytic marginalization over the true magnitudes. Again, the machinery to do the sampling is below; you only need to provide the log-posterior function.
def log_likelihood_B(gal, a, b, sigma):
    '''
    `gal` is an entry in the `data` dictionary; `a`, `b`, and `sigma` are scalars
    '''
    TBC()

def logpost_vecarg_B(pvec):
    params = {name:pvec[i] for i,name in enumerate(param_names)}
    return log_posterior(data[g], log_likelihood_B, **params)

TBC_above()
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Check for NaNs:
logpost_vecarg_B(guessvec)
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Again, we run emcee below. Anticipating an improvement in efficiency, we've increased the default number of steps below. Unlike the last time, you should run long enough to have useful samples in the end.
%%time
nsteps = 10000
npars = len(param_names)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_B)
start = np.array([np.array(guessvec)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!')
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Again, trace plots. Note that we no longer get a trace of the magnitude parameters. If we really wanted a posterior for them, we would now need to do extra calculations.
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:], ax, labels=param_labels);
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Again, $R$ and $n_\mathrm{eff}$.
TBC()
# burn = ...
# maxlag = ...

tmp_samples = [sampler.chain[i,burn:,:] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Now, let's compare the posterior from this analysis to the one we got before:
samples_B = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_A[:,:3], samples_B], paramNames=param_labels, chainLabels=['emcee/brute', 'emcee/analytic'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Checkpoint: Your posterior is compared with our solution by the cell below. Note that we used the global_pivot defined above. If you did not, your constraints on $a$ will differ due to this difference in definition, even if everything is correct.
sol = np.loadtxt('solutions/ceph1.dat.gz')
plotGTC([sol, samples_B], paramNames=param_labels, chainLabels=['solution', 'my emcee/analytic'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Moving on, look at how the two fits you've done compare visually:
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samp...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Comment on things like the efficiency, accuracy, and/or utility of the two approaches. TBC commentary 4c. Conjugate Gibbs sampling Finally, we'll step through using a specialized Gibbs sampler to solve this problem. We'll use the LRGS package, not because it's the best option (it isn't), but because it's written in p...
import lrgs
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
LRGS is a "general" linear model fitter, meaning that $x$ and $y$ can be multidimensional. So the input data are formatted as matrices with one row for each data point. In this case, they're column vectors ($n\times1$ matrices). Measurement uncertainties are given as a list of covariance matrices. The code handles erro...
x = np.asmatrix(data[g]['logP'] - data[g]['pivot']).T
y = np.asmatrix(data[g]['M']).T
M = [np.matrix([[1e-6, 0], [0, err**2]]) for err in data[g]['merr']]
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Conjugate Gibbs sampling can be parallelized in the simplest possible way - you just run multiple chains from different starting points or even just with different random seeds in parallel. (emcee is parallelized internally, since walkers need to talk to each other.) Therefore...
import multiprocessing
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
This function sets things up and does the actual sampling, returning a Numpy array in the usual format. The default priors are equivalent to the ones we chose above, helpfully.
nsteps = 2000  # some arbitrary number of steps to run

def do_gibbs(i):
    # every parallel process will have the same random seed if we don't reset them here
    if i > 0:
        np.random.seed(i)
    # lrgs.Parameters set up a sampler that assumes the x's are known precisely.
    # Other classes would correspond to...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Go!
%%time
with multiprocessing.Pool() as pool:
    gibbs_samples = pool.map(do_gibbs, range(2))  # 2 parallel processes - change if you want
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Show!
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(gibbs_samples, ax, labels=param_labels);

burn = 50
maxlag = 1000
tmp_samples = [x[burn:,:] for x in gibbs_samples]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=max...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Here are the posteriors:
samples_C = np.concatenate(tmp_samples, axis=0)
plotGTC([samples_A[:,:3], samples_B, samples_C], paramNames=param_labels,
        chainLabels=['emcee/brute', 'emcee/analytic', 'LRGS/Gibbs'],
        figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Again, look at the fit compared with the other methods:
plt.rcParams['figure.figsize'] = (7.0, 5.0)
plt.errorbar(data[g]['logP'], data[g]['M'], yerr=data[g]['merr'], fmt='none');
plt.xlabel('log10 period/days', fontsize=14);
plt.ylabel('absolute magnitude', fontsize=14);
xx = np.linspace(0.5, 2.25, 100)
plt.plot(xx, meanfunc(xx, data[g]['pivot'], samples_A[:,0].mean(), samp...
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
Finishing up There's yet more fun to be had in the next tutorial, so let's again save the definitions and data from this session.
del pool  # cannot be pickled

TBC()
# change path below if desired
# dill.dump_session('../ignore/cepheids_one.db')
tutorials/cepheids_one_galaxy.ipynb
KIPAC/StatisticalMethods
gpl-2.0
5. Functionals Lambda lambda creates a small, anonymous function (a function without a name). They are disposable functions, i.e. they are used only where they are created. Map map() applies a function to each item in a sequence (list, tuple, set). The syntax is map(function, sequence-of-items). A great use case for lambda()....
celsius = [39.2, 36.5, 37.3, 37.8]
fahrenheit = map(lambda x: (float(9)/5)*x + 32, celsius)
fahrenheit

fib = [0,1,1,2,3,5,8,13,21,34,55]
filter(lambda x: x % 2 == 0, fib)

f = lambda x,y: x if (x > y) else y
reduce(f, [47,11,42,102,13])
python-review.ipynb
dsnair/data_science
gpl-3.0
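The cell above is Python 2 (map and filter return lists, reduce is a builtin, and print is a statement). If you are on Python 3, a minimal equivalent sketch looks like this:

```python
from functools import reduce  # reduce moved to functools in Python 3

celsius = [39.2, 36.5, 37.3, 37.8]
# map/filter now return lazy iterators, so wrap them in list() to see the values
print(list(map(lambda x: (9.0/5)*x + 32, celsius)))
print(list(filter(lambda x: x % 2 == 0, [0,1,1,2,3,5,8,13,21,34,55])))
print(reduce(lambda x, y: x if x > y else y, [47,11,42,102,13]))
```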
6. Unpacking Lists, Tuples & Sets With an iterable, such as list or tuple: args = (1, 2, 3); f(*args) -> f(1, 2, 3)
print range(*(0, 5))
print range(*[0, 5, 2])  # range(start, end, step)
With a mapping, such as dict: kwargs = {'a':1, 'b':2, 'c':3}; f(**kwargs) -> f(a=1, b=2, c=3)
dic = {'a':1, 'b':2...
a, b = [1, 2]
c, d = (3, 4)
e, f = {5, 6}
print a, b, c, d, e, f
python-review.ipynb
dsnair/data_science
gpl-3.0
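A minimal sketch of the starred-call unpacking described above, using a hypothetical three-argument function g (Python 2 syntax to match the surrounding cells):

```python
def g(a, b, c):
    return a + b + c

args = (1, 2, 3)
kwargs = {'a': 1, 'b': 2, 'c': 3}
print g(*args)     # unpacks to g(1, 2, 3) -> 6
print g(**kwargs)  # unpacks to g(a=1, b=2, c=3) -> 6
print range(*[0, 5, 2])  # range(start, end, step) -> [0, 2, 4]
```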
B. Common Functions 1. List
- [] creates an empty list
- [1, 2], [3, 4] creates a nested list
- list[start:stop:step] slices
- list1 + list2 concatenates
- item in list checks membership
- .append(item) appends item to the end of the list
- .remove(item) removes the 1st matching value from the list
- .insert(index, item) inserts item at the specified index
- .p...
li = range(1, 4)
d = 3
[[l*i for l in li] for i in range(1, d+1)]
python-review.ipynb
dsnair/data_science
gpl-3.0
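A quick sketch exercising several of the list operations just listed (Python 2 print statements, matching the rest of the notebook):

```python
li = [1, 2]
li.append(3)       # [1, 2, 3]
li.insert(0, 0)    # [0, 1, 2, 3]
li.remove(2)       # removes the first 2 -> [0, 1, 3]
print li + [4, 5]  # concatenation -> [0, 1, 3, 4, 5]
print li[::2]      # slicing with a step -> [0, 3]
print 3 in li      # membership -> True
```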
Given a list of integers, return a list that only contains even integers from even indices. Example: If list[2]=2, include. If list[3]=2, exclude
li = [1,3,5,8,10,13,18,36,78]
[x for x in li[::2] if x%2 == 0]
python-review.ipynb
dsnair/data_science
gpl-3.0
Given a sequence of integers and an integer $d$, perform d-left rotations on the list. Example: If d=2 and list=[1,2,3,4,5], then 2-left rotation is [3,4,5,1,2]
li = range(1, 6)
d = 2
li[d:] + li[:d]
python-review.ipynb
dsnair/data_science
gpl-3.0
Given three integers $x$, $y$ and $d$, create ordered pairs $(i,j)$ such that $0 \le i \le x$, $0 \le j \le y$, and $i+j \ne d$.
x = 2
y = 3
d = 4
list1 = range(0, x+1)
list2 = range(0, y+1)
[(i,j) for i in list1 for j in list2 if i+j != d]
python-review.ipynb
dsnair/data_science
gpl-3.0
Capitalize the first letter in first and last names. Example: Convert "divya nair" to "Divya Nair"
"divya nair".title() #"divya nair".capitalize() name = "divya nair" li = name.split(" ") li = [li[i].capitalize() for i in range(0, len(li))] " ".join(li)
python-review.ipynb
dsnair/data_science
gpl-3.0
Given a string and a natural number $d$, wrap the string into a paragraph of width $d$.
s = "ABCDEFGHIJKLIMNOQRSTUVWXYZ" d = 4 i = 0 while (i < len(s)): print s[i:i+d] i = i+d
python-review.ipynb
dsnair/data_science
gpl-3.0
Find all the vowels in a string.
string = "divya nair" vowels = ["a", "e", "i", "o", "u"] [char for char in string if char in vowels]
python-review.ipynb
dsnair/data_science
gpl-3.0
Calculate $n!$
n = 3
reduce(lambda x,y: x*y, range(1, n+1))
python-review.ipynb
dsnair/data_science
gpl-3.0
Given a sequence of $n$ integers, calculate the sum of its elements.
import random

size = 3
# Generate 3 random numbers between [1, 10]
numList = [random.randint(1,10) for i in range(0, size)]
print numList
print "The sum is {}".format(reduce(lambda x,y: x+y, numList))

# Shuffle item position in numList
random.shuffle(numList)
print numList

# Randomly sample 2 items from numList
print random.samp...
python-review.ipynb
dsnair/data_science
gpl-3.0
Given a sequence of $n$ integers, $a_0, a_1, \cdots, a_{n-1}$, and a natural number $d$, print all pairs of $(i, j)$ such that $a_i + a_j$ is divisible by $d$.
n = 6
d = 3
a = [1, 3, 2, 6, 1, 2]
sums = [(a[i] + a[j], i, j) for i in range(0, len(a)) for j in range(0, len(a))]
[(s[1], s[2]) for s in sums if s[0]%d==0]
python-review.ipynb
dsnair/data_science
gpl-3.0
Return $n$ numbers in the Fibonacci sequence.
fib = [0,1]
n = 6
# append n-2 more terms so fib ends up with exactly n numbers
for i in range(0, n-2):
    fib.append(fib[i] + fib[i+1])
fib
python-review.ipynb
dsnair/data_science
gpl-3.0
Given 4 coin denominations (1c, 5c, 10c, 25c) and a dollar amount, find the best way to express that amount using the least number of coins. Example: $2.12 = (25c, 8), (10c, 1), (1c, 2)
amount = 0.50
dollar, cent = str(amount).split(".")
cent = cent+'0' if len(cent)==1 else cent  # ternary operation
amount = int(dollar+cent)

deno = [25, 10, 5, 1]
coins = []
for d in deno:
    coins.append(amount/d)
    amount = amount%d

print '{} quarters, {} dimes, {} nickels, {} pennies'.format(*coins)  # unpac...
python-review.ipynb
dsnair/data_science
gpl-3.0
Output the first $d$ rows of Pascal's triangle, where $2 \le d \le 10$.
row = [1]
print row
d = 4
# len(row) < d prints exactly d rows in total
while (len(row) < d):
    row = [1] + [row[i] + row[i+1] for i in range(0, len(row)-1)] + [1]
    print row
python-review.ipynb
dsnair/data_science
gpl-3.0
At bus stop #1, 10 passengers got on the bus and 6 passengers got off the bus. Then, at stop #2, 5 passengers got on and 4 passengers got off. How many passengers are remaining in the bus?
stops = [[10, 6], [5, 4]]
remaining = [x[0]-x[1] for x in stops]
reduce(lambda x,y: x+y, remaining)
python-review.ipynb
dsnair/data_science
gpl-3.0
Find the 2nd largest number in the list.
num = [2, 3, 6, 6, -5]
noRep = list(set(num))
noRep.sort()
noRep.pop(-2)
python-review.ipynb
dsnair/data_science
gpl-3.0
Draw a rectangle of a given length and width.
length = 2
width = 3
for l in range(0, length):
    print '* ' * width
python-review.ipynb
dsnair/data_science
gpl-3.0
Calculate the sum of all the odd numbers in a list.
num = [0, 1, 2, -3]
print num
odd = filter(lambda x: x%2 != 0, num)
print odd
reduce(lambda x,y: x+y, odd)
python-review.ipynb
dsnair/data_science
gpl-3.0