Step 2: Run the computational graph with batches of training data, and check the accuracy on the test set.
```python
# Create test set
idx = np.random.permutation(test_data.shape[0])  # random permutation
idx = idx[:batch_size]
test_x, test_y = test_data[idx,:], test_labels[idx]
n = train_data.shape[0]
indices = collections.deque()

# Run the computational graph
init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)
for i in range(50):
    # Batch extraction
    if len(indices) < batch_size:
        indices.extend(np.random.permutation(n))  # random permutation
    idx = [indices.popleft() for i in range(batch_size)]  # extract batch_size indices
    batch_x, batch_y = train_data[idx,:], train_labels[idx]
    # Run the graph to train the variables
    _, acc_train, total_loss_o = sess.run([train_step, accuracy, total_loss],
                                          feed_dict={x: batch_x, y_label: batch_y})
    print('\nIteration i=', i, ', train accuracy=', acc_train, ', loss=', total_loss_o)
    # Run the graph on the test set
    acc_test = sess.run(accuracy, feed_dict={x: test_x, y_label: test_y})
    print('test accuracy=', acc_test)
```
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Create Feature Matrix
```python
# Create feature matrix
X = np.array([[1.1, 11.1],
              [2.2, 22.2],
              [3.3, 33.3],
              [4.4, 44.4],
              [np.nan, 55]])
```
machine-learning/delete_observations_with_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
Delete Observations With Missing Values
```python
# Remove observations with missing values
X[~np.isnan(X).any(axis=1)]
```
machine-learning/delete_observations_with_missing_values.ipynb
tpin3694/tpin3694.github.io
mit
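For comparison, the same row-dropping can be done with pandas (a sketch assuming pandas is installed; `dropna` removes any row containing a NaN):

```python
import numpy as np
import pandas as pd

X = np.array([[1.1, 11.1],
              [2.2, 22.2],
              [3.3, 33.3],
              [4.4, 44.4],
              [np.nan, 55]])

# dropna() drops every row that contains at least one NaN
X_clean = pd.DataFrame(X).dropna().values
print(X_clean.shape)  # (4, 2)
```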
Now create the DrawControl and add it to the Map using add_control. We also register a handler for draw events, which fires when a drawn path is created, edited, or deleted (these are the actions). The geo_json argument is the serialized geometry of the drawn path, along with its embedded style.
```python
dc = DrawControl(marker={'shapeOptions': {'color': '#0000FF'}},
                 rectangle={'shapeOptions': {'color': '#0000FF'}},
                 circle={'shapeOptions': {'color': '#0000FF'}},
                 circlemarker={})

def handle_draw(self, action, geo_json):
    print(action)
    print(geo_json)

dc.on_draw(handle_draw)
m.add_control(dc)
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
In addition, the DrawControl also has last_action and last_draw attributes that are updated dynamically whenever a new drawn path arrives.
```python
dc.last_action
dc.last_draw
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
It's possible to remove all drawings from the map:
```python
dc.clear_circles()
dc.clear_polylines()
dc.clear_rectangles()
dc.clear_markers()
dc.clear_polygons()
dc.clear()
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Let's draw a second map and try to import this GeoJSON data into it.
```python
m2 = Map(center=center, zoom=zoom,
         layout=dict(width='600px', height='400px'))
m2
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
We can use link to synchronize traitlets of the two maps:
```python
map_center_link = link((m, 'center'), (m2, 'center'))
map_zoom_link = link((m, 'zoom'), (m2, 'zoom'))
new_poly = GeoJSON(data=dc.last_draw)
m2.add_layer(new_poly)
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
Note that the style is preserved! If you wanted to change the style, you could edit the properties.style dictionary of the GeoJSON data. Or, you could even style the original path in the DrawControl by setting the polygon dictionary of that object. See the code for details. Now let's add a DrawControl to this second map. For fun, we will disable polylines, enable circles, and change the style a bit.
```python
dc2 = DrawControl(polygon={'shapeOptions': {'color': '#0000FF'}},
                  polyline={},
                  circle={'shapeOptions': {'color': '#0000FF'}})
m2.add_control(dc2)
```
2019-07-10-CICM/notebooks/DrawControl.ipynb
QuantStack/quantstack-talks
bsd-3-clause
At this point, you can follow along with either the pre-baked Macosko2015 amacrine data, or you can load in your own expression matrices. For the best experience, make sure that the rows are cells and the columns are gene names.
```python
import macosko2015

counts, cell_metadata, gene_metadata = macosko2015.load_big_clusters()
counts.head()
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Calculate correlation between cells:
```python
correlations = counts.T.rank().corr()
print(correlations.shape)
correlations.head()
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Correlation != distance. Correlation is not equal to distance: if two things are exactly the same, their correlation value is 1, but in space, if two things are exactly the same, the distance between them is 0. Therefore, correlation is not a distance! Correlation is a similarity metric, where bigger = more similar, but we want a dissimilarity (aka distance) metric. Take a look for yourself: most values in the distribution of all correlation values are near zero (not correlated), with a blip near 1 (the self-correlations).
sns.distplot(correlations.values.flat)
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
But for building a K-nearest neighbors graph, we want the closest things (in distance space) to be actually close. So we'll convert our correlation ($\rho$) into a distance ($d$) using this equation: $$ d = \sqrt{2(1-\rho)} $$ You can look at the code for networkplots.correlation_to_distance to convince yourself that's actually what it's doing:
networkplots.correlation_to_distance??
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
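If you just want the gist without reading the source, a minimal sketch of that conversion (assuming `correlations` is a pandas DataFrame; the real `networkplots.correlation_to_distance` may differ in detail) is:

```python
import numpy as np
import pandas as pd

def correlation_to_distance(correlations):
    # d = sqrt(2 * (1 - rho)): rho = 1 gives d = 0, rho = -1 gives d = 2
    return np.sqrt(2 * (1 - correlations))

corr = pd.DataFrame([[1.0, 0.5], [0.5, 1.0]])
dist = correlation_to_distance(corr)
print(dist.iloc[0, 0])  # 0.0 -- perfect self-correlation maps to zero distance
print(dist.iloc[0, 1])  # 1.0
```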
Exercise 1 Create a dataframe called distance using the correlation_to_distance function from networkplots on your corr dataframe.
# YOUR CODE HERE
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
```python
distances = networkplots.correlation_to_distance(correlations)
distances.head()
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Exercise 2: Let's take a look at our values to make sure most of them are far away from zero. Use sns.distplot to look at the flattened values of the distances dataframe.
# YOUR CODE HERE
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
sns.distplot(distances.values.flat)
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Now we'll run phenograph.cluster, which returns three items:

- communities: the cluster labels of each cell
- sparse_matrix: a sparse matrix representing the connections between cells in the graph
- Q: the modularity score. Higher is better, and the highest is 1. 0 means your graph is randomly connected and -1 means your graph isn't connected at all.
communities, sparse_matrix, Q = phenograph.cluster(distances, k=10)
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Let's take a look at each of these returned values
```python
communities
sparse_matrix
Q
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
It looks like the communities labels each cell as belonging to a particular cluster, the sparse_matrix is some data type that we can't directly investigate, and Q is the modularity value. Make a graph from the sparse matrix To be able to lay out our graph in two dimensions, we'll use the networkx Python Package to build the graph and lay out the cells and edges.
```python
graph = networkx.from_scipy_sparse_matrix(sparse_matrix)
graph
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
We'll use the "Spring layout" which is a force-directed layout that pushes cells and edges away from each other. We'll use the built-in networkx function called spring_layout on our graph:
```python
positions = networkx.spring_layout(graph)
positions
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Convert positions dict to dataframe with node information. The positions object is a dictionary mapping each node id (in this case, a number) to its $(x, y)$ position. The nodes are in exactly the same order as the rows of the distances dataframe we gave phenograph.cluster.
networkplots.get_nodes_specs??
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
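Before using the full helper, the core of that conversion can be sketched on a toy positions dict (the node ids and coordinates here are made up for illustration; `get_nodes_specs` additionally attaches cluster labels and colors):

```python
import pandas as pd

# positions is a dict like {node_id: (x, y)}, as returned by networkx.spring_layout
positions = {0: (0.1, 0.9), 1: (0.4, 0.2), 2: (0.8, 0.5)}

nodes = pd.DataFrame.from_dict(positions, orient='index', columns=['x', 'y'])
nodes.index.name = 'node'
print(nodes.shape)  # (3, 2)
```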
Looks like this function can deal with if we already have some clusters defined in our metadata! Let's look at our cell_metadata and remind ourselves of which column we might like to use for the other_cluster_col value.
cell_metadata.head()
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
In this case, I'd like to use the cluster_n_celltype column. Let's take a look at the code again to see how the networkplots.get_nodes_specs function uses the metadata:
networkplots.get_nodes_specs??
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Looks like this function uses another one, called labels_to_colors -- what does that do?
networkplots.labels_to_colors??
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Now let's use get_nodes_specs to create a dataframe of information about nodes so we can plot them.
```python
nodes_specs = networkplots.get_nodes_specs(
    positions, cell_metadata, distances.index, communities,
    other_cluster_col='cluster_n_celltype', palette='Set2')
print(nodes_specs.shape)
nodes_specs.head()
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Convert positions dict to dataframe with edge information We've now created a dataframe containing the x,y positions, the community labels, and the colors for the communities and other clusters we were interested in. Now we want to do the same for the edges (lines between cells). Let's take a look at the function we'll use:
networkplots.get_edges_specs??
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
What arguments does it take? What does it do with them? What does it return? Exercise 3 Create a variable called edges_specs using the networkplots.get_edges_specs and the correct inputs.
# YOUR CODE HERE
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
```python
edges_specs = networkplots.get_edges_specs(graph, positions)
print(edges_specs.shape)
edges_specs.head()
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
To be able to use the dataframes with the Bokeh plotting language, we need to convert our dataframes into ColumnDataSource objects.
```python
nodes_source = ColumnDataSource(nodes_specs)
edges_source = ColumnDataSource(edges_specs)

# --- First tab: KNN clustering --- #
tab1 = networkplots.plot_graph(nodes_source, edges_source,
                               legend_col='community',
                               color_col='community_color',
                               tab=True, title='KNN Clustering')

# --- Second tab: Clusters from paper --- #
tab2 = networkplots.plot_graph(nodes_source, edges_source,
                               legend_col='cluster_n_celltype',
                               tab=True, color_col='other_cluster_color',
                               title="Clusters from paper")

tabs = Tabs(tabs=[tab1, tab2])
show(tabs)
```
notebooks/2.2_apply_clustering_on_knn_graph.ipynb
olgabot/cshl-singlecell-2017
mit
Import the required modules:
```python
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
import scipy.stats
```
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
Read the dataset:
data = pd.read_csv("../data/ifip_networking.csv.gz")
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
Convert the shaping to Mbps:
```python
data.loc[:, 'shaping_mbps'] = data.loc[:, 'net_avg_shaping_rate'] * 8 / 1000 / 1000
data.loc[:, 'shaping_mbps_rounded'] = data.loc[:, 'shaping_mbps'].round(1)
```
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
Definitions Dict for translating itags to quality levels and vice-versa:
```python
ITAG_TO_QL = {160: 0, 133: 1, 134: 2, 135: 3, 136: 4}
QL_TO_ITAG = {v: k for k, v in ITAG_TO_QL.items()}

VIDDEF = {160: {'label': '144p', 'color': 'green', 'resolution': '256x144'},
          133: {'label': '240p', 'color': 'red',   'resolution': '320x240'},
          134: {'label': '360p', 'color': 'blue',  'resolution': '480x360'},
          135: {'label': '480p', 'color': 'grey',  'resolution': '640x480'},
          136: {'label': '720p', 'color': 'cyan',  'resolution': '1280x720'}}
```
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
Confidence Interval:
```python
def confintv_yerr(values):
    n, min_max, mean, var, skew, kurt = scipy.stats.describe(values)
    std = np.sqrt(var)
    intv = scipy.stats.t.interval(0.95, len(values) - 1,
                                  loc=mean, scale=std / np.sqrt(len(values)))
    yerr = (intv[1] - intv[0]) / 2
    return yerr
```
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
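As a quick sanity check of `confintv_yerr` (restated here so the snippet is self-contained), the half-width of the 95% confidence interval should shrink as the sample grows, roughly like $1/\sqrt{n}$:

```python
import numpy as np
import scipy.stats

def confintv_yerr(values):
    n, min_max, mean, var, skew, kurt = scipy.stats.describe(values)
    std = np.sqrt(var)
    intv = scipy.stats.t.interval(0.95, len(values) - 1,
                                  loc=mean, scale=std / np.sqrt(len(values)))
    return (intv[1] - intv[0]) / 2

rng = np.random.RandomState(0)
small_sample = confintv_yerr(rng.randn(100))
large_sample = confintv_yerr(rng.randn(10000))
print(small_sample > large_sample)  # the interval narrows with more samples
```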
Plotting shaping to average quality level. The subsequent plot shows the fraction of time the video spent on a certain quality level, and the overall average quality level, for a specific network shaping value. For example, at 2.2 Mbps, the player spends nearly 100% of the time on the highest quality level (480p).
```python
fig = plt.figure(figsize=(9, 7))
plt.hold(True)
ax1 = fig.add_subplot(111)

by_shaping = data.groupby('shaping_mbps').mean()

y_offset = 0
cmap = plt.get_cmap('copper')
colors = iter(cmap(np.linspace(0, 1, len(QL_TO_ITAG))))

for ql, itag in list(QL_TO_ITAG.items())[0:4]:
    idx_itag = 'pl_time_spent_norm_itag%d' % itag
    ax1.fill_between(by_shaping.index, y_offset, by_shaping[idx_itag],
                     alpha=0.35, facecolor=next(colors))
    y_offset = by_shaping[idx_itag]

plt.annotate(s=VIDDEF[QL_TO_ITAG[0]]['label'], xy=(0.46, 0.014))
plt.annotate(s=VIDDEF[QL_TO_ITAG[1]]['label'], xy=(0.65, 0.42))
plt.annotate(s=VIDDEF[QL_TO_ITAG[2]]['label'], xy=(1.05, 0.42))
plt.annotate(s=VIDDEF[QL_TO_ITAG[3]]['label'], xy=(1.6, 0.42))
plt.ylabel(r"Relative Playback Time $T_{fq}$")
plt.xlabel(r"Bandwidth $f$ (Mbps)")

ax2 = ax1.twinx()
ax2_data = pd.DataFrame(columns=['shaping', 'avg_ql', 'yerr'])
for shaping, group in data.groupby('shaping_mbps'):
    ql_median = group['pl_avg_pl_quality_ql'].mean()
    ql_yerr = confintv_yerr(group['pl_avg_pl_quality_ql'])
    ax2_data = ax2_data.append(pd.DataFrame([[shaping, ql_median, ql_yerr]],
                                            columns=ax2_data.columns))
ax2_data.reset_index(drop=True)
ax2.errorbar(ax2_data['shaping'], ax2_data['avg_ql'],
             yerr=list(ax2_data['yerr']), color='black')
plt.ylabel(r"Average Quality $J_f$")

max_mbps = 2.2
tl = [""] * int(2.2 / 0.1)
tl[1] = "0.5"
tl[6] = "1.0"
tl[11] = "1.5"
tl[16] = "2.0"
plt.xticks(np.arange(by_shaping.index.min(), max_mbps, 0.1), tl)
plt.xlim([by_shaping.index.min(), max_mbps])
_ = plt.ylim([0, 3])
```
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
Export notebook to HTML:
!ipython nbconvert avg_quality.ipynb --to html
notebooks/avg_quality.ipynb
csieber/yt-dataset
mit
2 - Outline of the Assignment

You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:

Convolution functions, including:
- Zero Padding
- Convolve window
- Convolution forward
- Convolution backward (optional)

Pooling functions, including:
- Pooling forward
- Create mask
- Distribute value
- Pooling backward (optional)

This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:

<img src="images/model.png" style="width:800px;height:300px;">

Note that for every forward function, there is a corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.

3 - Convolutional Neural Networks

Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of a different size, as shown below.

<img src="images/conv_nn.png" style="width:350px;height:200px;">

In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.

3.1 - Zero-Padding

Zero-padding adds zeros around the border of an image:

<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>

The main benefits of padding are the following:

- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
- It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.

Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note: if you want to pad the array "a" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:

```python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
```
```python
# GRADED FUNCTION: zero_pad

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width
    of an image, as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0,0), (pad, pad), (pad, pad), (0,0)), 'constant', constant_values=0)
    ### END CODE HERE ###

    return X_pad

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print("x.shape =", x.shape)
print("x_pad.shape =", x_pad.shape)
print("x[1,1] =", x[1,1])
print("x_pad[1,1] =", x_pad[1,1])

fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
```
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1,1]**: </td> <td> [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] </td> </tr> <tr> <td> **x_pad[1,1]**: </td> <td> [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] </td> </tr> </table> 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: Takes an input volume Applies a filter at every position of the input Outputs another volume (usually of different size) <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> <caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. Exercise: Implement conv_single_step(). Hint.
```python
# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output
    activation of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice_prev and W. Do not add the bias yet.
    s = np.multiply(a_slice_prev, W)
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)

Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
```
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output:
<table> <tr> <td> **Z** </td> <td> -6.99908945068 </td> </tr> </table>

3.3 - Convolutional Neural Networks - Forward pass

In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:

<center> <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> </video> </center>

Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally, you also have access to the hyperparameters dictionary, which contains the stride and the padding.

Hint:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:

```python
a_slice_prev = a_prev[0:2, 0:2, :]
```

This will be useful when you define a_slice_prev below, using the start/end indexes you will define.

2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below.

<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel.
</center></caption>

Reminder: The formulas relating the output shape of the convolution to the input shape are:

$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor + 1 $$
$$ n_C = \text{number of filters used in the convolution} $$

For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
```python
# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']

    # Compute the dimensions of the CONV output volume using the formula given above.
    # Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev - f + 2*pad)/stride) + 1
    n_W = int((n_W_prev - f + 2*pad)/stride) + 1

    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros((m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                      # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]          # select ith training example's padded activation
        for h in range(n_H):                # loop over vertical axis of the output volume
            for w in range(n_W):            # loop over horizontal axis of the output volume
                for c in range(n_C):        # loop over channels (= #filters) of the output volume
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h*stride
                    vert_end = vert_start + f
                    horiz_start = w*stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c])
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache

np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)
W = np.random.randn(2, 2, 3, 8)
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad": 2, "stride": 2}

Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
```
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
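The output-shape formulas above can be checked numerically; for the test cell's values (4x4 inputs, f = 2, pad = 2, stride = 2) they predict a 4x4 output, matching the shape of Z:

```python
import math

def conv_output_dim(n_prev, f, pad, stride):
    # n = floor((n_prev - f + 2*pad) / stride) + 1
    return math.floor((n_prev - f + 2 * pad) / stride) + 1

# Matches the conv_forward test: 4x4 inputs, 2x2 filters, pad=2, stride=2
print(conv_output_dim(4, f=2, pad=2, stride=2))  # 4
```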
Expected Output: <table> <tr> <td> **Z's mean** </td> <td> 0.0489952035289 </td> </tr> <tr> <td> **Z[3,2,1]** </td> <td> [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] </td> </tr> <tr> <td> **cache_conv[0][1][2][3]** </td> <td> [-0.20075807 0.18656139 0.41005165] </td> </tr> </table>

Finally, a CONV layer should also contain an activation, in which case we would add the following lines of code:

```python
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```

You don't need to do it here.

4 - Pooling layer

The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are:

- Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.

<table> <td> <img src="images/max_pool1.png" style="width:500px;height:300px;"> <td> <td> <img src="images/a_pool.png" style="width:500px;height:300px;"> <td> </table>

These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$, which specifies the height and width of the fxf window you would compute a max or average over.

4.1 - Forward Pooling

Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.

Exercise: Implement the forward pass of the pooling layer. Follow the hints in the comments below.

Reminder: As there's no padding, the formulas binding the output shape of the pooling to the input shape are:

$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor + 1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor + 1 $$
$$ n_C = n_{C_{prev}} $$
```python
# GRADED FUNCTION: pool_forward

def pool_forward(A_prev, hparameters, mode="max"):
    """
    Implements the forward pass of the pooling layer

    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
    """
    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve hyperparameters from "hparameters"
    f = hparameters["f"]
    stride = hparameters["stride"]

    # Define the dimensions of the output
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev

    # Initialize output matrix A
    A = np.zeros((m, n_H, n_W, n_C))

    ### START CODE HERE ###
    for i in range(m):                      # loop over the training examples
        for h in range(n_H):                # loop on the vertical axis of the output volume
            for w in range(n_W):            # loop on the horizontal axis of the output volume
                for c in range(n_C):        # loop over the channels of the output volume
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h*stride
                    vert_end = vert_start + f
                    horiz_start = w*stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    # Compute the pooling operation on the slice.
                    # Use an if statement to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)
                    elif mode == "average":
                        A[i, h, w, c] = np.mean(a_prev_slice)
    ### END CODE HERE ###

    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)

    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))

    return A, cache

np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride": 2, "f": 3}

A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode="average")
print("mode = average")
print("A =", A)
```
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] </td> </tr> </table> Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainer of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly presented them below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: $$ dA += \sum {h=0} ^{n_H} \sum{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$ Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). 
Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into: python da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: $$ dW_c += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} a_{slice} \times dZ_{hw} \tag{2}$$ Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into: python dW[:,:,:,c] += a_slice * dZ[i, h, w, c] 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into: python db[:,:,:,c] += dZ[i, h, w, c] Exercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
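Before tackling the exercise, the three formulas can be sanity-checked on a tiny 1-D convolution (an illustrative sketch, not part of the graded function; the variable names here are made up for the example):

```python
import numpy as np

np.random.seed(1)
a_prev = np.random.randn(5)        # tiny 1-D "activation" from the previous layer
W = np.random.randn(3)             # one 1-D filter
n_H = a_prev.size - W.size + 1     # output length (stride 1, no padding)

# forward pass: Z[h] = sum(a_prev[h:h+3] * W)
Z = np.array([np.sum(a_prev[h:h+3] * W) for h in range(n_H)])

dZ = np.ones(n_H)                  # pretend upstream gradient of the cost w.r.t. Z
dA = np.zeros_like(a_prev)
dW = np.zeros_like(W)
for h in range(n_H):
    dA[h:h+3] += W * dZ[h]         # formula (1): same filter W, different dZ[h]
    dW += a_prev[h:h+3] * dZ[h]    # formula (2): the slice that produced Z[h]
db = dZ.sum()                      # formula (3)
```

A finite-difference check on `dW` confirms that accumulating `a_slice * dZ[h]` over all output positions gives the gradient of the summed output with respect to the filter.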
def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ ### START CODE HERE ### # Retrieve information from "cache" (A_prev, W, b, hparameters) = None # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = None # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = None # Retrieve information from "hparameters" stride = None pad = None # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = None # Initialize dA_prev, dW, db with the correct shapes dA_prev = None dW = None db = None # Pad A_prev and dA_prev A_prev_pad = None dA_prev_pad = None for i in range(None): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = None da_prev_pad = None for h in range(None): # loop over vertical axis of the output volume for w in range(None): # loop over horizontal axis of the output volume for c in range(None): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = None vert_end = None horiz_start = None horiz_end = None # Use the corners to define the slice from a_prev_pad a_slice = None # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None dW[:,:,:,c] += None db[:,:,:,c] += None # Set the ith training example's 
dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = None ### END CODE HERE ### # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db np.random.seed(1) dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db))
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **dA_mean** </td> <td> 1.45243777754 </td> </tr> <tr> <td> **dW_mean** </td> <td> 1.72699145831 </td> </tr> <tr> <td> **db_mean** </td> <td> 7.83923256462 </td> </tr> </table> 5.2 Pooling layer - backward pass Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following: $$ X = \begin{bmatrix} 1 && 3 \ 4 && 2 \end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix} 0 && 0 \ 1 && 0 \end{bmatrix}\tag{4}$$ As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. Exercise: Implement create_mask_from_window(). This function will be helpful for pooling backward. Hints: - np.max() may be helpful. It computes the maximum of an array. - If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that: A[i,j] = True if X[i,j] = x A[i,j] = False if X[i,j] != x - Here, you don't need to consider cases where there are several maxima in a matrix.
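The masking idea in the hints can be previewed on the example matrix from equation (4) with plain NumPy (an illustrative sketch, separate from the graded function):

```python
import numpy as np

X = np.array([[1., 3.],
              [4., 2.]])
M = (X == np.max(X))  # boolean mask: True only where X attains its maximum
print(M)              # [[False False]
                      #  [ True False]]
```

Because the comparison broadcasts the scalar maximum against the whole array, the mask has the same shape as the window, exactly as required for the pooling backward pass.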
def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. """ ### START CODE HERE ### (≈1 line) mask = None ### END CODE HERE ### return mask np.random.seed(1) x = np.random.randn(2,3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask)
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. For example, if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix} 1/4 && 1/4 \ 1/4 && 1/4 \end{bmatrix}\tag{5}$$ This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average. Exercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint: np.ones() may be helpful.
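The averaging idea can be illustrated directly with plain NumPy, matching equation (5) (a sketch, separate from the graded function):

```python
import numpy as np

dz = 1.0
shape = (2, 2)                        # size of the pooling window
n_H, n_W = shape
a = np.full(shape, dz / (n_H * n_W))  # distribute dz equally over the window
print(a)                              # every entry is 1/4
```

Summing the resulting matrix recovers `dz`, which is exactly the property the average-pooling backward pass needs.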
def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ ### START CODE HERE ### # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = None # Compute the value to distribute on the matrix (≈1 line) average = None # Create a matrix where every entry is the "average" value (≈1 line) a = None ### END CODE HERE ### return a a = distribute_value(2, (2,2)) print('distributed value =', a)
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
Expected Output: <table> <tr> <td> distributed_value = </td> <td> [[ 0.5 0.5] <br\> [ 0.5 0.5]] </td> </tr> </table> 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. Exercise: Implement the pool_backward function in both modes ("max" and "average"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.
def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ ### START CODE HERE ### # Retrieve information from cache (≈1 line) (A_prev, hparameters) = None # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = None f = None # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = None m, n_H, n_W, n_C = None # Initialize dA_prev with zeros (≈1 line) dA_prev = None for i in range(None): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = None for h in range(None): # loop on the vertical axis for w in range(None): # loop on the horizontal axis for c in range(None): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = None vert_end = None horiz_start = None horiz_end = None # Compute the backward propagation in both modes. if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = None # Create the mask from a_prev_slice (≈1 line) mask = None # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None elif mode == "average": # Get the value a from dA (≈1 line) da = None # Define the shape of the filter as fxf (≈1 line) shape = None # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. 
(≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None ### END CODE ### # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) dA = np.random.randn(5, 4, 2, 2) dA_prev = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) print() dA_prev = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1])
course-deeplearning.ai/course4-cnn/week1-cnn/Convolution+model+-+Step+by+Step+-+v2.ipynb
liufuyang/deep_learning_tutorial
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Creating a Pandas DataFrame from a CSV file<br><br></p>
data = pd.read_csv('./weather/minute_weather.csv')
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold">Minute Weather Data Description</p> <br> The minute weather dataset comes from the same source as the daily weather dataset that we used in the decision-tree-based classifier notebook. The main difference between these two datasets is that the minute weather dataset contains raw sensor measurements captured at one-minute intervals, whereas the daily weather dataset contained processed and well-curated data. The data is in the file minute_weather.csv, which is a comma-separated file. As with the daily weather data, this data comes from a weather station located in San Diego, California. The weather station is equipped with sensors that capture weather-related measurements such as air temperature, air pressure, and relative humidity. Data was collected for a period of three years, from September 2011 to September 2014, to ensure that sufficient data for different seasons and weather conditions is captured. Each row in minute_weather.csv contains weather data captured for a one-minute interval. 
Each row, or sample, consists of the following variables: rowID: unique number for each row (Unit: NA) hpwren_timestamp: timestamp of measure (Unit: year-month-day hour:minute:second) air_pressure: air pressure measured at the timestamp (Unit: hectopascals) air_temp: air temperature measured at the timestamp (Unit: degrees Fahrenheit) avg_wind_direction: wind direction averaged over the minute before the timestamp (Unit: degrees, with 0 meaning the wind comes from the North, increasing clockwise) avg_wind_speed: wind speed averaged over the minute before the timestamp (Unit: meters per second) max_wind_direction: highest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise) max_wind_speed: highest wind speed in the minute before the timestamp (Unit: meters per second) min_wind_direction: smallest wind direction in the minute before the timestamp (Unit: degrees, with 0 being North and increasing clockwise) min_wind_speed: smallest wind speed in the minute before the timestamp (Unit: meters per second) rain_accumulation: amount of accumulated rain measured at the timestamp (Unit: millimeters) rain_duration: length of time rain has fallen as measured at the timestamp (Unit: seconds) relative_humidity: relative humidity measured at the timestamp (Unit: percent)
data.shape data.head()
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Data Sampling<br></p> Lots of rows, so let us sample down by taking every 10th row. <br>
sampled_df = data[(data['rowID'] % 10) == 0] sampled_df.shape
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Statistics <br><br></p>
sampled_df.describe().transpose() sampled_df[sampled_df['rain_accumulation'] == 0].shape sampled_df[sampled_df['rain_duration'] == 0].shape
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Drop the rain_accumulation and rain_duration Columns, then Remove Rows with Missing Values <br><br></p>
del sampled_df['rain_accumulation'] del sampled_df['rain_duration'] rows_before = sampled_df.shape[0] sampled_df = sampled_df.dropna() rows_after = sampled_df.shape[0]
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> How many rows did we drop ? <br><br></p>
rows_before - rows_after sampled_df.columns
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Select Features of Interest for Clustering <br><br></p>
features = ['air_pressure', 'air_temp', 'avg_wind_direction', 'avg_wind_speed', 'max_wind_direction', 'max_wind_speed','relative_humidity'] select_df = sampled_df[features] select_df.columns select_df
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Scale the Features using StandardScaler <br><br></p>
X = StandardScaler().fit_transform(select_df) X
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
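StandardScaler subtracts each column's mean and divides by its standard deviation, so every feature ends up with mean ≈ 0 and standard deviation ≈ 1. A quick self-contained sanity check of that transform (synthetic data here, so the weather file is not needed for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=50.0, scale=5.0, size=(1000, 3))  # made-up raw features

# what StandardScaler's fit_transform computes column by column
scaled = (data - data.mean(axis=0)) / data.std(axis=0)
```

This matters for k-means because the algorithm uses Euclidean distance; without scaling, features with large ranges (e.g. air pressure in hectopascals) would dominate the clustering.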
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> Use k-Means Clustering <br><br></p>
kmeans = KMeans(n_clusters=12) model = kmeans.fit(X) print("model\n", model)
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:1.75em;color:purple; font-style:bold"><br> What are the centers of 12 clusters we formed ? <br><br></p>
centers = model.cluster_centers_ centers
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
<p style="font-family: Arial; font-size:2.75em;color:purple; font-style:bold"><br> Plots <br><br></p> Let us first create some utility functions which will help us in plotting graphs:
# Function that creates a DataFrame with a column for Cluster Number def pd_centers(featuresUsed, centers): colNames = list(featuresUsed) colNames.append('prediction') # Zip with a column called 'prediction' (index) Z = [np.append(A, index) for index, A in enumerate(centers)] # Convert to pandas data frame for plotting P = pd.DataFrame(Z, columns=colNames) P['prediction'] = P['prediction'].astype(int) return P # Function that creates Parallel Plots def parallel_plot(data): my_colors = list(islice(cycle(['b', 'r', 'g', 'y', 'k']), None, len(data))) plt.figure(figsize=(15,8)).gca().axes.set_ylim([-3,+3]) parallel_coordinates(data, 'prediction', color = my_colors, marker='o') P = pd_centers(features, centers) P
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
Dry Days
parallel_plot(P[P['relative_humidity'] < -0.5])
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
Warm Days
parallel_plot(P[P['air_temp'] > 0.5])
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
Cool Days
parallel_plot(P[(P['relative_humidity'] > 0.5) & (P['air_temp'] < 0.5)])
Week-7-MachineLearning/Weather Data Clustering using k-Means.ipynb
kkhenriquez/python-for-data-science
mit
You'll need to download some resources for NLTK (the natural language toolkit) in order to do the kind of processing we want on all the mailing list text. In particular, for this notebook you'll need punkt, the Punkt Tokenizer Models. To download, from an interactive Python shell, run: import nltk nltk.download() And in the graphical UI that appears, choose "punkt" from the All Packages tab and Download.
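The cleaning steps in the cell below (strip apostrophes, replace non-word characters with spaces, lowercase) can be previewed on a single string; NLTK's punkt tokenizer is swapped for a plain split() here so the sketch stays self-contained:

```python
import re

text = "Don't panic; the answer is 42."
w = text.replace("'", "")             # strip apostrophes, as in the cell below
k = re.sub(r'[^\w]', ' ', w).lower()  # non-word characters -> spaces, then lowercase
tokens = k.split()
print(tokens)  # ['dont', 'panic', 'the', 'answer', 'is', '42']
```

After this normalization, each token is stemmed and compared against the chosen checkword, which is why contractions like "don't" collapse to "dont" in the counts.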
df = pd.DataFrame(columns=["MessageId","Date","From","In-Reply-To","Count"]) for row in archives[0].data.iterrows(): try: w = row[1]["Body"].replace("'", "") k = re.sub(r'[^\w]', ' ', w) k = k.lower() t = nltk.tokenize.word_tokenize(k) subdict = {} count = 0 for g in t: try: word = st.stem(g) except: print g pass if word == checkword: count += 1 if count == 0: continue else: subdict["MessageId"] = row[0] subdict["Date"] = row[1]["Date"] subdict["From"] = row[1]["From"] subdict["In-Reply-To"] = row[1]["In-Reply-To"] subdict["Count"] = count df = df.append(subdict,ignore_index=True) except: if row[1]["Body"] is None: print '!!! Detected an email with an empty Body field...' else: print 'error' df[:5] #dataframe of informations of the particular word.
examples/Single Word Trend.ipynb
npdoty/bigbang
agpl-3.0
Group the dataframe by the month and year, and aggregate the counts for the checkword during each month to get a quick histogram of how frequently that word has been used over time.
df.groupby([df.Date.dt.year, df.Date.dt.month]).agg({'Count':np.sum}).plot(y='Count')
examples/Single Word Trend.ipynb
npdoty/bigbang
agpl-3.0
9.5. Prescribed Fields Aod Plus Ccn Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed as AOD plus CCNs.
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s)
notebooks/ipsl/cmip6/models/sandbox-1/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.2. Internal Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact aerosol internal mixture?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/ipsl/cmip6/models/sandbox-1/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
13.3. External Mixture Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does H2O impact aerosol external mixture?
# PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s)
notebooks/ipsl/cmip6/models/sandbox-1/aerosol.ipynb
ES-DOC/esdoc-jupyterhub
gpl-3.0
1. Load Data Using torchvision and torch.utils.data for data loading. We train a model to classify ants and bees: 120 training images and 75 validation images for each class. data link
# Data augmentation and normalization for training # Just normalization for validation data_transforms = { 'train': transforms.Compose([ transforms.RandomSizedCrop(224), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225]) ]), 'val': transforms.Compose([ transforms.Scale(256), transforms.CenterCrop(224), transforms.ToTensor(), transforms.Normalize([0.485,0.456,0.406],[0.229, 0.224, 0.225]) ]), } data_dir = 'hymenoptera_data' image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x]) for x in ['train', 'val']} dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True, num_workers=4) for x in ['train','val']} dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']} class_names = image_datasets['train'].classes use_gpu = torch.cuda.is_available() torchvision.transforms.Scale??
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
Init signature: torchvision.transforms.Scale(*args, **kwargs) Source: class Scale(Resize): """ Note: This transform is deprecated in favor of Resize. """ def __init__(self, *args, **kwargs): warnings.warn("The use of the transforms.Scale transform is deprecated, " + "please use transforms.Resize instead.") super(Scale, self).__init__(*args, **kwargs)
torchvision.transforms.Resize??
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
``` Init signature: torchvision.transforms.Resize(size, interpolation=2) Source: class Resize(object): """Resize the input PIL Image to the given size. Args: size (sequence or int): Desired output size. If size is a sequence like (h, w), output size will be matched to this. If size is an int, smaller edge of the image will be matched to this number. i.e, if height &gt; width, then image will be rescaled to (size * height / width, size) interpolation (int, optional): Desired interpolation. Default is ``PIL.Image.BILINEAR`` """ def __init__(self, size, interpolation=Image.BILINEAR): assert isinstance(size, int) or (isinstance(size, collections.Iterable) and len(size) == 2) self.size = size self.interpolation = interpolation ``` 2. Visualize a few images
plt.pause?

def imshow(inp, title=None):
    """Imshow for Tensor"""
    inp = inp.numpy().transpose((1,2,0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))

# Make a grid from batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
Huh, cool 3. Training the model Scheduling the learning rate Saving the best model Parameter scheduler is an LR scheduler object from torch.optim.lr_scheduler
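The keep-the-best-weights pattern used below can be sketched in isolation. One subtlety worth flagging: with a real nn.Module you should deep-copy the result of state_dict(), since it returns references to tensors that keep changing as training continues. The TinyModel class here is a made-up stand-in whose state_dict() already returns a deep copy:

```python
import copy

class TinyModel:
    """Stand-in for nn.Module: holds a dict of weights."""
    def __init__(self):
        self.weights = {'w': 0.0}
    def state_dict(self):
        # a real nn.Module returns live references; deep-copy to snapshot
        return copy.deepcopy(self.weights)

model = TinyModel()
best_acc, best_wts = 0.0, model.state_dict()
for epoch_acc in [0.4, 0.7, 0.6]:     # pretend validation accuracies
    model.weights['w'] += 1.0         # pretend an epoch of training updated weights
    if epoch_acc > best_acc:          # snapshot only when validation improves
        best_acc, best_wts = epoch_acc, model.state_dict()
```

After the loop, `best_wts` holds the weights from the second epoch even though the model kept training afterwards, which is exactly what loading the best weights at the end relies on.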
import copy

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0

    for epoch in range(num_epochs):
        print(f'Epoch {epoch}/{num_epochs-1}')
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train(True)   # Set model to training mode
            else:
                model.train(False)  # Set model to evaluation mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for data in dataloaders[phase]:
                # get the inputs
                inputs, labels = data

                # wrap them in Variable
                if use_gpu:
                    inputs = Variable(inputs.cuda())
                    labels = Variable(labels.cuda())
                else:
                    inputs, labels = Variable(inputs), Variable(labels)

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                outputs = model(inputs)
                _, preds = torch.max(outputs.data, 1)
                loss = criterion(outputs, labels)

                # backward + optimize only if in training phase
                if phase == 'train':
                    loss.backward()
                    optimizer.step()

                # statistics
                running_loss += loss.data[0]
                running_corrects += torch.sum(preds == labels.data)

            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects / dataset_sizes[phase]

            print(f'{phase} Loss: {epoch_loss:.4f} Acc: {epoch_acc:.4f}')

            # deep copy the model   ### <-- ooo this is very cool. .state_dict() & acc
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())

        print()

    time_elapsed = time.time() - since
    print(f'Training complete in {time_elapsed//60:.0f}m {time_elapsed%60:.0f}s')
    print(f'Best val Acc: {best_acc:.4f}')

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
4. Visualizing the model's predictions
def visualize_model(model, num_images=6): images_so_far = 0 fig = plt.figure() for i, data in enumerate(dataloaders['val']): inputs, labels = data if use_gpu: inputs, labels = Variable(inputs.cuda()), Variable(labels.cuda()) else: inputs, labels = Variable(inputs), Variable(labels) outputs = model(inputs) _, preds = torch.max(outputs.data, 1) for j in range(inputs.size()[0]): images_so_far += 1 ax = plt.subplot(num_images//2, 2, images_so_far) ax.axis('off') ax.set_title(f'predicted: {class_names[preds[j]]}') imshow(inputs.cpu().data[j]) if images_so_far == num_images: return
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
``` Variable.cpu(self) Source: def cpu(self): return self.type(getattr(torch, type(self.data).name)) ```
# looking at the cpu() method temp = Variable(torch.FloatTensor([1,2])) temp.cpu()
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
5. Finetuning the ConvNet Load a pretrained model and reset final fully-connected layer
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)

if use_gpu:
    model_ft = model_ft.cuda()

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
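The StepLR schedule just defined multiplies the learning rate by gamma every step_size epochs. Its effect over a 25-epoch run can be computed directly from the closed-form rule lr = base_lr * gamma ** (epoch // step_size) (a sketch with the values used above):

```python
base_lr, gamma, step_size = 0.001, 0.1, 7
lrs = [base_lr * gamma ** (epoch // step_size) for epoch in range(25)]
# the rate drops by 10x at epochs 7, 14 and 21
print(lrs[0], lrs[7], lrs[14], lrs[21])
```

So with num_epochs=25 the optimizer sees four learning-rate plateaus: 1e-3, 1e-4, 1e-5, and 1e-6.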
``` torch.optim.lr_scheduler.StepLR --> defines `get_lr(self): def get_lr(self): return [base_lr * self.gamma ** (self.last_epoch // self.step_size) for base_lr in self.base_lrs] ``` so gamma is exponentiated by ( last_epoch // step_size ) 5.1 Train and Evaluate Should take 15-25 min on CPU; < 1 min on GPU.
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25) visualize_model(model_ft)
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
6. ConvNet as a fixed feature extractor Freeze the entire network except the final layer. We need to set requires_grad = False to freeze the parameters so that gradients are not computed in backward(). Link to Documentation
model_conv = torchvision.models.resnet18(pretrained=True)
for par in model_conv.parameters():
    par.requires_grad = False

# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)

if use_gpu:
    model_conv = model_conv.cuda()

criterion = nn.CrossEntropyLoss()

# Observe that only parameters of the final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
6.1 Train and evaluate On CPU this will take about half the time of the previous run. This is expected, as gradients don't need to be computed for most of the network; the forward pass, however, still has to be computed in full.
model_conv = train_model(model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25) visualize_model(model_conv) plt.ioff() plt.show()
pytorch/transfer_learning_tutorial.ipynb
WNoxchi/Kaukasos
mit
Parameter prior bayesloop employs a forward-backward algorithm that is based on Hidden Markov models. This inference algorithm iteratively produces a parameter distribution for each time step, but it has to start these iterations from a specified probability distribution - the parameter prior. All built-in observation models already have a predefined prior, stored in the attribute prior. Here, the prior distribution is stored as a Python function that takes as many arguments as there are parameters in the observation model. The prior distributions can be looked up directly within observationModels.py. For the Poisson model discussed in this tutorial, the default prior distribution is defined in a method called jeffreys as def jeffreys(x): return np.sqrt(1. / x) corresponding to the non-informative Jeffreys prior, $p(\lambda) \propto 1/\sqrt{\lambda}$. This type of prior can also be determined automatically for arbitrary user-defined observation models, see here. Prior functions and arrays To change the predefined prior of a given observation model, one can add the keyword argument prior when defining an observation model. There are different ways of defining a parameter prior in bayesloop: If prior=None is set, bayesloop will assign equal probability to all parameter values, resulting in a uniform prior distribution within the specified parameter boundaries. One can also directly supply a Numpy array with prior probability (density) values. The shape of the array must match the shape of the parameter grid! Another way to define a custom prior is to provide a function that takes exactly as many arguments as there are parameters in the defined observation model. bayesloop will then evaluate the function for all parameter values and assign the corresponding probability values. 
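The third option (a prior function evaluated on the parameter grid) can be mimicked with plain NumPy to see what happens internally, including the re-normalization step (an illustrative sketch; the grid values simply mirror the bl.oint(0, 6, 1000) interval used elsewhere in this tutorial):

```python
import numpy as np

# illustrative parameter grid, mirroring bl.oint(0, 6, 1000)
grid = np.linspace(0.003, 5.997, 1000)
dx = grid[1] - grid[0]

prior_fn = lambda x: np.sqrt(1. / x)  # Jeffreys prior for the Poisson rate
prior = prior_fn(grid)                # evaluate the function on the grid ...
prior = prior / (prior.sum() * dx)    # ... and re-normalize to unit probability mass
```

After normalization the values form a proper discrete density over the grid, which is why the prior passed to bayesloop does not need to be normalized beforehand.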
<div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> **Note:** In all of the cases described above, *bayesloop* will re-normalize the provided prior values, so they do not need to be passed in a normalized form. Below, we describe the possibility of using probability distributions from the SymPy stats module as prior distributions, which are not re-normalized by *bayesloop*. </div> Next, we illustrate the difference between the Jeffreys prior and a flat, uniform prior with a very simple inference example: We fit the coal mining example data set using the Poisson observation model and further assume the rate parameter to be static:
# we assume a static rate parameter for simplicity
S.set(bl.tm.Static())

print('Fit with built-in Jeffreys prior:')
S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000)))
S.fit()
jeffreys_mean = S.getParameterMeanValues('accident_rate')[0]
print('-----\n')

print('Fit with custom flat prior:')
S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000),
                    prior=lambda x: 1.))  # alternatives: prior=None, prior=np.ones(1000)
S.fit()
flat_mean = S.getParameterMeanValues('accident_rate')[0]
docs/source/tutorials/priordistributions.ipynb
christophmark/bayesloop
mit
First note that the model evidence indeed slightly changes due to the different choices of the parameter prior. Second, one may notice that the posterior mean value of the flat-prior-fit does not exactly match the arithmetic mean of the data. This small deviation shows that a flat/uniform prior is not completely non-informative for a Poisson model! The fit using the Jeffreys prior, however, succeeds in reproducing the frequentist estimate, i.e. the arithmetic mean:
print('arithmetic mean     = {}'.format(np.mean(S.rawData)))
print('flat-prior mean     = {}'.format(flat_mean))
print('Jeffreys prior mean = {}'.format(jeffreys_mean))
docs/source/tutorials/priordistributions.ipynb
christophmark/bayesloop
mit
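As an aside, the array-based prior option described above can be sketched with plain NumPy. The grid construction below is one plausible way to build the open interval ]0, 6[ with 1000 points, mirroring `bl.oint(0, 6, 1000)`; the exact spacing is an assumption for illustration, not bayesloop's internal implementation:

```python
import numpy as np

# One plausible open-interval grid ]0, 6[ with 1000 points (endpoints excluded);
# this mirrors bl.oint(0, 6, 1000) but is an assumption, not bayesloop's own code
grid = np.linspace(0, 6, 1002)[1:-1]

# Jeffreys prior values p(lam) ~ 1/sqrt(lam), evaluated on the grid
prior_values = 1.0 / np.sqrt(grid)

# bayesloop re-normalizes such arrays internally; done explicitly here for clarity
prior_values /= prior_values.sum()

# prior_values now has shape (1000,) and sums to 1; an array like this could be
# passed via the 'prior' keyword argument of the observation model
```

Passing such an array is equivalent in spirit to passing the function `lambda x: np.sqrt(1. / x)`, since both are normalized internally.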
SymPy prior The second option is based on the SymPy module that introduces symbolic mathematics to Python. Its sub-module sympy.stats covers a wide range of discrete and continuous random variables. The keyword argument prior also accepts a list of sympy.stats random variables, one for each parameter (if there is only one parameter, the list can be omitted). The multiplicative joint probability density of these random variables is then used as the prior distribution. The following example defines an exponential prior for the Poisson model, favoring small values of the rate parameter:
import sympy.stats

S.set(bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000),
                    prior=sympy.stats.Exponential('expon', 1)))
S.fit()
docs/source/tutorials/priordistributions.ipynb
christophmark/bayesloop
mit
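One caveat with SymPy priors is that they are not re-normalized to the specified parameter interval, so it is worth checking how much of the prior's probability mass the interval actually covers. For the Exponential(1) prior on ]0, 6[ this is a one-line computation (plain Python, no bayesloop needed):

```python
import math

# CDF of Exponential(rate=1) at the upper parameter boundary lam = 6:
# the fraction of prior mass that lies inside ]0, 6[
mass_inside = 1.0 - math.exp(-6.0)

print(mass_inside)  # ~0.9975, so the boundary cuts off only ~0.25% of the prior
```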
Note that one needs to assign a name to each sympy.stats variable. In this case, the output of bayesloop shows the mathematical formula that defines the prior. This is possible because of the symbolic representation of the prior by SymPy. <div style="background-color: #e7f2fa; border-left: 5px solid #6ab0de; padding: 0.5em; margin-top: 1em; margin-bottom: 1em"> **Note:** The support interval of a prior distribution defined via SymPy can deviate from the parameter interval specified in *bayesloop*. In the example above, we specified the parameter interval ]0, 6[, while the exponential prior has the support ]0, $\infty$[. SymPy priors are not re-normalized with respect to the specified parameter interval. Be aware that the resulting model evidence value will only be correct if no parameter values outside of the parameter boundaries gain significant probability values. In most cases, one can simply check whether the parameter distribution has sufficiently *fallen off* at the parameter boundaries. </div> Hyper-parameter priors As shown before, hyper-studies and change-point studies can be used to determine the full distribution of hyper-parameters (the parameters of the transition model). As for the time-varying parameters of the observation model, one might have prior knowledge about the values of certain hyper-parameters that can be included in the study to refine the resulting distribution of these hyper-parameters. Hyper-parameter priors can be defined just as regular priors, either by an arbitrary function or by a list of sympy.stats random variables. In a first example, we return to the simple change-point model of the coal-mining data set and perform two fits of the change-point: first, we specify no hyper-prior for the time step of our change-point, assuming equal probability for each year in our data set.
Second, we define a Normal distribution around the year 1920 with a (rather unrealistic) standard deviation of 5 years as the hyper-prior using a SymPy random variable. For both fits, we plot the change-point distribution to show the differences induced by the different priors:
print('Fit with flat hyper-prior:')
S = bl.ChangepointStudy()
S.loadExampleData()

L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))
T = bl.tm.ChangePoint('tChange', 'all')
S.set(L, T)
S.fit()

plt.figure(figsize=(8,4))
S.plot('tChange', facecolor='g', alpha=0.7)
plt.xlim([1870, 1930])
plt.show()
print('-----\n')

print('Fit with custom normal prior:')
T = bl.tm.ChangePoint('tChange', 'all', prior=sympy.stats.Normal('norm', 1920, 5))
S.set(T)
S.fit()

plt.figure(figsize=(8,4))
S.plot('tChange', facecolor='g', alpha=0.7)
plt.xlim([1870, 1930]);
docs/source/tutorials/priordistributions.ipynb
christophmark/bayesloop
mit
Since we used a quite narrow prior (containing a lot of information) in the second case, the resulting distribution is strongly shifted towards the prior. The following example revisits the model with two break-points from earlier, in which a linear decrease with a varying slope serves as a hyper-parameter. Here, we define a Gaussian prior for the slope hyper-parameter via a lambda function, centered around the value -0.2 with a standard deviation of 0.4. For simplicity, we set the break-points to fixed years.
S = bl.HyperStudy()
S.loadExampleData()

L = bl.om.Poisson('accident_rate', bl.oint(0, 6, 1000))

T = bl.tm.SerialTransitionModel(
        bl.tm.Static(),
        bl.tm.BreakPoint('t_1', 1880),
        bl.tm.Deterministic(
            lambda t, slope=np.linspace(-2.0, 0.0, 30): t*slope,
            target='accident_rate',
            prior=lambda slope: np.exp(-0.5*((slope + 0.2)/(2*0.4))**2)/0.4),
        bl.tm.BreakPoint('t_2', 1900),
        bl.tm.Static()
    )

S.set(L, T)
S.fit()
docs/source/tutorials/priordistributions.ipynb
christophmark/bayesloop
mit
Using the familiar statistical modeling API, we import the AgglomerativeClustering algorithm and specify the desired number of clusters:
from sklearn import cluster

agg = cluster.AgglomerativeClustering(n_clusters=3)
notebooks/08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb
mbeyeler/opencv-machine-learning
mit
Fitting the model to the data works, as usual, via the fit_predict method:
labels = agg.fit_predict(X)
notebooks/08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb
mbeyeler/opencv-machine-learning
mit
We can generate a scatter plot where every data point is colored according to the predicted label:
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')

plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=labels, s=100)
notebooks/08.04-Implementing-Agglomerative-Hierarchical-Clustering.ipynb
mbeyeler/opencv-machine-learning
mit
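The same bottom-up merging can also be inspected directly with SciPy's hierarchy module, which exposes the full linkage tree that an agglomerative clusterer builds internally. The data below is synthetic and Ward linkage is assumed (matching scikit-learn's default); this is a sketch, not part of the original example:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Three well-separated synthetic blobs of 20 points each
rng = np.random.RandomState(42)
X = np.vstack([rng.randn(20, 2) + offset
               for offset in ([0, 0], [10, 0], [0, 10])])

# Build the full merge tree with Ward linkage, then cut it into 3 flat clusters
Z = linkage(X, method='ward')
labels = fcluster(Z, t=3, criterion='maxclust')

print(len(np.unique(labels)))  # 3
```

Cutting the tree at a different `t` yields a different number of flat clusters without recomputing the linkage, which is the main practical advantage over re-fitting `AgglomerativeClustering` with a new `n_clusters`.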
Let's open our test project by its name. If you completed the first examples this should all work out of the box.
from adaptivemd import Project  # assuming the package layout from the earlier examples

project = Project('test')
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Open all connections to the MongoDB and Session so we can get started. An interesting thing to note here is that, since we use a DB in the back, data is synced between notebooks. If you want to see how this works, just run some tasks in the last example, go back here and check on the change of the contents of the project. Let's see where we are. These numbers will depend on whether you run this notebook for the first time or continue from a previous run. Unless you delete your project, it will accumulate models and files over time, as is our ultimate goal.
print(project.files)
print(project.generators)
print(project.models)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Run simulations Now we really start simulations. The general way to do so is to create a simulation task and then submit it to a cluster to be executed. A Task object is a general description of what should be done and boils down to staging some files to your working directory, executing a bash script and finally moving files back from your working directory to a shared storage. RP takes care of most of this very elegantly, and hence a Task is designed to cover these capabilities in a simpler and more Pythonic way. For example, there is an RPC Python Call Task that allows you to execute a function remotely and pull back the results. Functional Events We first want to look into a way to run Python code asynchronously in the project. For this, write a function that should be executed. Start with opening a scheduler or using an existing one (in the latter case you need to make sure that when it is executed - which can take a while - the scheduler still exists). If the function should pause, write yield {condition_to_continue}. This will interrupt your script until the condition you yield returns True when called.
def strategy():
    # create a new scheduler
    with project.get_scheduler(cores=2) as local_scheduler:
        for loop in range(10):
            tasks = local_scheduler(project.new_ml_trajectory(
                length=100, number=10))
            yield tasks.is_done()

        task = local_scheduler(modeller.execute(list(project.trajectories)))
        yield task.is_done
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
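The pause-on-yield mechanism can be illustrated without adaptivemd at all: a driver advances the generator, and whenever a callable is yielded it re-checks that condition until it returns True. All names below are hypothetical and exist only for illustration:

```python
def run_event(strategy_gen, poll_all_conditions):
    """Minimal driver: advance the generator; each yielded value is a
    condition callable that must return True before we continue."""
    for condition in strategy_gen:
        while not condition():
            poll_all_conditions()  # in a real framework: wait / process tasks

# Toy 'task' that reports done after 3 polls
class ToyTask:
    def __init__(self):
        self.polls = 0
    def is_done(self):
        return self.polls >= 3
    def poll(self):
        self.polls += 1

task = ToyTask()
order = []

def strategy():
    order.append('submitted')
    yield task.is_done          # pause here until the task reports done
    order.append('continued')

run_event(strategy(), task.poll)
print(order)  # ['submitted', 'continued']
```

Note that the generator yields the bound method `task.is_done` itself, not its result, exactly as described above: the driver is the one who calls it repeatedly.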
To turn your function into a generator, pass strategy() (and not strategy) to the FunctionalEvent:
ev = FunctionalEvent(strategy())
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
and execute the event inside your project
project.add_event(ev)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
after some time you will have 10 more trajectories. Just like that. Let's see how our project is growing
import sys
import time
from IPython.display import clear_output

try:
    while True:
        clear_output(wait=True)
        print('# of files  %8d : %s' % (len(project.trajectories), '#' * len(project.trajectories)))
        print('# of models %8d : %s' % (len(project.models), '#' * len(project.models)))
        sys.stdout.flush()
        time.sleep(1)
except KeyboardInterrupt:
    pass
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
And some analysis
trajs = project.trajectories

q = {}
ins = {}
for f in trajs:
    source = f.frame if isinstance(f.frame, File) else f.frame.trajectory
    ind = 0 if isinstance(f.frame, File) else f.frame.index
    ins[source] = ins.get(source, []) + [ind]
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Event
scheduler = project.get_scheduler(cores=2)

def strategy1():
    for loop in range(10):
        tasks = scheduler(project.new_ml_trajectory(
            length=100, number=10))
        yield tasks.is_done()

def strategy2():
    for loop in range(10):
        num = len(project.trajectories)
        task = scheduler(modeller.execute(list(project.trajectories)))
        yield task.is_done
        yield project.on_ntraj(num + 5)

project._events = []
# pass generators (strategy1(), not strategy1), as explained above
project.add_event(FunctionalEvent(strategy1()))
project.add_event(FunctionalEvent(strategy2()))

project.close()
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Tasks To actually run simulations you need to have a scheduler (maybe a better name?). This instance can execute tasks, or, more precisely, you can use it to submit tasks, which will be converted to ComputeUnitDescriptions and executed on the previously chosen cluster.
scheduler = project.get_scheduler(cores=2) # get the default scheduler using 2 cores
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Now we are good to go and can run a first simulation This works by creating a Trajectory object with a filename, a length and an initial frame. The engine will then take this information and create a real trajectory with exactly this name, this initial frame and the given length. Since this is such a common task, you can also submit just a Trajectory without the need to convert it to a Task first (which the engine can also do). Our project can create new names automatically, so we request 4 new trajectories of length 100, starting from the existing pdb_file we used to initialize the engine.
trajs = project.new_trajectory(pdb_file, 100, 4)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Let's submit and see
scheduler.submit(trajs)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Once the trajectories exist, these objects will be saved to the database. It might be a little confusing to have objects before they exist, but this way you can actually reference and work with these trajectories even before they exist. This would allow us to write a function that triggers when the trajectory comes into existence, but we are not doing this right now. wait is dangerous since it is blocking: you cannot do anything until all tasks are finished. Normally you do not need it, especially in interactive sessions.
scheduler.wait()
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Look at all the files our project now contains.
print('# of files', len(project.files))
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Great! That was easy (I hope you agree). Next we want to run a simple analysis.
t = modeller.execute(list(project.trajectories))
scheduler(t)
scheduler.wait()
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Let's look at the model we generated
print(project.models.last.data.keys())
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
And pick some information
print(project.models.last.data['msm']['P'])
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
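Whatever its fitted values, an MSM transition matrix such as P must be row-stochastic: every row holds the outgoing transition probabilities of one state and therefore sums to one. A quick sanity check on a toy matrix (made-up values, not the fitted model):

```python
import numpy as np

# Toy 3-state transition matrix (hypothetical values)
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

# Each row must sum to 1 and all entries must be non-negative
assert np.allclose(P.sum(axis=1), 1.0)
assert np.all(P >= 0)
```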
The next example will demonstrate how to write a full adaptive loop Events A new concept. Tasks are great and do work for us, but so far we needed to submit tasks ourselves. In adaptive simulations we want this to happen automagically. To help with this, events exist. These are basically a task_generator coupled with conditions on when it should be executed. Let's write a little task generator (in essence a function that returns tasks)
def task_generator():
    return [
        engine.task_run_trajectory(traj) for traj in
        project.new_ml_trajectory(100, 4)]

task_generator()
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
Now create an event.
ev = Event().on(project.on_ntraj(range(20,22,2))).do(task_generator)
examples/rp/3_example_adaptive.ipynb
markovmodel/adaptivemd
lgpl-2.1
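What `Event().on(...).do(...)` couples together can be mimicked in a few lines of plain Python: a trigger condition plus a task generator that fires whenever the condition holds. All names below are hypothetical sketches, not the adaptivemd implementation:

```python
class ToyEvent:
    """Sketch of an event: fire a task generator each time a condition holds."""
    def __init__(self):
        self._condition = None
        self._generator = None

    def on(self, condition):
        self._condition = condition
        return self  # fluent interface, like Event().on(...).do(...)

    def do(self, task_generator):
        self._generator = task_generator
        return self

    def trigger(self, state):
        # called by the project loop; returns new tasks when the condition fires
        if self._condition(state):
            return self._generator()
        return []

ev = ToyEvent().on(lambda n_trajs: n_trajs >= 20).do(lambda: ['new_task'] * 4)

print(ev.trigger(10))  # [] -> condition not met yet
print(ev.trigger(20))  # ['new_task', 'new_task', 'new_task', 'new_task']
```

The fluent on/do chain simply stores the condition and the generator; the project's own loop decides when to call trigger, which is what makes adaptive submission "automagic".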