In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument: dashed red: 'r--'; blue circles: 'bo'; dotted black: 'k.'. Write a plot_sine2(a, b, style) function whose third style argument sets the line style of the plot. The style should default to a blue line.
# YOUR CODE HERE
raise NotImplementedError()

plot_sine2(4.0, -1.0, 'r--')
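A possible implementation is sketched below; the x-range $[0, 4\pi]$ and the sample count are assumptions, since the earlier plot_sine exercise is not reproduced here:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the sketch runs headless
import matplotlib.pyplot as plt

def plot_sine2(a, b, style='b-'):
    """Plot sin(a*x + b) with a matplotlib format string; defaults to a blue line."""
    x = np.linspace(0, 4 * np.pi, 300)  # assumed x-range
    plt.plot(x, np.sin(a * x + b), style)
    plt.xlabel('x')
    plt.ylabel('sin(a*x + b)')

plot_sine2(4.0, -1.0, 'r--')  # dashed red line
```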
assignments/assignment05/InteractEx02.ipynb
sthuggins/phys202-2015-work
mit
Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop-down menu for selecting the line style between a dotted blue line, black circles and red triangles.
# YOUR CODE HERE
raise NotImplementedError()

assert True # leave this for grading the plot_sine2 exercise
Federal funds rate with switching intercept

The first example models the federal funds rate as noise around a constant intercept, but where the intercept changes during different regimes. The model is simply:

$$r_t = \mu_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$

where $S_t \in \{0, 1\}$, and the regime transitions according to

$$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$

We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \sigma^2$.

The data used in this example can be found at http://www.stata-press.com/data/r14/usmacro.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt

# Get the federal funds rate data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import fedfunds
dta_fedfunds = pd.Series(fedfunds, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))

# Plot the data
dta_fedfunds.plot(title='Federal funds rate', figsize=(12, 3))

# Fit the model
# (a switching mean is the default of the MarkovRegression model)
mod_fedfunds = sm.tsa.MarkovRegression(dta_fedfunds, k_regimes=2)
res_fedfunds = mod_fedfunds.fit()
res_fedfunds.summary()
examples/notebooks/markov_regression.ipynb
josef-pkt/statsmodels
bsd-3-clause
From the summary output, the mean federal funds rate in the first regime (the "low regime") is estimated to be $3.7$, whereas in the "high regime" it is $9.6$. Below we plot the smoothed probabilities of being in the high regime. The model suggests that the 1980s were a period in which the high federal funds rate regime prevailed.
res_fedfunds.smoothed_marginal_probabilities[1].plot( title='Probability of being in the high regime', figsize=(12,3));
From the estimated transition matrix we can calculate the expected duration of a low regime versus a high regime.
print(res_fedfunds.expected_durations)
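For a Markov chain, the expected duration of regime $i$ is $1/(1 - p_{ii})$, where $p_{ii}$ is the probability of remaining in regime $i$. A minimal sketch with illustrative (not fitted) staying probabilities:

```python
def expected_duration(p_stay):
    """Expected number of periods a regime lasts, given its staying probability."""
    return 1.0 / (1.0 - p_stay)

# Illustrative staying probabilities, not the values estimated above:
print(expected_duration(0.98))  # a persistent regime
print(expected_duration(0.90))  # a short-lived regime
```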
A low regime is expected to persist for about fourteen years, whereas the high regime is expected to persist for only about five years.

Federal funds rate with switching intercept and lagged dependent variable

The second example augments the previous model to include the lagged value of the federal funds rate.

$$r_t = \mu_{S_t} + r_{t-1} \beta_{S_t} + \varepsilon_t \qquad \varepsilon_t \sim N(0, \sigma^2)$$

where $S_t \in \{0, 1\}$, and the regime transitions according to

$$ P(S_t = s_t | S_{t-1} = s_{t-1}) = \begin{bmatrix} p_{00} & p_{10} \\ 1 - p_{00} & 1 - p_{10} \end{bmatrix} $$

We will estimate the parameters of this model by maximum likelihood: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma^2$.
# Fit the model
mod_fedfunds2 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[1:], k_regimes=2, exog=dta_fedfunds.iloc[:-1])
res_fedfunds2 = mod_fedfunds2.fit()
res_fedfunds2.summary()
There are several things to notice from the summary output:

The information criteria have decreased substantially, indicating that this model has a better fit than the previous model.

The interpretation of the regimes, in terms of the intercept, has switched. Now the first regime has the higher intercept and the second regime has the lower intercept.

Examining the smoothed probabilities of the high regime state, we now see quite a bit more variability.
res_fedfunds2.smoothed_marginal_probabilities[0].plot( title='Probability of being in the high regime', figsize=(12,3));
Finally, the expected durations of each regime have decreased quite a bit.
print(res_fedfunds2.expected_durations)
Taylor rule with 2 or 3 regimes

We now include two additional exogenous variables - a measure of the output gap and a measure of inflation - to estimate a switching Taylor-type rule with both 2 and 3 regimes, to see which fits the data better. Because these models can often be difficult to estimate, for the 3-regime model we employ a search over starting parameters to improve results, specifying 20 random search repetitions.
# Get the additional data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import ogap, inf
dta_ogap = pd.Series(ogap, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))
dta_inf = pd.Series(inf, index=pd.date_range('1954-07-01', '2010-10-01', freq='QS'))

exog = pd.concat((dta_fedfunds.shift(), dta_ogap, dta_inf), axis=1).iloc[4:]

# Fit the 2-regime model
mod_fedfunds3 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[4:], k_regimes=2, exog=exog)
res_fedfunds3 = mod_fedfunds3.fit()

# Fit the 3-regime model
np.random.seed(12345)
mod_fedfunds4 = sm.tsa.MarkovRegression(
    dta_fedfunds.iloc[4:], k_regimes=3, exog=exog)
res_fedfunds4 = mod_fedfunds4.fit(search_reps=20)

res_fedfunds3.summary()
res_fedfunds4.summary()
Due to lower information criteria, we might prefer the 3-state model, with an interpretation of low-, medium-, and high-interest rate regimes. The smoothed probabilities of each regime are plotted below.
fig, axes = plt.subplots(3, figsize=(10, 7))

ax = axes[0]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-interest rate regime')

ax = axes[1]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-interest rate regime')

ax = axes[2]
ax.plot(res_fedfunds4.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-interest rate regime')

fig.tight_layout()
Switching variances

We can also accommodate switching variances. In particular, we consider the model

$$ y_t = \mu_{S_t} + y_{t-1} \beta_{S_t} + \varepsilon_t \quad \varepsilon_t \sim N(0, \sigma_{S_t}^2) $$

We use maximum likelihood to estimate the parameters of this model: $p_{00}, p_{10}, \mu_0, \mu_1, \beta_0, \beta_1, \sigma_0^2, \sigma_1^2$.

The application is to absolute returns on stocks, where the data can be found at http://www.stata-press.com/data/r14/snp500.
# Get the absolute returns data
from statsmodels.tsa.regime_switching.tests.test_markov_regression import areturns
dta_areturns = pd.Series(areturns, index=pd.date_range('2004-05-04', '2014-5-03', freq='W'))

# Plot the data
dta_areturns.plot(title='Absolute returns, S&P500', figsize=(12, 3))

# Fit the model
mod_areturns = sm.tsa.MarkovRegression(
    dta_areturns.iloc[1:], k_regimes=2, exog=dta_areturns.iloc[:-1],
    switching_variance=True)
res_areturns = mod_areturns.fit()
res_areturns.summary()
The first regime is a low-variance regime and the second regime is a high-variance regime. Below we plot the probabilities of being in the low-variance regime. Between 2008 and 2012 there does not appear to be a clear indication of either regime dominating.
res_areturns.smoothed_marginal_probabilities[0].plot( title='Probability of being in a low-variance regime', figsize=(12,3));
2 - Outline of the Assignment

To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment; you will:

- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
    - Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
    - We give you the ACTIVATION function (relu/sigmoid).
    - Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
    - Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
    - Complete the LINEAR part of a layer's backward propagation step.
    - We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward).
    - Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
    - Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function.
- Finally, update the parameters.

<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>

Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization

You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two-layer model. The second one will generalize this initialization process to $L$ layers.

3.1 - 2-layer Neural Network

Exercise: Create and initialize the parameters of the 2-layer neural network.

Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
# GRADED FUNCTION: initialize_parameters

def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
    np.random.seed(1)

    ### START CODE HERE ### (≈ 4 lines of code)
    W1 = None
    b1 = None
    W2 = None
    b2 = None
    ### END CODE HERE ###

    assert(W1.shape == (n_h, n_x))
    assert(b1.shape == (n_h, 1))
    assert(W2.shape == (n_y, n_h))
    assert(b2.shape == (n_y, 1))

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2}
    return parameters

parameters = initialize_parameters(3, 2, 1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
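One possible completion of the ### START/END CODE HERE ### section above (a sketch, not the official course solution); with seed 1 it reproduces the expected output shown below:

```python
import numpy as np

def initialize_parameters(n_x, n_h, n_y):
    # Possible completion of the graded exercise above (not the official solution)
    np.random.seed(1)
    W1 = np.random.randn(n_h, n_x) * 0.01  # small random weights
    b1 = np.zeros((n_h, 1))                # zero biases
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}

params = initialize_parameters(3, 2, 1)
```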
Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v4.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Expected output:

<table style="width:80%">
  <tr>
    <td> **W1** </td>
    <td> [[ 0.01624345 -0.00611756 -0.00528172] [-0.01072969 0.00865408 -0.02301539]] </td>
  </tr>
  <tr>
    <td> **b1**</td>
    <td>[[ 0.] [ 0.]]</td>
  </tr>
  <tr>
    <td>**W2**</td>
    <td> [[ 0.01744812 -0.00761207]]</td>
  </tr>
  <tr>
    <td> **b2** </td>
    <td> [[ 0.]] </td>
  </tr>
</table>

3.2 - L-layer Neural Network

The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus, for example, if the size of our input $X$ is $(12288, 209)$ (with $m = 209$ examples) then:

<table style="width:100%">
  <tr>
    <td> </td>
    <td> **Shape of W** </td>
    <td> **Shape of b** </td>
    <td> **Activation** </td>
    <td> **Shape of Activation** </td>
  </tr>
  <tr>
    <td> **Layer 1** </td>
    <td> $(n^{[1]},12288)$ </td>
    <td> $(n^{[1]},1)$ </td>
    <td> $Z^{[1]} = W^{[1]} X + b^{[1]}$ </td>
    <td> $(n^{[1]},209)$ </td>
  </tr>
  <tr>
    <td> **Layer 2** </td>
    <td> $(n^{[2]}, n^{[1]})$ </td>
    <td> $(n^{[2]},1)$ </td>
    <td> $Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
    <td> $(n^{[2]}, 209)$ </td>
  </tr>
  <tr>
    <td> $\vdots$ </td>
    <td> $\vdots$ </td>
    <td> $\vdots$ </td>
    <td> $\vdots$ </td>
    <td> $\vdots$ </td>
  </tr>
  <tr>
    <td> **Layer L-1** </td>
    <td> $(n^{[L-1]}, n^{[L-2]})$ </td>
    <td> $(n^{[L-1]}, 1)$ </td>
    <td> $Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
    <td> $(n^{[L-1]}, 209)$ </td>
  </tr>
  <tr>
    <td> **Layer L** </td>
    <td> $(n^{[L]}, n^{[L-1]})$ </td>
    <td> $(n^{[L]}, 1)$ </td>
    <td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$ </td>
    <td> $(n^{[L]}, 209)$ </td>
  </tr>
</table>

Remember that when we compute $W X + b$ in python, it carries out broadcasting.
For example, if:

$$ W = \begin{bmatrix} j & k & l \\ m & n & o \\ p & q & r \end{bmatrix} \;\;\; X = \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix} \;\;\; b = \begin{bmatrix} s \\ t \\ u \end{bmatrix}\tag{2}$$

Then $WX + b$ will be:

$$ WX + b = \begin{bmatrix} (ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li) + s \\ (ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t \\ (pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri) + u \end{bmatrix}\tag{3} $$

Exercise: Implement initialization for an L-layer Neural Network.

Instructions:
- The model's structure is [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: there were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one-layer neural network). It should inspire you to implement the general case (L-layer neural network):

    if L == 1:
        parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
        parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
# GRADED FUNCTION: initialize_parameters_deep

def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)  # number of layers in the network

    for l in range(1, L):
        ### START CODE HERE ### (≈ 2 lines of code)
        parameters['W' + str(l)] = None
        parameters['b' + str(l)] = None
        ### END CODE HERE ###

        assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
        assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))

    return parameters

parameters = initialize_parameters_deep([5, 4, 3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
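One possible completion of the loop body above (a sketch, not the official course solution); with seed 3 it reproduces the expected output shown below:

```python
import numpy as np

def initialize_parameters_deep(layer_dims):
    # Possible completion of the graded exercise above (not the official solution)
    np.random.seed(3)
    parameters = {}
    L = len(layer_dims)
    for l in range(1, L):
        # W[l]: (n_l, n_{l-1}) small random values; b[l]: (n_l, 1) zeros
        parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

deep_params = initialize_parameters_deep([5, 4, 3])
```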
Expected output:

<table style="width:80%">
  <tr>
    <td> **W1** </td>
    <td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388] [-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218] [-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034] [-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
  </tr>
  <tr>
    <td>**b1** </td>
    <td>[[ 0.] [ 0.] [ 0.] [ 0.]]</td>
  </tr>
  <tr>
    <td>**W2** </td>
    <td>[[-0.01185047 -0.0020565 0.01486148 0.00236716] [-0.01023785 -0.00712993 0.00625245 -0.00160513] [-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
  </tr>
  <tr>
    <td>**b2** </td>
    <td>[[ 0.] [ 0.] [ 0.]]</td>
  </tr>
</table>

4 - Forward propagation module

4.1 - Linear Forward

Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:

- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)

The linear forward module (vectorized over all the examples) computes the following equation:

$$Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}\tag{4}$$

where $A^{[0]} = X$.

Exercise: Build the linear part of forward propagation.

Reminder: The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} + b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
# GRADED FUNCTION: linear_forward

def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python tuple containing "A", "W" and "b"; stored for computing the backward pass efficiently
    """
    ### START CODE HERE ### (≈ 1 line of code)
    Z = None
    ### END CODE HERE ###

    assert(Z.shape == (W.shape[0], A.shape[1]))
    cache = (A, W, b)

    return Z, cache

A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
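One possible completion of the single missing line (a sketch, not the official course solution), with a tiny worked example whose inputs are made up for illustration:

```python
import numpy as np

def linear_forward(A, W, b):
    # One possible completion: Z = W·A + b, with b broadcast over the m examples
    Z = np.dot(W, A) + b
    cache = (A, W, b)
    return Z, cache

# Tiny worked example: 2 features, 2 examples
A_ex = np.array([[1., 2.],
                 [3., 4.]])
W_ex = np.array([[1., -1.]])
b_ex = np.array([[0.5]])
Z_ex, _ = linear_forward(A_ex, W_ex, b_ex)  # [[1-3+0.5, 2-4+0.5]] = [[-1.5, -1.5]]
```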
Expected output:

<table style="width:35%">
  <tr>
    <td> **Z** </td>
    <td> [[ 3.26295337 -1.23429987]] </td>
  </tr>
</table>

4.2 - Linear-Activation Forward

In this notebook, you will use two activation functions:

- Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed into the corresponding backward function). To use it you could just call: A, activation_cache = sigmoid(Z)

- ReLU: The mathematical formula for ReLU is $A = RELU(Z) = \max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed into the corresponding backward function). To use it you could just call: A, activation_cache = relu(Z)

For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.

Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. The mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} + b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
# GRADED FUNCTION: linear_activation_forward

def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python tuple containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
    if activation == "sigmoid":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = None
        A, activation_cache = None
        ### END CODE HERE ###

    elif activation == "relu":
        # Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
        ### START CODE HERE ### (≈ 2 lines of code)
        Z, linear_cache = None
        A, activation_cache = None
        ### END CODE HERE ###

    assert (A.shape == (W.shape[0], A_prev.shape[1]))
    cache = (linear_cache, activation_cache)

    return A, cache

A_prev, W, b = linear_activation_forward_test_case()

A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation="sigmoid")
print("With sigmoid: A = " + str(A))

A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation="relu")
print("With ReLU: A = " + str(A))
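One possible completion (a sketch, not the official course solution). The sigmoid and relu helpers here are minimal stand-ins for the course-provided ones, with the same (value, cache) return convention:

```python
import numpy as np

def sigmoid(Z):
    # Stand-in for the helper provided by the course; returns (A, cache)
    return 1.0 / (1.0 + np.exp(-Z)), Z

def relu(Z):
    # Stand-in for the helper provided by the course; returns (A, cache)
    return np.maximum(0, Z), Z

def linear_activation_forward(A_prev, W, b, activation):
    # One possible completion of the exercise above
    Z = np.dot(W, A_prev) + b
    linear_cache = (A_prev, W, b)
    if activation == "sigmoid":
        A, activation_cache = sigmoid(Z)
    elif activation == "relu":
        A, activation_cache = relu(Z)
    return A, (linear_cache, activation_cache)

# Z = 1*1 + 1*(-1) + 0 = 0, so sigmoid gives 0.5 and relu gives 0
A_sig, _ = linear_activation_forward(np.array([[1.], [-1.]]),
                                     np.array([[1., 1.]]), np.array([[0.]]), "sigmoid")
A_rel, _ = linear_activation_forward(np.array([[1.], [-1.]]),
                                     np.array([[1., 1.]]), np.array([[0.]]), "relu")
```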
Expected output:

<table style="width:35%">
  <tr>
    <td> **With sigmoid: A ** </td>
    <td > [[ 0.96890023 0.11013289]]</td>
  </tr>
  <tr>
    <td> **With ReLU: A ** </td>
    <td > [[ 3.43896131 0. ]]</td>
  </tr>
</table>

Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.

d) L-Layer Model

For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.

<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>

Exercise: Implement the forward propagation of the above model.

Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)

Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
# GRADED FUNCTION: L_model_forward

def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
                every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
                the cache of linear_sigmoid_forward() (there is one, indexed L-1)
    """
    caches = []
    A = X
    L = len(parameters) // 2  # number of layers in the neural network

    # Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
    for l in range(1, L):
        A_prev = A
        ### START CODE HERE ### (≈ 2 lines of code)
        A, cache = None
        ### END CODE HERE ###

    # Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
    ### START CODE HERE ### (≈ 2 lines of code)
    AL, cache = None
    ### END CODE HERE ###

    assert(AL.shape == (1, X.shape[1]))

    return AL, caches

X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
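A self-contained sketch of the whole forward pass (not the official course solution) with the relu/sigmoid activations inlined as private helpers, plus a tiny 2-layer check whose parameters are made up for illustration:

```python
import numpy as np

def _sigmoid(Z):
    return 1.0 / (1.0 + np.exp(-Z))

def _relu(Z):
    return np.maximum(0, Z)

def L_model_forward(X, parameters):
    # One possible completion: [LINEAR -> RELU] for layers 1..L-1, LINEAR -> SIGMOID for layer L
    caches = []
    A = X
    L = len(parameters) // 2  # each layer contributes a W and a b
    for l in range(1, L):
        A_prev = A
        W, b = parameters['W' + str(l)], parameters['b' + str(l)]
        Z = np.dot(W, A_prev) + b
        A = _relu(Z)
        caches.append(((A_prev, W, b), Z))
    W, b = parameters['W' + str(L)], parameters['b' + str(L)]
    Z = np.dot(W, A) + b
    AL = _sigmoid(Z)
    caches.append(((A, W, b), Z))
    return AL, caches

# Tiny 2-layer check: identity hidden layer, then a sum fed through sigmoid
demo_params = {"W1": np.eye(2), "b1": np.zeros((2, 1)),
               "W2": np.array([[1., 1.]]), "b2": np.zeros((1, 1))}
AL_ex, caches_ex = L_model_forward(np.array([[1.], [2.]]), demo_params)  # sigmoid(3)
```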
<table style="width:40%">
  <tr>
    <td> **AL** </td>
    <td > [[ 0.17007265 0.2524272 ]]</td>
  </tr>
  <tr>
    <td> **Length of caches list ** </td>
    <td > 2</td>
  </tr>
</table>

Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.

5 - Cost function

Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.

Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} \left(y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1 - a^{[L] (i)}\right)\right) \tag{7}$$
# GRADED FUNCTION: compute_cost

def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
    m = Y.shape[1]

    # Compute loss from aL and y.
    ### START CODE HERE ### (≈ 1 lines of code)
    cost = None
    ### END CODE HERE ###

    cost = np.squeeze(cost)  # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
    assert(cost.shape == ())

    return cost

Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
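One possible completion of the cost line (a sketch, not the official course solution), with a hand-checkable example whose predictions are made up for illustration:

```python
import numpy as np

def compute_cost(AL, Y):
    # One possible completion of equation (7): average cross-entropy over the m examples
    m = Y.shape[1]
    cost = -np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)) / m
    return np.squeeze(cost)

# Worked check: cost = -(log 0.8 + log 0.8) / 2 = -log 0.8
cost_ex = compute_cost(np.array([[0.8, 0.2]]), np.array([[1, 0]]))
```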
Expected Output: <table> <tr> <td>**cost** </td> <td> 0.41493159961539694</td> </tr> </table> 6 - Backward propagation module Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters. Reminder: <img src="images/backprop_kiank.png" style="width:650px;height:250px;"> <caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption> <!-- For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows: $$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$ In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted. Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$. This is why we talk about **backpropagation**. 
!-->

Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward, where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)

6.1 - Linear backward

For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).

Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.

<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>

The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:

$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)} \tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$

Exercise: Use the 3 formulas above to implement linear_backward().
# GRADED FUNCTION: linear_backward

def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    A_prev, W, b = cache
    m = A_prev.shape[1]

    ### START CODE HERE ### (≈ 3 lines of code)
    dW = None
    db = None
    dA_prev = None
    ### END CODE HERE ###

    assert (dA_prev.shape == A_prev.shape)
    assert (dW.shape == W.shape)
    assert (db.shape == b.shape)

    return dA_prev, dW, db

# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print("dA_prev = " + str(dA_prev))
print("dW = " + str(dW))
print("db = " + str(db))
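One possible completion of equations (8)-(10) (a sketch, not the official course solution), with a tiny worked example whose inputs are made up for illustration:

```python
import numpy as np

def linear_backward(dZ, cache):
    # One possible completion of equations (8)-(10)
    A_prev, W, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m               # eq. (8)
    db = np.sum(dZ, axis=1, keepdims=True) / m  # eq. (9)
    dA_prev = np.dot(W.T, dZ)                   # eq. (10)
    return dA_prev, dW, db

# Tiny worked example: 3 units in the previous layer, 1 in this one, m = 2
dZ_ex = np.array([[1., 1.]])
cache_ex = (np.array([[1., 0.], [0., 1.], [1., 1.]]),  # A_prev
            np.array([[1., 2., 3.]]),                  # W
            np.array([[0.]]))                          # b
dA_prev_ex, dW_ex, db_ex = linear_backward(dZ_ex, cache_ex)
```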
Expected Output:

<table style="width:90%">
  <tr>
    <td> **dA_prev** </td>
    <td > [[ 0.51822968 -0.19517421] [-0.40506361 0.15255393] [ 2.37496825 -0.89445391]] </td>
  </tr>
  <tr>
    <td> **dW** </td>
    <td > [[-0.10076895 1.40685096 1.64992505]] </td>
  </tr>
  <tr>
    <td> **db** </td>
    <td> [[ 0.50629448]] </td>
  </tr>
</table>

6.2 - Linear-Activation backward

Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation, into linear_activation_backward.

To help you implement linear_activation_backward, we provided two backward functions:

- sigmoid_backward: implements the backward propagation for the SIGMOID unit. You can call it as follows: dZ = sigmoid_backward(dA, activation_cache)

- relu_backward: implements the backward propagation for the RELU unit. You can call it as follows: dZ = relu_backward(dA, activation_cache)

If $g(.)$ is the activation function, sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$

Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
# GRADED FUNCTION: linear_activation_backward

def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
    linear_cache, activation_cache = cache

    if activation == "relu":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = None
        dA_prev, dW, db = None
        ### END CODE HERE ###

    elif activation == "sigmoid":
        ### START CODE HERE ### (≈ 2 lines of code)
        dZ = None
        dA_prev, dW, db = None
        ### END CODE HERE ###

    return dA_prev, dW, db

AL, linear_activation_cache = linear_activation_backward_test_case()

dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation="sigmoid")
print("sigmoid:")
print("dA_prev = " + str(dA_prev))
print("dW = " + str(dW))
print("db = " + str(db) + "\n")

dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation="relu")
print("relu:")
print("dA_prev = " + str(dA_prev))
print("dW = " + str(dW))
print("db = " + str(db))
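One possible completion (a sketch, not the official course solution). The sigmoid_backward and relu_backward helpers here are minimal stand-ins for the course-provided ones, and the example inputs are made up for illustration:

```python
import numpy as np

def sigmoid_backward(dA, Z):
    # Stand-in for the course helper: dZ = dA * sigma(Z) * (1 - sigma(Z))
    s = 1.0 / (1.0 + np.exp(-Z))
    return dA * s * (1 - s)

def relu_backward(dA, Z):
    # Stand-in for the course helper: gradient passes through only where Z > 0
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0
    return dZ

def linear_activation_backward(dA, cache, activation):
    # One possible completion: activation backward, then the linear backward formulas
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)
    A_prev, W, b = linear_cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = np.dot(W.T, dZ)
    return dA_prev, dW, db

# relu check: Z = [-1, 2] blocks the first example's gradient entirely
cache_ex = ((np.array([[1., 1.]]), np.array([[1.]]), np.array([[0.]])),  # linear cache
            np.array([[-1., 2.]]))                                       # activation cache (Z)
dA_prev_ex, dW_ex, db_ex = linear_activation_backward(np.array([[1., 1.]]), cache_ex, "relu")
```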
Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v4.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Expected output with sigmoid: <table style="width:100%"> <tr> <td > dA_prev </td> <td >[[ 0.11017994 0.01105339] [ 0.09466817 0.00949723] [-0.05743092 -0.00576154]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.10266786 0.09778551 -0.01968084]] </td> </tr> <tr> <td > db </td> <td > [[-0.05729622]] </td> </tr> </table> Expected output with relu <table style="width:100%"> <tr> <td > dA_prev </td> <td > [[ 0.44090989 0. ] [ 0.37883606 0. ] [-0.2298228 0. ]] </td> </tr> <tr> <td > dW </td> <td > [[ 0.44513824 0.37371418 -0.10478989]] </td> </tr> <tr> <td > db </td> <td > [[-0.20837892]] </td> </tr> </table> 6.3 - L-Model Backward Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass. <img src="images/mn_backward.png" style="width:450px;height:300px;"> <caption><center> Figure 5 : Backward pass </center></caption> Initializing backpropagation: To backpropagate through this network, we know that the output is, $A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$. To do so, use this formula (derived using calculus which you don't need in-depth knowledge of): python dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). 
After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula : $$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$ For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"]. Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
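The dAL expression above is simply the derivative of the cross-entropy cost with respect to AL, which can be verified with a finite-difference check:

```python
import numpy as np

AL = np.array([[0.8, 0.3]])   # example activations
Y = np.array([[1.0, 0.0]])    # example labels
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))

# finite-difference check against L(A) = -(Y*log(A) + (1-Y)*log(1-A))
eps = 1e-6
loss = lambda A: -(Y * np.log(A) + (1 - Y) * np.log(1 - A))
numeric = (loss(AL + eps) - loss(AL - eps)) / (2 * eps)
```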
# GRADED FUNCTION: L_model_backward def L_model_backward(AL, Y, caches): """ Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group Arguments: AL -- probability vector, output of the forward propagation (L_model_forward()) Y -- true "label" vector (containing 0 if non-cat, 1 if cat) caches -- list of caches containing: every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2) the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1]) Returns: grads -- A dictionary with the gradients grads["dA" + str(l)] = ... grads["dW" + str(l)] = ... grads["db" + str(l)] = ... """ grads = {} L = len(caches) # the number of layers m = AL.shape[1] Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL # Initializing the backpropagation ### START CODE HERE ### (1 line of code) dAL = None ### END CODE HERE ### # Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"] ### START CODE HERE ### (approx. 2 lines) current_cache = None grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = None ### END CODE HERE ### for l in reversed(range(L-1)): # lth layer: (RELU -> LINEAR) gradients. # Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)] ### START CODE HERE ### (approx. 5 lines) current_cache = None dA_prev_temp, dW_temp, db_temp = None grads["dA" + str(l + 1)] = None grads["dW" + str(l + 1)] = None grads["db" + str(l + 1)] = None ### END CODE HERE ### return grads AL, Y_assess, caches = L_model_backward_test_case() grads = L_model_backward(AL, Y_assess, caches) print ("dW1 = "+ str(grads["dW1"])) print ("db1 = "+ str(grads["db1"])) print ("dA1 = "+ str(grads["dA1"]))
Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v4.ipynb
GuillaumeDec/machine-learning
gpl-3.0
Expected Output <table style="width:60%"> <tr> <td > dW1 </td> <td > [[ 0.41010002 0.07807203 0.13798444 0.10502167] [ 0. 0. 0. 0. ] [ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td> </tr> <tr> <td > db1 </td> <td > [[-0.22007063] [ 0. ] [-0.02835349]] </td> </tr> <tr> <td > dA1 </td> <td > [[ 0. 0.52257901] [ 0. -0.3269206 ] [ 0. -0.32070404] [ 0. -0.74079187]] </td> </tr> </table> 6.4 - Update Parameters In this section you will update the parameters of the model, using gradient descent: $$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$ $$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$ where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary. Exercise: Implement update_parameters() to update your parameters using gradient descent. Instructions: Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
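Outside the graded function, the update rule amounts to one subtraction per parameter. A minimal standalone sketch:

```python
import numpy as np

def gd_update(parameters, grads, learning_rate):
    # one gradient-descent step on every W[l] and b[l]
    L = len(parameters) // 2  # two entries (W, b) per layer
    for l in range(1, L + 1):
        parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
        parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]
    return parameters

params = {"W1": np.array([[1.0]]), "b1": np.array([[0.5]])}
grads = {"dW1": np.array([[2.0]]), "db1": np.array([[1.0]])}
params = gd_update(params, grads, learning_rate=0.1)
```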
# GRADED FUNCTION: update_parameters def update_parameters(parameters, grads, learning_rate): """ Update parameters using gradient descent Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients, output of L_model_backward Returns: parameters -- python dictionary containing your updated parameters parameters["W" + str(l)] = ... parameters["b" + str(l)] = ... """ L = len(parameters) // 2 # number of layers in the neural network # Update rule for each parameter. Use a for loop. ### START CODE HERE ### (≈ 3 lines of code) for l in range(L): parameters["W" + str(l+1)] = None parameters["b" + str(l+1)] = None ### END CODE HERE ### return parameters parameters, grads = update_parameters_test_case() parameters = update_parameters(parameters, grads, 0.1) print ("W1 = "+ str(parameters["W1"])) print ("b1 = "+ str(parameters["b1"])) print ("W2 = "+ str(parameters["W2"])) print ("b2 = "+ str(parameters["b2"]))
Building your Deep Neural Network - Step by Step/Building+your+Deep+Neural+Network+-+Step+by+Step+v4.ipynb
GuillaumeDec/machine-learning
gpl-3.0
mlxtend - Multilayer Perceptron Examples Sections Classify Iris Classify handwritten digits from MNIST <br> <br> Classify Iris Load 2 features from Iris (petal length and petal width) for visualization purposes.
from mlxtend.data import iris_data X, y = iris_data() X = X[:, 2:]
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Train neural network for 3 output flower classes ('Setosa', 'Versicolor', 'Virginica'), regular gradient descent (minibatches=1), 30 hidden units, and no regularization.
from mlxtend.classifier import NeuralNetMLP import numpy as np nn1 = NeuralNetMLP(n_output=3, n_features=X.shape[1], n_hidden=30, l2=0.0, l1=0.0, epochs=5000, eta=0.001, alpha=0.00, minibatches=1, shuffle=True, random_state=0) nn1.fit(X, y) y_pred = nn1.predict(X) acc = np.sum(y == y_pred, axis=0) / X.shape[0] print('Accuracy: %.2f%%' % (acc * 100))
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Now, check whether the gradient descent converged after 5000 epochs, and choose a smaller learning rate (eta) otherwise.
import matplotlib.pyplot as plt %matplotlib inline plt.plot(range(len(nn1.cost_)), nn1.cost_) plt.ylim([0, 300]) plt.ylabel('Cost') plt.xlabel('Epochs') plt.grid() plt.show()
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Standardize features for smoother and faster convergence.
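Standardization subtracts each column's mean and divides by its standard deviation, so every feature ends up with zero mean and unit variance:

```python
import numpy as np

# per-column standardization on a small example matrix
X = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```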
X_std = np.copy(X) for i in range(2): X_std[:,i] = (X[:,i] - X[:,i].mean()) / X[:,i].std() nn2 = NeuralNetMLP(n_output=3, n_features=X_std.shape[1], n_hidden=30, l2=0.0, l1=0.0, epochs=1000, eta=0.05, alpha=0.1, minibatches=1, shuffle=True, random_state=1) nn2.fit(X_std, y) y_pred = nn2.predict(X_std) acc = np.sum(y == y_pred, axis=0) / X_std.shape[0] print('Accuracy: %.2f%%' % (acc * 100)) plt.plot(range(len(nn2.cost_)), nn2.cost_) plt.ylim([0, 300]) plt.ylabel('Cost') plt.xlabel('Epochs') plt.show()
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Visualize the decision regions.
from mlxtend.evaluate import plot_decision_regions plot_decision_regions(X, y, clf=nn1) plt.xlabel('petal length [cm]') plt.ylabel('petal width [cm]') plt.show()
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
<br> <br> Classify handwritten digits from MNIST Load a 5000-sample subset of the MNIST dataset.
from mlxtend.data import mnist_data X, y = mnist_data()
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Visualize a sample from the MNIST dataset.
def plot_digit(X, y, idx): img = X[idx].reshape(28,28) plt.imshow(img, cmap='Greys', interpolation='nearest') plt.title('true label: %d' % y[idx]) plt.show() plot_digit(X, y, 4)
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Initialize the neural network to recognize the 10 different digits (0-9) using 300 epochs and minibatch learning.
nn = NeuralNetMLP(n_output=10, n_features=X.shape[1], n_hidden=100, l2=0.0, l1=0.0, epochs=300, eta=0.0005, alpha=0.0, minibatches=50, random_state=1)
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Learn the features while printing the progress to get an idea about how long it may take.
nn.fit(X, y, print_progress=True) y_pred = nn.predict(X) acc = np.sum(y == y_pred, axis=0) / X.shape[0] print('Accuracy: %.2f%%' % (acc * 100))
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Check for convergence.
plt.plot(range(len(nn.cost_)), nn.cost_) plt.ylim([0, 500]) plt.ylabel('Cost') plt.xlabel('Mini-batches * Epochs') plt.show() plt.plot(range(len(nn.cost_)//50), nn.cost_[::50], color='red') plt.ylim([0, 500]) plt.ylabel('Cost') plt.xlabel('Epochs') plt.show()
docs/examples/classifier_nn_mlp.ipynb
YoungKwonJo/mlxtend
bsd-3-clause
Exercises for the next class Write a function that enlarges/reduces an image using interpolation in the frequency domain, as discussed in class. Compare the results with scipy.misc.imresize, both in spectrum quality and in execution time. Students with an odd RA (student ID) should implement enlargement, and those with an even RA should implement reduction. Function name: imresize
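For the enlargement case, one standard approach is to zero-pad the centered spectrum. A sketch under that assumption (the function name is hypothetical, and only integer factors on even-sized images are handled):

```python
import numpy as np

def enlarge_fft(f, factor):
    # enlarge by an integer factor via zero-padding of the centered 2-D spectrum
    F = np.fft.fftshift(np.fft.fft2(f))
    H, W = f.shape
    nH, nW = H * factor, W * factor
    G = np.zeros((nH, nW), dtype=complex)
    r0, c0 = (nH - H) // 2, (nW - W) // 2
    G[r0:r0 + H, c0:c0 + W] = F
    # rescale so the mean intensity is preserved
    return np.real(np.fft.ifft2(np.fft.ifftshift(G))) * factor ** 2

g = enlarge_fft(np.ones((4, 4)), 2)
```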
def imresize(f, size): ''' Resize an image Parameters ---------- f: input image size: integer, float or tuple - integer: percentage of current size - float: fraction of current size - tuple: new dimensions Returns ------- output image resized ''' return f
deliver/Aula_10_Wavelets.ipynb
robertoalotufo/ia898
mit
Modify the pconv function to run in the frequency domain whenever the number of nonzero elements of the smaller image is greater than a certain value, say 15. Function name: pconvfft
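The speed-up rests on the convolution theorem: a periodic convolution equals the inverse DFT of the product of the DFTs. A 1-D numerical check:

```python
import numpy as np

def circ_conv(f, h):
    # direct periodic (circular) convolution, O(n^2)
    n = len(f)
    return np.array([sum(f[k] * h[(i - k) % n] for k in range(n)) for i in range(n)])

f = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, -1.0, 0.5, 0.0])
g_direct = circ_conv(f, h)
g_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
```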
def pconvfft(f,h):
    ''' Periodical convolution.
    This is an efficient implementation of the periodical convolution.
    This implementation should be commutative, i.e., pconvfft(f,h)==pconvfft(h,f).
    This implementation should be fast. If the number of pixels used in the
    convolution is larger than 15, it uses the convolution theorem to
    implement the convolution.

    Parameters:
    -----------
        f: input image (can be complex, up to 2 dimensions)
        h: input kernel (can be complex, up to 2 dimensions)

    Outputs:
        image of the result of periodical convolution
    '''

    return f
deliver/Aula_10_Wavelets.ipynb
robertoalotufo/ia898
mit
Discrete Wavelet Transform We will use a notebook that resulted from a project of previous years. DWT
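As a reminder of what the DWT computes, its simplest instance is the one-level Haar transform (pairwise scaled sums and differences), which reconstructs the signal exactly:

```python
import numpy as np

x = np.array([4.0, 6.0, 10.0, 12.0])
approx = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass (scaled pairwise sums)
detail = (x[0::2] - x[1::2]) / np.sqrt(2)   # high-pass (scaled pairwise differences)

# inverse one-level Haar transform
even = (approx + detail) / np.sqrt(2)
odd = (approx - detail) / np.sqrt(2)
x_rec = np.column_stack([even, odd]).ravel()
```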
/home/lotufo/ia898/dev/wavelets.ipynb
deliver/Aula_10_Wavelets.ipynb
robertoalotufo/ia898
mit
"Post-NOV0215" verification testing using the MAR0516O products These are nominal flight-like products created using the existing MAR0516 loads. One expected difference is that Matlab uses the OFLS CHARACTERISTICS file CHARACTERIS_07JUL16 to back out the OFLS ODB_SI_ALIGN transform. MAR0516 was planned using CHARACTERIS_28FEB16. There is a (1.77, 0.33) arcsec difference for both ACIS-S and ACIS-I in the ODB_SI_ALIGN offsets. This 1.8 arcsec offset is seen in comparisons while in newly generated products there will be no such offset.
PRODUCTS = 'MAR0516O' TEST_DIR = '/proj/sot/ska/ops/SFE' FLIGHT_DIR = '/data/mpcrit1/mplogs/2016' # Must match PRODUCTS # SI_ALIGN from Matlab code SI_ALIGN = chandra_aca.ODB_SI_ALIGN SI_ALIGN def print_dq(q1, q2): """ Print the difference between two quaternions in a nice formatted way. """ dq = q1.inv() * q2 dr, dp, dy, _ = np.degrees(dq.q) * 2 * 3600 print('droll={:6.2f}, dpitch={:6.2f}, dyaw={:6.2f} arcsec'.format(dr, dp, dy)) def check_obs(obs): """ Check `obs` (which is a row out of the dynamic offsets table) for consistency between target and aca coordinates given the target and aca offsets and the SI_ALIGN alignment matrix. """ y_off = (obs['target_offset_y'] + obs['aca_offset_y']) / 3600 z_off = (obs['target_offset_z'] + obs['aca_offset_z']) / 3600 q_targ = Quat([obs['target_ra'], obs['target_dec'], obs['target_roll']]) q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']]) q_aca_out = calc_aca_from_targ(q_targ, y_off, z_off, SI_ALIGN) print('{} {:6s} '.format(obs['obsid'], obs['detector']), end='') print_dq(q_aca, q_aca_out) def check_obs_vs_manvr(obs, manvr): """ Check against attitude from actual flight products (from maneuver summary file) """ mf = manvr['final'] q_flight = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']]) q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']]) print('{} {:6s} chipx={:8.2f} chipy={:8.2f} ' .format(obs['obsid'], obs['detector'], obs['chipx'], obs['chipy']), end='') print_dq(q_aca, q_flight) filename = os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'output', '{}_dynamical_offsets.txt'.format(PRODUCTS)) dat = Table.read(filename, format='ascii') dat[:5]
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
Check internal consistency of dynamical_offsets table This checks that applying the sum of target and ACA offsets along with the nominal SI_ALIGN to the target attitude produces the ACA attitude. All the ACIS observations have a similar offset due to the CHARACTERISTICS mismatch noted earlier, while the HRC observations show the expected 0.0 offset.
for obs in dat: check_obs(obs)
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
Check dynamical offsets ACA attitude matches TEST maneuver summary It is assumed that the maneuver summary matches the load product attitudes.
filename = glob.glob(os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'mps', 'mm*.sum'))[0] print('Reading', filename) mm = parse_cm.maneuver.read_maneuver_summary(filename, structured=True) mm = {m['final']['id']: m for m in mm} # Turn into a dict for obs in dat: check_obs_vs_manvr(obs, mm[obs['obsid'] * 100])
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
Check dynamical offsets ACA attitude matches FLIGHT maneuver summary to ~10 arcsec It is assumed that the maneuver summary matches the load product attitudes. There are three discrepancies below: obsids 18725, 18790, and 18800. All of these are DDT observations that are configured in the OR list to use Cycle 18 aimpoint values, but without changing the target offsets from cycle 17 values. This is an artifact of testing and would not occur in flight planning.
os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum') filename = glob.glob(os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum'))[0] print('Reading', filename) mm = parse_cm.maneuver.read_maneuver_summary(filename, structured=True) mm = {m['final']['id']: m for m in mm} # Turn into a dict for obs in dat: check_obs_vs_manvr(obs, mm[obs['obsid'] * 100])
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
Check that predicted CHIPX / CHIPY matches expectation to within 10 arcsec. Use the test ACA attitude and observed SIM DY/DZ (alignment from fids). Generate a "fake" observation with adjusted RA_NOM, DEC_NOM, ROLL_NOM ("HRMA attitude"). Adjustment based on the delta between TEST and FLIGHT pointing attitudes. Use the CIAO tool dmcoords to compute predicted CHIPX / CHIPY. In the results below there are two discrepancies, obsids 18168 and 18091. These are very hot observations coming right after safe mode recovery. In this case the thermal model is inaccurate and the commanded pointing would have been offset by up to 15 arcsec. Future improvements in thermal modeling could reduce this offset, but it should be understood that pointing accuracy will be degraded in such a situation. CIAO dmcoords tool setup
ciaoenv = Ska.Shell.getenv('source /soft/ciao/bin/ciao.sh') ciaorun = functools.partial(Ska.Shell.bash, env=ciaoenv) dmcoords_cmd = ['dmcoords', 'none', 'asolfile=none', 'detector="{detector}"', 'fpsys="{fpsys}"', 'opt=cel', 'ra={ra_targ}', 'dec={dec_targ}', 'celfmt=deg', 'ra_nom={ra_nom}', 'dec_nom={dec_nom}', 'roll_nom={roll_nom}', 'ra_asp=")ra_nom"', 'dec_asp=")dec_nom"', 'roll_asp=")roll_nom"', 'sim="{sim_x} 0 {sim_z}"', 'displace="0 {dy} {dz} 0 0 0"', 'verbose=0'] dmcoords_cmd = ' '.join(dmcoords_cmd) def dmcoords_chipx_chipy(keys, verbose=False): """ Get the dmcoords-computed chipx and chipy for given event file header keyword params. NOTE: the ``dy`` and ``dz`` inputs to dmcoords are flipped in sign from the ASOL values. Generally the ASOL DY/DZ are positive and dmcoord input values are negative. This sign flip is handled *here*, so input to this is ASOL DY/DZ. :param keys: dict of event file keywords """ # See the absolute_pointing_uncertainty notebook in this repo for the # detailed derivation of this -15.5, 6.0 arcsec offset factor. See the # cell below for the summary version. ciaorun('punlearn dmcoords') fpsys_map = {'HRC-I': 'HI1', 'HRC-S': 'HS2', 'ACIS': 'ACIS'} keys = {key.lower(): val for key, val in keys.items()} det = keys['detnam'] keys['detector'] = (det if det.startswith('HRC') else 'ACIS') keys['dy'] = -keys['dy_avg'] keys['dz'] = -keys['dz_avg'] keys['fpsys'] = fpsys_map[keys['detector']] cmd = dmcoords_cmd.format(**keys) ciaorun(cmd) if verbose: print(cmd) return [float(x) for x in ciaorun('pget dmcoords chipx chipy chip_id')] def get_evt_meta(obsid, detector): """ Get event file metadata (FITS keywords) for ``obsid`` and ``detector`` and cache for later use. Returns a dict of key=value pairs. 
""" evts = shelve.open('event_meta.shelf') sobsid = str(obsid) if sobsid not in evts: det = 'hrc' if detector.startswith('HRC') else 'acis' arc5gl = Ska.arc5gl.Arc5gl() arc5gl.sendline('obsid={}'.format(obsid)) arc5gl.sendline('get {}2'.format(det) + '{evt2}') del arc5gl files = glob.glob('{}f{}*_evt2.fits.gz'.format(det, obsid)) if len(files) != 1: raise ValueError('Wrong number of files {}'.format(files)) evt2 = Table.read(files[0]) os.unlink(files[0]) evts[sobsid] = {k.lower(): v for k, v in evt2.meta.items()} out = evts[sobsid] evts.close() return out def check_predicted_chipxy(obs): """ Compare the predicted CHIPX/Y values with planned using observed event file data on actual ACA alignment. """ obsid = obs['obsid'] detector = obs['detector'] try: evt = get_evt_meta(obsid, detector) except ValueError as err: print('Obsid={} detector={}: fail {}'.format(obsid, detector, err)) return f_chipx, f_chipy, f_chip_id = dmcoords_chipx_chipy(evt) q_nom_flight = Quat([evt['ra_nom'], evt['dec_nom'], evt['roll_nom']]) q_aca = Quat([obs['aca_ra'], obs['aca_dec'], obs['aca_roll']]) mf = mm[obsid * 100]['final'] q_flight = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']]) dq = q_flight.dq(q_aca) q_nom_test = q_nom_flight * dq evt_test = dict(evt) evt_test['ra_nom'] = q_nom_test.ra evt_test['dec_nom'] = q_nom_test.dec evt_test['roll_nom'] = q_nom_test.roll scale = 0.13175 if detector.startswith('HRC') else 0.492 aim_chipx = obs['chipx'] aim_chipy = obs['chipy'] if detector == 'ACIS-S': aim_chipx += -obs['target_offset_y'] / scale aim_chipy += -obs['target_offset_z'] / scale + 20.5 / 0.492 * (-190.14 - evt['sim_z']) elif detector == 'ACIS-I': aim_chipx += -obs['target_offset_z'] / scale + 20.5 / 0.492 * (-233.59 - evt['sim_z']) aim_chipy += +obs['target_offset_y'] / scale chipx, chipy, chip_id = dmcoords_chipx_chipy(evt_test) print('{} {:6s} aimpoint:{:6.1f},{:6.1f} test:{:6.1f},{:6.1f} ' 'flight:{:6.1f},{:6.1f} delta: {:.1f} arcsec' .format(obsid, detector, aim_chipx, aim_chipy, 
chipx, chipy, f_chipx, f_chipy, np.hypot(aim_chipx - chipx, aim_chipy - chipy) * scale)) for obs in dat: check_predicted_chipxy(obs)
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
For pre-NOV1615 products with an empty zero-offsets aimpoint table Check that TEST (JUL0415M) and FLIGHT attitudes from maneuver summary match to within 0.1 arcsec. JUL0415M was constructed with an OR-list zero-offset aimpoint table which exists but has no row entries. This has the effect of telling SAUSAGE to run through the attitude replacement machinery but use 0.0 for the ACA offset y/z values. This should output attitudes that are precisely the same as the FLIGHT attitudes. Results: pass
PRODUCTS = 'JUL0415M' FLIGHT_DIR = '/data/mpcrit1/mplogs/2015' # Must match PRODUCTS # Get FLIGHT maneuver summary filename = glob.glob(os.path.join(FLIGHT_DIR, PRODUCTS[:-1], 'ofls', 'mps', 'mm*.sum'))[0] print('Reading', filename) mmf = parse_cm.maneuver.read_maneuver_summary(filename, structured=True) mmf = {m['final']['id']: m for m in mmf} # Turn into a dict # Get TEST maneuver summary filename = glob.glob(os.path.join(TEST_DIR, PRODUCTS, 'ofls', 'mps', 'mm*.sum'))[0] print('Reading', filename) mmt = parse_cm.maneuver.read_maneuver_summary(filename, structured=True) mmt = {m['final']['id']: m for m in mmt} # Turn into a dict # Make sure set of obsids are the same set(mmf) == set(mmt) # Now do the actual attitude comparison for trace_id, mf in mmf.items(): mt = mmt[trace_id]['final'] mf = mf['final'] qt = Quat([mt['q1'], mt['q2'], mt['q3'], mt['q4']]) qf = Quat([mf['q1'], mf['q2'], mf['q3'], mf['q4']]) print(trace_id, ' ', end='') print_dq(qt, qf)
testing/dynam_offsets/verify_dynamic_offsets.ipynb
sot/aimpoint_mon
bsd-2-clause
b) Load dataset Download customer account data from Wiley's website, RetailMart.xlsx
# find path to your RetailMart.xlsx dataset = pd.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch06/RetailMart.xlsx','rb'), sheetname=0) dataset = dataset.drop('Unnamed: 17', 1) # drop empty col dataset.rename(columns={'PREGNANT':'Pregnant'}, inplace=True) dataset.rename(columns={'Home/Apt/ PO Box':'Residency'}, inplace=True) # add simpler col name dataset.columns = [x.strip().replace(' ', '_') for x in dataset.columns] # python does not like spaces in var names
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
The 'Pregnant' column can only take on one of two (in this case) possibilities. Here 1 = pregnant, and 0 = not pregnant 2. Summarize Data a) Descriptive statistics
# shape print(dataset.shape) # types print(dataset.dtypes) # head dataset.head() # feature distribution print(dataset.groupby('Implied_Gender').size()) # target distribution print(dataset.groupby('Pregnant').size()) # correlation r = dataset.corr(method='pearson') id_matrix = np.identity(r.shape[0]) # create identity matrix r = r-id_matrix # remove same-feature correlations np.where( r > 0.7 )
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
We can see no features with significant correlation coefficients (i.e., $r$ values > 0.7) 3. Prepare Data a) Data Transforms We need to 'dummify' (i.e., separate out) the categorical variables: implied gender and residency
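pd.get_dummies creates one 0/1 indicator column per category; a minimal illustration:

```python
import pandas as pd

# a small categorical series stands in for the Implied_Gender column
s = pd.Series(['M', 'F', 'U', 'M'])
d = pd.get_dummies(s, prefix='Gender')
```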
# dummify gender variable dummy_gender = pd.get_dummies(dataset['Implied_Gender'], prefix='Gender') print(dummy_gender.head()) # dummify residency variable dummy_resident = pd.get_dummies(dataset['Residency'], prefix='Resident') print(dummy_resident.head()) # Drop catagorical variables dataset = dataset.drop('Implied_Gender', 1) dataset = dataset.drop('Residency', 1) # Add dummy variables dataset = pd.concat([dummy_gender.ix[:, 'Gender_M':],dummy_resident.ix[:, 'Resident_H':],dataset], axis=1) dataset.head() # Make clean dataframe for regression model array = dataset.values n_features = len(array[0]) X = array[:,0:n_features-1] # features y = array[:,n_features-1] # target
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
4. Evaluate Algorithms a) Split-out validation dataset
# Split-out validation dataset validation_size = 0.20 seed = 7 X_train, X_validation, Y_train, Y_validation = train_test_split(X, y, test_size=validation_size, random_state=seed)
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
b) Spot Check Algorithms
# Spot-Check Algorithms models = [] models.append(('LR', LogisticRegression())) models.append(('LDA', LinearDiscriminantAnalysis())) models.append(('KNN', KNeighborsClassifier())) models.append(('CART', DecisionTreeClassifier())) models.append(('NB', GaussianNB())) models.append(('SVM', SVC())) # evaluate each model in turn results = [] names = [] for name, model in models: kfold = KFold(n_splits=10, random_state=seed) cv_results = cross_val_score(model, X_train, Y_train, cv=kfold, scoring='accuracy') results.append(cv_results) names.append(name) msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std()) print(msg)
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
c) Select The Best Model
# Compare Algorithms fig = plt.figure() fig.suptitle('Algorithm Comparison') ax = fig.add_subplot(111) plt.boxplot(results) ax.set_xticklabels(names) plt.show()
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
5. Make predictions on validation dataset Linear Discriminant Analysis is just about the most accurate model. Now test the accuracy of the model on the validation dataset.
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, Y_train)
predictions = lda.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))

# predict probability of pregnancy
y_pred_prob = lda.predict_proba(X_validation)[:, 1]

# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(Y_validation, y_pred_prob)
plt.plot(fpr, tpr)
plt.plot([0, 1], [0, 1], color='navy', linestyle='--')
plt.xlim([-0.05, 1.0])
plt.ylim([0.0, 1.05])
plt.gca().set_aspect('equal', adjustable='box')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.show()

# calculate AUC
print(metrics.roc_auc_score(Y_validation, y_pred_prob))
notebooks/data_smart_project_1.ipynb
craigrshenton/home
mit
A very simple "image"
X = np.zeros((9, 9)) X[::2, 1::2] = 1 X[1::2, ::2] = 1 plt.imshow(X, cmap=plt.cm.gray, interpolation="nearest") camera = data.camera() print(type(camera)) plt.imshow(camera, cmap=plt.cm.gray) print(camera.shape) print(camera.dtype)
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Histogram Equalization Q: What is the histogram of an image?
print(camera.min(), camera.max()) from scipy.stats import uniform bins = np.linspace(0, 256, 20) hist = np.histogram(camera.ravel(), bins, normed=True)[0] bins = 0.5*(bins[1:] + bins[:-1]) plt.plot(bins, hist, label="original") estimate = uniform.pdf(bins) plt.plot(bins, estimate, label="uniform estimate")
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Q: What is wrong with the figure above? Q: What does it mean to improve the contrast of an image?
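Histogram equalization remaps each intensity through the image's normalized cumulative histogram, spreading pixel values more evenly over the available range. A plain numpy sketch (not skimage's exact implementation):

```python
import numpy as np

def equalize_sketch(img):
    # img: uint8 array; map intensities through the normalized CDF
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]            # normalize CDF to [0, 1]
    return cdf[img]           # equalized image with values in [0, 1]

img = np.array([[0, 0, 128], [128, 255, 255]], dtype=np.uint8)
eq = equalize_sketch(img)
```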
from skimage import exposure camera_eq = exposure.equalize_hist(camera) print(camera_eq.min(), camera_eq.max()) bins = np.linspace(0, 1, 20) hist = np.histogram(camera_eq.ravel(), bins, normed=True)[0] bins = 0.5*(bins[1:] + bins[:-1]) plt.plot(bins, hist, label="Original") estimate_uni = uniform.pdf(bins) plt.plot(bins, estimate_uni, label="Uniform") plt.legend(loc="best") fig, ax = plt.subplots(2, 2) ax[0, 0].imshow(camera, cmap=plt.cm.gray) ax[0, 0].set_xticks([]) ax[0, 0].set_yticks([]) ax[0, 0].grid(False) ax[0, 0].set_title("Original") # compute bins and histogram for original image bins = np.linspace(0, 256, 20) hist = np.histogram(camera.ravel(), bins, normed=True)[0] bins = 0.5*(bins[1:] + bins[:-1]) ax[0, 1].plot(bins, hist) ax[1, 0].imshow(camera_eq, cmap=plt.cm.gray) ax[1, 0].set_xticks([]) ax[1, 0].set_yticks([]) ax[1, 0].grid(False) ax[1, 0].set_title("High Contrast") # compute bins and histogram for high contrast image bins = np.linspace(0, 1, 20) hist = np.histogram(camera_eq.ravel(), bins, normed=True)[0] bins = 0.5*(bins[1:] + bins[:-1]) ax[1, 1].plot(bins, hist) fig.tight_layout()
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Exercise: Image filtering by thresholding Convert the background in the following image to zero-valued pixels
coins = data.coins() plt.imshow(coins, cmap=plt.cm.gray) plt.xticks([]) plt.yticks([]) plt.grid()
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Hint: Use histogram to pick the right threshold
# enter code here
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Q: What was the problem with this approach of thresholding? Adaptive Thresholding
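threshold_otsu chooses the threshold that maximizes the between-class variance of the two resulting pixel populations; a direct, unoptimized sketch of that criterion:

```python
import numpy as np

def otsu_sketch(img):
    # exhaustive search over thresholds, maximizing between-class variance
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # one class is empty, skip
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

t = otsu_sketch(np.array([50] * 10 + [200] * 10, dtype=np.uint8))
```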
from skimage import filters threshold = filters.threshold_otsu(coins) coins_low = coins.copy() coins_low[coins_low < threshold] = 0 plt.imshow(coins_low, cmap=plt.cm.gray) plt.xticks([]) plt.yticks([]) plt.grid() bins = np.linspace(0, 256, 50) hist = np.histogram(coins.ravel(), bins, normed=True)[0] bins = 0.5*(bins[1:] + bins[:-1]) plt.plot(bins, hist, label="Histogram") plt.vlines(threshold, 0, hist.max(), label="Threshold") plt.legend()
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Independent labeling of objects Segmentation with boolean masks
plt.imshow(coins_low > 0, cmap=plt.cm.gray) plt.grid()
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Filling in smaller regions
from skimage.morphology import closing, square bw = closing(coins_low > 0, square(3)) plt.imshow(bw, cmap=plt.cm.gray) plt.grid() from skimage.segmentation import clear_border from skimage.color import label2rgb from skimage.measure import label # remove artifacts connected to image border cleared = clear_border(bw) # label image regions label_image = label(cleared) image_label_overlay = label2rgb(label_image, image=coins_low) plt.imshow(image_label_overlay) plt.xticks([]) plt.yticks([]) from skimage.measure import regionprops import matplotlib.patches as mpatches fig, ax = plt.subplots(figsize=(10, 6)) ax.imshow(image_label_overlay) for region in regionprops(label_image): # take regions with large enough areas if region.area >= 100: # draw rectangle around segmented coins minr, minc, maxr, maxc = region.bbox rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr, fill=False, edgecolor='red', linewidth=2) ax.add_patch(rect) ax.set_axis_off() plt.tight_layout()
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Exercise: Color each independent region in the following image Note: The image is entirely black and white. It has already been thresholded and binarized
n = 20 l = 256 im = np.zeros((l, l)) points = l * np.random.random((2, n ** 2)) im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1 im = filters.gaussian_filter(im, sigma=l / (4. * n)) blobs = im > im.mean() plt.imshow(blobs, cmap=plt.cm.gray) plt.xticks([]) plt.yticks([]) # enter code here
notebooks/day1/04_intro_scikit_image.ipynb
jaidevd/inmantec_fdp
mit
Data set
# To plot the function we use a uniform grid
X = np.arange(-1, 1, 0.05)
Y = np.arange(-1, 1, 0.05)
n_samples = len(X) * len(Y)
points = np.zeros((n_samples, 2))
i=0
for x in X:
    for y in Y:
        points[i][0] = x
        points[i][1] = y
        i=i+1
labels = (np.sin(5 * points[:,0] * (3 * points[:,1] + 1.)) + 1. ) / 2.
points = points.reshape((n_samples,2))
labels = labels.reshape((n_samples,1))

# To plot the function
X, Y = np.meshgrid(X, Y)
Z = (np.sin(5 * X * (3 * Y + 1.)) + 1. ) / 2.
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm, linewidth=0, antialiased=False)

# We shuffle the points and labels
points, labels = shuffle(points, labels, random_state=0)

# We create training and test sets
X_train = points[:800]
Y_train = labels[:800]
X_test = points[800:]
Y_test = labels[800:]

_ = plt.title("function to be learned")
doc/sphinx/notebooks/dCGPANNs_for_function_approximation.ipynb
darioizzo/d-CGP
gpl-3.0
Encoding and training a FFNN using dCGP There are many ways the same FFNN could be encoded into a CGP chromosome. The utility encode_ffnn selects one for you and returns the corresponding expression.
# We define a 2 input 1 output dCGPANN with sigmoid nonlinearities
dcgpann = dcgpy.encode_ffnn(2, 1, [50, 20], ["sig", "sig", "sig"], 5)
std = 1.5
# Weight/biases initialization is made using a normal distribution
dcgpann.randomise_weights(mean=0., std=std)
dcgpann.randomise_biases(mean=0., std=std)

# We show the initial MSE
print("Starting error:", dcgpann.loss(X_test, Y_test, "MSE"))
print("Net complexity (number of active weights):", dcgpann.n_active_weights())
print("Net complexity (number of unique active weights):", dcgpann.n_active_weights(unique=True))
print("Net complexity (number of active nodes):", len(dcgpann.get_active_nodes()))

x = dcgpann.get()
w = dcgpann.get_weights()
b = dcgpann.get_biases()
res = []

# And show a visualization of the FFNN encoded in a CGP
dcgpann.visualize(show_nonlinearities=True)

import timeit
start_time = timeit.default_timer()
lr0 = 0.3
for i in tqdm(range(5000)):
    lr = lr0  # * np.exp(-0.0001 * i)
    loss = dcgpann.sgd(X_train, Y_train, lr, 32, "MSE", parallel=4)
    res.append(loss)
elapsed = timeit.default_timer() - start_time

# Print the time taken to train and the final result on the test set
print("Time (s): ", elapsed)
print("End MSE: ", dcgpann.loss(X_test, Y_test, "MSE"))
doc/sphinx/notebooks/dCGPANNs_for_function_approximation.ipynb
darioizzo/d-CGP
gpl-3.0
The same training is done using Keras (TensorFlow backend)
import keras
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras import optimizers

# We define Stochastic Gradient Descent as an optimizer
sgd = optimizers.SGD(lr=0.3)

# We define weight initialization
initializerw = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)
initializerb = keras.initializers.RandomNormal(mean=0.0, stddev=std, seed=None)

model = Sequential([
    Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),
    Activation('sigmoid'),
    Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
    Activation('sigmoid'),
    Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),
    Activation('sigmoid'),
])

# For a mean squared error regression problem
model.compile(optimizer=sgd, loss='mse')

# Train the model, iterating on the data in batches of 32 samples
start_time = timeit.default_timer()
history = model.fit(X_train, Y_train, epochs=5000, batch_size=32, verbose=False)
elapsed = timeit.default_timer() - start_time

# Print the time taken to train and the final result on the test set
print("Time (s): ", elapsed)
print("End MSE: ", model.evaluate(X_train, Y_train))

# We plot for comparison the MSE during learning in the two cases
plt.semilogy(np.sqrt(history.history['loss']), label='Keras')
plt.semilogy(np.sqrt(res), label='dCGP')
plt.title('dCGP vs Keras')
plt.xlabel('epochs')
plt.legend()
_ = plt.ylabel('RMSE')
doc/sphinx/notebooks/dCGPANNs_for_function_approximation.ipynb
darioizzo/d-CGP
gpl-3.0
Repeating the same comparison ten times
epochs = 5000
for i in range(10):
    # dCGP
    dcgpann = dcgpy.encode_ffnn(2, 1, [50, 20], ["sig", "sig", "sig"], 5)
    dcgpann.randomise_weights(mean=0., std=std)
    dcgpann.randomise_biases(mean=0., std=std)
    res = []
    # inner loop variable renamed to j to avoid shadowing the outer i
    for j in tqdm(range(epochs)):
        lr = lr0  # * np.exp(-0.0001 * j)
        loss = dcgpann.sgd(X_train, Y_train, lr, 32, "MSE", parallel=4)
        res.append(loss)
    # Keras
    model = Sequential([
        Dense(50, input_dim=2, kernel_initializer=initializerw, bias_initializer=initializerb),
        Activation('sigmoid'),
        Dense(20, kernel_initializer=initializerw, bias_initializer=initializerb),
        Activation('sigmoid'),
        Dense(1, kernel_initializer=initializerw, bias_initializer=initializerb),
        Activation('sigmoid'),
    ])
    model.compile(optimizer=sgd, loss='mse')
    history = model.fit(X_train, Y_train, epochs=epochs, batch_size=32, verbose=False)
    plt.semilogy(np.sqrt(history.history['loss']), color='b')
    plt.semilogy(np.sqrt(res), color='C1')

plt.title('dCGP vs Keras')
plt.xlabel('epochs')
_ = plt.ylabel('RMSE')
doc/sphinx/notebooks/dCGPANNs_for_function_approximation.ipynb
darioizzo/d-CGP
gpl-3.0
Expected output: test: Hello World

<font color='blue'> What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas

1 - Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 - sigmoid function, np.exp()

Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.

Reminder: $sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.

<img src="images/Sigmoid.png" style="width:500px;height:228px;">

To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + math.exp(-x))
    ### END CODE HERE ###
    return s

basic_sigmoid(3)
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table style = "width:40%"> <tr> <td>** basic_sigmoid(3) **</td> <td>0.9525741268224334 </td> </tr> </table>

Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x)  # you will see this give an error when you run it, because x is a vector.
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
import numpy as np

# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x))  # result is (exp(1), exp(2), exp(3))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
# example of vector operation
x = np.array([1, 2, 3])
print(x + 3)
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Any time you need more info on a numpy function, we encourage you to look at the official documentation. You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.

Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.

$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\ \end{pmatrix}\tag{1} $$
# GRADED FUNCTION: sigmoid

import numpy as np  # this means you can access numpy functions by writing np.function() instead of numpy.function()

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-x))
    ### END CODE HERE ###
    return s

x = np.array([1, 2, 3])
sigmoid(x)
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table> <tr> <td> **sigmoid([1,2,3])**</td> <td> array([ 0.73105858, 0.88079708, 0.95257413]) </td> </tr> </table>

1.2 - Sigmoid gradient

As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

$$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$

You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function
    with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it
    to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    s = 1 / (1 + np.exp(-x))
    ds = s * (1 - s)
    ### END CODE HERE ###
    return ds

x = np.array([1, 2, 3])
print("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table> <tr> <td> **sigmoid_derivative([1,2,3])**</td> <td> [ 0.19661193 0.10499359 0.04517666] </td> </tr> </table>

1.3 - Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.

<img src="images/image2vector_kiank.png" style="width:500px;height:300;">

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b, c) you would do:

```python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```

- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
# GRADED FUNCTION: image2vector

def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
    ### END CODE HERE ###
    return v

# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y, 3) where 3 represents the RGB values
image = np.array([[[ 0.67826139,  0.29380381],
                   [ 0.90714982,  0.52835647],
                   [ 0.4215251 ,  0.45017551]],

                  [[ 0.92814219,  0.96677647],
                   [ 0.85304703,  0.52351845],
                   [ 0.19981397,  0.27417313]],

                  [[ 0.60659855,  0.00533165],
                   [ 0.10820313,  0.49978937],
                   [ 0.34144279,  0.94630077]]])

print("image2vector(image) = " + str(image2vector(image)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table style="width:100%"> <tr> <td> **image2vector(image)** </td> <td> [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]</td> </tr> </table>

1.4 - Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).

For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\ \end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\ \end{bmatrix}\tag{4} $$ and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\ \end{bmatrix}\tag{5}$$

Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.

Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
# GRADED FUNCTION: normalizeRows

def normalizeRows(x):
    """
    Implement a function that normalizes each row of the matrix x (to have unit length).

    Argument:
    x -- A numpy matrix of shape (n, m)

    Returns:
    x -- The normalized (by row) numpy matrix. You are allowed to modify x.
    """
    ### START CODE HERE ### (≈ 2 lines of code)
    # Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    # Divide x by its norm.
    x = x / x_norm
    ### END CODE HERE ###
    return x

x = np.array([
    [0, 3, 4],
    [1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table style="width:60%"> <tr> <td> **normalizeRows(x)** </td> <td> [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]]</td> </tr> </table>

Note: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!

1.5 - Broadcasting and the softmax function

A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.

Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.

Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$

$$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \\ \end{pmatrix} $$
# GRADED FUNCTION: softmax

def softmax(x):
    """Calculates the softmax for each row of the input x.

    Your code should work for a row vector and also for matrices of shape (n, m).

    Argument:
    x -- A numpy matrix of shape (n,m)

    Returns:
    s -- A numpy matrix equal to the softmax of x, of shape (n,m)
    """
    ### START CODE HERE ### (≈ 3 lines of code)
    # Apply exp() element-wise to x. Use np.exp(...).
    x_exp = np.exp(x)

    # Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
    x_sum = np.sum(x_exp, axis=1, keepdims=True)

    # Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
    s = x_exp / x_sum
    ### END CODE HERE ###
    return s

x = np.array([
    [9, 2, 5, 0, 0],
    [7, 5, 0, 0, 0]])
print("softmax(x) = " + str(softmax(x)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table style="width:60%"> <tr> <td> **softmax(x)** </td> <td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]]</td> </tr> </table>

Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.

Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.

<font color='blue'> What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful

2) Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
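The shape note about x_sum can be checked directly; a minimal sketch of how the (2, 1) row-sum column broadcasts against the (2, 5) matrix:

```python
import numpy as np

x_exp = np.exp(np.array([[9., 2., 5., 0., 0.],
                         [7., 5., 0., 0., 0.]]))   # shape (2, 5)
x_sum = np.sum(x_exp, axis=1, keepdims=True)       # shape (2, 1)

# Broadcasting stretches the (2, 1) column across the 5 columns of x_exp,
# so each row is divided by its own sum.
s = x_exp / x_sum
print(s.shape)        # (2, 5)
print(s.sum(axis=1))  # each row of a softmax sums to 1
```

Without keepdims=True, x_sum would have shape (2,) and the division would still broadcast, but along the wrong axis for a (5, 2) input, which is why keeping the explicit column shape is the safer habit.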
import time

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1), len(x2)))  # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i, j] = x1[i] * x2[j]
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i] * x2[i]
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3, len(x1))  # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i, j] * x1[j]
toc = time.process_time()
print("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]

### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1, x2)
toc = time.process_time()
print("dot = " + str(dot) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1, x2)
toc = time.process_time()
print("outer = " + str(outer) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1, x2)
toc = time.process_time()
print("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")

### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W, x1)
toc = time.process_time()
print("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000 * (toc - tic)) + "ms")
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.

Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which perform an element-wise multiplication.

2.1 Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.

Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as: $$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
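The distinction between np.dot and element-wise multiplication noted above, on two small vectors:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])

# np.dot on two vectors contracts them to a single scalar: 1*4 + 2*5 + 3*6
print(np.dot(a, b))       # 32.0

# np.multiply (or the * operator) keeps the shape and multiplies element by element
print(np.multiply(a, b))  # [ 4. 10. 18.]
print(a * b)              # identical to np.multiply(a, b)
```

A common source of bugs is using * where a matrix product was intended; checking the output shape (scalar vs same-shaped array) catches this immediately.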
# GRADED FUNCTION: L1

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L1 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.abs(yhat - y))
    ### END CODE HERE ###
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Expected Output:

<table style="width:20%"> <tr> <td> **L1** </td> <td> 1.1 </td> </tr> </table>

Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.

L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
# GRADED FUNCTION: L2

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of size m (predicted labels)
    y -- vector of size m (true labels)

    Returns:
    loss -- the value of the L2 loss function defined above
    """
    ### START CODE HERE ### (≈ 1 line of code)
    loss = np.sum(np.dot(y - yhat, y - yhat))
    ### END CODE HERE ###
    return loss

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat, y)))
Intro_to_Neural_Networks/Python+Basics+With+Numpy+v3.ipynb
radu941208/DeepLearning
mit
Test environment setup

For more details on this please check out examples/utils/testenv_example.ipynb.

devlib requires the ANDROID_HOME environment variable configured to point to your local installation of the Android SDK. If you do not have this variable configured in the shell used to start the notebook server, you need to run a cell to define where your Android SDK is installed, or specify ANDROID_HOME in your target configuration.

If more than one Android device is connected to the host, you must specify the ID of the device you want to target in my_target_conf. Run adb devices on your host to get the ID.
# Setup target configuration
my_conf = {

    # Target platform and board
    "platform"    : 'android',
    "board"       : 'hikey960',

    # Device serial ID
    # Not required if there is only one device connected to your computer
    "device"      : "0123456789ABCDEF",

    # Android home
    # Not required if already exported in your .bashrc
    #"ANDROID_HOME" : "/home/vagrant/lisa/tools/",

    # Folder where all the results will be collected
    "results_dir" : "Viewer_example",

    # Define devlib modules to load
    "modules"     : [
        'cpufreq'       # enable CPUFreq support
    ],

    # FTrace events to collect for all the tests configuration which have
    # the "ftrace" flag enabled
    "ftrace"  : {
        "events" : [
            "sched_switch",
            "sched_wakeup",
            "sched_wakeup_new",
            "sched_overutilized",
            "sched_load_avg_cpu",
            "sched_load_avg_task",
            "sched_load_waking_task",
            "cpu_capacity",
            "cpu_frequency",
            "cpu_idle",
            "sched_tune_config",
            "sched_tune_tasks_update",
            "sched_tune_boostgroup_update",
            "sched_tune_filter",
            "sched_boost_cpu",
            "sched_boost_task",
            "sched_energy_diff"
        ],
        "buffsize" : 100 * 1024,
    },

    # Tools required by the experiments
    "tools"   : ['trace-cmd', 'taskset'],
}

# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False)
target = te.target
ipynb/examples/android/workloads/Android_Viewer.ipynb
bjackman/lisa
apache-2.0
Workload definition

The Viewer workload simply reads a URI and lets Android pick the best application to view the item designated by that URI. That item could be a web page, a photo, a pdf, etc. For instance, if given a URL to a Google Maps location, the Google Maps application will be opened at that location. If the device doesn't have Google Play Services (e.g. HiKey960), it will open Google Maps through the default web browser.

The Viewer class is intended to be subclassed to customize your workload. There are pre_interact(), interact() and post_interact() methods that are made to be overridden. In this case we'll simply execute a script on the target to swipe around a location on Gmaps. This script is generated using the TargetScript class, which is used here on System.{h,v}swipe() calls to accumulate commands instead of executing them directly. Those commands are then output to a script on the remote device, and that script is later executed as the item is being viewed. See ${LISA_HOME}/libs/util/target_script.py
class GmapsViewer(ViewerWorkload):

    def pre_interact(self):
        self.script = TargetScript(te, "gmaps_swiper.sh")

        # Define commands to execute during experiment
        for i in range(2):
            System.hswipe(self.script, 40, 60, 100, False)
            self.script.append('sleep 1')
            System.vswipe(self.script, 40, 60, 100, True)
            self.script.append('sleep 1')
            System.hswipe(self.script, 40, 60, 100, True)
            self.script.append('sleep 1')
            System.vswipe(self.script, 40, 60, 100, False)
            self.script.append('sleep 1')

        # Push script to the target
        self.script.push()

    def interact(self):
        self.script.run()

def experiment():
    # Configure governor
    target.cpufreq.set_all_governors('sched')

    # Get workload
    wload = Workload.getInstance(te, 'gmapsviewer')

    # Run workload
    wload.run(out_dir=te.res_dir,
              collect="ftrace",
              uri="https://goo.gl/maps/D8Sn3hxsHw62")

    # Dump platform descriptor
    te.platform_dump(te.res_dir)
ipynb/examples/android/workloads/Android_Viewer.ipynb
bjackman/lisa
apache-2.0
Workload execution
results = experiment()

# Load traces in memory (can take several minutes)
platform_file = os.path.join(te.res_dir, 'platform.json')
with open(platform_file, 'r') as fh:
    platform = json.load(fh)

trace_file = os.path.join(te.res_dir, 'trace.dat')
trace = Trace(platform, trace_file,
              events=my_conf['ftrace']['events'],
              normalize_time=False)
ipynb/examples/android/workloads/Android_Viewer.ipynb
bjackman/lisa
apache-2.0
Traces visualisation
!kernelshark {trace_file} 2>/dev/null
ipynb/examples/android/workloads/Android_Viewer.ipynb
bjackman/lisa
apache-2.0
The Data

For this exercise (and the next one) we will be using the classic MNIST dataset. Like the digits data you used earlier, these are pictures of numbers. The big difference is that the pictures are bigger (28x28 pixels rather than 8x8) and we will be looking at a much larger data set: 60,000 training images and 10,000 to test. See sample pictures by running the code below.

Data structure:
* As with the digits data earlier, we will "unwrap" the 28x28 square arrays into 784-dimensional vectors.
* With 60,000 input images this gives us an input matrix x_train of dimensionality 60,000 x 784 (and a 10,000 x 784 matrix x_test for the test data).
* For each of these images we have a true class label, y_train and y_test, which is a number between 0 and 9 corresponding to the true digit. Thus num_classes=10.
* Below we reshape this label data into a "binary" format for ease of comparison: each element of y_train is represented by a 10-dimensional vector, with a 1 in the position of the correct label and zeros everywhere else.
* Thus our "output" matrix y_train has dimensionality 60,000 x 10 (and y_test is 10,000 x 10 for the test data).
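The "binary" label format described above can be illustrated with a hand-rolled equivalent of what keras.utils.to_categorical (used in the next cell) produces; to_one_hot is a hypothetical helper name used only for illustration.

```python
import numpy as np

def to_one_hot(y, num_classes):
    """Each integer label becomes a row with a 1 at that index and 0 elsewhere."""
    out = np.zeros((len(y), num_classes))
    out[np.arange(len(y)), y] = 1.
    return out

# Three example labels: the digits 3, 0 and 9 each map to a 10-dimensional row
print(to_one_hot(np.array([3, 0, 9]), 10))
```

Applying this to all 60,000 training labels gives exactly the 60,000 x 10 output matrix described above.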
# Load Mnist data
# the data, split between train and test sets
num_classes = 10  # we have 10 digits to classify
(x_train, y_train), (x_test, y_test) = mnist.load_data()

x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

# Plot sample pictures
fig = plt.figure(figsize=(8, 8))
classNum = np.zeros((y_train.shape[0], 1))
for i in range(y_train.shape[0]):
    classNum[i] = np.where(y_train[i, :])

meanVecs = np.zeros((num_classes, 784))
nReps = 10
counter = 1
for num in range(num_classes):
    idx = np.where(classNum == num)[0]
    meanVecs[num, :] = np.mean(x_train[idx, :], axis=0)
    for rep in range(nReps):
        mat = x_train[idx[rep], :]
        mat = mat.reshape(28, 28)
        ax = fig.add_subplot(num_classes, nReps, counter)
        plt.imshow(mat, cmap=plt.cm.binary)
        plt.xticks([])
        plt.yticks([])
        counter = counter + 1
plt.suptitle('Sample Numbers')
plt.show()

# Plot "Average" Pictures
fig = plt.figure(figsize=(8, 8))
for num in range(num_classes):
    mat = meanVecs[num, :].reshape(28, 28)
    ax = fig.add_subplot(3, 4, num + 1)
    plt.imshow(mat, cmap=plt.cm.binary)
plt.suptitle('Average Numbers')
plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Building the model

We're now ready to build our first neural network model. We'll begin with the simplest model imaginable given the data: a network where each one of the 784 pixels in the input data is connected to each of the 10 output classes. This network has only a single layer (it is both the first and the last layer) and is essentially just a 784x10 matrix, which multiplies the 784-pixel input vector to produce a 10-dimensional output. This is not deep yet, but we'll get to adding more layers later.

Step I: Define the Model

In this case there is one set of inputs going forward to a single set of outputs with no feedback. This is called a sequential model. We can thus create the "shell" for such a model by declaring:
model = Sequential()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
The variable model will contain all the information on the model; as we add layers and perform calculations, it will all be stored inside this variable.

Step II: Adding an (input) layer

In general, adding a layer to a model is achieved by the model.add(layer_definition) command, where the layer_definition is specific to the kind of layer we want to add. Layers are defined by 4 traits:
1. The layer type: Each layer type has its own function in keras, like Dense for a fully-connected layer, Conv2D for a 2D convolutional layer, and so on.
2. The output size: This is the number of outputs emerging after the layer, and is specified as the first argument to the layer function.
   - In our case the output size is num_classes=10.
3. The input size: This is the number of inputs coming in to the layer.
   - In general, except for the input layer, keras is smart enough to figure out the input size based on the preceding layers.
   - We need to explicitly specify the size of the input layer as an input_shape= parameter.
4. Activation: This is the non-linear function used to transform the integrated inputs into the output. E.g. 'relu' is the rectified-linear-unit transform. For a layer connecting to the output, we want to enforce a binary-ish response, and hence the last layer will often use a softmax activation.

Note: in our super-simple first case, the first layer is essentially the last layer, which is why we end up providing both the input_shape= and using softmax. This will not be the case for deeper networks.

So we can add our (Dense) fully connected layer with num_classes outputs, softmax activation and 784 pixel inputs as:
model.add(Dense(num_classes, activation='softmax', input_shape=(784,)))
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Note the input_shape=(784,) here. 784 is the size of our input images. The (784,) means we can have an unspecified number of input images coming into the network. Normally, we would keep adding more layers (see examples later), but in this case we only have one layer. So we can see what our model looks like by invoking:
model.summary()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Whenever you build a model, be sure to look at the summary. The number of trainable parameters gives you a sense of model complexity. Models with fewer parameters are likely to be easier to train, and likely to generalize better for the same performance.

Step III: Compile the model

This step describes how keras will update the model during the training phase (this defines some of the parameters for that calculation). There are two choices we make here:

1. How to define the loss: This defines how we penalize disagreement of the output of the classifier with respect to the true labels. There are several options (https://keras.io/losses/), but for a multi-class problem such as this with a binary encoding, categorical_crossentropy is a popular choice that penalizes high-confidence mistakes.
2. The optimizer: Optimization is performed using some form of gradient descent, but there are several choices on exactly how this is done (https://keras.io/optimizers/ and http://ruder.io/optimizing-gradient-descent/). Here we choose to go with RMSprop(), a popular choice. The parameters of the optimizer, such as the learning rate, can also be specified as arguments to the optimizer function, but we will use default values for now.

We also choose to report the accuracy during the optimization process, to keep track of progress:
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Step IV: Train the model

We're now ready to perform the training. A couple more decisions need to be made:

1. How long to train: This is measured in "epochs", each epoch representing one pass of the whole training data through the algorithm. You want to give the optimizer long enough to get to a good solution, but you will hit a point of diminishing returns (more on this later, and in the exercises).
2. Batch size: The network is updated by splitting the data into mini-batches, and this is the number of data points used for each full update of the weights. Increasing the batch size will lead to larger memory requirements, etc.

With these parameters we can fit the model to our training data as follows. This will produce a nice plot updating the accuracy (% of correctly classified points in the training data):
batch_size = 128
epochs = 20

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1)
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
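As a quick sanity check on the batch-size arithmetic above (assuming the standard 60,000-image MNIST training set used in this notebook), the batch size directly determines how many weight updates happen per epoch:

```python
import math

n_train = 60000   # MNIST training images (assumed)
batch_size = 128
epochs = 20

updates_per_epoch = math.ceil(n_train / batch_size)
total_updates = updates_per_epoch * epochs
print(updates_per_epoch, total_updates)  # 469 weight updates per epoch, 9380 in total
```

Halving the batch size would double the number of updates per epoch (at smaller memory cost per update), which is part of the speed/memory trade-off mentioned above.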
Step V: Test the model

The real test of a model is how well it does on data it has never seen (a severely overfit model could get 100% accuracy on training data, but fail miserably on new data). So the real score we care about is the performance on unseen data. We measure this by evaluating performance on the test data we have held out so far. The code below also makes a plot showing the accuracy curve on the training data, and a flat line corresponding to the performance on the new data.

Questions: What does the deviation between these two curves tell us? What do you think would happen if we continued training? Feel free to test.
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

fig = plt.figure()
trainCurve = plt.plot(history.history['acc'], label='Training')
testCurve = plt.axhline(y=score[1], color='k', label='Testing')
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Interpreting the results

Since we used an incredibly simple network, it is easy to look under the covers and see what is going on. For each class (i.e. digit = 0 to 9) this simple network essentially has a filter matrix the same size as the image, against which it "multiplies" an input image. The digit whose filter produces the highest response is the chosen digit. We can look at the filter matrices by looking at the weights of the first layer.
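Before pulling out the real trained weights, here is a minimal numpy sketch of this single-layer computation. The random weights below stand in for the trained ones, so only the shapes and the softmax/argmax mechanics are meaningful here:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(784, 10))  # one flattened 28x28 "filter" per digit class
b = np.zeros(10)
x = rng.random(784)             # stand-in for one flattened input image

logits = x @ W + b              # response of each class filter to the image
probs = np.exp(logits - logits.max())
probs /= probs.sum()            # softmax: all positive, sums to 1

predicted_class = int(np.argmax(probs))  # the class with the highest response wins
print(probs.shape, float(probs.sum()), predicted_class)
```

The columns of W are exactly the per-class filter images visualized in the next cell.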
denseLayer = model.layers[0]
weights = denseLayer.get_weights()
topWeights = weights[0]

fig = plt.figure(figsize=(15, 15))
meanTop = np.mean(topWeights, axis=1)
for num in range(topWeights.shape[1]):
    mat = topWeights[:, num] - meanTop
    mat = mat.reshape(28, 28)
    ax = fig.add_subplot(3, 4, num + 1)
    plt.imshow(mat, cmap=plt.cm.binary)
plt.show()

#fig = plt.figure()
#plt.imshow(meanTop.reshape(28, 28), cmap=plt.cm.binary)
#plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Questions: Compare these weights to the mean digit images at the beginning of this notebook:

1. Can you explain the form of these filters? Why does the first one have a white spot (low value) at the center while the second has a high value there?
2. What do you think the limitations of this approach are? What might we gain by adding more layers?

Building a deeper network

We will build a deeper 3-layer network, following exactly the same steps, but with 2 additional layers thrown in. FYI, this topology comes from one of the examples provided by keras at https://github.com/keras-team/keras/blob/master/examples/mnist_mlp.py

* Note the changes to the input layer below (output size and activation).
* There is a new intermediate dense layer. This takes the 512 outputs from the input layer and connects them to 512 outputs.
* Then the final layer connects the 512 inputs from the previous layer to the 10 needed outputs.
# Model initialization unchanged
model = Sequential()

# Input layer is similar, but because it doesn't connect to the final layer we are free
# to choose the number of outputs (here 512), and we use 'relu' activation instead of softmax
model.add(Dense(512, activation='relu', input_shape=(784,)))

# New intermediate layer connecting the 512 inputs from the previous layer to 512 new outputs
model.add(Dense(512, activation='relu'))

# The 512 inputs from the previous layer connect to the final 10 class outputs, with a softmax activation
model.add(Dense(num_classes, activation='softmax'))

# Compilation as before
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

#plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
#Image("model_plot.png")

model.summary()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
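The parameter count reported by model.summary() for this topology can be checked by hand: each Dense layer has inputs × outputs weights plus one bias per output.

```python
# (inputs, outputs) for each Dense layer in the 784 -> 512 -> 512 -> 10 network
layers = [(784, 512), (512, 512), (512, 10)]
params = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(params)  # 669706 -- compare with the model.summary() total above
```

Compare this with the 784 * 10 + 10 = 7,850 parameters of the single-layer model: an almost hundredfold increase.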
Note the dramatic increase in the number of parameters with this network. When you train it, you'll see that this increase in the number of parameters really slows things down, but leads to an increase in performance:
batch_size = 128
epochs = 20

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

fig = plt.figure()
trainCurve = plt.plot(history.history['acc'], label='Training')
testCurve = plt.axhline(y=score[1], color='k', label='Testing')
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
Questions: What do you think about the agreement between the training and testing curves? Do you think there is a qualitative difference in what this classifier picks up?

We can get some clues by looking at the input layer as before, although now there are 512 filters instead of 10, so we just look at the first 10.
denseLayer = model.layers[0]
weights = denseLayer.get_weights()
topWeights = weights[0]

fig = plt.figure(figsize=(15, 15))
meanTop = np.mean(topWeights, axis=1)
for num in range(10):
    mat = topWeights[:, num] - meanTop
    mat = mat.reshape(28, 28)
    ax = fig.add_subplot(3, 4, num + 1)
    plt.imshow(mat, cmap=plt.cm.binary)
plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
What is the qualitative difference between these plots and the filters from the single-layer network?

Overfitting

As you can see, with the increase in the number of parameters there is an increase in the mismatch between training and testing, raising the concern that the model is overfitting on the training data. There are a couple of approaches to combat this:

1. Dropout
2. Don't train so hard!

Dropout is a way of introducing noise into the model to prevent overfitting. It is implemented by adding dropout layers, which randomly set a specified fraction of input units to zero at each update during training. Below is the same model implemented with dropout layers added.
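As an aside, the core dropout operation can be sketched in a few lines of numpy. This is the "inverted dropout" convention — scaling the surviving units up at training time so their expected value is unchanged — which is the standard trick rather than something shown in this notebook:

```python
import numpy as np

def dropout(x, rate, rng):
    # zero out a `rate` fraction of units, scale the survivors by 1/(1-rate)
    keep = rng.random(x.shape) >= rate
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(42)
x = np.ones(10000)
y = dropout(x, 0.2, rng)

print((y == 0).mean())  # ~0.2: roughly 20% of the units were dropped
print(y.mean())         # ~1.0: the expected value is preserved by the rescaling
```

At test time dropout is simply switched off (keras handles this automatically), and the rescaling means no further correction is needed.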
model = Sequential()
model.add(Dense(512, activation='relu', input_shape=(784,)))
# NEW dropout layer dropping 20% of inputs
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu'))
# NEW dropout layer dropping 20% of inputs
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))

# Compilation as before
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

#plot_model(model, to_file='model_plot.png', show_shapes=True, show_layer_names=True)
#Image("model_plot.png")

model.summary()

batch_size = 128
epochs = 20

history = model.fit(x_train, y_train,
                    batch_size=batch_size,
                    epochs=epochs,
                    verbose=1)

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

fig = plt.figure()
trainCurve = plt.plot(history.history['acc'], label='Training')
testCurve = plt.axhline(y=score[1], color='k', label='Testing')
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()
bicf_nanocourses/courses/ML_1/exercises/Keras_Tutorial.ipynb
bcantarel/bcantarel.github.io
gpl-3.0
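The second anti-overfitting approach — "don't train so hard" — is usually automated as early stopping: stop once validation accuracy hasn't improved for a set number of epochs. Keras provides this as a callback; below is just the patience logic in plain Python, with made-up accuracy numbers for illustration:

```python
def early_stop_epoch(val_acc, patience=3):
    """Return the epoch of the best validation accuracy once `patience`
    epochs pass without improvement; otherwise the final epoch."""
    best, best_epoch = float('-inf'), 0
    for epoch, acc in enumerate(val_acc):
        if acc > best:
            best, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            return best_epoch  # no improvement for `patience` epochs: stop here
    return len(val_acc) - 1

# Hypothetical validation accuracies: improvement stalls after epoch 4
history = [0.90, 0.93, 0.95, 0.96, 0.97, 0.969, 0.968, 0.967, 0.966]
print(early_stop_epoch(history))  # 4
```

With a rule like this, the widening train/test gap seen in the plots above would trigger a stop well before the full 20 epochs.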
TF-Keras Text Classification Distributed Single Worker GPUs using Vertex Training with Local Mode Container <table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Setup
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"

content_name = "tf-keras-txt-cls-dist-single-worker-gpus-local-mode-cont"
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Local Training with Vertex Local Mode and Auto Packaging
BASE_IMAGE_URI = "us-docker.pkg.dev/vertex-ai/training/tf-gpu.2-5:latest"
SCRIPT_PATH = "trainer/task.py"
OUTPUT_IMAGE_NAME = "gcr.io/{}/{}:latest".format(PROJECT_ID, content_name)
ARGS = "--epochs 5 --batch-size 16 --local-mode"

! gcloud ai custom-jobs local-run \
  --executor-image-uri=$BASE_IMAGE_URI \
  --script=$SCRIPT_PATH \
  --output-image-uri=$OUTPUT_IMAGE_NAME \
  -- \
  $ARGS
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Vertex Training using Vertex SDK and Vertex Local Mode Container Container Built by Vertex Local Mode
custom_container_image_uri = OUTPUT_IMAGE_NAME

! docker push $custom_container_image_uri
! gcloud container images list --repository "gcr.io"/$PROJECT_ID
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Initialize Vertex SDK
! pip install -r requirements.txt

from google.cloud import aiplatform

aiplatform.init(
    project=PROJECT_ID,
    staging_bucket=BUCKET_NAME,
    location=REGION,
)
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Vertex Tensorboard Instance
tensorboard = aiplatform.Tensorboard.create(
    display_name=content_name,
)
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Option: Use a Previously Created Vertex Tensorboard Instance

tensorboard_name = "Your Tensorboard Resource Name or Tensorboard ID"
tensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)

Run a Vertex SDK CustomContainerTrainingJob
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"

machine_type = "n1-standard-8"
accelerator_count = 4
accelerator_type = "NVIDIA_TESLA_P100"

args = [
    "--epochs", "100",
    "--batch-size", "128",
    "--num-gpus", f"{accelerator_count}",
]

custom_container_training_job = aiplatform.CustomContainerTrainingJob(
    display_name=display_name,
    container_uri=custom_container_image_uri,
)

custom_container_training_job.run(
    args=args,
    base_output_dir=gcs_output_uri_prefix,
    machine_type=machine_type,
    accelerator_type=accelerator_type,
    accelerator_count=accelerator_count,
    tensorboard=tensorboard.resource_name,
    service_account=SERVICE_ACCOUNT,
)

print(f"Custom Training Job Name: {custom_container_training_job.resource_name}")
print(f"GCS Output URI Prefix: {gcs_output_uri_prefix}")
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Training Output Artifact
! gsutil ls $gcs_output_uri_prefix
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Clean Up Artifact
! gsutil rm -rf $gcs_output_uri_prefix
community-content/tf_keras_text_classification_distributed_single_worker_gpus_with_gcloud_local_run_and_vertex_sdk/vertex_training_with_local_mode_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Hyperparameter optimization This notebook was made with the following version of george:
import george

george.__version__
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
In this tutorial, we’ll reproduce the analysis for Figure 5.6 in Chapter 5 of Rasmussen & Williams (R&W). The data are measurements of the atmospheric CO2 concentration made at Mauna Loa, Hawaii (Keeling & Whorf 2004). The dataset is said to be available online but I couldn’t seem to download it from the original source. Luckily the statsmodels package includes a copy that we can load as follows:
import numpy as np
import matplotlib.pyplot as pl
from statsmodels.datasets import co2

data = co2.load_pandas().data
t = 2000 + (np.array(data.index.to_julian_date()) - 2451545.0) / 365.25
y = np.array(data.co2)
m = np.isfinite(t) & np.isfinite(y) & (t < 1996)
t, y = t[m][::4], y[m][::4]

pl.plot(t, y, ".k")
pl.xlim(t.min(), t.max())
pl.xlabel("year")
pl.ylabel("CO$_2$ in ppm");
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
In this figure, you can see that there is a periodic (or quasi-periodic) signal with a year-long period superimposed on a long-term trend. We will follow R&W and model these effects non-parametrically using a complicated covariance function. The covariance function that we'll use is:

$$k(r) = k_1(r) + k_2(r) + k_3(r) + k_4(r)$$

where

$$ \begin{eqnarray} k_1(r) &=& \theta_1^2 \, \exp \left(-\frac{r^2}{2\,\theta_2} \right) \ k_2(r) &=& \theta_3^2 \, \exp \left(-\frac{r^2}{2\,\theta_4} -\theta_5\,\sin^2\left( \frac{\pi\,r}{\theta_6}\right) \right) \ k_3(r) &=& \theta_7^2 \, \left [ 1 + \frac{r^2}{2\,\theta_8\,\theta_9} \right ]^{-\theta_8} \ k_4(r) &=& \theta_{10}^2 \, \exp \left(-\frac{r^2}{2\,\theta_{11}} \right) + \theta_{12}^2\,\delta_{ij} \end{eqnarray} $$

We can implement this kernel in George as follows (we'll use the R&W results as the hyperparameters for now):
from george import kernels

k1 = 66**2 * kernels.ExpSquaredKernel(metric=67**2)
k2 = 2.4**2 * kernels.ExpSquaredKernel(90**2) * kernels.ExpSine2Kernel(gamma=2/1.3**2, log_period=0.0)
k3 = 0.66**2 * kernels.RationalQuadraticKernel(log_alpha=np.log(0.78), metric=1.2**2)
k4 = 0.18**2 * kernels.ExpSquaredKernel(1.6**2)
kernel = k1 + k2 + k3 + k4
docs/_static/notebooks/hyper.ipynb
dfm/george
mit
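As a sanity check on the expressions above, the quasi-periodic term $k_2(r)$ can also be evaluated directly in numpy. The values below are the ones quoted in the code ($\theta_3 = 2.4$, $\theta_4 = 90^2$, $\theta_5 = 2/1.3^2$, one-year period); note that this is only an illustration of the formula, and george's own kernel classes use a slightly different parameterization (e.g. log-parameters) than the raw $\theta$ symbols.

```python
import numpy as np

def k2(r, theta3=2.4, theta4=90.0**2, theta5=2.0/1.3**2, theta6=1.0):
    # quasi-periodic term: squared-exponential decay times a periodic factor
    return theta3**2 * np.exp(-r**2 / (2*theta4) - theta5 * np.sin(np.pi*r/theta6)**2)

print(k2(0.0))   # 5.76 == theta3**2 at zero lag
print(k2(0.5))   # much smaller at half a period, where sin^2 peaks
print(k2(1.0))   # nearly back to the zero-lag value after one full period
```

The covariance peaks at integer multiples of the period and decays slowly with the long squared-exponential envelope, which is exactly the yearly-repeating structure visible in the CO2 plot above.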