### In this lab, we will implement linear regression using the least-squares solution. We will use the same example as in class (Slide 18 from the linear regression slides). There are 5 steps; let's implement them step by step using only numpy.

![linreg_steps](linreg_slide_18.png)

```
import numpy as np
```

We are given the dataset {(0,0), (0,1), (1,0)} and asked to find the least-squares solution for the parameters in the regression of the function: y = w1 + w2*x

```
# creating the inputs and targets as numpy arrays
inputs = np.array([[0], [0], [1]])
targets = np.array([[0], [1], [0]])
print('inputs shape :', np.shape(inputs))
print('targets shape :', np.shape(targets))

# now let's do the steps to find the solution
# Step 1: evaluate the basis on the points (prepend a column of ones)
inputs = np.concatenate((np.ones((np.shape(inputs)[0], 1)), inputs), axis=1)
print('inputs shape :', np.shape(inputs))
print(inputs)

# Step 2: compute -> transpose(inputs) * inputs
q_matrix = np.dot(np.transpose(inputs), inputs)
print('q_matrix shape :', np.shape(q_matrix))
print(q_matrix)

# Step 3: invert q_matrix
q_inverse = np.linalg.inv(q_matrix)
print('q_inverse shape :', np.shape(q_inverse))
print(q_inverse)

# Step 4: compute the pseudo-inverse -> q_inverse * transpose(inputs)
q_pseudo = np.dot(q_inverse, np.transpose(inputs))
print('q_pseudo shape :', np.shape(q_pseudo))
print(q_pseudo.astype(np.float16))

# Step 5: compute w = q_pseudo * targets
weights = np.dot(q_pseudo, targets)
print('w shape :', np.shape(weights))
print(weights)
```

#### Now, let's implement the same steps on a real dataset. We will work on the auto-mpg dataset, which consists of a collection of datapoints about certain cars (weight, horsepower, etc.), with the aim being to predict the fuel efficiency in miles per gallon (mpg) for each car.
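A brief aside before moving to the real dataset (not part of the original lab): NumPy ships a built-in least-squares solver, `np.linalg.lstsq`, which should agree with the 5-step pseudo-inverse computation on the toy example above. A minimal sanity check:

```python
import numpy as np

# Step 1: the basis evaluated on {(0,0), (0,1), (1,0)} -> a bias column plus x
X = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [1.0, 1.0]])
t = np.array([[0.0], [1.0], [0.0]])

# Steps 2-5 in one expression: w = (X^T X)^{-1} X^T t
w_manual = np.linalg.inv(X.T @ X) @ X.T @ t

# NumPy's built-in solver, which computes the same least-squares solution
w_lstsq, residuals, rank, sv = np.linalg.lstsq(X, t, rcond=None)

print(w_manual.ravel())                 # w1 = 0.5, w2 = -0.5
print(np.allclose(w_manual, w_lstsq))   # the two solutions agree
```

In practice `np.linalg.lstsq` (or `np.linalg.pinv`) is preferred over inverting $X^T X$ explicitly, since the explicit inverse fails when that matrix is singular or ill-conditioned.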
```
"""
You are asked to
- load the dataset text file (auto-mpg.txt) as a numpy array
- preprocess the dataset (normalise it, split it into train and test sets)
- find the least-squares solution for the parameters (the weights vector)
- test the found parameters on the test set and calculate the error
The following comments and code are meant to guide you.
"""

"""
Please note: this dataset has one problem. There are missing values in it
(labelled with question marks '?'). The np.loadtxt() method doesn't like
these, and we don't know what to do with them anyway, so manually edit the
file and delete all lines where there is a '?' in that line.
The linear regressor can't do much with the names of the cars either, but
since they appear in quotes (") we will tell np.loadtxt that they are comments.

Attribute information for the dataset:
1. mpg:          continuous
2. cylinders:    multi-valued discrete
3. displacement: continuous
4. horsepower:   continuous
5. weight:       continuous
6. acceleration: continuous
7. model year:   multi-valued discrete
8. origin:       multi-valued discrete
9. car name:     string (unique for each instance)

Please note: the first column is our target (mpg).
"""

# TODO: load the dataset file using np.loadtxt()
import pandas as pd
df = pd.read_csv("auto-mpg.txt", delimiter=' ')
df.head()
# data = np.loadtxt("auto-mpg.txt", delimiter=' ', usecols=range(5))

# TODO: Normalise the dataset. You can do this easily in numpy
# by using np.mean and np.var. The only place where care is needed
# is along which axis the mean and variance are computed:
# axis=0 sums down the columns and axis=1 sums across the rows.
normalised_data = None

# TODO: Now separate the data into training and testing sets,
training, testing = None, None
# and split each set into inputs and targets (hint: slice the array)
trainin, traintgt = None, None
testin, testtgt = None, None

# TODO: Use the training set to find the weights vector.
# You need to implement the previous 5 steps on the training set
# to find the weights vector (this is called training).
# To make it simple, we define a function that takes
# two args, inputs and targets, and returns the weights vector
def linreg(inputs, targets):
    # you should implement the 5 steps here
    weights = None
    return weights

# test your implementation
weights = linreg(trainin, traintgt)
weights

# TODO: Test the found weights on the testing set.
# You can do this with:
# - testout = np.dot(testin, weights)
# - error = np.sum((testout - testtgt)**2)
testout = None
error = None

"""
You can try to re-train the model without normalising the data
and see if this makes any difference to the error value.
"""
```
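For reference, one possible completion of the `linreg` TODO above. This is a sketch only: it assumes the bias column has *not* yet been prepended to the inputs (the function adds it itself), and it re-checks the result against the toy example from the start of the lab; the exercise's `trainin`/`traintgt` arrays would be passed in the same way.

```python
import numpy as np

def linreg(inputs, targets):
    # Step 1: evaluate the basis on the points (prepend a column of ones for the bias)
    inputs = np.concatenate((np.ones((inputs.shape[0], 1)), inputs), axis=1)
    # Step 2: transpose(inputs) * inputs
    q_matrix = np.dot(inputs.T, inputs)
    # Step 3: invert q_matrix
    q_inverse = np.linalg.inv(q_matrix)
    # Step 4: pseudo-inverse = q_inverse * transpose(inputs)
    q_pseudo = np.dot(q_inverse, inputs.T)
    # Step 5: weights = pseudo-inverse * targets
    return np.dot(q_pseudo, targets)

# quick check on the slide example {(0,0), (0,1), (1,0)}
w = linreg(np.array([[0.0], [0.0], [1.0]]),
           np.array([[0.0], [1.0], [0.0]]))
print(w)  # w1 = 0.5, w2 = -0.5
```

Note that because this version prepends the bias column inside the function, the same column of ones would have to be prepended to `testin` before computing `testout` with `np.dot`.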
<a href="https://colab.research.google.com/github/grzegorzkwolek/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/GKwolek_3rd_assignment_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Lambda School Data Science - Making Data-backed Assertions This is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. ## Assignment - what's going on here? Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people. Try to figure out which variables are possibly related to each other, and which may be confounding relationships. Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack! 
```
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/0cb02007e8a7f3193cd33daa6cb2ed4158e73aed/module3-databackedassertions/persons.csv')

# it looks like "id" is missing as a column label
df = df.rename(columns={'Unnamed: 0': 'ID'})
print(df)

# setting ID as the index (just removing duplication)
df = df.set_index("ID")
df.head()

df.plot.scatter("exercise_time", "weight")
df.plot.scatter("age", "exercise_time")

!pip install pandas==0.23.4
# (the charts above are to help me understand the data a notch more, and validate some assumptions)

# back to the exercise we did during the class
age_bins = pd.cut(df['age'], 5, precision=0)
weight_bins = pd.cut(df['weight'], 8, precision=0)
pd.crosstab(weight_bins, age_bins, normalize="index")
pd.crosstab(weight_bins, age_bins, normalize="columns")

# possibly giving up too early, but crosstabs don't seem to be the right approach for this dataset. Back to charts.
import matplotlib.pyplot as plt
plt.style.use('dark_background')
age_weight_col_CT = pd.crosstab(age_bins, weight_bins, normalize="columns")
age_weight_col_CT.plot(kind="bar")

df.head()
cols = ['age', 'weight', 'exercise_time']
df2 = df[cols]
df.plot(figsize=(15, 10))
df = df.set_index("age")
df.sort_index?  # (inplace=True)
df.head(200)
```

### Assignment questions

After you've worked on some code, answer the following questions in this text block:

1. What are the variable types in the data? All continuous variables, represented as discrete ones.

2. What are the relationships between the variables? The higher the exercise time, the lower the weight (and the lower the maximum weight).

3. Which relationships are "real", and which spurious? The relation between age and weight seems spurious.
There is a relation between exercise time and weight, as well as between age and exercise time (negative beyond age 60).

## Stretch goals and resources

Following are *optional* things for you to take a look at. Focus on the above assignment first, and make sure to commit and push your changes to GitHub.

- [Spurious Correlations](http://tylervigen.com/spurious-correlations)
- [NIH on controlling for confounding variables](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017459/)

Stretch goals:

- Produce your own plot inspired by the Spurious Correlation visualizations (and consider writing a blog post about it - both the content and how you made it)
- Pick one of the techniques that NIH highlights for confounding variables - we'll be going into many of them later, but see if you can find which Python modules may help (hint - check scikit-learn)
# Convolutional Neural Networks: Step by Step Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**: - Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters. - Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! ## 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python. - [matplotlib](http://matplotlib.org) is a library to plot graphs in Python. - np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
``` import numpy as np import h5py import matplotlib.pyplot as plt %matplotlib inline plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' %load_ext autoreload %autoreload 2 np.random.seed(1) ``` ## 2 - Outline of the Assignment You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed: - Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional) - Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model: <img src="images/model.png" style="width:800px;height:300px;"> **Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. ## 3 - Convolutional Neural Networks Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. <img src="images/conv_nn.png" style="width:350px;height:200px;"> In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 
### 3.1 - Zero-Padding Zero-padding adds zeros around the border of an image: <img src="images/PAD.png" style="width:600px;height:400px;"> <caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Zero-Padding**<br> Image (3 channels, RGB) with a padding of 2. </center></caption> The main benefits of padding are the following: - It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image. **Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note: if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do: ```python a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..)) ``` ``` # GRADED FUNCTION: zero_pad def zero_pad(X, pad): """ Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, as illustrated in Figure 1.
Argument: X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images pad -- integer, amount of padding around each image on vertical and horizontal dimensions Returns: X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C) """ ### START CODE HERE ### (≈ 1 line) npad = ((0, 0), (pad,pad), (pad,pad),(0,0)) X_pad = np.pad(X,npad,'constant',constant_values = 0) ### END CODE HERE ### return X_pad np.random.seed(1) x = np.random.randn(4, 3, 3, 2) x_pad = zero_pad(x, 2) print ("x.shape =", x.shape) print ("x_pad.shape =", x_pad.shape) print ("x[1,1] =", x[1,1]) print ("x_pad[1,1] =", x_pad[1,1]) #first image, first column, every row, every channel: #print ("x_pad[1,1,:,:] =", x_pad[1,1,:,:]) #first image, first column, first row, every channel #print ("x_pad[1,1,1,:] =", x_pad[1,1,1,:]) #first image, first column, every row, first channel #print ("x_pad[1,1,1,1] =", x_pad[1,1,1,1]) fig, axarr = plt.subplots(1, 2) axarr[0].set_title('x') axarr[0].imshow(x[0,:,:,0]) axarr[1].set_title('x_pad') axarr[1].imshow(x_pad[0,:,:,0]) ``` **Expected Output**: <table> <tr> <td> **x.shape**: </td> <td> (4, 3, 3, 2) </td> </tr> <tr> <td> **x_pad.shape**: </td> <td> (4, 7, 7, 2) </td> </tr> <tr> <td> **x[1,1]**: </td> <td> [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] </td> </tr> <tr> <td> **x_pad[1,1]**: </td> <td> [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] </td> </tr> </table> ### 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. 
This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input - Outputs another volume (usually of different size) <img src="images/Convolution_schematic.gif" style="width:500px;height:300px;"> <caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : **Convolution operation**<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption> In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html). ``` # GRADED FUNCTION: conv_single_step def conv_single_step(a_slice_prev, W, b): """ Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation of the previous layer. Arguments: a_slice_prev -- slice of input data of shape (f, f, n_C_prev) W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev) b -- Bias parameters contained in a window - matrix of shape (1, 1, 1) Returns: Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data """ ### START CODE HERE ### (≈ 2 lines of code) # Element-wise product between a_slice and W. Do not add the bias yet. #GAC:(element-wise multiplication) s = np.multiply(a_slice_prev,W) # Sum over all entries of the volume s. 
Z = np.sum(s,axis=None) # Add bias b to Z. Cast b to a float() so that Z results in a scalar value. Z = Z + float(b) ### END CODE HERE ### return Z np.random.seed(1) a_slice_prev = np.random.randn(4, 4, 3) W = np.random.randn(4, 4, 3) b = np.random.randn(1, 1, 1) Z = conv_single_step(a_slice_prev, W, b) print("Z =", Z) ``` **Expected Output**: <table> <tr> <td> **Z** </td> <td> -6.99908945068 </td> </tr> </table> ### 3.3 - Convolutional Neural Networks - Forward pass In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: <center> <video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls> </video> </center> **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do: ```python a_slice_prev = a_prev[0:2,0:2,:] ``` This will be useful when you define `a_slice_prev` below, using the `start/end` indexes you will define. 2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below. <img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;"> <caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** <br> This figure shows only a single channel.
</center></caption> **Reminder**: The formulas relating the output shape of the convolution to the input shape are: $$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$ $$ n_C = \text{number of filters used in the convolution}$$ For this exercise, we won't worry about vectorization, and will just implement everything with for-loops. ``` # GRADED FUNCTION: conv_forward def conv_forward(A_prev, W, b, hparameters): """ Implements the forward propagation for a convolution function Arguments: A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) W -- Weights, numpy array of shape (f, f, n_C_prev, n_C) b -- Biases, numpy array of shape (1, 1, 1, n_C) hparameters -- python dictionary containing "stride" and "pad" Returns: Z -- conv output, numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward() function """ ### START CODE HERE ### # Retrieve dimensions from A_prev's shape (≈1 line) (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape #print("m =", m) #print("n_H_prev =", n_H_prev) #print("n_W_prev =", n_W_prev) #print("n_C_prev =", n_C_prev) # Retrieve dimensions from W's shape (≈1 line) (f, f, n_C_prev, n_C) = W.shape #print("f =", f) #print("n_C =", n_C) # Retrieve information from "hparameters" (≈2 lines) stride = hparameters['stride'] pad = hparameters['pad'] #print("stride =", stride) #print("pad =", pad) # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines) n_H = int((n_H_prev - f + 2*pad)/stride) + 1 n_W = int((n_W_prev - f + 2*pad)/stride) + 1 #print("n_H =", n_H) #print("n_W =", n_W) # Initialize the output volume Z with zeros.
(≈1 line) Z = np.zeros((m, n_H, n_W, n_C)) #print ("Z.shape =", Z.shape) # Create A_prev_pad by padding A_prev A_prev_pad = zero_pad(A_prev, pad) #padded image of shape (m, n_H_prev + 2*pad, n_W_prev + 2*pad, n_C_prev) #print ("A_prev_pad.shape =", A_prev_pad.shape) for i in range(m): # loop over the batch of training examples a_prev_pad = A_prev_pad[i,:,:,:] # Select ith training example's padded activation #print ("a_prev_pad.shape =", a_prev_pad.shape) for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over channels (= #filters) of the output volume # Find the corners of the current "slice" (≈4 lines) # (GAC) using h, w, f and s vert_start = h*stride vert_end = h*stride + f horiz_start = w*stride horiz_end = w*stride + f #print ("vert_start =", vert_start) #print ("vert_end =", vert_end) #print ("horiz_start =", horiz_start) #print ("horiz_end =", horiz_end) # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line) a_slice_prev = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:] #print ("a_slice_prev.shape =", a_slice_prev.shape) # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. 
(≈1 line) Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c]) ### END CODE HERE ### # Making sure your output shape is correct assert(Z.shape == (m, n_H, n_W, n_C)) # Save information in "cache" for the backprop cache = (A_prev, W, b, hparameters) return Z, cache np.random.seed(1) A_prev = np.random.randn(10,4,4,3) W = np.random.randn(2,2,3,8) b = np.random.randn(1,1,1,8) hparameters = {"pad" : 2, "stride": 2} Z, cache_conv = conv_forward(A_prev, W, b, hparameters) print("Z's mean =", np.mean(Z)) print("Z[3,2,1] =", Z[3,2,1]) print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3]) ``` **Expected Output**: <table> <tr> <td> **Z's mean** </td> <td> 0.0489952035289 </td> </tr> <tr> <td> **Z[3,2,1]** </td> <td> [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] </td> </tr> <tr> <td> **cache_conv[0][1][2][3]** </td> <td> [-0.20075807 0.18656139 0.41005165] </td> </tr> </table> Finally, a CONV layer should also contain an activation, in which case we would add the following line of code: ```python # Convolve the window to get back one output neuron Z[i, h, w, c] = ... # Apply activation A[i, h, w, c] = activation(Z[i, h, w, c]) ``` You don't need to do it here. ## 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output. - Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output. <table> <td> <img src="images/max_pool1.png" style="width:500px;height:300px;"> <td> <td> <img src="images/a_pool.png" style="width:500px;height:300px;"> <td> </table> These pooling layers have no parameters for backpropagation to train.
However, they have hyperparameters such as the window size $f$. This specifies the height and width of the $f \times f$ window you would compute a max or average over. ### 4.1 - Forward Pooling Now, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below. **Reminder**: As there's no padding, the formulas relating the output shape of the pooling to the input shape are: $$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$ $$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$ $$ n_C = n_{C_{prev}}$$ ``` # GRADED FUNCTION: pool_forward def pool_forward(A_prev, hparameters, mode = "max"): """ Implements the forward pass of the pooling layer Arguments: A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) hparameters -- python dictionary containing "f" and "stride" mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C) cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters """ # Retrieve dimensions from the input shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve hyperparameters from "hparameters" f = hparameters["f"] stride = hparameters["stride"] # Define the dimensions of the output n_H = int(1 + (n_H_prev - f) / stride) n_W = int(1 + (n_W_prev - f) / stride) n_C = n_C_prev # Initialize output matrix A A = np.zeros((m, n_H, n_W, n_C)) ### START CODE HERE ### for i in range(m): # loop over the training examples for h in range(n_H): # loop on the vertical axis of the output volume for w in range(n_W): # loop on the horizontal axis of the output volume for c in range (n_C): # loop over the channels of the output volume # Find the corners of the current "slice" (≈4 lines) vert_start = h * stride vert_end = vert_start + f horiz_start = w * stride horiz_end
= horiz_start + f # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line) #a_prev = A_prev[i,:,:,:] #a_slice_prev = a_prev[vert_start:vert_end,horiz_start:horiz_end,c] a_slice_prev = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean. #Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:,:,:,c], b[:,:,:,c]) if mode == "max": A[i, h, w, c] = np.max(a_slice_prev) elif mode == "average": A[i, h, w, c] = np.mean(a_slice_prev) ### END CODE HERE ### # Store the input and hparameters in "cache" for pool_backward() cache = (A_prev, hparameters) # Making sure your output shape is correct assert(A.shape == (m, n_H, n_W, n_C)) return A, cache np.random.seed(1) A_prev = np.random.randn(2, 4, 4, 3) hparameters = {"stride" : 2, "f": 3} A, cache = pool_forward(A_prev, hparameters) print("mode = max") print("A =", A) print() A, cache = pool_forward(A_prev, hparameters, mode = "average") print("mode = average") print("A =", A) ``` **Expected Output:** <table> <tr> <td> A = </td> <td> [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] </td> </tr> <tr> <td> A = </td> <td> [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] </td> </tr> </table> Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. ## 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED) In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated.
If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you need to calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we present them briefly below. ### 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. #### 5.1.1 - Computing dA: This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example: $$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$ Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into: ```python da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] ``` #### 5.1.2 - Computing dW: This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss: $$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$ Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$.
Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into: ```python dW[:,:,:,c] += a_slice * dZ[i, h, w, c] ``` #### 5.1.3 - Computing db: This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$: $$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$ As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into: ```python db[:,:,:,c] += dZ[i, h, w, c] ``` **Exercise**: Implement the `conv_backward` function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above. ``` def conv_backward(dZ, cache): """ Implement the backward propagation for a convolution function Arguments: dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C) cache -- cache of values needed for the conv_backward(), output of conv_forward() Returns: dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev), numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev) dW -- gradient of the cost with respect to the weights of the conv layer (W) numpy array of shape (f, f, n_C_prev, n_C) db -- gradient of the cost with respect to the biases of the conv layer (b) numpy array of shape (1, 1, 1, n_C) """ ### START CODE HERE ### # Retrieve information from "cache" (A_prev, W, b, hparameters) = cache # Retrieve dimensions from A_prev's shape (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape # Retrieve dimensions from W's shape (f, f, n_C_prev, n_C) = W.shape # Retrieve information from "hparameters" stride = hparameters['stride'] pad = 
hparameters['pad'] # Retrieve dimensions from dZ's shape (m, n_H, n_W, n_C) = dZ.shape # Initialize dA_prev, dW, db with the correct shapes dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) dW = np.zeros((f, f, n_C_prev, n_C)) db = np.zeros((1, 1, 1, n_C)) # Pad A_prev and dA_prev A_prev_pad = zero_pad(A_prev, pad) dA_prev_pad = zero_pad(dA_prev, pad) for i in range(m): # loop over the training examples # select ith training example from A_prev_pad and dA_prev_pad a_prev_pad = A_prev_pad[i,:,:,:] da_prev_pad = dA_prev_pad[i,:,:,:] for h in range(n_H): # loop over vertical axis of the output volume for w in range(n_W): # loop over horizontal axis of the output volume for c in range(n_C): # loop over the channels of the output volume # Find the corners of the current "slice" vert_start = h*stride vert_end = vert_start+f horiz_start = w*stride horiz_end = horiz_start+f # Use the corners to define the slice from a_prev_pad a_slice = a_prev_pad[vert_start:vert_end,horiz_start:horiz_end,:] # Update gradients for the window and the filter's parameters using the code formulas given above da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c] dW[:,:,:,c] += a_slice * dZ[i, h, w, c] db[:,:,:,c] += dZ[i, h, w, c] # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :]) dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :] ### END CODE HERE ### # Making sure your output shape is correct assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev)) return dA_prev, dW, db np.random.seed(1) dA, dW, db = conv_backward(Z, cache_conv) print("dA_mean =", np.mean(dA)) print("dW_mean =", np.mean(dW)) print("db_mean =", np.mean(db)) ``` ** Expected Output: ** <table> <tr> <td> **dA_mean** </td> <td> 1.45243777754 </td> </tr> <tr> <td> **dW_mean** </td> <td> 1.72699145831 </td> </tr> <tr> <td> **db_mean** </td> <td> 7.83923256462 </td> </tr> </table> ## 5.2 Pooling layer - backward pass Next, let's 
implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. ### 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix} 1 & 3 \\ 4 & 2 \end{bmatrix} \quad \rightarrow \quad M = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}\tag{4}$$ As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints: - `np.max()` may be helpful. It computes the maximum of an array. - If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that: ``` A[i,j] = True if X[i,j] = x A[i,j] = False if X[i,j] != x ``` - Here, you don't need to consider cases where there are several maxima in a matrix. ``` def create_mask_from_window(x): """ Creates a mask from an input matrix x, to identify the max entry of x. Arguments: x -- Array of shape (f, f) Returns: mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x. 
""" ### START CODE HERE ### (≈1 line) mask = (x==np.max(x)) ### END CODE HERE ### return mask np.random.seed(1) x = np.random.randn(2,3) mask = create_mask_from_window(x) print('x = ', x) print("mask = ", mask) ``` **Expected Output:** <table> <tr> <td> **x =** </td> <td> [[ 1.62434536 -0.61175641 -0.52817175] <br> [-1.07296862 0.86540763 -2.3015387 ]] </td> </tr> <tr> <td> **mask =** </td> <td> [[ True False False] <br> [False False False]] </td> </tr> </table> Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. ### 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this. For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix} 1/4 && 1/4 \\ 1/4 && 1/4 \end{bmatrix}\tag{5}$$ This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. 
[Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html) ``` def distribute_value(dz, shape): """ Distributes the input value in the matrix of dimension shape Arguments: dz -- input scalar shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz Returns: a -- Array of size (n_H, n_W) for which we distributed the value of dz """ ### START CODE HERE ### # Retrieve dimensions from shape (≈1 line) (n_H, n_W) = shape # Compute the value to distribute on the matrix (≈1 line) average = dz / (n_H * n_W) # Create a matrix where every entry is the "average" value (≈1 line) a = np.ones((n_H, n_W))*average ### END CODE HERE ### return a a = distribute_value(2, (2,2)) print('distributed value =', a) ``` **Expected Output**: <table> <tr> <td> distributed_value = </td> <td> [[ 0.5 0.5] <br> [ 0.5 0.5]] </td> </tr> </table> ### 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer. **Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ. 
``` def pool_backward(dA, cache, mode = "max"): """ Implements the backward pass of the pooling layer Arguments: dA -- gradient of cost with respect to the output of the pooling layer, same shape as A cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters mode -- the pooling mode you would like to use, defined as a string ("max" or "average") Returns: dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev """ ### START CODE HERE ### # Retrieve information from cache (≈1 line) (A_prev, hparameters) = cache # Retrieve hyperparameters from "hparameters" (≈2 lines) stride = hparameters['stride'] f = hparameters['f'] # Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines) m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape m, n_H, n_W, n_C = dA.shape # Initialize dA_prev with zeros (≈1 line) dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev)) for i in range(m): # loop over the training examples # select training example from A_prev (≈1 line) a_prev = A_prev[i,:,:,:] for h in range(n_H): # loop on the vertical axis for w in range(n_W): # loop on the horizontal axis for c in range(n_C): # loop over the channels (depth) # Find the corners of the current "slice" (≈4 lines) vert_start = h*stride vert_end = vert_start+f horiz_start = w*stride horiz_end = horiz_start+f # Compute the backward propagation in both modes. 
if mode == "max": # Use the corners and "c" to define the current slice from a_prev (≈1 line) a_prev_slice = a_prev[vert_start:vert_end,horiz_start:horiz_end,c] # Create the mask from a_prev_slice (≈1 line) mask = create_mask_from_window(a_prev_slice) # Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i,h,w,c] elif mode == "average": # Get the value a from dA (≈1 line) da = dA[i, h, w, c] # Define the shape of the filter as fxf (≈1 line) shape = (f,f) # Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line) dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape) ### END CODE ### # Making sure your output shape is correct assert(dA_prev.shape == A_prev.shape) return dA_prev np.random.seed(1) A_prev = np.random.randn(5, 5, 3, 2) hparameters = {"stride" : 1, "f": 2} A, cache = pool_forward(A_prev, hparameters) dA = np.random.randn(5, 4, 2, 2) dA_prev = pool_backward(dA, cache, mode = "max") print("mode = max") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) print() dA_prev = pool_backward(dA, cache, mode = "average") print("mode = average") print('mean of dA = ', np.mean(dA)) print('dA_prev[1,1] = ', dA_prev[1,1]) ``` **Expected Output**: mode = max: <table> <tr> <td> **mean of dA =** </td> <td> 0.145713902729 </td> </tr> <tr> <td> **dA_prev[1,1] =** </td> <td> [[ 0. 0. ] <br> [ 5.05844394 -1.68282702] <br> [ 0. 0. ]] </td> </tr> </table> mode = average <table> <tr> <td> **mean of dA =** </td> <td> 0.145713902729 </td> </tr> <tr> <td> **dA_prev[1,1] =** </td> <td> [[ 0.08485462 0.2787552 ] <br> [ 1.26461098 -0.25749373] <br> [ 1.17975636 -0.53624893]] </td> </tr> </table> ### Congratulations ! Congratulation on completing this assignment. You now understand how convolutional neural networks work. 
You have implemented all the building blocks of a neural network. In the next assignment you will implement a ConvNet using TensorFlow. ``` !tar cvfz notebook.tar.gz * ```
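As an optional sanity check (not part of the original assignment), formulas (1), (2) and (3) can be verified numerically on a tiny, self-contained convolution. The `conv` function below is a stripped-down stand-in for the notebook's `conv_forward` (one example, one filter, stride 1, no padding), introduced here only for illustration:

```python
import numpy as np

def conv(A, W, b):
    """Tiny valid convolution: A is (n, n), W is (f, f), b is a scalar."""
    f = W.shape[0]
    n_out = A.shape[0] - f + 1
    Z = np.zeros((n_out, n_out))
    for h in range(n_out):
        for w in range(n_out):
            Z[h, w] = np.sum(A[h:h+f, w:w+f] * W) + b
    return Z

def conv_backward_tiny(dZ, A, W):
    """Backward pass for the tiny conv above, using formulas (1)-(3)."""
    f = W.shape[0]
    dA, dW, db = np.zeros_like(A), np.zeros_like(W), 0.0
    for h in range(dZ.shape[0]):
        for w in range(dZ.shape[1]):
            dA[h:h+f, w:w+f] += W * dZ[h, w]    # formula (1)
            dW += A[h:h+f, w:w+f] * dZ[h, w]    # formula (2)
            db += dZ[h, w]                      # formula (3)
    return dA, dW, db

np.random.seed(0)
A, W, b = np.random.randn(4, 4), np.random.randn(2, 2), 0.5
# Use the cost J = sum(Z), so dZ is a matrix of ones.
dZ = np.ones_like(conv(A, W, b))
dA, dW, db = conv_backward_tiny(dZ, A, W)

# Compare dW[0, 0] against a centered finite difference of J.
eps = 1e-6
Wp, Wm = W.copy(), W.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
numeric = (conv(A, Wp, b).sum() - conv(A, Wm, b).sum()) / (2 * eps)
print(abs(dW[0, 0] - numeric) < 1e-5)  # True if the formulas match
```

Because the cost here is linear in each weight, the finite difference agrees with the analytic gradient to machine precision; the same check can be repeated for any entry of `dW` or `dA`.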
# Data Science - Unit 3 *By: Débora Azevedo, Eliseu Jayro, Francisco de Paiva and Igor Brandão* ### Goals The goal of this project is to explore the [UFRN datasets](http://dados.ufrn.br/group/despesas-e-orcamento) containing information about material requisitions, maintenance requisitions and spending commitments (empenhos), in the context of the [budget cuts](https://g1.globo.com/educacao/noticia/rio-grande-do-norte-veja-a-evolucao-do-orcamento-repassado-pelo-mec-as-duas-universidades-federais-do-estado.ghtml) that UFRN has recently suffered due to the financial crisis. According to our group's research, sources say the cuts hit [mainly outsourced services](https://g1.globo.com/educacao/noticia/90-das-universidades-federais-tiveram-perda-real-no-orcamento-em-cinco-anos-verba-nacional-encolheu-28.ghtml) such as cleaning, maintenance and security, as well as benefits for low-income students, since these are [not mandatory expenses](https://g1.globo.com/educacao/noticia/salario-de-professores-das-universidades-federais-e-despesa-obrigatoria-mas-auxilio-estudantil-nao-entenda-a-diferenca.ghtml), unlike the payment of retirement benefits, pensions and active staff. However, in an [interview](http://www.tribunadonorte.com.br/noticia/na-s-vamos-receber-o-ma-nimo-diz-reitora-da-ufrn/399980), the current rector said the most affected sector would be construction works and their management, which may be more reliable information, given that until 2017 the entire budget went directly to the federal universities, which therefore decided how all spending was done. This changed in 2018, when the Ministry of Education adopted a new methodology that further restricts spending to the "matriz Andifes", so that 50% of the budget came to be managed by the ministry itself; comparing the 2018 budget with previous years is therefore no longer possible. 
<hr> # 0 - Importing the libraries Here we use *pip* to install the libraries needed to run this notebook: - pandas - numpy - matplotlib - wordcloud ``` !pip install pandas !pip install numpy !pip install matplotlib !pip install wordcloud ``` # 1 - Reading the datasets In this section we import the datasets containing information about maintenance requisitions, material/service requisitions and spending commitments (empenhos), all available on the UFRN open data site. In the cell below we define a list with the files we will need, read all of them and store them in a dictionary. ``` import pandas as pd from os import path # List with the file names of all datasets we will use dataset_names = ['requisicaomanutencao.csv', 'requisicaomaterialservico.csv', 'empenhos.csv'] # Folder where the datasets are located dataset_path = 'datasets' # Dictionary where they will be stored data = {} # Loop that iterates over all defined names and stores the loaded data in the dictionary for name in dataset_names: data[name[:-4]] = pd.read_csv(path.join(dataset_path, name), sep=';', low_memory=False) # Showing 'requisicaomanutencao.csv' data['requisicaomanutencao'] # Showing 'requisicaomaterialservico.csv' data['requisicaomaterialservico'] # Showing 'empenhos.csv' data['empenhos'] ``` # 2 - Exploring and cleaning the datasets In this section we analyse the different columns of the datasets to identify their meanings and their usefulness for the problems we will study. Having done that, we clean the datasets so that they become more readable and easier to handle. ## 2.1 - Maintenance requisitions This dataset lists all maintenance requisitions at UFRN since 2005. Note that we will only analyse data from 2008 to 2017, which are the years for which we have UFRN's total budget. 
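As a minimal, self-contained illustration of this year-range restriction (using a made-up frame, not the real dataset):

```python
import pandas as pd

# Toy stand-in for the maintenance requisitions dataset.
toy = pd.DataFrame({'ano': [2005, 2008, 2012, 2017, 2018],
                    'divisao': ['Viário', 'Outros', 'Viário', 'Outros', 'Viário']})

# Keep only the years for which the total budget is known (2008-2017, inclusive).
toy_period = toy[toy['ano'].between(2008, 2017)]
print(toy_period['ano'].tolist())  # [2008, 2012, 2017]
```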
``` maintenance_data = data['requisicaomanutencao'] print(maintenance_data.head()) print(maintenance_data.divisao.unique()) ``` ### 2.11 - Describing the columns and values Looking at the output of the cell above, we can draw the following conclusions about the columns: - <span style="color:red"><b>numero</b></span>: Requisition ID, not relevant to the problem. - **ano**: Year in which the maintenance requisition was made. - **divisao**: The division for which the maintenance was requested; takes the following values: 'Serviços Gerais', 'Instalações Elétricas e Telecomunicações', 'Instalações Hidráulicas e Sanitárias', 'Viário', 'Ar condicionado', 'Outros'. - **id_unidade_requisitante**: ID of the unit that made the requisition. - **nome_unidade_requisitante**: Name of the unit that made the requisition. - **id_unidade_custo**: ID of the unit the cost will be charged to (may be the same as the requesting unit). - **nome_unidade_custo**: Name of the unit the cost will be charged to (may be the same as the requesting unit). - **data_cadastro**: Date the requisition was registered. - **descricao**: Description of the requisition, usually a justification for that maintenance. - **local**: Exact location where the maintenance will be done; can be a room, laboratory etc. - <span style="color:red"><b>usuario</b></span>: User who requested the maintenance. Probably not useful for our problem. - **status**: Current status of the requisition. Can help in the cost analysis by considering only approved requisitions and comparing the proportion of approved and denied requisitions per sector. ### 2.12 - Removing unnecessary columns - <span style="color:red"><b>numero</b></span>: It is just the requisition ID. - <span style="color:red"><b>usuario</b></span>: We do not need to know the user for our analysis. ``` def remove_cols(df_input, dropped_columns): ''' This function receives a dataframe and a list of column names as input. 
It checks whether each column exists and, if so, removes it. ''' for dropped_column in dropped_columns: if dropped_column in df_input: df_input = df_input.drop([dropped_column], axis=1) return df_input maintenance_dropped = ['numero', 'usuario'] maintenance_data = remove_cols(maintenance_data, maintenance_dropped) maintenance_data.head() ``` ### 2.13 - Removing outliers and unnecessary values Here we analyse the values in our dataset and decide which ones we can remove or modify to make our analysis easier. ``` print(maintenance_data.status.value_counts()) ``` **Note:** Checking the statuses, we can see that most values occur a very small number of times and we do not need them for our analysis, so we will drop the values with 800 occurrences or fewer. ``` maintenance_data = maintenance_data.groupby('status').filter(lambda x: len(x) > 800) maintenance_data.status.value_counts() ``` **Note:** This leaves 5 possible values for the **status** column. However, for our cost analysis we only need to know whether a requisition was denied or authorised. Looking at the remaining statuses, we can treat every requisition whose value is different from denied as authorised. ``` def convert_status(status_val): '''Converts the value of all strings in the status column to AUTORIZADA, unless their value is NEGADA.''' if status_val == 'NEGADA': return status_val else: return 'AUTORIZADA' maintenance_data['status'] = maintenance_data['status'].apply(convert_status) maintenance_data.status.value_counts() maintenance_data.info() print(maintenance_data.divisao.value_counts()) print(maintenance_data.nome_unidade_custo.value_counts()) ``` ### 2.14 - Handling null values Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset have null values. 
Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values. ``` maintenance_data.info() maintenance_data.divisao.value_counts() ``` **Note:** Using the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method we see that there are many **NULL** values in the *local* column and a few in the *divisao* column. For the *local* column, we fill the **null** rows with their *nome_unidade_custo* values. For the *divisao* column, we fill with the value 'Outros', one of the most common ones. ``` import numpy as np maintenance_data['local'] = np.where(maintenance_data.local.isnull(), maintenance_data.nome_unidade_custo, maintenance_data.local) maintenance_data['divisao'] = maintenance_data['divisao'].fillna('Outros') maintenance_data.info() # Final result of the cleaning maintenance_data.head() ``` ## 2.2 - Material and service requisitions This dataset lists all material and service requisitions contracted by UFRN since 2008. ``` material_request_data = data['requisicaomaterialservico'] print('===== Primeiras linhas =====') print(material_request_data.head()) print('===== Contagem de valores de natureza_despesa =====') print(material_request_data.natureza_despesa.value_counts()) print('===== Contagem de valores de status =====') print(material_request_data.status.value_counts()) ``` ### 2.21 - Describing the columns and values Looking at the output of the cell above, we can draw the following conclusions about the columns: - <span style="color:red"><b>numero</b></span>: Requisition ID, not relevant. - **ano**: Year in which the requisition was made. - **id_unidade_requisitante**: ID of the unit that made the requisition; every unit has a unique ID. - **nome_unidade_requisitante**: Name of the unit that made the requisition. 
- **id_unidade_custo**: ID of the unit the costs will be charged to; may differ from the requesting unit. - **nome_unidade_custo**: Name of the unit the costs will be charged to; may differ from the requesting unit. - **data_envio**: Date the requisition was sent. - <span style="color:red"><b>numero_contrato</b></span>: Requisitions are apparently made through contracts; this is the contract number. - **contratado**: Company hired to supply the material. - <span style="color:red"><b>natureza_despesa</b></span>: In every row analysed, this column has the value 'SERV. PESSOA JURÍDICA'. - **valor**: Amount requested by the requisition. - **observacoes**: Comment made by the person who created the requisition, explaining its purpose. - **status**: Current status of the requisition; it is directly tied to the spending commitment and can take the following values: 'ENVIADA', 'PENDENTE ATENDIMENTO', 'CADASTRADA', 'ESTORNADA', 'LIQUIDADA', 'PENDENTE AUTORIZAÇÃO', 'FINALIZADA', 'EM_LIQUIDACAO', 'NEGADA', 'A_EMPENHAR', 'EMPENHO_ANULADO', 'AUTORIZADA', 'CANCELADA\n'. ### 2.22 - Removing unnecessary columns The following columns will be dropped: - <span style="color:red"><b>numero</b></span>: It is just the requisition ID and is not needed. - <span style="color:red"><b>numero_contrato</b></span>: Unnecessary information for the analysis. - <span style="color:red"><b>natureza_despesa</b></span>: Has the same value in every row. ``` material_dropped = ['numero', 'natureza_despesa', 'numero_contrato'] material_request_data = remove_cols(material_request_data, material_dropped) print(material_request_data.head()) ``` ### 2.23 - Removing outliers and unnecessary values Here we analyse the data in our dataset and decide which values we can remove or modify to make our analysis easier. 
``` print(material_request_data.status.value_counts()) ``` **Note:** Checking the value counts of the *status* column, we see that a large share of the possible values have very few occurrences in the dataset. These rare values have little influence on our analysis, so we will drop them. ``` allowed_status = ['LIQUIDADA', 'EM_LIQUIDACAO', 'ENVIADA', 'ESTORNADA', 'FINALIZADA', 'CADASTRADA'] material_request_data = material_request_data[material_request_data.status.isin(allowed_status)] print(material_request_data.status.value_counts()) ``` ### 2.24 - Handling null values Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset have null values. Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values. ``` material_request_data.info() material_request_data[material_request_data.data_envio.isnull()].head(n=20) ``` - **data_envio**: Has several null values. Since most of them are well separated from one another and the dataset is ordered by date, we can fill them with the value of this column in previous rows (forward fill). - **observacoes**: Some observations also have null values; we simply set these to an empty string. ``` material_request_data.data_envio = material_request_data.data_envio.fillna(method='ffill') material_request_data.observacoes = material_request_data.observacoes.fillna('') material_request_data.info() ``` ## 2.3 - Spending commitments (empenhos) Dataset containing all spending commitments (empenhos) made by UFRN since 2001. Committing an expense means deducting from the balance of a given budget allocation the portion needed to carry out the agency's activities. It is the way budget resources are committed. No expense may be carried out without a prior commitment (art. 
60 of Law No. 4.320/64), and it is made after authorisation by the Expense Authoriser of each Executing Management Unit. ``` empenhos_data = data['empenhos'] print(empenhos_data.head()) print(empenhos_data.data.value_counts()) ``` ### 2.31 - Describing the columns and values - <span style="color:red"><b>cod_empenho</b></span>: Commitment ID; - **ano**: Year in which the commitment was requested; - **modalidade**: A commitment can take three different forms: - a) Ordinary: the expense has an exact value and must be settled and paid in a single instalment; - b) Estimated: the total value of the expense is estimated and may be settled and paid in monthly instalments; - c) Global: the total expense is known and its payment is split into instalments according to an execution schedule. - **id_unidade_getora**: ID of the budgetary or administrative unit empowered to manage budget credits and/or financial resources; - **nome_unidade_gestora**: Name of the budgetary or administrative unit empowered to manage budget credits and/or financial resources; - **data**: Date the commitment was made; - **programa_trabalho_resumido**: Summary of the programme/work the commitment is intended for; - **fonte_recurso**: Where the resources used in the commitment come from; - **plano_interno**: Plan associated with an agency's budget; - **esfera**: Can take the following values: 'FISCAL', 'SEGURIDADE', 'INVESTIMENTO', 'CUSTEIO'; - **natureza_despesa**: The type of work the commitment was made for. We can check expenses for software development; among the values of this column we have: 'MAT. CONSUMO', 'SERV. PESSOA JURÍDICA', 'EQUIP. MATERIAL PERMANENTE', 'OBRAS E INSTALAÇÕES', 'PASSAGENS', 'SERVIÇOS DE TECNOLOGIA DA INFORMAÇÃO E COMUNICAÇÃO', 'DESENVOLVIMENTO DE SOFTWARE', 'DIV.EXERCÍCIOS ANTERIORES', 'SERV. PESSOA FÍSICA', 'LOC. MÃO-DE-OBRA', 'SERVIÇOS / UG-GESTÃO' etc. 
- **creador**: The beneficiary of the commitment; - **valor_empenho**: Total value of the commitment; - **valor_reforcado**: A commitment may be reinforced when the committed amount is insufficient to cover the expense to be carried out; conversely, if the committed amount exceeds the actual expense, the commitment must be partially cancelled. It is cancelled in full when the object of the contract was not fulfilled, or when the commitment was issued incorrectly. This column is therefore an amount added on top of the initial value; - **valor_cancelado**: Amount of the commitment that was cancelled relative to the total; - **valor_anulado**: Similar to the cancelled amount, but it must annul the entirety of valor_empenho or valor_reforcado; - **saldo_empenho**: Final value of the commitment; - <span style="color:red"><b>processo</b></span>: Commitment process number. DROP; - <span style="color:red"><b>documento_associado</b></span>: Document associated with the process. DROP; - <span style="color:red"><b>licitacao</b></span>: DROP; - <span style="color:red"><b>convenio</b></span>: DROP (?), perhaps JOIN with another dataset; - <span style="color:red"><b>observacoes</b></span>: DROP. ### 2.32 - Removing unnecessary columns We will remove the following columns: - <span style="color:red"><b>cod_empenho</b></span>: It is just the commitment ID and is not needed. - <span style="color:red"><b>processo</b></span>: Adds no relevant information to the study. - <span style="color:red"><b>documento_associado</b></span>: Adds no relevant information to the study. - <span style="color:red"><b>licitacao</b></span>: Adds no relevant information to the study. - <span style="color:red"><b>convenio</b></span>: Adds no relevant information to the study. - <span style="color:red"><b>observacoes</b></span>: Adds no relevant information to the study. We can also see several columns with null or repeated values, which will be investigated in more depth in a later section. 
``` empenhos_dropped = ['cod_empenho', 'processo', 'documento_associado', 'licitacao', 'convenio', 'observacoes'] empenhos_data = remove_cols(empenhos_data, empenhos_dropped) print(empenhos_data.head()) ``` ### 2.33 - Removing outliers and unnecessary values The commitments dataset gives us values from 2001 to 2018, but we are working with data from 2008 to 2017, so we can remove every row whose **ano** column is below 2008 or above 2017. ``` # Defining a vector with the years we'll analyse years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017] empenhos_data = empenhos_data[empenhos_data.ano.isin(years)] ``` ### 2.34 - Handling null values Here we use the [pandas.DataFrame.info](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.info.html) method to check which columns of our dataset have null values. Based on that, depending on how many columns have null values and on the data type, we decide what to do with those values. ``` empenhos_data.info() empenhos_data[empenhos_data.valor_anulado.notnull()].head() ``` **Note**: The **valor_anulado**, **valor_reforcado** and **valor_cancelado** columns all have a very small number of non-null values. Since the **valor_empenho** and **saldo_empenho** columns are fully populated, we do not need the others for our analysis and can drop them. ``` valores_drop = ['valor_reforcado', 'valor_anulado', 'valor_cancelado'] empenhos_data = remove_cols(empenhos_data, valores_drop) empenhos_data.head() ``` # 3 - Visualising the data In this section we use the *matplotlib* library to plot charts in order to visualise our data. ## 3.1 - UFRN's budget In our analysis, we use the total amount transferred by the federal government to UFRN from 2008 to 2017 to compare the university's investments across those years. 
We will look for possible correlations between budget variations and the areas possibly affected by them. ``` import matplotlib.pyplot as plt %matplotlib inline years = [2008, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017] budget = [62010293, 136021308, 203664331, 172999177, 221801098, 246858171, 228864259, 207579799, 230855480, 186863902] # Plot of UFRN's budget from 2008 to 2017; we can see it fell every year since 2013, except for 2016. budget_scaled = [value / 1000000 for value in budget] plt.rcParams['figure.figsize'] = (11, 7) plt.plot(years, budget_scaled, 'r') plt.scatter(years, budget_scaled, color='green') plt.xlabel("Ano") plt.ylabel("Orçamento (em milhões de reais)") plt.xticks(years) plt.show() ``` ## 3.2 - Maintenance requisitions This dataset has no cost values, so we only analyse the number of requisitions per year and their *status*, *divisao* and *descricao*. ``` autorized_count_year = [] denied_count_year = [] for year in years: status_count = maintenance_data[maintenance_data.ano == year].status.value_counts() autorized_count_year.append(status_count['AUTORIZADA']) denied_count_year.append(status_count['NEGADA']) import datetime from matplotlib.dates import date2num bar_width = 0.2 # Shifts each year by bar_width to make sure bars are drawn some space apart from each other years_shifted_left = [year - bar_width for year in years] years_shifted_right = [year + bar_width for year in years] ax = plt.subplot(111) ax.bar(years_shifted_left, autorized_count_year, width=bar_width, color='g', align='center') ax.bar(years, denied_count_year, width=bar_width, color='r', align='center') legends = ['Autorizadas', 'Negadas'] plt.legend(legends) plt.ylabel("Quantidade") plt.xlabel("Ano") plt.xticks(years) plt.title("Manutenções autorizadas x negadas de 2008 a 2017") plt.show() divisao_year_count = [] # Keeps all unique values for 'divisao' column. 
divisao_values = maintenance_data.divisao.unique() for year in years: maintenance_data_year = maintenance_data[maintenance_data.ano == year] divisao_year_count.append(maintenance_data_year.divisao.value_counts()) # If a key doesn't exist in the count, we add it. for possible_value in divisao_values: for year_count in divisao_year_count: if possible_value not in year_count.index: year_count[possible_value] = 0 bar_width = 0.15 # Shifts each year by bar_width to make sure bars are drawn some space apart from each other ax = plt.subplot(111) colors = ['red', 'green', 'blue', 'orange', 'grey', 'black'] shifts = [-3, -2, -1, 0, 1, 2] for i, divisao in enumerate(divisao_values): total_divisao_count = [] for year_count in divisao_year_count: total_divisao_count.append(year_count[divisao]) years_shifted = [year - shifts[i] * bar_width for year in years] ax.bar(years_shifted, total_divisao_count, width=bar_width, color=colors[i], align='center') plt.legend(divisao_values) plt.ylabel("Quantidade") plt.xlabel("Ano") plt.xticks(years) plt.title("Proporção dos tipos de manutenção de 2008 a 2017.") plt.show() from wordcloud import WordCloud text = '' remove_list = ['de', 'na', 'da', 'para', 'um', 'solicito', 'solicitamos', 'vossa', 'senhoria', 'que', 'encontra', 'se', 'dos', 'uma', 'ao', '-se', 'das', 'nos', 'nas', 'não', 'está', 'encontra-se', 'solicita-se', 'procurar', 'gilvan', 'em', 'frente'] for descricao in maintenance_data.descricao: word_list = descricao.split() descricao = ' '.join([i for i in word_list if i.lower() not in remove_list]) text += descricao + '\n' wordcloud = WordCloud().generate(text) plt.figure() plt.imshow(wordcloud, interpolation="bilinear") plt.axis("off") plt.show() ``` ## 3.3 - Material requisitions ``` # Considering that the budget started to shrink in 2013, there were still spending peaks on materials in 2013 and 2016, but # there were also sharp drops in 2015 and 2017, precisely the two years with the biggest budget cuts, # 
indicando que a UFRN pode ter sofrido pelo corte de gastos. material_spending = [] for year in years: material_spending.append(material_request_data[material_request_data.ano == year].valor.sum() / 1000000) plt.plot(years, material_spending, 'r') plt.scatter(years, material_spending, color='green') plt.xlabel("Ano") plt.ylabel("Gasto com material (em milhões de reais)") plt.xticks(years) plt.title("Valor gasto com material na UFRN de 2008 a 2017.") plt.show() ``` ## 3.4 - Empenhos ``` valor_year = [] saldo_year = [] for year in years: valor_year.append(empenhos_data[empenhos_data.ano == year].valor_empenho.sum() / 1000000) saldo_year.append(empenhos_data[empenhos_data.ano == year].saldo_empenho.sum() / 1000000) plt.plot(years, valor_year, 'r', label='Valor pedido') plt.scatter(years, valor_year, color='blue') plt.title("Valor total pedido pelos empenhos da UFRN de 2006 a 2017.") plt.xlabel('Ano') plt.ylabel('Valor total (milhões)') plt.xticks(years) plt.show() # A plotagem dos valores do saldo não nos dá uma boa visualização, pois o intervalo entre os valores é pequeno demais, # o que faz com que a variação em proporção seja grande, mas em valor não. plt.plot(years, saldo_year, 'g') plt.scatter(years, saldo_year, color='blue') plt.title("Valor total empenhado pela UFRN de 2006 a 2017.") plt.xlabel('Ano') plt.ylabel('Saldo (milhões)') plt.xticks(years) plt.show() # O gráfico de barras nos dá uma visualização melhor. Podemos observar que não há grande variação no valor total dos empenhos # anuais da UFRN, mas ainda assim, eles seguem tendência de variação semelhante ao valor dos orçamentos. 
plt.bar(years, saldo_year)
plt.title("Balance authorized by UFRN commitments from 2006 to 2017.")
plt.xlabel("Year")
plt.ylabel("Spending (millions of BRL)")
plt.xticks(years)
plt.show()

bar_width = 0.2  # Shift each year by bar_width so the bars are drawn with some space apart from each other
years_shifted_left = [year - bar_width for year in years]
years_shifted_right = [year + bar_width for year in years]

ax = plt.subplot(111)
ax.bar(years_shifted_left, valor_year, width=bar_width, color='g', align='center')
ax.bar(years_shifted_right, saldo_year, width=bar_width, color='b', align='center')
ax.bar(years, budget_scaled, width=bar_width, color='r', align='center')

legends = ['Requested amount', 'Committed amount', 'Total budget']
plt.legend(legends)
plt.ylabel("Amount (millions)")
plt.xlabel("Year")
plt.xticks(years)
plt.title("Requested vs. committed vs. total budget")
plt.show()
```
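The bar-shifting trick used in the cells above (offsetting each series by a multiple of `bar_width` so grouped bars don't overlap) is reusable on its own. A minimal, self-contained sketch with made-up labels and counts:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = np.array([2015, 2016, 2017])
series = {"A": [3, 5, 2], "B": [4, 1, 6]}  # made-up counts for two series
bar_width = 0.2

fig, ax = plt.subplots()
for i, (label, values) in enumerate(series.items()):
    # center the cluster of bars on each year tick
    offset = (i - (len(series) - 1) / 2) * bar_width
    ax.bar(years + offset, values, width=bar_width, label=label)
ax.set_xticks(years)
ax.legend()
fig.savefig("grouped_bars.png")
```

With two series the offsets come out to -0.1 and +0.1, so each year's bars sit side by side around the tick.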
# Deconvolution Tutorial

## Introduction

There are several problems with the standard initialization performed in the [Quickstart Guide](../0-quickstart.ipynb):

1. The models exist in a frame with a narrow model PSF while the observed scene will have a much wider PSF. So the initial models will be spread out over a larger region, which causes more blending and an increased number of iterations for convergence.
1. The initial morphologies for `ExtendedSource`s and `MultibandSource`s are determined using a combined "detection coadd," which weights each observed image with the SED at the center of each source. Due to different seeing in each band, this results in artificial color gradients in the detection coadd that produce a less accurate initial model.

One way to solve these problems is to deconvolve the observations into the model frame, where the PSF is the same in each band, resulting in more accurate initial morphologies and colors. This is not a trivial task, as deconvolution of a noisy image is an ill-defined operation and numerical divergences dominate the matching kernel when matching a wider PSF to a narrower PSF in Fourier space. To avoid the numerical instability of deconvolution kernels created in k-space, we instead use scarlet itself to model the kernel and deconvolve the image.

There is a computational cost to this procedure, and creating the deconvolution kernel for use with a single blend is not advisable, as the cost to generate it is greater than the time saved. However, there are some situations where the following procedure is quite useful, including deblending a large number of blends from survey data where the PSF is well-behaved. For example, we have experimented with HSC data and found that if we calculate the deconvolution kernel at the center of a 4k$\times$4k patch, we can use the result to deconvolve _all_ of the blends from the same coadd.
This is possible because the deconvolution doesn't have to be exact; we just require it to be better for _initialization_ than the observed images.

```
# Import Packages and setup
from functools import partial

import numpy as np

import scarlet
import scarlet.display as display

%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# use a good colormap and don't interpolate the pixels
matplotlib.rc('image', cmap='inferno', interpolation='none', origin='lower')
```

## Load and Display Data

We load the same example data set used in the quickstart guide.

```
# Load the sample images
data = np.load("../../data/hsc_cosmos_35.npz")
images = data["images"]
filters = data["filters"]
catalog = data["catalog"]
weights = 1/data["variance"]

# Note that unlike in the quickstart guide,
# we set psfs to the data["psfs"] image,
# not a scarlet.PSF object.
psfs = data["psfs"]
```

## Generate the PSF models

Unlike the [Quickstart Guide](../0-quickstart.ipynb), we cannot use the pixel-integrated model PSF because the [error function](https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.special.erf.html) in scipy used to integrate the gaussian goes to zero too quickly to match an observed PSF. So instead we use a gaussian with a similar $\sigma=1/\sqrt{2}$ for our model. We then make this the _observed_ PSF, since this is the seeing that we want to deconvolve our observed images into.

```
py, px = np.array(psfs.shape[1:])//2
model_psf = scarlet.psf.gaussian(py, px, 1/np.sqrt(2), bbox=scarlet.Box(psfs.shape), integrate=False)[0]
model_psf = model_psf/model_psf.sum()
model_psf = np.array([model_psf]*psfs.shape[0])

model_frame = scarlet.Frame(psfs.shape, channels=filters)
psf_observation = scarlet.PsfObservation(model_psf, channels=filters).match(psfs)
```

## Matching the PSFs

### Algorithm

To understand how the matching algorithm works it is useful to understand how convolutions are performed in scarlet.
We can define the observed PSF $P$ as the convolution of the model PSF $M$ with a difference kernel $D$, giving us $P = M * D$, where $*$ is the convolution operator. The difference kernel is calculated in k-space using the ratio $\tilde{P}/\tilde{M}$, which is well defined as long as $P$ is wider than $M$ in real space. Then the `Observation.render` method is used to convolve the model with $D$ to match it with the observed seeing.

For deconvolution we require the opposite, namely $M = P * D$. As mentioned in the [Introduction](#Introduction), this is numerically unstable because in k-space the ratio $\tilde{M}/\tilde{P}$ diverges in the wings, since $\tilde{P}$ is narrower than $\tilde{M}$.

Modeling the deconvolution kernel with scarlet is possible because of the commutativity of the convolution operation, where $M = D * P$. In this case we can define $M$ as the observation we seek to match, make $D$ the model we want to fit, and then convolve the model ($D$) with $P$ in each iteration to match the "data." In this way we can fit the deconvolution kernel needed to deconvolve from the observation seeing to the model frame.

## An implementation

Choosing the correct parameters for PSF matching is a bit of a black art in itself, another reason why deconvolution should only be done when deblending large datasets and the payoff is greater than the cost. For 41$\times$41 pixel HSC PSFs we've found the following initialization script to work well; however, the configuration for your observations may differ substantially.

We introduce the `PSFDiffKernel` class, which acts like a scarlet `Component` used to model the scene, except that here there is a "source" for each band, since we want our deconvolution kernels to be mono-chromatic.

```
# Parameters used to initialize and configure the fit.
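# Aside: a tiny numpy illustration of the k-space difference kernel D defined
# by P = M * D, using toy gaussian PSFs. This is not part of the scarlet fit
# configured in this cell; it only demonstrates why the ratio of transforms is
# well behaved when P is wider than M.
import numpy as np  # already imported above; repeated so this aside stands alone

x = np.arange(-20, 21)
X, Y = np.meshgrid(x, x)

def toy_gaussian(sigma):
    g = np.exp(-0.5 * (X ** 2 + Y ** 2) / sigma ** 2)
    return g / g.sum()

M_psf = toy_gaussian(1.0)  # narrow model PSF
P_psf = toy_gaussian(3.0)  # wider observed PSF

# D_tilde = P_tilde / M_tilde: no divergence because P is wider than M in real space
D_kernel = np.real(np.fft.ifft2(np.fft.fft2(P_psf) / np.fft.fft2(M_psf)))

# Convolving M with D (circularly, via the FFT) recovers P
assert np.allclose(np.real(np.fft.ifft2(np.fft.fft2(M_psf) * np.fft.fft2(D_kernel))), P_psf)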
max_iter = 300
e_rel = 1e-5
morph_step = 1e-2

# We should be able to improve our initial guess if we model the
# width of the observed PSF and calculate an analytic solution
# for the deconvolution kernel; however, for now just using the
# observed PSF works well.
init_guess = psfs.copy()

psf_kernels = [
    scarlet.PSFDiffKernel(model_frame, init_guess, band, morph_step)
    for band in range(len(filters))
]

psf_blend = scarlet.Blend(psf_kernels, psf_observation)
%time psf_blend.fit(max_iter, e_rel=e_rel)

plt.plot(psf_blend.loss, ".-")
plt.title("$\Delta$loss: {:.3e}, e_rel:{:.3e}".format(
    psf_blend.loss[-2] - psf_blend.loss[-1],
    (psf_blend.loss[-2] - psf_blend.loss[-1]) / np.abs(psf_blend.loss[-1])))
plt.show()

for band, src in enumerate(psf_blend.sources):
    residual = psfs[band] - psf_observation.render(psf_blend.get_model())[band]
    print("{}: chi^2={:.3f}, max(abs)={:.3f}".format(
        filters[band], np.sum(residual**2), np.max(np.abs(residual))))
    fig, ax = plt.subplots(1, 2, figsize=(7, 3))
    ax[0].imshow(src.get_model()[band], cmap="Greys_r")
    ax[0].set_title("{} band kernel".format(filters[band]))
    vmax = np.max(np.abs(residual))
    im = ax[1].imshow(residual, vmin=-vmax, vmax=vmax, cmap="seismic")
    ax[1].set_title("residual")
    plt.colorbar(im, ax=ax[1])
    plt.show()
```

The residual is created by convolving the observed PSF with the deconvolution kernel and comparing it to the model PSF. We see that the kernel isn't perfect and that it tends to overshoot the center of the model PSF, but the result is good enough to improve our initialization.

One thing that we've noticed is that if we set our relative error too low, then the ringing in the wings of bright objects is too large, while running for too long makes the images crisper at the cost of amplifying the noise to the point where it isn't useful for faint (and even moderately faint) sources.

We now create the frame for our model, using an analytic PSF, and an observation for the deconvolved image.
This is a `DeconvolvedObservation` class, which sets the deconvolution kernel.

```
# This is the frame for our model
model_psf = scarlet.PSF(partial(scarlet.psf.gaussian, sigma=1/np.sqrt(2)), shape=(None, 11, 11))
model_frame = scarlet.Frame(
    images.shape,
    psfs=model_psf,
    channels=filters)

# This object will perform the deconvolution
deconvolved = scarlet.DeconvolvedObservation(
    images,
    psfs=model_psf,
    weights=weights,
    channels=filters).match(model_frame, psf_blend.get_model())

# These are the observations that we want to model
observation = scarlet.Observation(
    images,
    psfs=scarlet.PSF(psfs),
    weights=weights,
    channels=filters).match(model_frame)
```

Let's take a look at the result:

```
model = deconvolved.images

fig, ax = plt.subplots(1, 2, figsize=(15, 7))

norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)
rgb = display.img_to_rgb(images, norm=norm)
ax[0].imshow(rgb)
ax[0].set_title("Observed")
for center in catalog:
    ax[0].plot(center[1], center[0], "wx")

norm = display.AsinhMapping(minimum=np.min(model), stretch=np.max(model)*0.055, Q=10)
rgb = display.img_to_rgb(model, norm=norm)
ax[1].imshow(rgb)
ax[1].set_title("Deconvolved")
for center in catalog:
    ax[1].plot(center[1], center[0], "wx")
plt.show()
```

In this case the result isn't great due to the bright star at the center. We could try to fit the model a bit better to suppress the ringing, but it turns out this is usually unnecessary and not worth the extra computation time. To see how this is useful, let's take a look at the detection coadds for the first few (brightest) sources with and without deconvolution. These detection coadds are built internally for all extended and multiband sources, but it's a useful exercise to build them separately just to take a look at them.
The red x's in the plots below mark the location of the source whose SED was used to make that particular detection coadd:

```
# We just define a rough estimate of the background RMS needed
# for `build_detection_coadd`.
bg_rms = np.zeros((len(images),))
bg_rms[:] = 1e-3

for center in catalog[:4]:
    center = (center[1], center[0])
    figure, ax = plt.subplots(1, 2, figsize=(10, 5))

    # Build the deconvolved coadd
    sed = scarlet.source.get_psf_sed(center, deconvolved, model_frame)
    detect, bg_cutoff = scarlet.source.build_detection_coadd(sed, bg_rms, deconvolved)
    # display
    ax[1].imshow(np.log10(detect), cmap="Greys_r")
    ax[1].plot(center[1], center[0], "rx")
    ax[1].set_title("deconvolved detection coadd")

    # Build the coadd without deconvolution
    sed = scarlet.source.get_psf_sed(center, observation, model_frame)
    detect, bg_cutoff = scarlet.source.build_detection_coadd(sed, bg_rms, observation)
    # display
    ax[0].imshow(np.log10(detect), cmap="Greys_r")
    ax[0].plot(center[1], center[0], "rx")
    ax[0].set_title("detection coadd")
    plt.show()
```

We see that the ringing in the PSF doesn't really matter, as it's at the same amplitude as the noise, and our initial requirement of monotonicity will trim the model to the inner region that doesn't ring, achieving our goal of making the initial models compact and allowing them to grow if necessary.
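As a rough picture of what an SED-weighted detection coadd computes, here is a toy numpy sketch on synthetic data. The weighting scheme below (SED over background variance) is an illustration of the idea, not the exact formula inside `scarlet.source.build_detection_coadd`:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands, size = 3, 21
yy, xx = np.mgrid[:size, :size] - size // 2
morphology = np.exp(-0.5 * (yy ** 2 + xx ** 2))   # one source shape...
sed = np.array([1.0, 2.0, 0.5])                   # ...scaled by a per-band SED
images = sed[:, None, None] * morphology + rng.normal(0, 0.01, (n_bands, size, size))

# Weight each band by its SED over the background variance, then average:
# bands where the source is bright relative to the noise dominate the coadd.
bg_rms = np.full(n_bands, 0.01)
weights = sed / bg_rms ** 2
detect = np.einsum("b,byx->yx", weights, images) / weights.sum()
print(np.unravel_index(detect.argmax(), detect.shape))  # → (10, 10): peak at the center
```

Because every band shares the same morphology here, the weighted sum reinforces the source while averaging down the per-band noise; with band-dependent seeing (the situation described above) the same weighting produces the color-gradient artifacts that deconvolution mitigates.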
So next we'll initialize our sources using both the deconvolved and original observations and compare them:

```
# Build the sources without deconvolution
sources = []
for k, src in enumerate(catalog):
    if k == 1:
        new_source = scarlet.MultiComponentSource(model_frame, (src['y'], src['x']), observation)
    else:
        new_source = scarlet.ExtendedSource(model_frame, (src['y'], src['x']), observation)
    sources.append(new_source)

# Build the deconvolved sources
deconvolved_sources = []
for k, src in enumerate(catalog):
    if k == 1:
        new_source = scarlet.MultiComponentSource(model_frame, (src['y'], src['x']), deconvolved)
    else:
        new_source = scarlet.ExtendedSource(model_frame, (src['y'], src['x']), deconvolved)
    deconvolved_sources.append(new_source)

norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)

display.show_sources(sources[:4], norm=norm, observation=observation, show_rendered=True, show_observed=True)
plt.show()

display.show_sources(deconvolved_sources[:3], norm=norm, observation=observation, show_rendered=True, show_observed=True)
plt.show()
```

Notice that the deconvolved initial models use much smaller boxes while still capturing all of the features in the true observations.
The better initial guess and smaller boxes will make it much faster to deblend:

```
# Fit the non-deconvolved blend
blend = scarlet.Blend(sources, observation)
%time blend.fit(200)
print("scarlet ran for {0} iterations to logL = {1}".format(len(blend.loss), -blend.loss[-1]))
plt.plot(-np.array(blend.loss))
plt.title("Regular initialization")
plt.xlabel('Iteration')
plt.ylabel('log-Likelihood')
plt.show()

# Fit the deconvolved blend
deconvolved_blend = scarlet.Blend(deconvolved_sources, observation)
%time deconvolved_blend.fit(200)
print("scarlet ran for {0} iterations to logL = {1}".format(len(deconvolved_blend.loss), -deconvolved_blend.loss[-1]))
plt.plot(-np.array(deconvolved_blend.loss))
plt.title("Deconvolved initialization")
plt.xlabel('Iteration')
plt.ylabel('log-Likelihood')
plt.show()
```

So we see that using the deconvolved images for initialization cut our runtime in half for this particular blend (this difference might not be as pronounced in the notebook environment because the default initialization is executed first, heating up the processors before the second blend is run). Looking at the residuals we see that the final models are comparable, so when the same kernel can be used on multiple blends this method proves to be quite useful.

```
norm = display.AsinhMapping(minimum=np.min(images), stretch=np.max(images)*0.055, Q=10)

# Display the convolved model
scarlet.display.show_scene(blend.sources, norm=norm, observation=observation,
                           show_rendered=True, show_observed=True, show_residual=True)
plt.show()

# Display the deconvolved model
scarlet.display.show_scene(deconvolved_blend.sources, norm=norm, observation=observation,
                           show_rendered=True, show_observed=True, show_residual=True)
plt.show()
```
# Fast GP implementations

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from matplotlib import rcParams
rcParams["figure.dpi"] = 100
rcParams["figure.figsize"] = 12, 4
```

## Benchmarking GP codes

Implemented the right way, GPs can be super fast! Let's compare the time it takes to evaluate our GP likelihood and the time it takes to evaluate the likelihood computed with the snazzy ``george`` and ``celerite`` packages. We'll learn how to use both along the way.

Let's create a large, fake dataset for these tests:

```
import numpy as np

np.random.seed(0)
t = np.linspace(0, 10, 10000)
y = np.random.randn(10000)
sigma = np.ones(10000)
```

### Our GP

```
def ExpSquaredCovariance(t, A=1.0, l=1.0, tprime=None):
    """
    Return the ``N x M`` exponential squared covariance matrix.

    """
    if tprime is None:
        tprime = t
    TPrime, T = np.meshgrid(tprime, t)
    return A ** 2 * np.exp(-0.5 * (T - TPrime) ** 2 / l ** 2)


def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
    """
    Return the log of the GP likelihood for a dataset y(t)
    with uncertainties sigma, modeled with a Squared Exponential
    Kernel with amplitude A and lengthscale l.

    """
    # The covariance and its determinant
    npts = len(t)
    K = ExpSquaredCovariance(t, A=A, l=l) + sigma ** 2 * np.eye(npts)

    # The log marginal likelihood
    log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
    log_like -= 0.5 * np.linalg.slogdet(K)[1]
    log_like -= 0.5 * npts * np.log(2 * np.pi)

    return log_like
```

Time to evaluate the GP likelihood:

```
%%time
ln_gp_likelihood(t, y, sigma)
```

### george

Let's time how long it takes to do the same operation using the ``george`` package (``pip install george``).

The kernel we'll use is

```python
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
```

where ``amp = 1`` and ``tau = 1`` in this case.
To instantiate a GP using ``george``, simply run

```python
gp = george.GP(kernel)
```

The ``george`` package pre-computes a lot of matrices that are re-used in different operations, so before anything else, we'll ask it to compute the GP model for our timeseries:

```python
gp.compute(t, sigma)
```

Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!

Finally, the log likelihood is given by ``gp.log_likelihood(y)``.

How do the speeds compare? Did you get the same value of the likelihood?

```
import george
```

```
%%time
kernel = george.kernels.ExpSquaredKernel(1.0)
gp = george.GP(kernel)
gp.compute(t, sigma)
```

```
%%time
print(gp.log_likelihood(y))
```

``george`` also offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Let's instantiate the GP object again by passing the keyword ``solver=george.HODLRSolver`` and re-compute the log likelihood.

How long did that take? Did we get the same value for the log likelihood?

```
%%time
gp = george.GP(kernel, solver=george.HODLRSolver)
gp.compute(t, sigma)
```

```
%%time
gp.log_likelihood(y)
```

### celerite

The ``george`` package is super useful for GP modeling, and I recommend you read over the [docs and examples](https://george.readthedocs.io/en/latest/). It implements several different [kernels](https://george.readthedocs.io/en/latest/user/kernels/) that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then ``celerite`` is what it's all about:

```bash
pip install celerite
```

Check out the [docs](https://celerite.readthedocs.io/en/stable/) here, as well as several tutorials. There is also a [paper](https://arxiv.org/abs/1703.09710) that discusses the math behind ``celerite``.
The basic idea is that for certain families of kernels, there exist **extremely efficient** methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, ``celerite`` is able to do everything in order $N$ (!!!) This is a **huge** advantage, especially for datasets with tens or hundreds of thousands of data points. Using ``george`` or any homebuilt GP model for datasets larger than about ``10,000`` points is simply intractable, but with ``celerite`` you can do it in a breeze.

Next we repeat the timing tests, but this time using ``celerite``. Note that the Exponential Squared Kernel is not available in ``celerite``, because it doesn't have the special form needed to make its factorization fast. Instead, we'll use the ``Matern 3/2`` kernel, which is qualitatively similar and can be approximated quite well in terms of the ``celerite`` basis functions:

```python
kernel = celerite.terms.Matern32Term(np.log(1), np.log(1))
```

Note that ``celerite`` accepts the **log** of the amplitude and the **log** of the timescale. Other than this, we can compute the likelihood using the same syntax as ``george``.

How much faster did it run? Is the value of the likelihood different from what you found above? Why?

```
import celerite
from celerite import terms
```

```
%%time
kernel = terms.Matern32Term(np.log(1), np.log(1))
gp = celerite.GP(kernel)
gp.compute(t, sigma)
```

```
%%time
gp.log_likelihood(y)
```

<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise (the one and only)</h1>
</div>

Let's use what we've learned about GPs in a real application: fitting an exoplanet transit model in the presence of correlated noise.
Here is a (fictitious) light curve for a star with a transiting planet:

```
import matplotlib.pyplot as plt

t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");
```

There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. **Your task is to verify this claim.**

Assume you have no prior information on the planet other than: the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.

Fit the transit with a simple inverted Gaussian with three free parameters:

```python
def transit_shape(depth, t0, dur):
    return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
```

*HINT: I borrowed heavily from [this tutorial](https://celerite.readthedocs.io/en/stable/tutorials/modeling/) in the celerite documentation, so you might want to take a look at it...*
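One way to structure the fit is to treat the transit as the mean model and let a GP absorb the correlated noise. The sketch below uses the plain numpy GP from the top of this notebook for clarity (for the real dataset ``celerite`` would be the faster choice); the synthetic data and all parameter values are placeholders, and `t` is passed explicitly rather than taken from the global scope:

```python
import numpy as np

def transit_shape(t, depth, t0, dur):
    # inverted-Gaussian transit, as specified in the exercise
    return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)

def ln_likelihood(params, t, y, yerr):
    # transit mean model + squared-exponential GP noise model
    depth, t0, dur, A, l = params
    resid = y - transit_shape(t, depth, t0, dur)
    K = A ** 2 * np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / l ** 2)
    K[np.diag_indices_from(K)] += yerr ** 2
    ll = -0.5 * resid @ np.linalg.solve(K, resid)
    ll -= 0.5 * np.linalg.slogdet(K)[1]
    ll -= 0.5 * len(t) * np.log(2 * np.pi)
    return ll

# Sanity check on synthetic data: including the injected transit in the mean
# model should beat the no-transit model.
rng = np.random.RandomState(0)
t = np.linspace(-1, 1, 200)
yerr = 1e-3 * np.ones_like(t)
y = transit_shape(t, 0.05, 0.0, 0.5) + yerr * rng.randn(len(t))
assert ln_likelihood((0.05, 0.0, 0.5, 0.01, 1.0), t, y, yerr) > \
       ln_likelihood((0.0, 0.0, 0.5, 0.01, 1.0), t, y, yerr)
```

From here the exercise is a matter of maximizing (or sampling) this likelihood over the three transit parameters plus the kernel hyperparameters, within the stated priors.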
# Introduction: Prediction Engineering: Labeling Historical Examples

In this notebook, we will develop a method for labeling customer transactions data for a customer churn prediction problem. The objective of labeling is to create a set of historical examples of what we want to predict based on the business need: in this problem, our goal is to predict customer churn, so we want to create labeled examples of past churn from the data.

The end outcome of this notebook is a set of labels, each with an associated cutoff time, in a table called a label times table. These labels with cutoff times can later be used in Featuretools for automated feature engineering. These features in turn will be used to train a predictive model to forecast customer churn, a common need for subscription-based business models, and one for which machine learning is well-suited.

The process of prediction engineering is shown below:

![](../images/prediction_engineering_process.png)

## Definition of Churn: Prediction Problems

The definition of churn is __a customer going without an active membership for a certain number of days.__ The number of days and when to make predictions are left as parameters that can be adjusted based on the particular business need, as are the lead time and the prediction window. In this notebook, we'll make labels for two scenarios:

1. Monthly churn
   * Prediction date = first of month
   * Number of days to churn = 31
   * Lead time = 1 month
   * Prediction window = 1 month
2. Bimonthly churn
   * Prediction date = first and fifteenth of month
   * Number of days to churn = 14
   * Lead time = 2 weeks
   * Prediction window = 2 weeks

The problem parameters with details filled in for the first situation are shown below:

![](../images/churn_definition.png)

### Dataset

The [data (publicly available)](https://www.kaggle.com/c/kkbox-churn-prediction-challenge/data) consists of customer transactions for [KKBOX](https://www.kkbox.com), the leading music subscription streaming service in Asia.
For each customer, we have background information (in `members`), logs of listening behavior (in `logs`), and transactions information (in `trans`). The only data we need for labeling is the _transactions information_.

The transactions data consists of a number of variables, the most important of which are customer id (`msno`), the date of transaction (`transaction_date`), and the expiration date of the membership (`membership_expire_date`). Using these columns, we can find each churn for each customer and the corresponding date on which it occurred.

Let's look at a few typical examples of customer transaction data to illustrate how to find a churn example. For these examples, we will use the first prediction problem.

## Churn Examples

__Example 1:__

```
(transaction_date, membership_expire_date, is_cancel)
(2017-01-01, 2017-02-28, false)
(2017-02-25, 2017-03-15, false)
(2017-04-30, 2017-05-20, false)
```

This customer is a churn because they go without a membership for over 31 days, from 03-15 to 04-30. With a lead time of one month, a prediction window of 1 month, and a prediction date of the first of the month, this churn would be associated with a cutoff time of 2017-03-01.

__Example 2:__

```
(transaction_date, membership_expire_date, is_cancel)
(2017-01-01, 2017-02-28, false)
(2017-02-25, 2017-04-03, false)
(2017-03-15, 2017-03-16, true)
(2017-04-01, 2017-06-30, false)
```

This customer is not a churn. Even though they have a cancelled membership (cancelled on 03-15, taking effect on 03-16), the membership plan is renewed within 31 days.

__Example 3:__

```
(transaction_date, membership_expire_date, is_cancel)
(2017-05-31, 2017-06-30, false)
(2017-07-01, 2017-08-01, false)
(2017-08-01, 2017-09-01, false)
(2017-10-15, 2017-11-15, false)
```

This customer is a churn because they go without a membership for over 31 days, from 09-01 to 10-15. The associated cutoff time of this churn is 2017-09-01.
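Example 3 can be checked mechanically with a few lines of pandas: shift each `transaction_date` back by one row, diff it against the previous row's `membership_expire_date`, and flag gaps longer than 31 days (the dates below are Example 3's, normalized to valid calendar days):

```python
import pandas as pd

# Example 3's transactions
trans = pd.DataFrame({
    "transaction_date": pd.to_datetime(["2017-05-31", "2017-07-01", "2017-08-01", "2017-10-15"]),
    "membership_expire_date": pd.to_datetime(["2017-06-30", "2017-08-01", "2017-09-01", "2017-11-15"]),
}).sort_values("transaction_date")

# Align each expiration with the *next* transaction
trans["next_transaction_date"] = trans["transaction_date"].shift(-1)
gap_days = (trans["next_transaction_date"] - trans["membership_expire_date"]).dt.days
trans["churn"] = gap_days > 31
print(trans["churn"].tolist())  # → [False, False, True, False]
```

The third row is flagged because the 2017-09-01 expiration is followed by a 44-day gap before the 2017-10-15 transaction; the last row's gap is undefined (no next transaction), so it is not flagged.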
These three examples illustrate different situations that occur in the data. Depending on the prediction problem, these may or may not be churns and can be assigned to different cutoff times.

# Approach

Given the data above, to find each example of churn, we need to find the difference between one `membership_expire_date` and the next `transaction_date`. If this period is greater than the days selected for a churn, then this is a positive example of churn. For each churn, we can find the exact date on which it occurred by adding the number of days for a churn to the `membership_expire_date` associated with the churn.

We create a set of cutoff times using the prediction date parameter and then, for each positive label, determine the cutoff time for the churn. As an example, if the churn occurs on 09-15 with a lead time of 1 month and a prediction window of 1 month, then this churn gets the cutoff time 08-01. Cutoff times where the customer was active 1-2 months out (for this problem) will receive a negative label, and cutoff times where we cannot determine whether the customer was active or was a churn will not be labeled.

We can very rapidly label customer transactions by shifting each `transaction_date` back by one and matching it to the previous `membership_expire_date`. We then find the difference in days between these two (`transaction` - `expire`), and if the difference is greater than the number of days established for churn, this is a positive label. Once we have these positive labels, associating them with a cutoff time is straightforward. If this is not clear, we'll shortly see how to do it in code, which should clear things up!

The general framework is implemented in two functions:

1. `label_customer(customer_id, transactions, **params)`
2. `make_label_times(transactions, **params)`

The first takes a single member and returns a table of cutoff times for the member along with the associated labels.
The second goes through all of the customers and applies the `customer_to_label_times` function to each one. The end outcome is a single table consisting of the label times for each customer. Since we already partitioned the data, we can run this function over multiple partitions in parallel to rapidly label all the data.

## Cutoff Times

A critical part of the label times table is the cutoff time associated with each label. The times at which we make predictions are referred to as _cutoff_ times, and they represent when all our data for making features for that particular label must be from before. For instance, if our cutoff time is July 1, and we want to make predictions of churn during the month of August, all of our features for this label must be made with data from before July 1.

Cutoff times are a critical consideration when feature engineering for time-series problems to prevent data leakage. Later, when we go to perform automated feature engineering, Featuretools will automatically filter data based on the cutoff times, so we don't have to worry about invalid training data.

### Outcome

Our overall goal is to build two functions that will generate labels for customers. We can then run these functions over our partitions in parallel (our data has been partitioned into 1000 segments, each containing a random subset of customers). Once the label dataframes with cutoff times have been created, we can use them for automated feature engineering using Featuretools.

```
import numpy as np
import pandas as pd
```

### Data Storage

All of the data is stored and written to AWS S3. The work was completed on AWS EC2 instances, which makes retrieving and writing data to S3 extremely fast. The data is publicly readable from the bucket, but you'll have to configure AWS with your credentials.
* For reading, run `aws configure` from the command line and fill in the details
* For writing with the `s3fs` library, you'll need to provide your credentials as below

The benefits of using S3 are that if we shut off our machines, we don't have to worry about losing any of the data. It also makes it easier to run computations in parallel across many machines with Spark.

```
PARTITION = '100'
BASE_DIR = 's3://customer-churn-spark/'
PARTITION_DIR = BASE_DIR + 'p' + PARTITION

members = pd.read_csv(f'{PARTITION_DIR}/members.csv',
                      parse_dates=['registration_init_time'], infer_datetime_format=True)
trans = pd.read_csv(f'{PARTITION_DIR}/transactions.csv',
                    parse_dates=['transaction_date', 'membership_expire_date'], infer_datetime_format=True)
logs = pd.read_csv(f'{PARTITION_DIR}/logs.csv', parse_dates=['date'])

trans.head()
```

The transactions table is all we will need to make labels. The next cell is needed for writing data back to S3.

```
import s3fs

# Credentials
with open('/data/credentials.txt', 'r') as f:
    info = f.read().strip().split(',')
    key = info[0]
    secret = info[1]

fs = s3fs.S3FileSystem(key=key, secret=secret)
```

# Churn for One Customer

The function below takes in a single customer's transactions along with a number of parameters that define the prediction problem.

* `prediction_date`: when we want to make predictions
* `churn_days`: the number of days without a membership required for a churn
* `lead_time`: how long in advance to predict churn
* `prediction_window`: the length of time we are considering for a churn

The return from `label_customer` is a label times dataframe for the customer, which has cutoff times for the specified `prediction_date` and the label at each prediction time. Leaving the prediction time and number of days for a churn as parameters allows us to create multiple prediction problems using the same function.
``` def label_customer(customer_id, customer_transactions, prediction_date, churn_days, lead_time = 1, prediction_window = 1, return_trans = False): """ Make label times for a single customer. Returns a dataframe of labels with times, the binary label, and the number of days until the next churn. Params -------- customer_id (str): unique id for the customer customer_transactions (dataframe): transactions dataframe for the customer prediction_date (str): time at which predictions are made. Either "MS" for the first of the month or "SMS" for the first and fifteenth of each month churn_days (int): integer number of days without an active membership required for a churn. A churn is defined by exceeding this number of days without an active membership. lead_time (int): number of periods in advance to make predictions for. Defaults to 1 (predictions one period ahead) prediction_window (int): number of periods over which to consider churn. Defaults to 1. return_trans (boolean): whether or not to return the transactions for analysis. Defaults to False. Return -------- label_times (dataframe): a table of customer id, the cutoff times at the specified frequency, the label for each cutoff time, the number of days until the next churn for each cutoff time, and the date on which the churn itself occurred. transactions (dataframe): [optional] dataframe of customer transactions if return_trans = True.
Useful for making sure that the function performed as expected """ assert(prediction_date in ['MS', 'SMS']), "prediction_date must be either 'MS' or 'SMS'" assert(customer_transactions['msno'].unique() == [customer_id]), "Transactions must be for only one customer" # Don't modify original transactions = customer_transactions.copy() # Make sure to sort chronologically transactions.sort_values(['transaction_date', 'membership_expire_date'], inplace = True) # Create next transaction date by shifting back one transaction transactions['next_transaction_date'] = transactions['transaction_date'].shift(-1) # Find number of days between membership expiration and next transaction transactions['difference_days'] = (transactions['next_transaction_date'] - transactions['membership_expire_date']).dt.total_seconds() / (3600 * 24) # Determine which transactions are associated with a churn transactions['churn'] = transactions['difference_days'] > churn_days # Find date of each churn transactions.loc[transactions['churn'] == True, 'churn_date'] = transactions.loc[transactions['churn'] == True, 'membership_expire_date'] + pd.Timedelta(churn_days + 1, 'd') # Range for cutoff times is from first to (last + 1 month) transaction # (pd.Timestamp replaces pd.datetime, which was removed in recent pandas) first_transaction = transactions['transaction_date'].min() last_transaction = transactions['transaction_date'].max() start_date = pd.Timestamp(first_transaction.year, first_transaction.month, 1) # Handle December if last_transaction.month == 12: end_date = pd.Timestamp(last_transaction.year + 1, 1, 1) else: end_date = pd.Timestamp(last_transaction.year, last_transaction.month + 1, 1) # Make label times dataframe with cutoff times corresponding to prediction date label_times = pd.DataFrame({'cutoff_time': pd.date_range(start_date, end_date, freq = prediction_date), 'msno': customer_id }) # Use the lead time and prediction window parameters to establish the prediction window # Prediction window is for each cutoff time label_times['prediction_window_start'] = label_times['cutoff_time'].shift(-lead_time) label_times['prediction_window_end'] = label_times['cutoff_time'].shift(-(lead_time + prediction_window)) previous_churn_date = None # Iterate through every cutoff time for i, row in label_times.iterrows(): # Default values if unknown churn_date = pd.NaT label = np.nan # Find the window start and end window_start = row['prediction_window_start'] window_end = row['prediction_window_end'] # Determine if there were any churns during the prediction window churns = transactions.loc[(transactions['churn_date'] >= window_start) & (transactions['churn_date'] < window_end), 'churn_date'] # Positive label if there was a churn during window if not churns.empty: label = 1 churn_date = churns.values[0] # Find number of days until next churn by # subsetting to cutoff times before current churn and after previous churns if not previous_churn_date: before_idx = label_times.loc[(label_times['cutoff_time'] <= churn_date)].index else: before_idx = label_times.loc[(label_times['cutoff_time'] <= churn_date) & (label_times['cutoff_time'] > previous_churn_date)].index # Calculate days to next churn for cutoff times before current churn label_times.loc[before_idx, 'days_to_churn'] = (churn_date - label_times.loc[before_idx, 'cutoff_time']).dt.total_seconds() / (3600 * 24) previous_churn_date = churn_date # No churns, but need to determine if an active member else: # Find transactions before the end of the window that were not cancelled transactions_before = transactions.loc[(transactions['transaction_date'] < window_end) & (transactions['is_cancel'] == False)].copy() # If the membership expiration date for this membership is after the window start, the customer has not churned if np.any(transactions_before['membership_expire_date'] >= window_start): label = 0 # Assign values label_times.loc[i, 'label'] = label label_times.loc[i, 'churn_date'] = churn_date # Handle case with no churns if not np.any(label_times['label'] == 1):
label_times['days_to_churn'] = np.nan label_times['churn_date'] = pd.NaT if return_trans: return label_times.drop(columns = ['msno']), transactions return label_times[['msno', 'cutoff_time', 'label', 'days_to_churn', 'churn_date']].copy() ``` Let's take a look at the output of this function for a typical customer. We'll take the use case of making predictions on the first of each month with 31 days required for a churn, a lead time of 1 month, and a prediction window of 1 month. ``` CUSTOMER_ID = trans.iloc[8, 0] customer_transactions = trans.loc[trans['msno'] == CUSTOMER_ID].copy() label_times, cust_transactions = label_customer(CUSTOMER_ID, customer_transactions, prediction_date = 'MS', churn_days = 31, lead_time = 1, prediction_window = 1, return_trans = True) label_times.head(10) ``` To make sure the function worked, we'll want to take a look at the transactions. ``` cust_transactions.iloc[3:10, -7:] ``` We see that the churn occurred on 2016-03-16, as the customer went 98 days without an active membership, from 2016-02-14 to 2016-05-22. The actual churn occurs 31 days after the membership expires. The churn is only associated with one cutoff time, 2016-02-01. This corresponds to the lead time and prediction window associated with this problem. Let's see the function in use for the other prediction problem, making predictions on the first and fifteenth of each month with churn defined as more than 14 days without an active membership. The lead time is set to two weeks (one prediction period) and the prediction window is also set to two weeks. To change the prediction problem, all we need to do is alter the parameters.
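The gap computation at the heart of `label_customer` can be illustrated on a tiny hand-made transactions table (the dates below are invented to mirror the example just discussed):

```python
import pandas as pd

# Hypothetical transactions for one customer
toy = pd.DataFrame({
    'transaction_date': pd.to_datetime(['2015-12-14', '2016-01-14', '2016-05-22']),
    'membership_expire_date': pd.to_datetime(['2016-01-14', '2016-02-14', '2016-06-22']),
})

# Line up each membership expiration with the customer's next purchase
toy['next_transaction_date'] = toy['transaction_date'].shift(-1)

# Days between a membership expiring and the next transaction
toy['difference_days'] = (toy['next_transaction_date']
                          - toy['membership_expire_date']).dt.days

# A gap longer than churn_days marks a churn
churn_days = 31
toy['churn'] = toy['difference_days'] > churn_days
print(toy[['difference_days', 'churn']])
```

The 98-day gap between the 2016-02-14 expiration and the 2016-05-22 transaction is flagged as a churn, while the last row stays `False` because there is no next transaction to compare against.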
``` CUSTOMER_ID = trans.iloc[100, 0] customer_transactions = trans.loc[trans['msno'] == CUSTOMER_ID].copy() label_times, cust_transactions = label_customer(CUSTOMER_ID, customer_transactions, prediction_date = 'SMS', churn_days = 14, lead_time = 1, prediction_window = 1, return_trans = True) label_times.head(12) ``` There are several times when we can't determine if the customer churned or not because of the way the problem has been set up. ``` cust_transactions.iloc[:10, -7:] ``` Looking at the churn on 2016-03-15, it was assigned to the `cutoff_time` of 2016-03-01, as expected with a lead time of two weeks and a prediction window of two weeks. (A churn that falls exactly on the boundary between two prediction windows is assigned to the window at whose start it occurs. This can be quickly changed by altering the logic of the function.) The function works as designed: we can pass in different parameters and rapidly define new prediction problems. We also have the number of days to the churn, which means we could formulate the problem as regression instead of classification. # Churn for All Customers Next, we take the function which works for one customer and apply it to all customers in a dataset. This requires a loop through the customers, grouping the transactions by customer and applying `label_customer` to each customer's transactions. ``` def make_label_times(transactions, prediction_date, churn_days, lead_time = 1, prediction_window = 1): """ Make labels for an entire series of transactions. Params -------- transactions (dataframe): table of customer transactions prediction_date (str): time at which predictions are made. Either "MS" for the first of the month or "SMS" for the first and fifteenth of each month churn_days (int): integer number of days without an active membership required for a churn. A churn is defined by exceeding this number of days without an active membership.
lead_time (int): number of periods in advance to make predictions for. Defaults to 1 (predictions one period ahead) prediction_window (int): number of periods over which to consider churn. Defaults to 1. Return -------- label_times (dataframe): a table with customer ids, cutoff times, binary label, regression label, and date of churn. This table can then be used for feature engineering. """ label_times = [] transactions = transactions.sort_values(['msno', 'transaction_date']) # Iterate through each customer and find labels for customer_id, customer_transactions in transactions.groupby('msno'): lt_cust = label_customer(customer_id, customer_transactions, prediction_date, churn_days, lead_time, prediction_window) label_times.append(lt_cust) # Concatenate into a single dataframe return pd.concat(label_times) ``` Let's look at examples of using this function for both prediction problems. ## First Prediction Problem The definition of the first prediction problem is as follows: * Monthly churn * Prediction date = first of month * Number of days to churn = 31 * Lead time = 1 month * Prediction window = 1 month ``` label_times = make_label_times(trans, prediction_date = 'MS', churn_days = 31, lead_time = 1, prediction_window = 1) label_times.tail(10) label_times.shape label_times['label'].value_counts() import matplotlib.pyplot as plt %matplotlib inline plt.style.use('fivethirtyeight') label_times['label'].value_counts().plot.bar(color = 'r'); plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Label Distribution with Monthly Predictions'); ``` This is an imbalanced classification problem. There are far more instances of customers not churning than of customers churning. This is not necessarily an issue as long as we are smart about the choices of metrics we use for modeling. ## Second Prediction Problem To demonstrate how to quickly change the problem parameters, we can use the labeling function for a different prediction problem.
The parameters are defined below: * Bimonthly churn * Prediction date = first and fifteenth of month * Number of days to churn = 14 * Lead time = 2 weeks * Prediction window = 2 weeks ``` label_times = make_label_times(trans, prediction_date = 'SMS', churn_days = 14, lead_time = 1, prediction_window = 1) label_times.tail(10) label_times.shape label_times['label'].value_counts().plot.bar(color = 'r'); plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Label Distribution with Bimonthly Predictions'); label_times['label'].isnull().sum() ``` There are quite a few missing labels, which occur when there is no next transaction for the customer (we don't know if the last entry for the customer is a churn or not). We won't be able to use these examples when training a model although we can make predictions for them. # Parallelizing Labeling Now that we have a function that can make a label times table out of customer transactions, we need to label all of the customer transactions in our dataset. We already broke the data into 1000 partitions, so we can parallelize this operation using Spark with PySpark. The basic idea is to write a function that makes the label times for one partition, and then run this in parallel across all the partitions using either multiple cores on a single machine, or a cluster of machines. The function below takes in a partition number, reads the transactions data from S3, creates the label times table for both prediction problems, and writes the label times back to S3. We can run this function in parallel over multiple partitions at once since the customers are independent of one another. That is, the labels for one customer do not depend on the data for any other customer. 
``` def partition_to_labels(partition_number, prediction_dates = ['MS', 'SMS'], churn_periods = [31, 14], lead_times = [1, 1], prediction_windows = [1, 1]): """Make labels for all customers in one partition, either for one month or twice a month Params -------- partition_number (int): number of the partition prediction_dates (list of str): either 'MS' for monthly labels or 'SMS' for bimonthly labels churn_periods (list of int): number of days with no active membership to be considered a churn lead_times (list of int): lead times in number of periods prediction_windows (list of int): prediction windows in number of periods Returns -------- None: saves the label dataframes with the appropriate name to the partition directory """ partition_dir = BASE_DIR + 'p' + str(partition_number) # Read in data and filter anomalies trans = pd.read_csv(f'{partition_dir}/transactions.csv', parse_dates=['transaction_date', 'membership_expire_date'], infer_datetime_format = True) # Deal with data inconsistencies rev = trans[(trans['membership_expire_date'] < trans['transaction_date']) | ((trans['is_cancel'] == 0) & (trans['membership_expire_date'] == trans['transaction_date']))] rev_members = rev['msno'].unique() # Remove data errors trans = trans.loc[~trans['msno'].isin(rev_members)] # Create both sets of labels for prediction_date, churn_days, lead_time, prediction_window in zip(prediction_dates, churn_periods, lead_times, prediction_windows): cutoff_list = [] # Make label times for all customers cutoff_list.append(make_label_times(trans, prediction_date = prediction_date, churn_days = churn_days, lead_time = lead_time, prediction_window = prediction_window)) # Turn into a dataframe cutoff_times = pd.concat(cutoff_list) cutoff_times = cutoff_times.drop_duplicates(subset = ['msno', 'cutoff_time']) # Encode in order to write to S3 bytes_to_write = cutoff_times.to_csv(None, index = False).encode() # Write cutoff times to S3 with fs.open(f'{partition_dir}/{prediction_date}-{churn_days}_labels.csv', 'wb') 
as f: f.write(bytes_to_write) partition_to_labels(1, prediction_dates = ['MS'], churn_periods = [31], lead_times = [1], prediction_windows = [1]) label_times = pd.read_csv('s3://customer-churn-spark/p1/MS-31_labels.csv') label_times.tail(10) partition_to_labels(1, prediction_dates = ['SMS'], churn_periods = [14], lead_times = [1], prediction_windows = [1]) label_times = pd.read_csv('s3://customer-churn-spark/p1/SMS-14_labels.csv') label_times.head(10) ``` ## Spark for Parallelization The code below uses Spark to parallelize the label making. This particular implementation uses a single machine, although the same idea can be extended to a cluster of machines. ``` import findspark findspark.init('/usr/local/spark/') import pyspark conf = pyspark.SparkConf() # Enable logging conf.set('spark.eventLog.enabled', True); conf.set('spark.eventLog.dir', '/data/churn/tmp/'); # Use all cores on a single machine conf.set('spark.num.executors', 1) conf.set('spark.executor.memory', '56g') conf.set('spark.executor.cores', 15) # Make sure to specify correct spark master ip sc = pyspark.SparkContext(master = 'spark://ip-172-31-23-133.ec2.internal:7077', appName = 'labeling', conf = conf) sc from timeit import default_timer as timer # Parallelize making all labels in Spark start = timer() sc.parallelize(list(range(1000)), numSlices=1000).map(partition_to_labels).collect() sc.stop() end = timer() ``` While Spark is running, you can navigate to localhost:4040 to see the details of the particular job, or to localhost:8080 to see the overview of the cluster. This is useful for diagnosing the state of a Spark job. ``` print(f'{round(end - start)} seconds elapsed.') labels = pd.read_csv(f's3://customer-churn-spark/p980/MS-31_labels.csv') labels.tail(10) labels = pd.read_csv(f's3://customer-churn-spark/p980/SMS-14_labels.csv') labels.tail(10) ``` # Conclusions In this notebook, we implemented prediction engineering for the customer churn use case.
After defining the business need, we translated it into a task that can be solved with machine learning and created a set of label times. We saw how to define functions with parameters so we could solve multiple prediction problems without needing to rewrite all of the code. Although we only worked through two problems, there are numerous others that could be solved with the same data and approach. The label times contain cutoff times for a specific prediction problem along with the associated label. The label times can now be used to make features for each label by filtering the data to before the cutoff time. This filtering ensures that any features we make are valid, and Featuretools takes care of it automatically. The general procedure for making labels is: 1. Define the business requirement: predict customers who will churn during a specified period of time 2. Translate the business requirement into a machine learning problem: given historical customer data, build a model to predict which customers will churn depending on several parameters 3. Make labels along with cutoff times corresponding to the machine learning problem: develop functions that take in parameters so the same function can be used for multiple prediction problems. 4. Label all past historical data: parallelize operations by partitioning data into independent subsets This approach can be extended to other problems. Although the exact syntax is specific to this use case, the overall approach is designed to be general purpose. ## Next Steps With a complete set of label times, we can now make features for each label using the cutoff times to ensure our features are valid. However, instead of the painstaking and error-prone process of making features by hand, we can use automated feature engineering in [Featuretools](https://github.com/Featuretools/featuretools) to automate this process.
Featuretools will build hundreds of relevant features using only a few lines of code and will automatically filter the data to ensure that all of our features are valid. The feature engineering pipeline is developed in the `Feature Engineering` notebook.
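The cutoff-time filtering described above can be sketched in a few lines of pandas; the column names mirror this notebook's logs table, but the values are invented for illustration:

```python
import pandas as pd

# Hypothetical user logs and one label's cutoff time
logs = pd.DataFrame({
    'msno': ['A', 'A', 'A', 'A'],
    'date': pd.to_datetime(['2016-01-05', '2016-01-20', '2016-02-03', '2016-02-18']),
    'num_unq': [10, 12, 8, 15],
})
cutoff_time = pd.Timestamp('2016-02-01')

# Only data from strictly before the cutoff may be used to build features
# for this label; this is the filtering Featuretools performs automatically
valid = logs[logs['date'] < cutoff_time]
print(valid)
```

Here only the two January rows survive the filter, so any feature built from `valid` cannot leak information from after the cutoff.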
## Reference DataCamp course ## Course Description * A typical organization loses an estimated 5% of its yearly revenue to fraud. * Apply supervised learning algorithms to detect fraudulent behavior similar to past cases, as well as unsupervised learning methods to discover new types of fraud activities. * Deal with highly imbalanced datasets. * The course provides a mix of technical and theoretical insights and shows you hands-on how to practically implement fraud detection models. * Tips and advice from real-life experience to help you avoid common mistakes in fraud analytics. * Examples of fraud: insurance fraud, credit card fraud, identity theft, money laundering, tax evasion, product warranty, healthcare fraud. ## Introduction and preparing your data * Typical challenges associated with fraud detection. * Resample your data in a smart way, to tackle problems with imbalanced data. ### Checking the fraud to non-fraud ratio * Fraud occurrences are fortunately an extreme minority in these transactions. * However, Machine Learning algorithms usually work best when the different classes contained in the dataset are more or less equally present. If there are few cases of fraud, then there's little data to learn how to identify them. This is known as **class imbalance** (or skewed class), and it's one of the main challenges of fraud detection. ``` import pandas as pd df = pd.read_csv("creditcard_sampledata_3.csv") # This is different from the data in the course, but it will be corrected in the following cells. occ = df['Class'].value_counts() # good for counting categorical data print(occ) print(occ / len(df.index)) ``` ### Plotting your data Visualize the fraud to non-fraud ratio. ``` import matplotlib.pyplot as plt import pandas as pd df = pd.read_csv("creditcard_sampledata_3.csv") #print(df.columns) #It is not df.colnames.
df = df.drop(['Unnamed: 0'],axis = 1) # print(df.head()) y=df['Class'].values X=df.drop(['Class'],axis = 1).values def plot_data(X, y): plt.scatter(X[y == 0, 0], X[y == 0, 1], label="Class #0", alpha=0.5, linewidth=0.15) plt.scatter(X[y == 1, 0], X[y == 1, 1], label="Class #1", alpha=0.5, linewidth=0.15, c='r') plt.legend() return plt.show() # X, y = prep_data(df) #original code plot_data(X, y) len(X[y==0,0]) ``` ### Applying SMOTE * Re-balance the data using the Synthetic Minority Over-sampling Technique (SMOTE). * Unlike random oversampling (ROS), SMOTE does not create exact copies of observations, but creates new, synthetic samples that are quite similar to the existing observations in the minority class. * Visualize the result and compare it to the original data, such that we can see the effect of applying SMOTE very clearly. ``` import matplotlib.pyplot as plt import pandas as pd from imblearn.over_sampling import SMOTE df = pd.read_csv("creditcard_sampledata_3.csv") #print(df.columns) #It is not df.colnames. df = df.drop(['Unnamed: 0'],axis = 1) # print(df.head()) y=df['Class'].values X=df.drop(['Class'],axis = 1).values #my code above method = SMOTE() # older imblearn versions used SMOTE(kind='regular') X_resampled, y_resampled = method.fit_resample(X, y) # fit_sample in older imblearn plot_data(X_resampled, y_resampled) print(X.shape) print(y.shape) ``` ### Compare SMOTE to original data * Compare the results of SMOTE to the original data, to get a good feeling for what has actually happened. * Have a look at the value counts again of our old and new data, and let's plot the two scatter plots of the data side by side. * Use the function compare_plot() (not defined here), which takes the following arguments: X, y, X_resampled, y_resampled, method=''. The function plots the original data in a scatter plot, along with the resampled data side by side. ``` print(pd.value_counts(pd.Series(y))) print(pd.value_counts(pd.Series(y_resampled))) compare_plot(X, y, X_resampled, y_resampled, method='SMOTE') # This function is not defined here.
# The resulting picture is shown below; compare_plot could be implemented with matplotlib subplots, plotting the original and resampled data side by side. ``` ![1.png](attachment:1.png) ### Exploring the traditional way to catch fraud * Try finding fraud cases in our credit card dataset the "old way". First you'll define threshold values using common statistics, to split fraud and non-fraud. Then, use those thresholds on your features to detect fraud. This is common practice within fraud analytics teams. * Statistical thresholds are often determined by looking at the mean values of observations. * Check whether feature means differ between fraud and non-fraud cases. Then, use that information to create common sense thresholds. ``` import numpy as np import matplotlib.pyplot as plt import pandas as pd from imblearn.over_sampling import SMOTE df = pd.read_csv("creditcard_sampledata_3.csv") #print(df.columns) #It is not df.colnames. df = df.drop(['Unnamed: 0'],axis = 1) #print(df.head()) y=df['Class'].values X=df.drop(['Class'],axis = 1).values #my code above # Run a groupby command on our labels and obtain the mean for each feature df.groupby('Class').mean() # Implement a rule for stating which cases are flagged as fraud df['flag_as_fraud'] = np.where(np.logical_and(df['V1'] < -3, df['V3'] < -5), 1, 0) # Create a crosstab of flagged fraud cases versus the actual fraud cases print(pd.crosstab(df.Class, df.flag_as_fraud, rownames=['Actual Fraud'], colnames=['Flagged Fraud'])) ``` Not bad: with this rule, we detect 22 out of 50 fraud cases, but can't detect the other 28, and get 16 false positives. In the next exercise, we'll see how this measures up to a machine learning model. ### Using ML classification to catch fraud * Use a simple machine learning model on our credit card data instead. * Implement a Logistic Regression model.
``` from sklearn.model_selection import train_test_split from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix from sklearn.metrics import classification_report X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) # Fit a logistic regression model to our data model = LogisticRegression() model.fit(X_train, y_train) # Obtain model predictions predicted = model.predict(X_test) # Print the classification report and confusion matrix print('Classification report:\n', classification_report(y_test, predicted)) conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted) print('Confusion matrix:\n', conf_mat) ``` * We are getting far fewer false positives, so that's an improvement. * We're catching a higher percentage of fraud cases, so that is also better than before. ### Logistic regression combined with SMOTE ``` # This is the pipeline module we need for this from imblearn from imblearn.pipeline import Pipeline from imblearn.over_sampling import BorderlineSMOTE # Define which resampling method and which ML model to use in the pipeline resampling = BorderlineSMOTE(kind='borderline-2') # older imblearn used SMOTE(kind='borderline2') model = LogisticRegression() # Define the pipeline, tell it to combine SMOTE with the Logistic Regression model pipeline = Pipeline([('SMOTE', resampling), ('Logistic Regression', model)]) ``` ### Using a pipeline Treat the pipeline as if it were a single machine learning model. Our data X and y are already defined, and the pipeline is defined in the previous exercise.
``` # Split your data X and y into a training and a test set X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) # Fit your pipeline onto your training set and obtain predictions on the test data pipeline.fit(X_train, y_train) predicted = pipeline.predict(X_test) # Obtain the results from the classification report and confusion matrix print('Classification report:\n', classification_report(y_test, predicted)) conf_mat = confusion_matrix(y_true=y_test, y_pred=predicted) print('Confusion matrix:\n', conf_mat) ``` * The SMOTE slightly improves our results. We now manage to find all cases of fraud, but we have a slightly higher number of false positives, albeit only 7 cases. * Remember, resampling doesn't necessarily lead to better results in all cases. **When the fraud cases are very spread and scattered over the data, using SMOTE can introduce a bit of bias.** Nearest neighbors aren't necessarily also fraud cases, so the synthetic samples might 'confuse' the model slightly. * In the next chapters, we'll learn how to also adjust our machine learning models to better detect the minority fraud cases. ## Fraud detection using labelled data * Flag fraudulent transactions with supervised learning. * Use classifiers, adjust them and compare them to find the most efficient fraud detection model. ### Natural hit rate * Explore how prevalent fraud is in the dataset, to understand what the "natural accuracy" is, if we were to predict everything as non-fraud. * It is important to understand which level of "accuracy" you need to "beat" in order to get a better prediction than by doing nothing. * Create a random forest classifier for fraud detection. That will serve as the "baseline" model that you're going to try to improve in the upcoming exercises.
``` import matplotlib.pyplot as plt import pandas as pd from imblearn.over_sampling import SMOTE df = pd.read_csv("creditcard_sampledata_2.csv") #print(df.columns) #It is not df.colnames. df = df.drop(['Unnamed: 0'],axis = 1) # print(df.head()) y=df['Class'].values X=df.drop(['Class'],axis = 1).values #extra code above # Count the total number of observations from the length of y total_obs = len(y) # Count the total number of non-fraudulent observations non_fraud = [i for i in y if i == 0] count_non_fraud = non_fraud.count(0) # Calculate the percentage of non fraud observations in the dataset percentage = (float(count_non_fraud)/float(total_obs)) * 100 # Print the percentage: this is our "natural accuracy" by doing nothing print(percentage) ``` This tells us that by doing nothing, we would be correct in 95.9% of the cases. So if our model's accuracy falls below this number, it adds no value over simply predicting everything as non-fraud. ### Random Forest Classifier - part 1 ``` print(X.shape) print(y.shape) from sklearn.ensemble import RandomForestClassifier X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0) model = RandomForestClassifier(random_state=5) ``` ### Random Forest Classifier - part 2 See how our Random Forest model performs without doing anything special to it. ``` from sklearn.metrics import accuracy_score model.fit(X_train, y_train) predicted = model.predict(X_test) print(accuracy_score(y_test, predicted)) ``` ### Performance metrics for the RF model * In the previous exercises you obtained an accuracy score for your random forest model. This time, we know accuracy can be misleading in the case of fraud detection. * With highly imbalanced fraud data, the AUROC curve is a more reliable performance metric, used to compare different classifiers.
Moreover, the classification report tells you about the precision and recall of your model, whilst the confusion matrix actually shows how many fraud cases you can predict correctly. So let's get these performance metrics. * Continue working on the same random forest model from the previous exercise. The model, defined as model = RandomForestClassifier(random_state=5), has been fitted to the training data already, and X_train, y_train, X_test, y_test are available. ``` from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score predicted = model.predict(X_test) probs = model.predict_proba(X_test) # Print the ROC-AUC score, classification report and confusion matrix print(roc_auc_score(y_test, probs[:,1])) print(classification_report(y_test, predicted)) print(confusion_matrix(y_test, predicted)) ``` You have now obtained more meaningful performance metrics that tell us how well the model performs, given the highly imbalanced data that you're working with. The model predicts 76 cases of fraud, out of which 73 are actual fraud. You have only 3 false positives. This is really good, and as a result you have a very high precision score. You do, however, miss 18 cases of actual fraud. Recall is therefore not as good as precision. Let's try to improve that in the following exercises. ### Plotting the Precision Recall Curve * Plot a Precision-Recall curve, to investigate the trade-off between the two in your model. In this curve Precision and Recall are inversely related; as Precision increases, Recall falls and vice-versa. A balance between these two needs to be achieved in your model, otherwise you might end up with many false positives, or not enough actual fraud cases caught. To achieve this and to compare performance, the precision-recall curves come in handy. * The Random Forest Classifier is available as model, and the predictions as predicted. You can simply obtain the average precision score and the PR curve from the sklearn package.
* The function plot_pr_curve() plots the results. ``` from sklearn.metrics import average_precision_score, precision_recall_curve # Calculate average precision average_precision = average_precision_score(y_test, predicted) # Obtain precision and recall precision, recall, _ = precision_recall_curve(y_test, predicted) # Plot the precision-recall tradeoff plot_pr_curve(recall, precision, average_precision) # This function is unavailable. ``` ![2.png](attachment:2.png) ### Model adjustments * A simple way to adjust the random forest model to deal with highly imbalanced fraud data is to use the **class_weight option** when defining your sklearn model. However, as you will see, it is a bit of a blunt force mechanism and might not work for your very special case. * Explore the class_weight = "balanced_subsample" mode of the Random Forest model from the earlier exercise. ``` model = RandomForestClassifier(class_weight='balanced_subsample', random_state=5) model.fit(X_train, y_train) # Obtain the predicted values and probabilities from the model predicted = model.predict(X_test) probs = model.predict_proba(X_test) print(roc_auc_score(y_test, probs[:,1])) print(classification_report(y_test, predicted)) print(confusion_matrix(y_test, predicted)) ``` * The model results don't improve drastically. We now have 3 fewer false positives, but 19 instead of 18 false negatives, i.e. cases of fraud we are not catching. If we mostly care about catching fraud, and not so much about the false positives, this does not actually improve our model at all, albeit being a simple option to try. * In the next exercises we will see how to more smartly tweak your model to focus on reducing false negatives and catch more fraud.
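Since `plot_pr_curve` is unavailable in the course materials, a minimal sketch of such a helper (its name and signature assumed from the call above, exercised here on invented toy labels and scores) might look like:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so this runs without a display
import matplotlib.pyplot as plt
from sklearn.metrics import average_precision_score, precision_recall_curve

def plot_pr_curve(recall, precision, average_precision):
    """Step plot of the precision-recall trade-off."""
    plt.step(recall, precision, where='post')
    plt.fill_between(recall, precision, step='post', alpha=0.2)
    plt.xlabel('Recall')
    plt.ylabel('Precision')
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.05])
    plt.title(f'Precision-Recall curve: AP={average_precision:0.2f}')
    plt.show()

# Toy labels and scores to exercise the helper
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]
average_precision = average_precision_score(y_true, y_scores)
precision, recall, _ = precision_recall_curve(y_true, y_scores)
plot_pr_curve(recall, precision, average_precision)
```

Passing the classifier's predicted probabilities (rather than hard 0/1 predictions) to these metrics generally gives a more informative curve.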
### Adjusting your Random Forest to fraud detection

* Explore the options for the random forest classifier, as we'll assign weights and tweak the shape of the decision trees in the forest.
* Define weights manually, to be able to offset that imbalance slightly. In our case we have 300 fraud to 7000 non-fraud cases, so by setting the weight ratio to 1:12 we get roughly a 1/3 fraud to 2/3 non-fraud ratio, which is good enough for training the model on.

```
# Change the model options
model = RandomForestClassifier(bootstrap=True,
                               class_weight={0:1, 1:12},
                               criterion='entropy',
                               max_depth=10,
                               min_samples_leaf=10,
                               # Change the number of trees to use
                               n_estimators=20,
                               n_jobs=-1,
                               random_state=5)

# Run the function get_model_results
# get_model_results(X_train, y_train, X_test, y_test, model)
# This function fits the model to your training data, predicts and obtains performance metrics,
# similar to the steps you did in the previous exercises.
```

* By smartly defining more options in the model, you can obtain better predictions. You have effectively reduced the number of false negatives, i.e. you are catching more cases of fraud, whilst keeping the number of false positives low.
* In this exercise you've manually changed the options of the model. There is a smarter way of doing it, by using GridSearchCV, which you'll see in the next exercise!

### GridSearchCV to find optimal parameters

With GridSearchCV you can define which performance metric to score the options on. Since for fraud detection we are mostly interested in catching as many fraud cases as possible, you can optimize your model settings to get the **best possible Recall score.** If you also cared about reducing the number of false positives, you could optimize on F1-score; this gives you that nice Precision-Recall trade-off.
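Besides built-in scoring strings like 'recall' and 'f1', GridSearchCV also accepts custom scorers. As a hedged aside (not part of the original exercise): if you wanted something between those two extremes, an F-beta score with beta > 1 favours recall over precision and can be wrapped with make_scorer:

```python
from sklearn.metrics import fbeta_score, make_scorer

# F2 weighs recall twice as heavily as precision; pass the scorer to
# GridSearchCV via scoring=f2_scorer instead of scoring='recall'.
f2_scorer = make_scorer(fbeta_score, beta=2)

# Sanity check on a toy prediction: precision = 2/3, recall = 1.0,
# so F2 sits much closer to the recall end than F1 would
print(fbeta_score([0, 0, 1, 1], [0, 1, 1, 1], beta=2))
```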
```
from sklearn.model_selection import GridSearchCV

# Define the parameter sets to test
param_grid = {'n_estimators': [1, 30],
              'max_features': ['auto', 'log2'],
              'max_depth': [4, 8],
              'criterion': ['gini', 'entropy']
              }

model = RandomForestClassifier(random_state=5)

CV_model = GridSearchCV(estimator=model, param_grid=param_grid, cv=5, scoring='recall', n_jobs=-1)
CV_model.fit(X_train, y_train)
CV_model.best_params_
```

### Model results using GridSearchCV

* You discovered that the best parameters for your model are that the split criterion should be set to 'gini', the number of estimators (trees) should be 30, the maximum depth of the model should be 8 and the maximum features should be set to "log2".
* Let's give this a try and see how well our model performs. You can use the get_model_results() function again to save time.

```
# Input the optimal parameters in the model
model = RandomForestClassifier(class_weight={0:1,1:12},
                               criterion='gini',
                               max_depth=8,
                               max_features='log2',
                               min_samples_leaf=10,
                               n_estimators=30,
                               n_jobs=-1,
                               random_state=5)

# Get results from your model
# get_model_results(X_train, y_train, X_test, y_test, model)
```

<script.py> output:

```
              precision    recall  f1-score   support

         0.0       0.99      1.00      1.00      2099
         1.0       0.95      0.84      0.89        91

   micro avg       0.99      0.99      0.99      2190
   macro avg       0.97      0.92      0.94      2190
weighted avg       0.99      0.99      0.99      2190

[[2095    4]
 [  15   76]]
```

* The number of false negatives has now been slightly reduced even further, which means we are catching more cases of fraud.
* However, you see that the number of false positives actually went up. That is the Precision-Recall trade-off in action.
* To decide which final model is best, you need to take into account how bad it is not to catch fraudsters, versus how many false positives the fraud analytics team can deal with. Ultimately, this final decision should be made by you and the fraud team together.

### Logistic Regression

* Combine three algorithms into one model with the VotingClassifier.
This allows us to benefit from the different aspects of all models, and hopefully improve overall performance and detect more fraud. The first model, the Logistic Regression, has a slightly higher recall score than our optimal Random Forest model, but gives a lot more false positives.

* You'll also add a Decision Tree with balanced weights to it. The data is already split into a training and test set, i.e. X_train, y_train, X_test, y_test are available.
* In order to understand how the Voting Classifier can potentially improve your original model, you should check the standalone results of the Logistic Regression model first.

```
from sklearn.linear_model import LogisticRegression

# Define the Logistic Regression model with weights
model = LogisticRegression(class_weight={0:1, 1:15}, random_state=5)

# Get the model results
# get_model_results(X_train, y_train, X_test, y_test, model)
```

```
              precision    recall  f1-score   support

         0.0       0.99      0.98      0.99      2099
         1.0       0.63      0.88      0.73        91

   micro avg       0.97      0.97      0.97      2190
   macro avg       0.81      0.93      0.86      2190
weighted avg       0.98      0.97      0.98      2190

[[2052   47]
 [  11   80]]
```

The Logistic Regression has quite different performance from the Random Forest: more false positives, but also a better Recall. It will therefore be a useful addition to the Random Forest in an ensemble model.

### Voting Classifier

* Combine three machine learning models into one, to improve our Random Forest fraud detection model from before. You'll combine our usual Random Forest model with the Logistic Regression from the previous exercise and a simple Decision Tree.
* Use the shortcut get_model_results() to see the immediate result of the ensemble model.
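The get_model_results() shortcut used throughout these exercises is course-provided and not defined in this notebook. Based on how it is described above (fit the model, predict, print the usual metrics), a plausible sketch — the exact implementation is an assumption:

```python
from sklearn.metrics import classification_report, confusion_matrix, roc_auc_score

def get_model_results(X_train, y_train, X_test, y_test, model):
    """Fit the model, predict on the test set and print the usual performance metrics.

    Assumed reimplementation of the course-provided helper."""
    model.fit(X_train, y_train)
    predicted = model.predict(X_test)
    # Not every estimator exposes predict_proba (e.g. hard-voting ensembles)
    if hasattr(model, "predict_proba"):
        probs = model.predict_proba(X_test)
        print(roc_auc_score(y_test, probs[:, 1]))
    print(classification_report(y_test, predicted))
    print(confusion_matrix(y_test, predicted))
    return predicted
```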
``` from sklearn.ensemble import VotingClassifier from sklearn.tree import DecisionTreeClassifier # Define the three classifiers to use in the ensemble clf1 = LogisticRegression(class_weight={0:1, 1:15}, random_state=5) clf2 = RandomForestClassifier(class_weight={0:1, 1:12}, criterion='gini', max_depth=8, max_features='log2', min_samples_leaf=10, n_estimators=30, n_jobs=-1, random_state=5) clf3 = DecisionTreeClassifier(random_state=5, class_weight="balanced") # Combine the classifiers in the ensemble model ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('dt', clf3)], voting='hard') # Get the results # get_model_results(X_train, y_train, X_test, y_test, ensemble_model) ``` <script.py> output: precision recall f1-score support 0.0 0.99 1.00 0.99 2099 1.0 0.90 0.86 0.88 91 micro avg 0.99 0.99 0.99 2190 macro avg 0.95 0.93 0.94 2190 weighted avg 0.99 0.99 0.99 2190 [[2090 9] [ 13 78]] * By combining the classifiers, you can take the best of multiple models. You've increased the cases of fraud you are catching from 76 to 78, and you only have 5 extra false positives in return. * If you do care about catching as many fraud cases as you can, whilst keeping the false positives low, this is a pretty good trade-off. * The Logistic Regression as a standalone was quite bad in terms of false positives, and the Random Forest was worse in terms of false negatives. By combining these together you indeed managed to improve performance. ### Adjust weights within the Voting Classifier * The Voting Classifier allows you to improve your fraud detection performance, by combining good aspects from multiple models. Now let's try to adjust the weights we give to these models. By increasing or decreasing weights you can play with how much emphasis you give to a particular model relative to the rest. 
This comes in handy when a certain model has overall better performance than the rest, but you still want to combine aspects of the others to further improve your results.

* The data is already split into a training and test set, and clf1, clf2 and clf3 are available and defined as before, i.e. they are the Logistic Regression, the Random Forest model and the Decision Tree respectively.

```
ensemble_model = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('dt', clf3)],
                                  voting='soft',
                                  weights=[1, 4, 1],
                                  flatten_transform=True)

# Get results
# get_model_results(X_train, y_train, X_test, y_test, ensemble_model)
```

<script.py> output:

```
              precision    recall  f1-score   support

         0.0       0.99      1.00      1.00      2099
         1.0       0.94      0.85      0.89        91

   micro avg       0.99      0.99      0.99      2190
   macro avg       0.97      0.92      0.94      2190
weighted avg       0.99      0.99      0.99      2190

[[2094    5]
 [  14   77]]
```

The weights option allows you to play with the individual models to get the best final mix for your fraud detection model. Now that you have finalized fraud detection with supervised learning, let's have a look at how fraud detection can be done when you don't have any labels to train on.

## Fraud detection using unlabelled data

* Use unsupervised learning techniques to detect fraud.
* Segment customers, use K-means clustering and other clustering algorithms to find suspicious occurrences in your data.

### Exploring your data

* Look at bank payment transaction data.
* Distinguish normal from abnormal (thus potentially fraudulent) behavior. To understand what is "normal" as a fraud analyst, you need to have a good understanding of the data and its characteristics.

```
import pandas as pd

df = pd.read_csv('banksim.csv')
df = df.drop(['Unnamed: 0'], axis=1)
print(df.head())
print(df.groupby('category').mean())
```

Even from a simple group-by, we can see that the majority of fraud is observed in travel, leisure and sports related transactions.
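The group-by above can be reproduced on any labelled transaction frame. A small self-contained sketch with made-up data — the column names mirror the banksim ones, but the numbers are invented:

```python
import pandas as pd

# Tiny made-up transactions: fraud is a 0/1 label per transaction
df = pd.DataFrame({
    'category': ['travel', 'travel', 'food', 'food', 'sports', 'sports'],
    'amount':   [500.0,    2500.0,   12.0,   9.0,    300.0,    40.0],
    'fraud':    [0,        1,        0,      0,      1,        0],
})

# Mean of each numeric column per category; since fraud is 0/1,
# its mean reads as the fraud *rate* within each category
print(df.groupby('category')[['amount', 'fraud']].mean())
```

Because the label is binary, averaging it per group turns a simple group-by into a per-category fraud-rate table.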
### Customer segmentation

* Check whether there are any obvious patterns for the clients in this data, thus whether you need to segment your data into groups, or whether the data is rather homogeneous.
* There is not a lot of client information available; however, there is data on **age** available, so let's see whether there is any significant difference between the behavior of age groups.

```
# Group by age groups and get the mean
print(df.groupby('age').mean())

# Count the values of the observations in each age group
print(df['age'].value_counts())
```

* Does it make sense to divide your data into age segments before running a fraud detection algorithm?
* No, the largest age groups are relatively similar. As you can see, the average amount spent as well as fraud occurrence is rather similar across groups. Age group '0' stands out, but since there are only 40 cases, it does not make sense to split these out into a separate group and run a separate model on them.

### Using statistics to define normal behavior

* In the previous exercises we saw that fraud is more prevalent in certain transaction categories, but that there is no obvious way to segment our data into, for example, age groups.
* This time, let's investigate the average amounts spent in normal transactions versus fraud transactions. This gives you an idea of how fraudulent transactions differ structurally from normal transactions.

```
import matplotlib.pyplot as plt

# Create two dataframes with fraud and non-fraud data
df_fraud = df.loc[df.fraud == 1]
df_non_fraud = df.loc[df.fraud == 0]

# Plot histograms of the amounts in fraud and non-fraud data
plt.hist(df_fraud.amount, alpha=0.5, label='fraud')
plt.hist(df_non_fraud.amount, alpha=0.5, label='nonfraud')
plt.legend()
plt.show()
```

* As the number of fraud observations is much smaller, it is difficult to see the full distribution.
* Nonetheless, you can see that the fraudulent transactions tend to be on the larger side relative to normal observations.
* This helps us later in separating fraud from non-fraud. In the next chapter you're going to implement a clustering model to distinguish between normal and abnormal transactions, when the fraud labels are no longer available.

### Scaling the data

For ML algorithms using distance-based metrics, it is crucial to always scale your data, as features on different scales will distort your results. K-means uses the Euclidean distance to assess distance to cluster centroids, therefore you first need to scale your data before continuing to implement the algorithm.

```
import numpy as np
import pandas as pd

df = pd.read_csv('banksim_adj.csv')
df = df.drop(['Unnamed: 0'], axis=1)
y = df['fraud'].values
print(df.head())
# Extra code above; the data might not be the same as on DataCamp

from sklearn.preprocessing import MinMaxScaler

X = np.array(df).astype(np.float64)
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)
```

### K-means clustering

* For fraud detection, K-means clustering is straightforward to implement and relatively powerful in predicting suspicious cases. It is a good algorithm to start with when working on fraud detection problems.
* However, fraud data is oftentimes very large, especially when you are working with transaction data. MiniBatch K-means is an efficient way to implement K-means on a large dataset, which you will use in this exercise.

```
# Import MiniBatchKMeans
from sklearn.cluster import MiniBatchKMeans

kmeans = MiniBatchKMeans(n_clusters=8, random_state=0)
kmeans.fit(X_scaled)
```

### Elbow method

* It is important to get the number of clusters right, especially when you want to **use the outliers of those clusters as fraud predictions**.
* Apply the Elbow method and see what the optimal number of clusters should be based on this method.
```
clustno = range(1, 10)
kmeans = [MiniBatchKMeans(n_clusters=i) for i in clustno]
score = [kmeans[i].fit(X_scaled).score(X_scaled) for i in range(len(kmeans))]

plt.plot(clustno, score)
plt.xlabel('Number of Clusters')
plt.ylabel('Score')
plt.title('Elbow Curve')
plt.show()
```

The optimal number of clusters should probably be at around 3 clusters, as that is where the elbow is in the curve.

### Detecting outliers

* Use the K-means algorithm to predict fraud, and compare those predictions to the actual labels that are saved, to sense-check our results.
* The fraudulent transactions are typically flagged as the observations that are furthest away from the cluster centroid.
* In this exercise you'll also determine the cut-off for flagging an observation as an outlier.

```
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, test_size=0.3, random_state=0)

kmeans = MiniBatchKMeans(n_clusters=3, random_state=42).fit(X_train)

X_test_clusters = kmeans.predict(X_test)
X_test_clusters_centers = kmeans.cluster_centers_

# np.linalg.norm calculates the 'norm' of a vector, here the Euclidean
# distance of each point to its assigned cluster centroid
dist = np.array([np.linalg.norm(x - c) for x, c in zip(X_test, X_test_clusters_centers[X_test_clusters])])

# Create fraud predictions based on outliers on clusters
km_y_pred = np.array(dist)
km_y_pred[dist >= np.percentile(dist, 95)] = 1
km_y_pred[dist < np.percentile(dist, 95)] = 0

print(len(X_test))
print(len(X_test_clusters))
print(X_test_clusters)
print('--------------------')
print(X_test_clusters_centers)
print(len(dist))
```

### Checking model results

In the previous exercise you've flagged all observations as fraud if they are in the top 5th percentile in distance from their cluster centroid, i.e. these are the very outliers of the three clusters. For this exercise you have the scaled data and labels already split into training and test set, so y_test is available. The predictions from the previous exercise, km_y_pred, are also available. Let's create some performance metrics and see how well you did.
```
# Obtain the ROC score
print(roc_auc_score(y_test, km_y_pred))
# output: 0.8197704982668266

# Create a confusion matrix
km_cm = confusion_matrix(y_test, km_y_pred)

# Plot the confusion matrix in a figure to visualize results
# plot_confusion_matrix(km_cm)
```

![3.png](attachment:3.png)

Question: If you were to decrease the percentile used as a cutoff point in the previous exercise to 93% instead of 95%, what would that do to your prediction results?

The number of fraud cases caught increases, but false positives also increase.

### DBSCAN

* Explore using a density-based clustering method (DBSCAN) to detect fraud. The advantage of DBSCAN is that you do not need to define the number of clusters beforehand. Also, DBSCAN can handle weirdly shaped data (i.e. non-convex) much better than K-means can.
* This time, you are **not going to take the outliers of the clusters and use that for fraud, but take the smallest clusters in the data and label those as fraud**. You again have the scaled dataset, i.e. X_scaled, available.

```
from sklearn.cluster import DBSCAN

# Initialize and fit the DBSCAN model
db = DBSCAN(eps=0.9, min_samples=10, n_jobs=-1).fit(X_scaled)

# Obtain the predicted labels and calculate the number of clusters
pred_labels = db.labels_
n_clusters = len(set(pred_labels)) - (1 if -1 in pred_labels else 0)

# # Print performance metrics for DBSCAN
# print('Estimated number of clusters: %d' % n_clusters)
# print("Homogeneity: %0.3f" % homogeneity_score(labels, pred_labels))
# print("Silhouette Coefficient: %0.3f" % silhouette_score(X_scaled, pred_labels))
```

output:

```
Estimated number of clusters: 18
Homogeneity: 0.633
Silhouette Coefficient: 0.707
```

The number of clusters is much higher than with K-means. For fraud detection this is fine for now, as we are only interested in the smallest clusters, since those are considered abnormal.
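DBSCAN labels noise points with -1, which is why the cluster count subtracts one whenever -1 appears among the labels. A quick self-contained check on made-up points:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two tight groups plus one far-away point that should become noise (-1)
points = np.array([[0.0, 0.0], [0.0, 0.1], [0.1, 0.0],
                   [5.0, 5.0], [5.0, 5.1], [5.1, 5.0],
                   [100.0, 100.0]])

db = DBSCAN(eps=0.5, min_samples=2).fit(points)
pred_labels = db.labels_

# Subtract the noise "cluster" when counting real clusters
n_clusters = len(set(pred_labels)) - (1 if -1 in pred_labels else 0)
print(pred_labels)   # the isolated point is labelled -1
print(n_clusters)
```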
Now let's have a look at those clusters and decide which ones to flag as fraud.

### Assessing smallest clusters

* Check the clusters that came out of DBSCAN, and flag certain clusters as fraud:
* Figure out how big the clusters are, and filter out the smallest. Then take the smallest ones and flag those as fraud.
* Check with the original labels whether this actually does a good job of detecting fraud.

The DBSCAN model predictions are available, so n_clusters is defined, as well as the cluster labels, which are saved under pred_labels.

```
counts = np.bincount(pred_labels[pred_labels >= 0])
print(counts)
```

output:

```
[3252 145 2714 55 174 119 122 98 54 15 76 15 43 25 51 47 42 15 25 20 19 10]
```

```
# Count observations in each cluster number
counts = np.bincount(pred_labels[pred_labels >= 0])

# Sort the sample counts of the clusters and take the top 3 smallest clusters
smallest_clusters = np.argsort(counts)[:3]

# Print the results
print("The smallest clusters are clusters:")
print(smallest_clusters)
```

output:

```
The smallest clusters are clusters:
[21 17 9]
```

```
# Count observations in each cluster number
counts = np.bincount(pred_labels[pred_labels >= 0])

# Sort the sample counts of the clusters and take the top 3 smallest clusters
smallest_clusters = np.argsort(counts)[:3]

# Print the counts of the smallest clusters only
print("Their counts are:")
print(counts[smallest_clusters])
```

<script.py> output:

```
Their counts are:
[10 15 15]
```

So now we know which smallest clusters we could flag as fraud. If you were to take more of the smallest clusters, you would cast your net wider and catch more fraud, but most likely also more false positives. It is up to the fraud analyst to find the right number of cases to flag and investigate. In the next exercise you'll check the results against the actual labels.

### Checking results

In this exercise you're going to check the results of your DBSCAN fraud detection model.
In reality, you often don't have reliable labels, and this is where a fraud analyst can help you validate the results. He/she can check your results and see whether the cases you flagged are indeed suspicious. You can also check historically known cases of fraud and see whether your model flags them.

In this case, you'll use the fraud labels to check your model results. The predicted cluster numbers are available under pred_labels, as well as the original fraud labels, labels.

```
# Create a dataframe of the predicted cluster numbers and fraud labels
df = pd.DataFrame({'clusternr':pred_labels, 'fraud':labels})

# Create a condition flagging fraud for the smallest clusters
df['predicted_fraud'] = np.where((df['clusternr']==21) | (df['clusternr']==17) | (df['clusternr']==9), 1, 0)

# Run a crosstab on the results
print(pd.crosstab(df.fraud, df.predicted_fraud, rownames=['Actual Fraud'], colnames=['Flagged Fraud']))
```

output:

```
Flagged Fraud     0   1
Actual Fraud
0              6973  16
1               176  24
```

How does this compare to the K-means model?

* The good thing is: out of all flagged cases, roughly 2/3 are actually fraud! Since you only take the three smallest clusters, by definition you flag fewer cases of fraud, so you catch less but also have fewer false positives. However, you are missing quite a lot of fraud cases.
* Increasing the number of smallest clusters you flag could improve that, at the cost of more false positives of course.

## Fraud detection using text

Use text data, text mining and topic modeling to detect fraudulent behavior.

### Word search with dataframes

* Work with text data containing emails from Enron employees.
* Using string operations on dataframes, you can easily sift through messy email data and create flags based on word hits.
``` import pandas as pd df = pd.read_csv('enron_emails_clean.csv',index_col = 0) # Find all cleaned emails that contain 'sell enron stock' mask = df['clean_content'].str.contains('sell enron stock', na=False) # Select the data from df that contain the searched for words print(df.loc[mask]) ``` ### Using list of terms * Search on more than one term. * Create a full "fraud dictionary" of terms that could potentially flag fraudulent clients and/or transactions. Fraud analysts often will have an idea what should be in such a dictionary. In this exercise you're going to flag a multitude of terms, and in the next exercise you'll create a new flag variable out of it. The 'flag' can be used either directly in a machine learning model as a feature, or as an additional filter on top of your machine learning model results. ``` # Create a list of terms to search for searchfor = ['enron stock', 'sell stock', 'stock bonus', 'sell enron stock'] # Filter cleaned emails on searchfor list and select from df filtered_emails = df.loc[df['clean_content'].str.contains('|'.join(searchfor), na=False)] # print(filtered_emails) ``` ### Creating a flag This time you are going to create an actual flag variable that gives a 1 when the emails get a hit on the search terms of interest, and 0 otherwise. This is the last step you need to make in order to actually use the text data content as a feature in a machine learning model, or as an actual flag on top of model results. You can continue working with the dataframe df containing the emails, and the searchfor list is the one defined in the last exercise. ``` import numpy as np # Create flag variable where the emails match the searchfor terms df['flag'] = np.where((df['clean_content'].str.contains('|'.join(searchfor)) == True), 1, 0) # Count the values of the flag variable count = df['flag'].value_counts() print(count) ``` You have now managed to search for a list of strings in several lines of text data. 
These skills come in handy when you want to flag certain words based on what you discovered in your topic model, or when you know beforehand what you want to search for. In the next exercises you're going to learn how to clean text data and to create your own topic model to further look for indications of fraud in your text data.

### Removing stopwords

In the following exercises you're going to clean the Enron emails, in order to be able to use the data in a topic model. Text cleaning can be challenging, so you'll learn some steps to do this well. The dataframe containing the emails, df, is available. As a first step you need to define the list of stopwords and punctuation that are to be removed from the text data in the next exercise. Let's give it a try.

```
# Import nltk packages and string
from nltk.corpus import stopwords
import string

# Define stopwords to exclude
stop = set(stopwords.words('english'))
# stop.update(("to","cc","subject","http","from","sent", "ect", "u", "fwd", "www", "com"))

# Define punctuations to exclude and lemmatizer
exclude = set(string.punctuation)
```

The following shows the contents of stop. Note that stop = set(stopwords('english')) fails to run, because stopwords is a corpus reader, not a function; you must call stopwords.words('english').

```
{'a', 'about', 'above', 'after', 'again', 'against', 'ain', 'all', 'am',
 ...
 'y', 'you', "you'd", "you'll", "you're", "you've", 'your', 'yours',
 'yourself', 'yourselves'}
```

### Cleaning text data

Now that you've defined the stopwords and punctuation, let's use these to further clean our Enron emails in the dataframe df. The lists containing stopwords and punctuation are available under stop and exclude. There are a few more steps to take before you have cleaned data, such as "lemmatization" of words, and stemming the verbs. The verbs in the email data are already stemmed, and the lemmatization is already done for you in this exercise.
```
# Import the lemmatizer from nltk
from nltk.stem.wordnet import WordNetLemmatizer
lemma = WordNetLemmatizer()

# Define word cleaning function
def clean(text, stop):
    text = text.rstrip()
    # Remove stopwords and pure digits
    stop_free = " ".join([word for word in text.lower().split()
                          if ((word not in stop) and (not word.isdigit()))])
    # Remove punctuation
    punc_free = ''.join(word for word in stop_free if word not in exclude)
    # Lemmatize all words
    normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
    return normalized

# Clean the emails in df and print results
text_clean = []
for text in df['clean_content']:
    text_clean.append(clean(text, stop).split())
print(text_clean)
```

You have now cleaned your data entirely with the necessary steps, including splitting the text into words, removing stopwords and punctuation, and lemmatizing your words. You are now ready to run a topic model on this data. In the following exercises you're going to explore how to do that.

### Create dictionary and corpus

In order to run an LDA topic model, you need to define your dictionary and corpus first, as those need to go into the model. You're going to continue working on the cleaned text data from the previous exercises. That means that text_clean is already available for you to continue working with, and you'll use it to create your dictionary and corpus. This exercise will take a little longer to execute than usual.
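Conceptually, the dictionary assigns each unique token an integer id, and doc2bow turns a document into (token_id, count) pairs. A dependency-free illustration of that idea — gensim's actual classes are used in the exercise code below; this stand-in only mimics their behaviour on made-up tokens:

```python
from collections import Counter

def build_dictionary(docs):
    """Map each unique token to an integer id, mimicking gensim's corpora.Dictionary."""
    ids = {}
    for doc in docs:
        for token in doc:
            ids.setdefault(token, len(ids))
    return ids

def doc2bow(doc, ids):
    """Turn a tokenised document into sorted (token_id, count) pairs."""
    counts = Counter(ids[token] for token in doc)
    return sorted(counts.items())

docs = [['enron', 'stock', 'sell', 'stock'], ['enron', 'employee']]
ids = build_dictionary(docs)
corpus = [doc2bow(doc, ids) for doc in docs]
print(corpus)
```

This mirrors the shape of the corpus output shown below: each document becomes a list of (id, count) tuples, with ids shared across documents.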
```
# Import the packages
import gensim
from gensim import corpora

# Define the dictionary
dictionary = corpora.Dictionary(text_clean)

# Define the corpus
corpus = [dictionary.doc2bow(text) for text in text_clean]

# Print corpus and dictionary
print(dictionary)
print(corpus)
```

```
Dictionary(8948 unique tokens: ['conducted', 'read', 'wil', 'daniel', 'piazze']...)
[[(0, 1), (1, 2), (2, 1), (3, 1), (4, 2), (5, 1), (6, 2), (7, 1), (8, 1), (9, 1),
  (10, 5), (11, 2), (12, 1), (13, 1), (14, 1), (15, 1), (16, 1), (17, 1), (18, 1),
  (19, 1), (20, 1), (21, 1), (22, 1), (23, 1), (24, 1), ...]
```

Note: doc2bow stands for "document to bag of words".

### LDA model (latent Dirichlet allocation, not linear discriminant analysis)

Now it's time to build the LDA model. Using the dictionary and corpus, you are ready to discover which topics are present in the Enron emails. With a quick print of the words assigned to the topics, you can do a first exploration of whether there are any obvious topics that jump out. Be mindful that the topic model is heavy to calculate, so it will take a while to run. Let's give it a try!

```
ldamodel = gensim.models.ldamodel.LdaModel(corpus, num_topics=5, id2word=dictionary, passes=5)

# Save the topics and top 5 words
topics = ldamodel.print_topics(num_words=5)

# Print the results
for topic in topics:
    print(topic)
```

```
(0, '0.024*"enron" + 0.015*"ect" + 0.011*"com" + 0.007*"hou" + 0.005*"company"')
(1, '0.032*"enron" + 0.011*"com" + 0.009*"diabetes" + 0.008*"message" + 0.006*"please"')
(2, '0.031*"enron" + 0.011*"company" + 0.010*"said" + 0.007*"mr" + 0.005*"partnership"')
(3, '0.021*"enron" + 0.012*"employee" + 0.010*"company" + 0.009*"million" + 0.009*"com"')
(4, '0.040*"error" + 0.021*"database" + 0.018*"borland" + 0.018*"engine" + 0.018*"initialize"')
```

You have now successfully created your first topic model on the Enron email data. However, the print of words doesn't really give you enough information to find a topic that might lead you to signs of fraud.
You'll therefore need to closely inspect the model results in order to be able to detect anything that can be related to fraud in your data. Below are visualisation results from the pyLDAvis library. Have a look at topic 1 and topic 3 from the LDA model on the Enron email data. Which one would you research further for fraud detection purposes, and why?

![5.png](attachment:5.png)

Topic 1 seems to discuss the employee share option program, and seems to point to internal conversation (with "please", "may", "know", etc.), so this is more likely to be related to the internal accounting fraud and trading stock with insider knowledge. Topic 3 seems to be more related to general news around Enron.

### Finding fraudsters based on topic

In this exercise you're going to link the results from the topic model back to your original data. You have now learned that you want to flag everything related to topic 3. As you will see, this is actually not that straightforward. You'll be given the function get_topic_details() which takes the arguments ldamodel and corpus. It retrieves the details of the topics for each line of text. With that function, you can append the results back to your original data. If you want to learn more detail on how to work with the model results, which is beyond the scope of this course, you're highly encouraged to read this article (https://www.machinelearningplus.com/nlp/topic-modeling-gensim-python/).

Available for you are the dictionary and corpus, the text data text_clean, as well as your model results ldamodel. Also defined is get_topic_details().
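get_topic_details() is course-provided and not shown here. A hedged sketch of its core logic: for each document, take the topic with the highest probability from the model's per-document topic distribution. A stub object stands in for the trained gensim LdaModel so the sketch runs on its own (the real helper returns a DataFrame with a Dominant_Topic column rather than plain tuples):

```python
def get_topic_details(ldamodel, corpus):
    """Return (dominant_topic, contribution) per document.

    Assumed reimplementation; gensim's LdaModel exposes per-document
    topic probabilities via get_document_topics(bow)."""
    details = []
    for bow in corpus:
        topic_probs = ldamodel.get_document_topics(bow)
        dominant_topic, contribution = max(topic_probs, key=lambda tp: tp[1])
        details.append((dominant_topic, round(contribution, 4)))
    return details

# Stub standing in for a trained LdaModel, purely for illustration
class StubModel:
    def get_document_topics(self, bow):
        # Pretend documents with many distinct tokens lean towards topic 3
        return [(0, 0.2), (3, 0.8)] if len(bow) > 2 else [(0, 0.9), (3, 0.1)]

corpus = [[(0, 1), (1, 2), (2, 1)], [(0, 1)]]
print(get_topic_details(StubModel(), corpus))
```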
```
# Run the get_topic_details function and check the results
print(get_topic_details(ldamodel, corpus))

# Add the original text to the topic details in a dataframe
contents = pd.DataFrame({'Original text': text_clean})
topic_details = pd.concat([get_topic_details(ldamodel, corpus), contents], axis=1)
topic_details.head()

# Create a flag for text most highly associated with topic 3
topic_details['flag'] = np.where((topic_details['Dominant_Topic'] == 3.0), 1, 0)
print(topic_details.head())
```

You have now flagged all data that is most highly associated with topic 3, which seems to cover internal conversation about Enron stock options. You are a true detective. With these exercises you have demonstrated that text mining and topic modeling can be a powerful tool for fraud detection.

### Summary

* We can apply many types of machine learning algorithms to anomaly and fraud detection:
* Supervised learning, such as classification algorithms, neural networks, etc.
* Unsupervised learning, such as clustering algorithms.
* Linear or nonlinear dimensionality reduction techniques, used directly for anomaly detection or combined with other supervised/unsupervised learning algorithms.
* Natural language processing.
* Directly fitting a Gaussian distribution (or other distributions) and flagging outliers.
* Network analysis for fraud or anomaly detection.
<a href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/profiling_tpus_in_colab.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ##### Copyright 2018 The TensorFlow Hub Authors. Copyright 2019-2020 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. # Profiling TPUs in Colab&nbsp; <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a> Adapted from [TPU colab example](https://colab.sandbox.google.com/notebooks/tpu.ipynb). ## Overview This example works through training a model to classify images of flowers on Google's lightning-fast Cloud TPUs. Our model takes as input a photo of a flower and returns whether it is a daisy, dandelion, rose, sunflower, or tulip. A key objective of this colab is to show you how to set up and run TensorBoard, the program used for visualizing and analyzing program performance on Cloud TPU. This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select **File > View on GitHub**. 
## Instructions

<h3><a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a>&nbsp;&nbsp;Train on TPU&nbsp;&nbsp; </h3>

* Create a Cloud Storage bucket for your TensorBoard logs at http://console.cloud.google.com/storage. Give yourself Storage Legacy Bucket Owner permission on the bucket. You will need to provide the bucket name when launching TensorBoard in the **Training** section.

Note: User input is required when launching and viewing TensorBoard, so do not use **Runtime > Run all** to run through the entire colab.

## Authentication for connecting to GCS bucket for logging.

```
import os

IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ  # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
    from google.colab import auth
    # Authenticates the Colab machine and also the TPU using your
    # credentials so that they can access your private GCS buckets.
    auth.authenticate_user()
```
## Updating tensorboard_plugin_profile

```
!pip install -U pip
!pip install -U tensorboard_plugin_profile==2.3.0
```
## Enabling and testing the TPU

First, you'll need to enable TPUs for the notebook:
- Navigate to Edit→Notebook Settings
- Select TPU from the Hardware Accelerator drop-down

Next, we'll check that we can connect to the TPU:

```
%tensorflow_version 2.x
import tensorflow as tf
print("Tensorflow version " + tf.__version__)

try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()  # TPU detection
    print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
except ValueError:
    raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')

tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)

import re
import numpy as np
from matplotlib import pyplot as plt
```
## Input data

Our input data is stored on Google Cloud Storage. To more fully use the parallelism TPUs offer us, and to avoid bottlenecking on data transfer, we've stored our input data in TFRecord files, 230 images per file. Below, we make heavy use of `tf.data.experimental.AUTOTUNE` to optimize different parts of input loading. All of these techniques are a bit overkill for our (small) dataset, but demonstrate best practices for using TPUs.
```
AUTO = tf.data.experimental.AUTOTUNE

IMAGE_SIZE = [331, 331]
batch_size = 16 * tpu_strategy.num_replicas_in_sync

gcs_pattern = 'gs://flowers-public/tfrecords-jpeg-331x331/*.tfrec'
validation_split = 0.19
filenames = tf.io.gfile.glob(gcs_pattern)
split = len(filenames) - int(len(filenames) * validation_split)
train_fns = filenames[:split]
validation_fns = filenames[split:]

def parse_tfrecord(example):
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),  # tf.string means bytestring
        "class": tf.io.FixedLenFeature([], tf.int64),   # shape [] means scalar
        "one_hot_class": tf.io.VarLenFeature(tf.float32),
    }
    example = tf.io.parse_single_example(example, features)
    decoded = tf.image.decode_jpeg(example['image'], channels=3)
    normalized = tf.cast(decoded, tf.float32) / 255.0  # convert each 0-255 value to floats in [0, 1] range
    image_tensor = tf.reshape(normalized, [*IMAGE_SIZE, 3])
    one_hot_class = tf.reshape(tf.sparse.to_dense(example['one_hot_class']), [5])
    return image_tensor, one_hot_class

def load_dataset(filenames):
    # Read from TFRecords. For optimal performance, we interleave reads from multiple files.
    records = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
    return records.map(parse_tfrecord, num_parallel_calls=AUTO)

def get_training_dataset():
    dataset = load_dataset(train_fns)

    # Create some additional training images by randomly flipping and
    # increasing/decreasing the saturation of images in the training set.
    def data_augment(image, one_hot_class):
        modified = tf.image.random_flip_left_right(image)
        modified = tf.image.random_saturation(modified, 0, 2)
        return modified, one_hot_class
    augmented = dataset.map(data_augment, num_parallel_calls=AUTO)

    # Prefetch the next batch while training (autotune prefetch buffer size).
    return augmented.repeat().shuffle(2048).batch(batch_size).prefetch(AUTO)

training_dataset = get_training_dataset()
validation_dataset = load_dataset(validation_fns).batch(batch_size).prefetch(AUTO)
```
Let's take a peek at the training dataset we've created:
```
CLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']

def display_one_flower(image, title, subplot, color):
    plt.subplot(subplot)
    plt.axis('off')
    plt.imshow(image)
    plt.title(title, fontsize=16, color=color)

# If model is provided, use it to generate predictions.
def display_nine_flowers(images, titles, title_colors=None):
    subplot = 331
    plt.figure(figsize=(13,13))
    for i in range(9):
        color = 'black' if title_colors is None else title_colors[i]
        display_one_flower(images[i], titles[i], 331+i, color)
    plt.tight_layout()
    plt.subplots_adjust(wspace=0.1, hspace=0.1)
    plt.show()

def get_dataset_iterator(dataset, n_examples):
    return dataset.unbatch().batch(n_examples).as_numpy_iterator()

training_viz_iterator = get_dataset_iterator(training_dataset, 9)

# Re-run this cell to show a new batch of images
images, classes = next(training_viz_iterator)
class_idxs = np.argmax(classes, axis=-1)  # transform from one-hot array to class number
labels = [CLASSES[idx] for idx in class_idxs]
display_nine_flowers(images, labels)
```
## Model

To get maximum accuracy, we leverage a pretrained image recognition model (here, [Xception](http://openaccess.thecvf.com/content_cvpr_2017/papers/Chollet_Xception_Deep_Learning_CVPR_2017_paper.pdf)). We drop the ImageNet-specific top layers (`include_top=False`), and add a global average pooling and a softmax layer to predict our 5 classes.
```
def create_model():
    pretrained_model = tf.keras.applications.Xception(input_shape=[*IMAGE_SIZE, 3], include_top=False)
    pretrained_model.trainable = True
    model = tf.keras.Sequential([
        pretrained_model,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax')
    ])
    model.compile(
        optimizer='adam',
        loss='categorical_crossentropy',
        metrics=['accuracy']
    )
    return model

with tpu_strategy.scope():  # creating the model in the TPUStrategy scope means we will train the model on the TPU
    model = create_model()
model.summary()
```
## Training

Calculate the number of images in each dataset. Rather than actually load the data to do so (expensive), we rely on hints in the filename. This is used to calculate the number of batches per epoch.

```
def count_data_items(filenames):
    # The number of data items is written in the name of the .tfrec files, i.e. flowers00-230.tfrec = 230 data items
    n = [int(re.compile(r"-([0-9]*)\.").search(filename).group(1)) for filename in filenames]
    return np.sum(n)

n_train = count_data_items(train_fns)
n_valid = count_data_items(validation_fns)
train_steps = n_train // batch_size
print("TRAINING IMAGES: ", n_train, ", STEPS PER EPOCH: ", train_steps)
print("VALIDATION IMAGES: ", n_valid)
```
Calculate and show a learning rate schedule. We start with a fairly low rate, as we're using a pre-trained model and don't want to undo all the fine work put into training it.
```
EPOCHS = 12

start_lr = 0.00001
min_lr = 0.00001
max_lr = 0.00005 * tpu_strategy.num_replicas_in_sync
rampup_epochs = 5
sustain_epochs = 0
exp_decay = .8

def lrfn(epoch):
    if epoch < rampup_epochs:
        return (max_lr - start_lr)/rampup_epochs * epoch + start_lr
    elif epoch < rampup_epochs + sustain_epochs:
        return max_lr
    else:
        return (max_lr - min_lr) * exp_decay**(epoch-rampup_epochs-sustain_epochs) + min_lr

lr_callback = tf.keras.callbacks.LearningRateScheduler(lambda epoch: lrfn(epoch), verbose=True)

rang = np.arange(EPOCHS)
y = [lrfn(x) for x in rang]
plt.plot(rang, y)
print('Learning rate per epoch:')
```
Train the model. While the first epoch will be quite a bit slower as we must XLA-compile the execution graph and load the data, later epochs should complete in ~5s.
```
# Load the TensorBoard notebook extension.
%load_ext tensorboard

# Get TPU profiling service address. This address will be needed for capturing
# profile information with TensorBoard in the following steps.
service_addr = tpu.get_master().replace(':8470', ':8466')
print(service_addr)

# Launch TensorBoard.
%tensorboard --logdir=gs://bucket-name  # Replace the bucket-name variable with your own gcs bucket
```
The TensorBoard UI is displayed in a browser window. In this colab, perform the following steps to prepare to capture profile information.

1. Click on the dropdown menu box on the top right side and scroll down and click PROFILE. A new window appears that shows: **No profile data was found** at the top.
1. Click on the CAPTURE PROFILE button. A new dialog appears. The top input line shows: **Profile Service URL or TPU name**. Copy and paste the Profile Service URL (the service_addr value shown before launching TensorBoard) into the top input line. While still on the dialog box, start the training with the next step.
1. Click on the next colab cell to start training the model.
1. Watch the output from the training until several epochs have completed.
This allows time for the profile data to start being collected. Return to the dialog box and click on the CAPTURE button. If the capture succeeds, the page will auto refresh and redirect you to the profiling results.

```
history = model.fit(training_dataset, validation_data=validation_dataset, steps_per_epoch=train_steps, epochs=EPOCHS, callbacks=[lr_callback])

final_accuracy = history.history["val_accuracy"][-5:]
print("FINAL ACCURACY MEAN-5: ", np.mean(final_accuracy))

def display_training_curves(training, validation, title, subplot):
    ax = plt.subplot(subplot)
    ax.plot(training)
    ax.plot(validation)
    ax.set_title('model ' + title)
    ax.set_ylabel(title)
    ax.set_xlabel('epoch')
    ax.legend(['training', 'validation'])

plt.subplots(figsize=(10,10))
plt.tight_layout()
display_training_curves(history.history['accuracy'], history.history['val_accuracy'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
```
Accuracy goes up and loss goes down. Looks good!

## Next steps

More TPU/Keras examples include:
- [Shakespeare in 5 minutes with Cloud TPUs and Keras](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb)
- [Fashion MNIST with Keras and TPUs](https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb)

We'll be sharing more examples of TPU use in Colab over time, so be sure to check back for additional example links, or [follow us on Twitter @GoogleColab](https://twitter.com/googlecolab).
# Deep learning hands-on

![title](figures/sl0.jpeg)
![frameworks](figures/sl1.jpeg)

## What frameworks do

- Tensor math
- Common network operations/layers
- Gradients of common operations
- Backpropagation
- Optimizers
- GPU implementation of the above
- usually: data loading, serialization (saving/loading) of models
- sometimes: distributed computing
- why not: production deployment on clusters/end devices/low-power nodes

**TensorFlow** is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML powered applications.

[ref](https://www.tensorflow.org/tutorials): tf documentation

# Tensors

A Tensor is a multi-dimensional array. Similar to NumPy ndarray objects, Tensor objects have a data type and a shape. Additionally, Tensors can reside in accelerator (like GPU) memory. TensorFlow offers a rich library of operations (tf.add, tf.matmul, tf.linalg.inv etc.) that consume and produce Tensors. These operations automatically convert native Python types.

![tensors](figures/sl2.jpeg)

```
%pylab inline
import tensorflow as tf

print(tf.add(1, 2))
print(tf.add([1, 2], [3, 4]))
print(tf.square(5))
print(tf.reduce_sum([1, 2, 3]))

# Operator overloading is also supported
print(tf.square(2) + tf.square(3))
```
Each Tensor has a shape and a datatype
```
x = tf.matmul([[1]], [[2, 3]])
print(x.shape)
print(x.dtype)
```
The most obvious differences between NumPy arrays and TensorFlow Tensors are:
- Tensors can be backed by accelerator memory (like GPU, TPU).
- Tensors are immutable.

## GPU acceleration

Many TensorFlow operations can be accelerated by using the GPU for computation. Without any annotations, TensorFlow automatically decides whether to use the GPU or CPU for an operation (and copies the tensor between CPU and GPU memory if necessary).
Tensors produced by an operation are typically backed by the memory of the device on which the operation executed. For example:
```
x = tf.random.uniform([3, 3])

print("Are there any GPUs available?")
print(tf.config.list_physical_devices('GPU'))

print("Is the Tensor on GPU #0: ")
print(x.device.endswith('GPU:0'))
```
## Explicit Device Placement

The term "placement" in TensorFlow refers to how individual operations are assigned (placed on) a device for execution. As mentioned above, when there is no explicit guidance provided, TensorFlow automatically decides which device to execute an operation, and copies Tensors to that device if needed. However, TensorFlow operations can be explicitly placed on specific devices using the tf.device context manager. For example:
```
def time_matmul(x):
    %timeit tf.matmul(x, x)

# Force execution on CPU
print("On CPU:")
with tf.device("CPU:0"):
    x = tf.random.uniform([1000, 1000])
    assert x.device.endswith("CPU:0")
    time_matmul(x)

# Force execution on GPU #0 if available
if tf.test.is_gpu_available():
    with tf.device("GPU:0"):  # Or GPU:1 for the 2nd GPU, GPU:2 for the 3rd etc.
        x = tf.random.uniform([1000, 1000])
        assert x.device.endswith("GPU:0")
        time_matmul(x)
```
# Derivatives of a function - Computing gradients

TensorFlow provides APIs for automatic differentiation - computing the derivative of a function. To differentiate automatically, TensorFlow needs to remember what operations happen in what order during the forward pass. Then, during the backward pass, TensorFlow traverses this list of operations in reverse order to compute gradients.

## Gradient tapes

TensorFlow provides the `tf.GradientTape` API for automatic differentiation; that is, computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a `tf.GradientTape` onto a "tape".
TensorFlow then uses that tape and the gradients associated with each recorded operation to compute the gradients of a "recorded" computation using reverse mode differentiation.

![differentiation](figures/sl3.jpeg)

```
# With scalars:
x = tf.constant(3.0)

# y = x ^ 2
with tf.GradientTape() as t:
    t.watch(x)
    y = x * x

# dy = 2x
dy_dx = t.gradient(y, x)
dy_dx.numpy()

# Using matrices:
x = tf.constant([3.0, 3.0])
with tf.GradientTape() as t:
    t.watch(x)
    z = tf.multiply(x, x)
print(z)

# Find derivative of z with respect to the original input tensor x
print(t.gradient(z, x))

x = tf.constant([3.0, 3.0])
with tf.GradientTape() as t:
    t.watch(x)
    y = tf.multiply(x, x)
    z = tf.multiply(y, y)

# Use the tape to compute the derivative of z with respect to the
# intermediate value y.
# dz_dy = 2 * y, where y = x ^ 2
print(t.gradient(z, y))
```
## Recording control flow

Because tapes record operations as they are executed, Python control flow (using ifs and whiles for example) is naturally handled:
```
def f(x, y):
    output = 1.0
    for i in range(y):
        if i > 1 and i < 5:
            output *= x
    return output

def grad(x, y):
    with tf.GradientTape() as t:
        t.watch(x)
        out = f(x, y)
    return t.gradient(out, x)

x = tf.constant(2.0)

print(grad(x, 6).numpy())  # 12.0
print(grad(x, 5).numpy())  # 12.0
print(grad(x, 4).numpy())  # 4.0
```
## Higher-order gradients

Operations inside of the `GradientTape` context manager are recorded for automatic differentiation. If gradients are computed in that context, then the gradient computation is recorded as well. As a result, the exact same API works for higher-order gradients as well. For example:
```
x = tf.Variable(1.0)  # Create a Tensorflow variable initialized to 1.0

with tf.GradientTape() as t:
    with tf.GradientTape() as t2:
        y = x * x * x
    # Compute the gradient inside the 't' context manager
    # which means the gradient computation is differentiable as well.
    dy_dx = t2.gradient(y, x)
d2y_dx2 = t.gradient(dy_dx, x)

print("dy_dx:", dy_dx.numpy())      # 3.0
print("d2y_dx2:", d2y_dx2.numpy())  # 6.0

def f(x):
    return tf.square(tf.sin(x))

x = tf.Variable(3.1415/2)
with tf.GradientTape() as t:
    y = f(x)
t.gradient(y, x)

import numpy as np
import matplotlib.pyplot as plt

x = tf.constant(np.linspace(0, 2*np.pi, 100))
with tf.GradientTape() as t:
    t.watch(x)
    y = f(x)
g = t.gradient(y, x)

plt.plot(x.numpy(), g.numpy(), x.numpy(), y.numpy())
```
# Canny edge

Canny, John. "A computational approach to edge detection." IEEE Transactions on pattern analysis and machine intelligence 6 (1986): 679-698. (27000+ citations)

Canny:
- assume noisy step edges
- construct an edge detector using an optimal linear filter

This is actually a simple neural network...
```
# package loading, helper functions
import os
import pickle

import numpy as np
import scipy.ndimage as ndi
import sklearn
import tensorflow.keras
import tensorflow.keras.backend as K
import tensorflow as tf
import matplotlib.pyplot as plt

#Source("digraph {X->Y; Y->Z;}")
from graphviz import Source
from scipy.stats import kde
from sklearn import decomposition

def tshow(v, ax=None, **keys):
    if isinstance(v, tf.Tensor):
        v = v.numpy()  # pull the tensor back into a NumPy array
    if v.ndim == 1:
        v = v.reshape(28, 28)
    if v.ndim == 3 and v.shape[0] == 1:
        v = v[0]
    if v.ndim == 3 and v.shape[2] == 1:
        v = v[:, :, 0]
    if v.ndim == 4:
        v = v[0, 0]
    v = v - amin(v)
    v /= amax(v)
    if ax is not None:
        ax.imshow(v, **keys)
    else:
        imshow(v, **keys)

def showrow(*args, **kw):
    figsize(*kw.get("fs", (15, 5)))
    if "fs" in kw:
        del kw["fs"]
    for i, im in enumerate(args):
        subplot(1, len(args), i+1)
        tshow(im, **kw)

def showgrid(images, rows=4, cols=4, cmap=plt.cm.gray, size=(7, 7)):
    if size is not None:
        figsize(*size)
    for i in range(rows*cols):
        subplot(rows, cols, i+1)
        xticks([])
        yticks([])
        tshow(images[i], cmap=cmap)
```
The Canny edge detection algorithm can be broken down into 5 steps:
- Apply Gaussian filter to smooth the image in order to
remove the noise
- Find the intensity gradients of the image
- Apply non-maximum suppression to get rid of spurious responses to edge detection
- Apply double threshold to determine potential edges
- Track edges by hysteresis: Finalize the detection of edges by suppressing all the other edges that are weak and not connected to strong edges.

```
length = 50
pos = np.random.randint(length, size=10000)
signal = np.random.rand(10000, length) / 5
gt = np.zeros((10000, length))
for j, i in enumerate(pos):
    signal[j, i:] = signal[j, i:] + 0.5
    gt[j, i] = 1

figure(figsize=(14,4))
plt.plot(signal[15])
plt.plot(gt[15])

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Flatten
from tensorflow.keras.optimizers import SGD

# Canny model
model = Sequential()
model.add(Conv1D(1, 33, input_shape=(length, 1), activation="sigmoid", padding='same'))
model.add(Flatten())
model.compile(SGD(lr=.1, momentum=0.9, decay=1e-5), loss='categorical_crossentropy')
model.summary()

K.set_value(model.optimizer.lr, 1e-3)
model.fit(x=np.expand_dims(signal, 2), y=gt, batch_size=100, epochs=100)  # (n, length, 1)

figure(figsize=(14,4))
ylim(ymax=1)
a = 128
plt.plot(signal[a,:].reshape((length)))
plt.plot(gt[a,:], '-.')
plt.plot(model.predict(signal[a,:].reshape((1,length,1))).reshape(length))
```
## Canny kernel
```
plt.plot(model.get_layer(index=0).get_weights()[0].flatten())
```
There is a large and rich theory of linear filters in signal processing and image processing:
- mathematical properties: superposition, decomposition, impulse response
- frequency domain analysis
- optimal design for given tasks

NB: "optimal linear" is not the same as "optimal"

# No free lunch theorem!

A model, a family of functions, a neural net architecture+regularizer, a likelihood model+prior, are all forms of inductive bias. And we know that there is no free lunch, meaning that:
- There is no learning without inductive bias.
- There is no neural net training without an architecture.
- There is no statistical estimation without a likelihood model.
- There is no non-parametric estimation without regularization.

Without some sort of inductive bias:
- There is no estimation of a probability distribution.
- There is no estimation of entropy.
- There is no estimation of mutual information.
- There is no estimation of conditional independence.
- There is no measure of complexity.
- There is no estimation of minimum description length.

Which means that none of these things are well-defined quantities (except perhaps in the asymptotic case of infinite data. But who cares about that). The estimation of all of these quantities is subjectively dependent upon your choice of model.

You may say: "the entropy of my data is well defined. It's `H = -SUM_x P(x) log P(x)`." Yes, but what is `P(x)`? You only know `P(x)` through a bunch of samples. Which means you need to estimate a model of `P(x)` from your data. Which means your model will necessarily have some sort of inductive bias, some sort of arbitrariness in it.

Ultimately, all measures of distributions, information, entropy, complexity and dependency are in the eye of the beholder. The subjectivity of those quantities also exists when applied to physical systems. The entropy of a physical system is also in the eyes of the beholder.
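The point about entropy being model-dependent can be made concrete with a small sketch (mine, not the author's): the same samples produce different entropy estimates under two different histogram models of `P(x)`, because the bin width is part of the model. The bin counts below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 5000)

def hist_entropy(x, bins):
    # Plug-in estimate of differential entropy from a histogram model of P(x):
    # discrete entropy of the bin probabilities plus log(bin width).
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    width = edges[1] - edges[0]
    return -(p * np.log(p)).sum() + np.log(width)

h_coarse = hist_entropy(samples, bins=5)
h_fine = hist_entropy(samples, bins=500)
print(h_coarse, h_fine)  # two different numbers for the same data
```

Neither number is "the" entropy of the data; each is the entropy of a particular model fitted to the data.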
```
from tensorflow.keras.layers import BatchNormalization, Activation, ZeroPadding1D

# DL model
model = Sequential()
model.add(ZeroPadding1D(input_shape=(length, 1), padding=(8,1)))
model.add(Conv1D(4, 7))
model.add(BatchNormalization(momentum=0.1))
model.add(Activation('relu'))
model.add(ZeroPadding1D(padding=(2,1)))
model.add(Conv1D(1, 7))
model.add(Flatten())
model.add(Activation('sigmoid'))
model.compile(SGD(lr=.1, momentum=0.9, decay=1e-5), loss='categorical_crossentropy')
model.summary()

K.set_value(model.optimizer.lr, 1e-3)
model.fit(x=np.expand_dims(signal, 2), y=gt, batch_size=200, epochs=50)

figure(figsize=(14,4))
ylim(ymax=1)
a = 10
plt.plot(signal[a,:].reshape((length)))
plt.plot(gt[a,:], '-.')
plt.plot(model.predict(signal[a,:].reshape((1,length,1))).reshape(length))

for i, l in enumerate(model.layers):
    print(i, l.name)
```
## How does it work?
```
figure(figsize=(14,4))
x = signal[a,:].reshape((1,length,1))
layer = model.layers[1]
f = K.function([model.get_input_at(0)], [layer.get_output_at(0)])
plt.plot(f([x])[0][0,:,:])  # your activation tensor
plt.plot(gt[a,:], '-.')

figure(figsize=(14,6))
layer = model.layers[2]
f = K.function([model.get_input_at(0)], [layer.get_output_at(0)])
plt.plot(f([x])[0][0,:,:])  # your activation tensor
plt.plot(gt[a,:], '-.')

figure(figsize=(14,4))
layer = model.layers[5]
f = K.function([model.get_input_at(0)], [layer.get_output_at(0)])
plt.plot(f([x])[0][0,:,:])  # your activation tensor
plt.plot(gt[a,:], '-.')
```
- Zero padding on the boundary creates spurious edge responses.
- DL can automatically suppress these for you (beware when benchmarking)
- How would you pick weights to get rid of such spurious responses?
```
# DL model
model = Sequential()
model.add(ZeroPadding1D(input_shape=(length, 1), padding=(8,1)))
model.add(Conv1D(16, 7))
model.add(BatchNormalization(momentum=0.1))
model.add(Activation('relu'))
model.add(ZeroPadding1D(padding=(2,1)))
model.add(Conv1D(1, 7))
model.add(Flatten())
model.add(Activation('sigmoid'))
model.compile(SGD(lr=1e-2, momentum=0.9, decay=1e-5), loss='categorical_crossentropy')

model.fit(x=np.expand_dims(signal, 2), y=gt, batch_size=200, epochs=50)

figure(figsize=(18,6))
x = np.zeros(length)
x[int(length/2):] = 1
x = x.reshape((1,length,1))
layer = model.layers[4]
f = K.function([model.get_input_at(0)], [layer.get_output_at(0)])
plt.plot(f([x])[0][0,6:,:])  # your activation tensor
plt.plot(7*x.flatten(), '-.', linewidth=4)
```
- DL discovered another neat trick: Instead of a single edge localizer, train two localization filters in the first layer (plus multiple suppression filters again).
- Each localization filter is offset from the desired peak by one pixel.
- The padding on the second convolutional layer means that the spurious edges on the boundary are only surrounded by one peak.

# SVM

Let's try to implement a standard L2-regularized support vector machine (SVM)

![svm](figures/L.png)

where:

![](figures/D.png)

```
from sklearn import svm
from tensorflow.keras.layers import Input, Dense, Activation
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import L1L2
import tensorflow.keras.backend as K

def make_meshgrid(x, y, h=.02):
    """Create a mesh of points to plot in

    Parameters
    ----------
    x: data to base x-axis meshgrid on
    y: data to base y-axis meshgrid on
    h: stepsize for meshgrid, optional

    Returns
    -------
    xx, yy : ndarray
    """
    x_min, x_max = x.min() - 1, x.max() + 1
    y_min, y_max = y.min() - 1, y.max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
    return xx, yy

def plot_contours(ax, clf, xx, yy, **params):
    """Plot the decision boundaries for a classifier.
    Parameters
    ----------
    ax: matplotlib axes object
    clf: a classifier
    xx: meshgrid ndarray
    yy: meshgrid ndarray
    params: dictionary of params to pass to contourf, optional
    """
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) > .5
    Z = Z.reshape(xx.shape)
    out = ax.contourf(xx, yy, Z, **params)
    return out

rng = np.random.RandomState(0)
n_samples_1 = 1000
n_samples_2 = 1000
X = np.r_[1. * rng.randn(n_samples_1, 2),
          1. * rng.randn(n_samples_2, 2) + [2, 2]]
y = [0] * (n_samples_1) + [1] * (n_samples_2)

# we create an instance of SVM and fit our data. We do not scale our
# data since we want to plot the support vectors
C = 1.0  # SVM regularization parameter
model = svm.LinearSVC(C=C)
clf = model.fit(X, y)

X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)

plot_contours(plt, clf, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
plt.show()

def hinge(y_true, y_pred):
    # The labels arrive as {0, 1}; the hinge loss needs them in {-1, +1}.
    return K.mean(K.maximum(1. - (2. * y_true - 1.) * y_pred, 0.), axis=-1)

l2 = 1/float(C)
inputs = Input(shape=(2,))
x = Dense(1, activation=None, kernel_regularizer=L1L2(l1=0, l2=l2))(inputs)
predictions = Activation('linear')(x)
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='sgd', loss=hinge, metrics=['accuracy'])
model.summary()  # Wx + b

K.set_value(model.optimizer.lr, 1e-3)
history = model.fit(x=X, y=np.array(y), batch_size=100, epochs=100, shuffle=True, validation_split=.3)

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right');

plot_contours(plt, model, xx, yy, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X0, X1, c=y, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
```
# IIR Filters and their DL Equivalents
```
# simple IIR filter
def simple_iir(xs):
    value = xs[0]
    output = zeros(len(xs))
    for i in range(len(xs)):
        output[i] = value
        value = 0.7 * value + 0.3 * xs[i]
    return output

xs = rand(100) * (rand(100) < 0.1)
figure(figsize=(14,5))
plot(xs)
plot(simple_iir(xs))
```
## IIR Filters

IIR filters are simple linear filters. Unlike FIR filters, the output is a linear function of both inputs and past output values. IIR filters can approximate FIR filters well. IIR filters are the linear equivalent of *recurrent neural networks*.

```
inputs = array([(rand(200) < 0.1).astype('f') for i in range(10000)]).reshape(-1, 1, 200)
outputs = array([4.0*ndi.gaussian_filter(s, 3.0) for s in inputs]).reshape(-1, 1, 200)

figure(figsize=(14,5))
plot(inputs[0,0]); plot(outputs[0,0])
```
### Simple NN
```
length = 200

model = Sequential()
model.add(Conv1D(1, 33, input_shape=(length, 1), activation="sigmoid", padding='same'))
model.add(Flatten())
model.compile(SGD(lr=.1, momentum=0.9, decay=1e-5), loss='mse')
model.summary()

#K.set_value(model.optimizer.lr, 1e-2)
model.fit(x=inputs.reshape((-1, 200, 1)), y=outputs.reshape((-1, 200)), batch_size=100, epochs=100)

pred = model.predict(inputs.reshape((-1, 200, 1)))
figure(figsize=(15,5))
plot(inputs[0,0])
plot(pred[0], linewidth=2, color="green")
plot(outputs[0,0], linewidth=8, alpha=0.3, color="red")
```
### Recurrent neural network - LSTM
```
from tensorflow.keras.layers import LSTM, Bidirectional

length = 200

model = Sequential()
model.add(Bidirectional(LSTM(4, input_shape=(length, 1), activation="sigmoid", return_sequences=True)))
model.add(Conv1D(1, 8, padding='same'))
model.compile(SGD(lr=.1, momentum=0.9, decay=1e-5), loss='mse')
model.build(input_shape=(None, 200, 1))
model.summary()

K.set_value(model.optimizer.lr, 1e-1)
model.fit(x=inputs.reshape((-1, 200, 1)), y=outputs.reshape((-1, 200, 1)), batch_size=1000, epochs=15)

pred = model.predict(inputs.reshape((-1, 200, 1)))
figure(figsize=(15,5))
plot(inputs[0,0])
plot(pred[0], linewidth=2, color="green")
plot(outputs[0,0], linewidth=8, alpha=0.3, color="red")
```
# Handsome input pipelines with the `tf.data` API

The
`tf.data` API offers functions for data pipelining and related operations. We can build pipelines, map preprocessing functions, shuffle or batch a dataset and much more.

## From tensors
```
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
iter(dataset).next().numpy()
```
## Batch and Shuffle
```
# Shuffle
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).shuffle(6)
iter(dataset).next().numpy()

# Batch
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).batch(2)
iter(dataset).next().numpy()

# Shuffle and Batch
dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).shuffle(6).batch(2)
iter(dataset).next().numpy()
```
## Zipping Two Datasets
```
dataset0 = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1])
dataset1 = tf.data.Dataset.from_tensor_slices([1, 2, 3, 4, 5, 6])
dataset = tf.data.Dataset.zip((dataset0, dataset1))
iter(dataset).next()
```
## Mapping External Functions
```
def into_2(num):
    return num * 2

dataset = tf.data.Dataset.from_tensor_slices([8, 3, 0, 8, 2, 1]).map(into_2)
```
## ImageDataGenerator

This is one of the best features of the `tensorflow.keras` API (in my opinion). The `ImageDataGenerator` is capable of generating dataset slices while batching and preprocessing along with data augmentation in real-time. The Generator allows data flow directly from directories or from dataframes.

A misconception about data augmentation in `ImageDataGenerator` is that it adds more data to the existing dataset. Although that is the actual definition of data augmentation, in `ImageDataGenerator`, the images in the dataset are transformed dynamically at different steps in training so that the model can be trained on noisy data it hasn't seen.
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1./255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True
)
```
Here, the rescaling is done on all the samples (for normalizing), while the other parameters are for augmentation.
```
train_generator = train_datagen.flow_from_directory(
    'data/train',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary'
)
```
# Custom Layers

Neural nets are known for many-layer deep networks wherein the layers can be of different types. TensorFlow contains many predefined layers (like Dense, LSTM, etc.). But for more complex architectures, the logic of a layer can be much more complex than that of a basic layer. For such instances, TensorFlow allows building custom layers. This can be done by subclassing the tf.keras.layers.Layer class.
```
class CustomDense(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(CustomDense, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        self.kernel = self.add_weight(
            "kernel", shape=[int(input_shape[-1]), self.num_outputs]
        )

    def call(self, input):
        return tf.matmul(input, self.kernel)
```
As stated in the documentation, the best way to implement your own layer is extending the `tf.keras.layers.Layer` class and implementing:
- `__init__`, where you can do all input-independent initialization.
- `build`, where you know the shapes of the input tensors and can do the rest of the initialization.
- `call`, where you do the forward computation.

Although the kernel initialization can be done in `__init__` itself, it is considered better to initialize it in `build`, as otherwise you would have to explicitly specify the `input_shape` on every instantiation of a new layer.

# Custom Training

The `tf.keras` Sequential and Model APIs make training models easier. However, most of the time while training complex models, custom loss functions are used.
Moreover, the model training can also differ from the default (e.g. applying gradients separately to different model components). TensorFlow's automatic differentiation helps calculate gradients efficiently. These primitives are used in defining custom training loops.

```
def train(model, inputs, outputs, learning_rate):
    with tf.GradientTape() as t:
        # Computing Losses from Model Prediction
        current_loss = loss(outputs, model(inputs))

    # Gradients for Trainable Variables with Obtained Losses
    dW, db = t.gradient(current_loss, [model.W, model.b])

    # Applying Gradients to Weights
    model.W.assign_sub(learning_rate * dW)
    model.b.assign_sub(learning_rate * db)
```

This loop can be repeated for multiple epochs and with a more customised setting as per the use case.

# Autoencoders

Autoencoders predict the input, given that same input.

\begin{equation}
F_{W,b}(x) \approx x
\end{equation}

The goal is to learn a compressed representation of the data and thus find structure. This can be done by limiting the number of hidden units in the model. Such autoencoders are called *undercomplete*.
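The undercomplete idea can be sketched without any framework: below is a plain-numpy linear autoencoder with a 2-unit bottleneck on 8-dimensional data. All sizes and the toy data are hypothetical, and constant factors of the MSE gradient are folded into the learning rate.

```python
import numpy as np

# Toy data with built-in structure: half the columns duplicate the other half,
# so a 2-unit bottleneck can still capture part of the variance.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X[:, :4] = X[:, 4:]

W_enc = rng.normal(scale=0.1, size=(8, 2))  # encoder weights (8 -> 2 bottleneck)
W_dec = rng.normal(scale=0.1, size=(2, 8))  # decoder weights (2 -> 8)
lr = 0.05

for _ in range(2000):            # plain gradient steps on the reconstruction error
    H = X @ W_enc                # code (bottleneck activations)
    X_hat = H @ W_dec            # reconstruction, F(x) ~ x
    err = X_hat - X
    W_dec -= lr * H.T @ err / len(X)
    W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(round(mse, 3))             # well below the ~1.0 error of predicting zeros
```

Because the bottleneck (2 units) is narrower than the input (8), the network cannot simply copy its input; it is forced to learn a compressed representation.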
```
from typing import List, Union

import numpy as np
```

# Lasso just adds L1 regularization

![image.png](attachment:image.png)

```
class LinRegLasso:
    def _init(self, n_feat: int) -> None:
        self.weights = np.ones((n_feat + 1))  # (n_feat + 1,) weights + bias

    def predict(self, feature_vector: Union[np.ndarray, List[int]]) -> float:
        '''
        feature_vector may be a list or have shape (n_feat,)
        or it may be a bunch of vectors (n_vec, n_feat)
        '''
        feature_vector = np.array(feature_vector)
        assert feature_vector.shape[-1] == self.weights.shape[0] - 1
        if len(feature_vector.shape) == 1:
            feature_vector = feature_vector[np.newaxis,]
        return self.weights @ np.concatenate((feature_vector.T, [[1]*feature_vector.shape[0]]))

    def mse(self, X, Y):
        Y_hat = self.predict(X)
        return np.sum((Y - Y_hat)**2) / Y.shape[0]

    def _update_weights(self, X, Y, lr, wd):
        '''
        X: (n_samples, n_features)
        Y: (n_samples,)
        self.weights: (n_features + 1)

        Cost function is MSE: (y - W*X - b)**2;
        its derivative with respect to any weight w is -2*X*(y - W*X - b),
        and with respect to the bias b is -2*(y - W*X - b).
        Regularisation function is L1 |W|; its derivative is SIGN(w)
        '''
        predictions = self.predict(X)
        error = Y - predictions  # (n_samples,)
        X_with_bias = np.concatenate((X.T, [[1]*X.shape[0]])).T
        updates = -2 * X_with_bias.T @ error / Y.shape[0]
        regularisation_term = np.sign(self.weights)

        self.weights -= lr * updates + wd * regularisation_term

    def fit(self, X, Y, n_epochs: int, lr: float, wd: float):
        self._init(X.shape[-1])
        for i in range(n_epochs):
            self._update_weights(X, Y, lr, wd)
            mse = self.mse(X, Y)
            print(f'epoch: {i}, \t MSE: {mse}')

import random

import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 10

# Define input array with angles from 60deg to 300deg converted to radians
x = np.array([i*np.pi/180 for i in range(60,300,4)])
np.random.seed(10)  # Setting seed for reproducibility
y = np.sin(x) + np.random.normal(0,0.15,len(x))
plt.plot(x,y,'.')

lrs = LinRegLasso()
alpha = 0.0001
lr = 0.08
epochs = 100

lrs.fit(x[:,np.newaxis],y, epochs, lr, alpha)
lrs.weights
```

# Just to check the result we can use scikit-learn

The difference is most likely explained by the solvers: scikit-learn's `Lasso` uses coordinate descent rather than the fixed-learning-rate gradient steps above.

```
from sklearn.linear_model import Lasso

lassoreg = Lasso(alpha=alpha, max_iter=epochs)
lassoreg.fit(x[:,np.newaxis],y)
y_pred = lassoreg.predict(x[:,np.newaxis])

y_hat = lrs.predict(x[:,np.newaxis])

plt.plot(x,y,'.')
plt.plot(x,y_hat,'.')
plt.plot(x,y_pred,'.')
```
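Coordinate-descent Lasso solvers like scikit-learn's handle the L1 term through soft-thresholding rather than a sign-based gradient step, which is what lets them drive small weights to exactly zero. The operator itself is easy to sketch (illustrative only, not part of the class above):

```python
import numpy as np

def soft_threshold(w, t):
    # Proximal operator of t*|w|: shrink every weight toward zero by t,
    # and clip anything that crosses zero to exactly zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([-1.5, -0.05, 0.0, 0.2, 2.0])
print(soft_threshold(w, 0.1))  # weights inside [-0.1, 0.1] become exactly 0
```

This exact-zeroing is why Lasso is often described as performing feature selection.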
# Newton's Method for Logistic Regression - The Math of Intelligence (Week 2)

![alt text](https://plot.ly/~florianh/140/logistic-regression-1-feature.png "Logo Title Text 1")

## Our Task

We're going to compute the probability that someone has diabetes given their height, weight, and blood pressure. We'll generate this data ourselves (toy data), plot it, learn a logistic regression curve using Newton's Method for optimization, then use that curve to predict the probability that someone new with these 3 features has diabetes. We'll use calculus, probability theory, statistics, and linear algebra to do this. Get ready, ish is about to go down.

## What is Logistic regression?

Logistic regression is named for the function used at the core of the method, the logistic function. In linear regression, the outcome (dependent variable) is continuous: it can take any one of an infinite number of possible values. In logistic regression, the outcome (dependent variable) has only a limited number of possible values. Logistic regression is used when the response variable is categorical in nature.

The logistic function, also called the sigmoid function, is an S-shaped curve that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits.

![alt text](https://qph.ec.quoracdn.net/main-qimg-05edc1873d0103e36064862a45566dba "Logo Title Text 1")

Here e is the base of the natural logarithms (Euler's number, or the EXP() function in your spreadsheet) and value is the actual numerical value that you want to transform. e is a really convenient number for math: whenever you take the derivative of e^x (that's e to the x), you get e^x back again; up to constant multiples, it's the only function with that property.

Logistic regression uses an equation as the representation, similar to linear regression.
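The squashing behaviour described above is easy to check numerically; a quick sketch:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# The sigmoid maps any real value into (0, 1): large negative inputs approach 0
# and large positive inputs approach 1, without ever reaching either limit.
for v in (-10.0, -1.0, 0.0, 1.0, 10.0):
    print(f"{v:+5.1f} -> {sigmoid(v):.4f}")
```

Note the symmetry around zero: sigmoid(0) is exactly 0.5, and sigmoid(-x) = 1 - sigmoid(x).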
The central premise of logistic regression is the assumption that your input space can be separated into two nice 'regions', one for each class, by a linear (read: straight) boundary. Your data must be linearly separable in n dimensions.

![alt text](https://codesachin.files.wordpress.com/2015/08/linearly_separable_4.png "Logo Title Text 1")

So if we had the following function

![alt text](https://s0.wp.com/latex.php?latex=%5Cbeta_0+%2B+%5Cbeta_1+x_1+%2B+%5Cbeta_2+x_2&bg=ffffff&fg=555555&s=0&zoom=2 "Logo Title Text 1")

Given some point (a,b), if we plugged it in, the equation could output a positive result (for one class), a negative result (for the other class), or 0 (the point lies right on the decision boundary). So we have a function that outputs a value in (-infinity, +infinity) given an input data point. But how do we map this to the probability P_+, which lives in [0, 1]? The answer is the odds function.

![alt text](http://i.imgur.com/bz8XqI8.png "Logo Title Text 1")

Let $P(X)$ denote the probability of an event X occurring. In that case, the odds ratio OR(X) is defined by

![alt text](https://s0.wp.com/latex.php?latex=%5Cfrac%7BP%28X%29%7D%7B1-P%28X%29%7D&bg=ffffff&fg=555555&s=0&zoom=2 "Logo Title Text 1")

which is essentially the ratio of the probability of the event happening vs. it not happening. Probability and odds convey the exact same information, but as $P(X)$ goes from 0 to 1, OR(X) goes from 0 to infinity.

Input values (x) are combined linearly using weights or coefficient values (referred to as the Greek capital letter Beta) to predict an output value (y).
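The odds mapping above can be verified numerically: p = 0.5 corresponds to even odds (1.0), and the odds grow without bound as p approaches 1. A small sketch:

```python
import numpy as np

p = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
odds = p / (1 - p)        # maps probabilities in [0, 1) onto [0, infinity)
log_odds = np.log(odds)   # maps (0, 1) onto (-infinity, infinity); 0 at p = 0.5
for pi, oi, li in zip(p, odds, log_odds):
    print(f"p={pi:.2f}  odds={oi:8.4f}  log-odds={li:+.4f}")
```

Taking the logarithm makes the mapping symmetric: complementary probabilities (0.01 and 0.99) give log-odds of equal magnitude and opposite sign.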
A key difference from linear regression is that the output value being modeled is a binary value (0 or 1, i.e. discrete) rather than a numeric value (continuous).

![alt text](https://image.slidesharecdn.com/ihcclogisticregression-130728061733-phpapp02/95/logistic-regression-in-casecontrol-study-14-638.jpg?cb=1374992365 "Logo Title Text 1")

However, we are still not quite there yet, since our boundary function gives a value from -infinity to infinity. So what we do is take the logarithm of OR(X), called the log-odds function. Mathematically, as OR(X) goes from 0 to infinity, log(OR(X)) goes from -infinity to infinity.

We are modeling the probability that an input (X) belongs to the default class (Y=1). The probability prediction must be transformed into a binary value (0 or 1) in order to actually make a class prediction. Logistic regression is a linear method, but the predictions are transformed using the logistic function. The impact of this is that we can no longer understand the predictions as a linear combination of the inputs as we can with linear regression.

Wait, how is the boundary function computed? Well, we want to maximize the likelihood that a random data point gets classified correctly. We call this maximum likelihood estimation. Maximum likelihood estimation is a general approach to estimating parameters in statistical models by maximizing the likelihood function. In deep networks, the gradients needed for maximizing the likelihood are computed with backpropagation. MLE is defined as L(θ|X) = f(X|θ).

Newton's Method is an optimization algorithm. You can use this algorithm to find the maximum (or minimum) of many different functions, including the likelihood function. You can obtain maximum likelihood estimates using different methods, and using an optimization algorithm is one of them.

## Why use Newton's Method for optimizing?

- Newton's method usually converges faster than gradient descent when maximizing the logistic regression log likelihood.
- Each iteration is more expensive than a gradient descent iteration because of calculating the inverse of the Hessian.
- As long as the dataset is not very large, Newton's method is preferred.

## What are some other good examples of logistic regression + Newton's method?

1. https://github.com/hhl60492/newton_logistic/blob/master/main.py (Spam Classification)
2. https://github.com/yangarbiter/logistic_regression_newton-cg (Click Through Rate Classification)

```
%matplotlib inline

#matrix math
import numpy as np
#data manipulation
import pandas as pd
#matrix data structure
from patsy import dmatrices
#for error logging
import warnings
```

<a name="data_setup"></a>
## Setup

### Parameter / Data Setup

In the below cells, there are various parameters and options to play with involving data creation, algorithm settings, and what model you want to try and fit.

```
#outputs probability between 0 and 1, used to help define our logistic regression curve
def sigmoid(x):
    '''Sigmoid function of x.'''
    return 1/(1+np.exp(-x))
```

![alt text](http://i.imgur.com/TfPVnME.png "Logo Title Text 1")

```
#makes the random numbers predictable
#(pseudo-)random numbers work by starting with a number (the seed),
#multiplying it by a large number, then taking modulo of that product.
#The resulting number is then used as the seed to generate the next "random" number.
#When you set the seed (every time), it does the same thing every time, giving you the same numbers.
#good for reproducing results for debugging

np.random.seed(0) # set the seed

##Step 1 - Define model parameters (hyperparameters)

## algorithm settings
#the minimum threshold for the difference between the predicted output and the actual output
#this tells our model when to stop learning, when our prediction capability is good enough
tol=1e-8 # convergence tolerance

lam = None # l2-regularization
#how long to train for?
max_iter = 20 # maximum allowed iterations

## data creation settings
#Covariance measures how two variables move together.
#It measures whether the two move in the same direction (a positive covariance)
#or in opposite directions (a negative covariance).
r = 0.95 # covariance between x and z
n = 1000 # number of observations (size of dataset to generate)
sigma = 1 # variance of noise - how spread out is the data?

## model settings
beta_x, beta_z, beta_v = -4, .9, 1 # true beta coefficients
var_x, var_z, var_v = 1, 1, 4 # variances of inputs

## the model specification you want to fit
formula = 'y ~ x + z + v + np.exp(x) + I(v**2 + z)'

## Step 2 - Generate and organize our data
#The multivariate normal, multinormal or Gaussian distribution is a generalization of the one-dimensional normal
#distribution to higher dimensions. Such a distribution is specified by its mean and covariance matrix.
#so we generate input values - (x, v, z) using normal distributions

#A probability distribution is a function that provides us the probabilities of all
#possible outcomes of a stochastic process.

#lets keep x and z closely related (height and weight)
x, z = np.random.multivariate_normal([0,0], [[var_x,r],[r,var_z]], n).T
#blood pressure
v = np.random.normal(0,var_v,n)**3

#create a pandas dataframe (easily parseable object for manipulation)
A = pd.DataFrame({'x' : x, 'z' : z, 'v' : v})

#compute the probability of y=1 by passing the noisy linear combination (the log-odds)
#of our 3 independent variables through the sigmoid function
#(note: despite its name, the 'log_odds' column therefore stores probabilities)
A['log_odds'] = sigmoid(A[['x','z','v']].dot([beta_x,beta_z,beta_v]) + sigma*np.random.normal(0,1,n))

#sample y from a binomial distribution with this probability
#A binomial random variable is the number of successes in n repeated trials of a binomial experiment.
#The probability distribution of a binomial random variable is called a binomial distribution.
A['y'] = [np.random.binomial(1,p) for p in A.log_odds]

#create a dataframe that encompasses our input data, model formula, and outputs
y, X = dmatrices(formula, A, return_type='dataframe')

#print it
X.head(10)
```

<a name="algorithms"></a>
<hr>

### Algorithm Setup

We begin with a quick function for catching singular matrix errors that we will use to decorate our Newton steps.

```
#like dividing by zero (Wtff omgggggg universe collapses)
def catch_singularity(f):
    '''Silences LinAlg Errors and throws a warning instead.'''
    def silencer(*args, **kwargs):
        try:
            return f(*args, **kwargs)
        except np.linalg.LinAlgError:
            warnings.warn('Algorithm terminated - singular Hessian!')
            return args[0]
    return silencer
```

<a name="newton"></a>
<hr>

### Explanation of a Single Newton Step

Recall that Newton's method for maximizing / minimizing a given function $f(\beta)$ iteratively computes the following estimate:

$$ \beta^+ = \beta - Hf(\beta)^{-1}\nabla f(\beta) $$

The Hessian of the log-likelihood for logistic regression is the negative of $X^T$ (with $X$ the $N \times (p+1)$ design matrix) times the $N \times N$ diagonal matrix of weights $p(1-p)$, times $X$ again:

$$ Hf(\beta) = -X^TWX $$

and the gradient is $X^T$ times the residual, i.e. the $N$-vector of observed responses minus predicted probabilities:

$$ \nabla f(\beta) = X^T(y-p) $$

where

$$ W := \text{diag}\left(p(1-p)\right) $$

and $p$ are the predicted probabilities computed at the current value of $\beta$.
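To see these formulas in action before wiring them into the decorated step functions, here is a self-contained sketch (toy data, independent of the dataframe above) iterating beta+ = beta + (X^T W X)^{-1} X^T (y - p):

```python
import numpy as np

def sigmoid_(z):
    return 1 / (1 + np.exp(-z))

# Hypothetical toy problem: intercept plus one feature, labels drawn
# from the logistic model with a known coefficient vector.
rng = np.random.default_rng(0)
X_toy = np.c_[np.ones(100), rng.normal(size=100)]
true_beta = np.array([0.5, -1.0])
y_toy = rng.binomial(1, sigmoid_(X_toy @ true_beta)).astype(float)

beta_toy = np.zeros(2)
for _ in range(10):                       # Newton converges in a handful of steps
    p = sigmoid_(X_toy @ beta_toy)
    W = np.diag(p * (1 - p))              # W = diag(p(1-p))
    grad = X_toy.T @ (y_toy - p)          # gradient of the log-likelihood
    beta_toy = beta_toy + np.linalg.solve(X_toy.T @ W @ X_toy, grad)

print(beta_toy)  # roughly recovers true_beta
```

Solving the linear system with `np.linalg.solve` plays the role of multiplying by the inverse (negated) Hessian without forming it explicitly.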
### Connection to Iteratively Reweighted Least Squares (IRLS)

*For logistic regression, this step is actually equivalent to computing a weighted least squares estimator at each iteration!* The method of least squares is about estimating parameters by minimizing the squared discrepancies between observed data on the one hand and their expected values on the other.

I.e.,

$$ \beta^+ = \arg\min_\beta (z-X\beta)^TW(z-X\beta) $$

with $W$ as before and the *adjusted response* $z$ given by

$$ z := X\beta + W^{-1}(y-p) $$

**Takeaway:** This is fun, but in fact it can be leveraged to derive asymptotics and inferential statistics about the computed MLE $\beta^*$!

### Our implementations

Below we implement a single step of Newton's method, and we compute $Hf(\beta)^{-1}\nabla f(\beta)$ using `np.linalg.lstsq(A,b)` to solve the equation $Ax = b$. Note that this does not require us to compute the actual inverse of the Hessian.

```
@catch_singularity
def newton_step(curr, X, lam=None):
    '''One naive step of Newton's Method'''
    #how to compute inverse? http://www.mathwarehouse.com/algebra/matrix/images/square-matrix/inverse-matrix.gif

    ## compute necessary objects
    #create probability matrix, minimum 2 dimensions, transpose (flip it)
    p = np.array(sigmoid(X.dot(curr[:,0])), ndmin=2).T
    #create weight matrix from it
    W = np.diag((p*(1-p))[:,0])
    #derive the hessian
    hessian = X.T.dot(W).dot(X)
    #derive the gradient
    grad = X.T.dot(y-p)

    ## regularization step (avoiding overfitting)
    if lam:
        # Return the least-squares solution to a linear matrix equation
        step, *_ = np.linalg.lstsq(hessian + lam*np.eye(curr.shape[0]), grad, rcond=None)
    else:
        step, *_ = np.linalg.lstsq(hessian, grad, rcond=None)

    ## update our beta
    beta = curr + step

    return beta
```

Next, we implement Newton's method in a *slightly* different way; this time, at each iteration, we actually compute the full inverse of the Hessian using `np.linalg.inv()`.
```
@catch_singularity
def alt_newton_step(curr, X, lam=None):
    '''One step of Newton's Method, using explicit matrix inversion.'''

    ## compute necessary objects
    p = np.array(sigmoid(X.dot(curr[:,0])), ndmin=2).T
    W = np.diag((p*(1-p))[:,0])
    hessian = X.T.dot(W).dot(X)
    grad = X.T.dot(y-p)

    ## regularization
    if lam:
        #Compute the inverse of a matrix.
        step = np.dot(np.linalg.inv(hessian + lam*np.eye(curr.shape[0])), grad)
    else:
        step = np.dot(np.linalg.inv(hessian), grad)

    ## update our weights
    beta = curr + step

    return beta
```

<a name="conv"></a>
<hr>

### Convergence Setup

First we implement coefficient convergence; this stopping criterion results in convergence whenever

$$ \|\beta^+ - \beta\|_\infty < \epsilon $$

where $\epsilon$ is a given tolerance.

```
def check_coefs_convergence(beta_old, beta_new, tol, iters):
    '''Checks whether the coefficients have converged in the l-infinity norm.
    Returns True if they have converged, False otherwise.'''
    #calculate the change in the coefficients
    coef_change = np.abs(beta_old - beta_new)

    #if change hasn't reached the threshold and we have more iterations to go, keep training
    return not (np.any(coef_change>tol) & (iters < max_iter))
```

<a name="numerics"></a>
<hr>

## Numerical Examples

### Standard Newton with Coefficient Convergence

```
## initial conditions
#initial coefficients (weight values), 2 copies, we'll update one
beta_old, beta = np.ones((len(X.columns),1)), np.zeros((len(X.columns),1))

#num iterations we've done so far
iter_count = 0
#have we reached convergence?
coefs_converged = False

#if we haven't reached convergence, keep training
while not coefs_converged:
    #set the old coefficients to our current
    beta_old = beta
    #perform a single step of newton's optimization on our data, set our updated beta values
    beta = newton_step(beta, X, lam=lam)
    #increment the number of iterations
    iter_count += 1
    #check for convergence between our old and new beta values
    coefs_converged = check_coefs_convergence(beta_old, beta, tol, iter_count)

print('Iterations : {}'.format(iter_count))
print('Beta : {}'.format(beta))
```
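As a sanity check on the two step implementations: `np.linalg.lstsq` and explicit inversion solve the same linear system, so on a well-conditioned Hessian they agree to machine precision. A small hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
hessian = A.T @ A + 5 * np.eye(5)   # symmetric positive definite, well-conditioned
grad = rng.normal(size=(5, 1))

step_lstsq, *_ = np.linalg.lstsq(hessian, grad, rcond=None)  # as in newton_step
step_inv = np.linalg.inv(hessian) @ grad                     # as in alt_newton_step
print(np.allclose(step_lstsq, step_inv))  # True
```

The two variants only diverge in behaviour when the Hessian is singular or badly conditioned, which is exactly the case `catch_singularity` guards against.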
# Variables and data types

Variables are used to temporarily store a value in the computer's memory. We can then use it later, when we need it in the program. We can compare that to a small box in which we could store information.

Since Python is a dynamically typed language, Python values, not variables, carry type. This has implications for many aspects of the way the language functions. All variables in Python hold references to objects, and these references are passed to functions; a function cannot change the value of variable references in its calling function (but see below for exceptions). Some people (including Guido van Rossum himself) have called this parameter-passing scheme "Call by object reference." An object reference means a name, and the passed reference is an "alias", i.e. a copy of the reference to the same object, just as in C/C++. The object's value may be changed in the called function with the "alias".

## Naming and code writing conventions

As in many programming languages, some conventions must be respected when naming a variable.

#### 1. The name of the variable must start with a **letter or an underscore**.

The variable cannot start with a number or a hyphen.

❌ Bad example :

```py
# Do not do this
2Name = "James"

# Do not do this
-name = "James"
```

✅ Good example :

```py
# Do this
name = "James"

# Do this
_name = "James"
```

#### 2. **Never put space between words.**

❌ Bad example :

```py
# Do not do this
My name = "Odile"
```

✅ Good example :

```py
# Do this
my_name = "Odile"
```

#### 3. No accents on the names of variables. **Use only English**

❌ Bad example :

```py
# Do not do this
prénom = "Odile"
```

✅ Good example :

```py
# Do this
first_name = "Odile"
```

#### 4. **Always give an explicit name** to the variable.
❌ Bad example :

```py
# Do not do this
a = "Odile"

# Do not do this
fstnme = "Odile"
```

✅ Good example :

```py
# Do this
first_name = "Odile"

# Do this
magic_potion = 42
```

**Try it for yourself by clicking on the Run button**

```
# BAD
2name = "James"

# BAD
-name = "Bond"

# BAD
My name = "bond"
```

The print() function allows us to display the result.

```
# GOOD
name = "Bond"
print("Your name is", name)
```

To format the text, you can use the format() method of the string class.

```
last_name = "Bond"
first_name = "James"

text = "My name is {}, {} {}.".format(last_name, first_name, last_name)
print(text)
```

Another example. Replace the value of the variable ``age`` with your age and the variable ``first_name`` with your first name.

```
age = "34"
first_name = "Ludovic"

text = "Hello, my name is {} and i am {}".format(first_name, age)
print(text)
```

## Data types

Since Python is a high-level language, it has dynamic variable typing. By dynamic, understand that it is the computer that deals with defining what type of variable should be used. To be perfectly accurate, **it is not the variable that is typed (unlike in Java) but its content.**

In Java we declare a variable like this:

```java
String firstName = "James";
```

We define the type of the variable ourselves. With Python we declare a variable like this:

```py
first_name = "James"
```

And so, it's Python that will define what type will be used.

```
first_name = "James" # String
last_name = "Bond" # String
age = 39 # Integer
weight = 81.56 # Float
double_agent = True # Boolean
login = "007" # String

agent = [first_name, last_name, age, weight, double_agent, login] # List

print(agent)
```

1. Note that ``True`` and ``False`` are written with the first letter uppercase.
2. ``login`` is a string.
3. ``List`` is like an array, but you can store values of different types.

Here is a non-exhaustive list of the types we use most often. We'll cover tuples, dictionaries and sets later.

|Name | Type | Description |
|:---------|:---------:|------------:|
|Integers | int | Whole numbers, such as : **1, 67, 5000** |
|Floating point | float | Decimal point numbers, such as : **1.89, 0.67, 9.99999** |
|Strings | str | Ordered sequence of characters : **"Hello", "10"** |
|Lists | list | Ordered sequence of objects : **["hello", 10, 56.89]** |
|Dictionaries | dict | Unordered (Key : Value) pairs : **{"Key1": value, "name" : "Peter"}** |
|Tuples | tuple | Ordered sequence of objects (immutable) : **("hello", 10, 56.89)** |
|Sets | set | Unordered collections of unique objects : **{1,2}** |
|Booleans |bool| Logical value : **True or False** |

**There is a native Python function that allows you to know what type of data you have: the type() function.**

```
print(first_name, type(first_name))
print(last_name, type(last_name))
print(age, type(age))
print(weight, type(weight))
print(agent, type(agent))
```

## [Finished? Practice with these exercises](./drill-variables-data-types.ipynb).
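One last illustration of dynamic typing with `type()`: since the value carries the type, the same name can be rebound to values of different types.

```python
answer = "42"         # a string
print(type(answer))   # <class 'str'>

answer = 42           # now an integer
print(type(answer))   # <class 'int'>

answer = 42.0         # now a float
print(type(answer))   # <class 'float'>
```

This is exactly why explicit, descriptive variable names matter so much in Python: the name itself is the main clue to what kind of value it holds.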
``` !pip install /home/knikaido/work/Cornell-Birdcall-Identification/data/resnest50-fast-package/resnest-0.0.6b20200701/resnest/ !pip install torch==1.4.0 !pip install opencv-python !pip install slackweb !pip install torchvision==0.2.2 !pip install torch_summary from pathlib import Path import numpy as np import pandas as pd import typing as tp import yaml import random import os import sys import soundfile as sf import librosa import cv2 import matplotlib.pyplot as plt import time import pickle import torch import torch.nn as nn import torch.nn.functional as F import torch.utils.data as data import resnest.torch as resnest_torch from torchvision import models from sklearn.model_selection import StratifiedKFold from sklearn.metrics import f1_score from radam import RAdam from resnet import ResNet, Bottleneck pd.options.display.max_rows = 500 pd.options.display.max_columns = 500 with open('0909_2_config.yml', 'r') as yml: settings = yaml.safe_load(yml) def set_seed(seed: int = 42): random.seed(seed) np.random.seed(seed) os.environ["PYTHONHASHSEED"] = str(seed) torch.manual_seed(seed) torch.cuda.manual_seed(seed) # type: ignore # torch.backends.cudnn.deterministic = True # type: ignore # torch.backends.cudnn.benchmark = True # type: ignore # def progress_bar(i): # pro_bar = ('=' * i) + (' ' * (pro_size - i)) # print('\r[{0}] {1}%'.format(pro_bar, i / pro_size * 100.), end='') # ROOT = Path.cwd().parent # INPUT_ROOT = ROOT / "input" INPUT_ROOT = Path("/home/knikaido/work/Cornell-Birdcall-Identification/data") RAW_DATA = INPUT_ROOT / "birdsong_recognition" TRAIN_AUDIO_DIR = RAW_DATA / "train_audio" TRAIN_RESAMPLED_AUDIO_DIRS = [ INPUT_ROOT / "birdsong-resampled-train-audio-{:0>2}".format(i) for i in range(5) ] TEST_AUDIO_DIR = RAW_DATA / "test_audio" BIRD_CODE = { 'aldfly': 0, 'ameavo': 1, 'amebit': 2, 'amecro': 3, 'amegfi': 4, 'amekes': 5, 'amepip': 6, 'amered': 7, 'amerob': 8, 'amewig': 9, 'amewoo': 10, 'amtspa': 11, 'annhum': 12, 'astfly': 13, 'baisan': 14, 
'baleag': 15, 'balori': 16, 'banswa': 17, 'barswa': 18, 'bawwar': 19, 'belkin1': 20, 'belspa2': 21, 'bewwre': 22, 'bkbcuc': 23, 'bkbmag1': 24, 'bkbwar': 25, 'bkcchi': 26, 'bkchum': 27, 'bkhgro': 28, 'bkpwar': 29, 'bktspa': 30, 'blkpho': 31, 'blugrb1': 32, 'blujay': 33, 'bnhcow': 34, 'boboli': 35, 'bongul': 36, 'brdowl': 37, 'brebla': 38, 'brespa': 39, 'brncre': 40, 'brnthr': 41, 'brthum': 42, 'brwhaw': 43, 'btbwar': 44, 'btnwar': 45, 'btywar': 46, 'buffle': 47, 'buggna': 48, 'buhvir': 49, 'bulori': 50, 'bushti': 51, 'buwtea': 52, 'buwwar': 53, 'cacwre': 54, 'calgul': 55, 'calqua': 56, 'camwar': 57, 'cangoo': 58, 'canwar': 59, 'canwre': 60, 'carwre': 61, 'casfin': 62, 'caster1': 63, 'casvir': 64, 'cedwax': 65, 'chispa': 66, 'chiswi': 67, 'chswar': 68, 'chukar': 69, 'clanut': 70, 'cliswa': 71, 'comgol': 72, 'comgra': 73, 'comloo': 74, 'commer': 75, 'comnig': 76, 'comrav': 77, 'comred': 78, 'comter': 79, 'comyel': 80, 'coohaw': 81, 'coshum': 82, 'cowscj1': 83, 'daejun': 84, 'doccor': 85, 'dowwoo': 86, 'dusfly': 87, 'eargre': 88, 'easblu': 89, 'easkin': 90, 'easmea': 91, 'easpho': 92, 'eastow': 93, 'eawpew': 94, 'eucdov': 95, 'eursta': 96, 'evegro': 97, 'fiespa': 98, 'fiscro': 99, 'foxspa': 100, 'gadwal': 101, 'gcrfin': 102, 'gnttow': 103, 'gnwtea': 104, 'gockin': 105, 'gocspa': 106, 'goleag': 107, 'grbher3': 108, 'grcfly': 109, 'greegr': 110, 'greroa': 111, 'greyel': 112, 'grhowl': 113, 'grnher': 114, 'grtgra': 115, 'grycat': 116, 'gryfly': 117, 'haiwoo': 118, 'hamfly': 119, 'hergul': 120, 'herthr': 121, 'hoomer': 122, 'hoowar': 123, 'horgre': 124, 'horlar': 125, 'houfin': 126, 'houspa': 127, 'houwre': 128, 'indbun': 129, 'juntit1': 130, 'killde': 131, 'labwoo': 132, 'larspa': 133, 'lazbun': 134, 'leabit': 135, 'leafly': 136, 'leasan': 137, 'lecthr': 138, 'lesgol': 139, 'lesnig': 140, 'lesyel': 141, 'lewwoo': 142, 'linspa': 143, 'lobcur': 144, 'lobdow': 145, 'logshr': 146, 'lotduc': 147, 'louwat': 148, 'macwar': 149, 'magwar': 150, 'mallar3': 151, 'marwre': 152, 
'merlin': 153, 'moublu': 154, 'mouchi': 155, 'moudov': 156, 'norcar': 157, 'norfli': 158, 'norhar2': 159, 'normoc': 160, 'norpar': 161, 'norpin': 162, 'norsho': 163, 'norwat': 164, 'nrwswa': 165, 'nutwoo': 166, 'olsfly': 167, 'orcwar': 168, 'osprey': 169, 'ovenbi1': 170, 'palwar': 171, 'pasfly': 172, 'pecsan': 173, 'perfal': 174, 'phaino': 175, 'pibgre': 176, 'pilwoo': 177, 'pingro': 178, 'pinjay': 179, 'pinsis': 180, 'pinwar': 181, 'plsvir': 182, 'prawar': 183, 'purfin': 184, 'pygnut': 185, 'rebmer': 186, 'rebnut': 187, 'rebsap': 188, 'rebwoo': 189, 'redcro': 190, 'redhea': 191, 'reevir1': 192, 'renpha': 193, 'reshaw': 194, 'rethaw': 195, 'rewbla': 196, 'ribgul': 197, 'rinduc': 198, 'robgro': 199, 'rocpig': 200, 'rocwre': 201, 'rthhum': 202, 'ruckin': 203, 'rudduc': 204, 'rufgro': 205, 'rufhum': 206, 'rusbla': 207, 'sagspa1': 208, 'sagthr': 209, 'savspa': 210, 'saypho': 211, 'scatan': 212, 'scoori': 213, 'semplo': 214, 'semsan': 215, 'sheowl': 216, 'shshaw': 217, 'snobun': 218, 'snogoo': 219, 'solsan': 220, 'sonspa': 221, 'sora': 222, 'sposan': 223, 'spotow': 224, 'stejay': 225, 'swahaw': 226, 'swaspa': 227, 'swathr': 228, 'treswa': 229, 'truswa': 230, 'tuftit': 231, 'tunswa': 232, 'veery': 233, 'vesspa': 234, 'vigswa': 235, 'warvir': 236, 'wesblu': 237, 'wesgre': 238, 'weskin': 239, 'wesmea': 240, 'wessan': 241, 'westan': 242, 'wewpew': 243, 'whbnut': 244, 'whcspa': 245, 'whfibi': 246, 'whtspa': 247, 'whtswi': 248, 'wilfly': 249, 'wilsni1': 250, 'wiltur': 251, 'winwre3': 252, 'wlswar': 253, 'wooduc': 254, 'wooscj2': 255, 'woothr': 256, 'y00475': 257, 'yebfly': 258, 'yebsap': 259, 'yehbla': 260, 'yelwar': 261, 'yerwar': 262, 'yetvir': 263 } INV_BIRD_CODE = {v: k for k, v in BIRD_CODE.items()} train = pd.read_csv(RAW_DATA / "train.csv") # train = pd.read_csv(TRAIN_RESAMPLED_AUDIO_DIRS[0] / "train_mod.csv") train_rate = train[['ebird_code', 'filename', 'rating']].sort_values('rating') train_rate[train_rate['rating'] == 2.0] train_rate['rating'].value_counts() 
len(train_rate[train_rate['rating'] <= 1.5]) / len(train_rate) train['secondary_labels'].value_counts() train.columns ```
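Since `INV_BIRD_CODE` above simply inverts `BIRD_CODE`, integer model outputs can be decoded back to ebird codes. A minimal round-trip sketch using a small subset of the mapping:

```python
# Small subset of the full BIRD_CODE mapping defined above.
bird_code = {'aldfly': 0, 'ameavo': 1, 'amebit': 2}
inv_bird_code = {v: k for k, v in bird_code.items()}

label = bird_code['ameavo']
print(label, '->', inv_bird_code[label])  # 1 -> ameavo
```

The same dict-comprehension inversion works for the full 264-class mapping because the integer labels are unique.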
# TF-IDF ``` import mwdsbe import mwdsbe.datasets.licenses as licenses import schuylkill as skool import pandas as pd import numpy as np # import registry registry = mwdsbe.load_registry() # geopandas df registry.head() # import license data license = licenses.CommercialActivityLicenses.download() license.head() mini_registry = registry[:5] # clean company_name and dba_name of clean datasets ignore = ['inc', 'group', 'llc', 'corp', 'pc', 'incorporated', 'ltd'] registry = skool.clean_strings(registry, ['company_name', 'dba_name'], True, ignore) license = skool.clean_strings(license, ['company_name'], True, ignore) from ftfy import fix_text import re def ngrams(string, n=3): string = re.sub(r'[,-./]|\sBD',r'', string) ngrams = zip(*[string[i:] for i in range(n)]) return [''.join(ngram) for ngram in ngrams] ngrams(registry['company_name'].iloc[0]) from scipy.sparse import csr_matrix import sparse_dot_topn.sparse_dot_topn as ct def awesome_cossim_top(A, B, ntop, lower_bound=0): # force A and B as a CSR matrix. 
# If they have already been CSR, there is no overhead A = A.tocsr() B = B.tocsr() M, _ = A.shape _, N = B.shape idx_dtype = np.int32 nnz_max = M*ntop indptr = np.zeros(M+1, dtype=idx_dtype) indices = np.zeros(nnz_max, dtype=idx_dtype) data = np.zeros(nnz_max, dtype=A.dtype) ct.sparse_dot_topn( M, N, np.asarray(A.indptr, dtype=idx_dtype), np.asarray(A.indices, dtype=idx_dtype), A.data, np.asarray(B.indptr, dtype=idx_dtype), np.asarray(B.indices, dtype=idx_dtype), B.data, ntop, lower_bound, indptr, indices, data) return csr_matrix((data,indices,indptr),shape=(M,N)) from sklearn.feature_extraction.text import TfidfVectorizer company_names = registry['company_name'].iloc[:10] vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams) tf_idf_matrix = vectorizer.fit_transform(company_names) tf_idf_matrix def get_matches_df(sparse_matrix, name_vector, top=100): non_zeros = sparse_matrix.nonzero() sparserows = non_zeros[0] sparsecols = non_zeros[1] if top: nr_matches = top else: nr_matches = sparsecols.size left_side = np.empty([nr_matches], dtype=object) right_side = np.empty([nr_matches], dtype=object) similairity = np.zeros(nr_matches) for index in range(0, nr_matches): left_side[index] = name_vector[sparserows[index]] right_side[index] = name_vector[sparsecols[index]] similairity[index] = sparse_matrix.data[index] return pd.DataFrame({'left_side': left_side, 'right_side': right_side, 'similairity': similairity}) all_company_names = pd.concat([registry['company_name'].dropna(), license['company_name'].dropna()]).unique() len(all_company_names) vectorizer = TfidfVectorizer(min_df=1, analyzer=ngrams) tf_idf_matrix = vectorizer.fit_transform(all_company_names) import time t1 = time.time() matches = awesome_cossim_top(tf_idf_matrix, tf_idf_matrix.transpose(), 10, 0.85) t = time.time()-t1 print("SELFTIMED:", t) matches def get_matches_df(sparse_matrix, name_vector, top=100): non_zeros = sparse_matrix.nonzero() sparserows = non_zeros[0] sparsecols = non_zeros[1] if top: 
            nr_matches = top
    else:
        nr_matches = sparsecols.size

    left_side = np.empty([nr_matches], dtype=object)
    right_side = np.empty([nr_matches], dtype=object)
    similarity = np.zeros(nr_matches)

    for index in range(0, nr_matches):
        left_side[index] = name_vector[sparserows[index]]
        right_side[index] = name_vector[sparsecols[index]]
        similarity[index] = sparse_matrix.data[index]

    return pd.DataFrame({'left_side': left_side,
                         'right_side': right_side,
                         'similarity': similarity})

matches_df = get_matches_df(matches, all_company_names, top=100000)
pd.options.display.max_rows = 999
matches_df.loc[(matches_df['similarity'] < 0.99999) & (matches_df['similarity'] > 0.94)]
```
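The matching idea above can be sketched end-to-end on toy data using only the standard library: character trigrams, TF-IDF weighting, and cosine similarity. The helpers `trigrams`, `tfidf_vectors`, and `cosine` are illustrative simplifications, not the notebook's optimized sparse implementation:

```python
import math
import re
from collections import Counter

def trigrams(s, n=3):
    """Character n-grams, mirroring the notebook's ngrams() helper."""
    s = re.sub(r'[,-./]', '', s.lower())
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def tfidf_vectors(names):
    """Map each name to a {trigram: tf-idf weight} dict."""
    docs = [Counter(trigrams(name)) for name in names]
    df = Counter(g for doc in docs for g in doc)  # document frequency
    n = len(docs)
    return [
        {g: tf * math.log((1 + n) / (1 + df[g])) for g, tf in doc.items()}
        for doc in docs
    ]

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    norm = lambda x: math.sqrt(sum(w * w for w in x.values()))
    return dot / (norm(u) * norm(v)) if u and v else 0.0

names = ["acme construction", "acme construction co", "philly paving"]
vecs = tfidf_vectors(names)
# Near-duplicate names score far higher than unrelated ones.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

The sparse `awesome_cossim_top` routine computes exactly this kind of cosine score, but over every pair of names at once and keeping only the top matches per row.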
# Generating percentiles for TensorFlow model input features

The current TensorFlow model uses histogram-like percentile features, which are kind of a continuous version of one-hot features. For example, if key cutoff points are `[-3, -1, 0, 2, 10]`, we might encode a value `x` as `sigma((x - cutoff) / scale)`. If `sigma` is the sigmoid function, `x = 0.1`, and `scale = 0.1`, then we'd get `[1, 1, 0.73, 0, 0]`; in other words, `x` is definitely above the first 2 points, mostly above the third, and below the fourth and fifth. If we increase `scale` to `2.0`, then values are less discrete: `[0.82, 0.63, 0.51, 0.28, 0.01]`.

This notebook generates appropriate cutoff points for these, to reflect most data encountered.

```
# Different options for soft-onehot function.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt

x = np.linspace(-10, 10, 100)
cutoff = 1.0
sigmoid = lambda x: 1/(1+np.exp(-x))
scale = 2.0
logit = (x - cutoff) / scale
plt.plot(x, sigmoid(logit))
plt.plot(x, np.exp(- logit * logit))

NUM_LCS = 10_000  # key parameter, turn it down if you want this notebook to finish faster.

# Settings determining type of features extracted.
window_size = 10 band_time_diff = 4.0 from matplotlib import pyplot as plt import numpy as np import pandas as pd import tensorflow as tf from justice.datasets import plasticc_data source = plasticc_data.PlasticcBcolzSource.get_default() bcolz_source = plasticc_data.PlasticcBcolzSource.get_default() meta_table = bcolz_source.get_table('test_set_metadata') %time all_ids = meta_table['object_id'][:] %%time import random sample_ids = random.Random(828372).sample(list(all_ids), NUM_LCS) lcs = [] _chunk_sz = 100 for start in range(0, len(sample_ids), _chunk_sz): lcs.extend(plasticc_data.PlasticcDatasetLC.bcolz_get_lcs_by_obj_ids( bcolz_source=source, dataset="test_set", obj_ids=sample_ids[start:start + _chunk_sz] )) %%time from justice.features import band_settings_params from justice.features import dense_extracted_features from justice.features import feature_combinators from justice.features import metadata_features from justice.features import per_point_dataset from justice.features import raw_value_features batch_size = 32 rve = raw_value_features.RawValueExtractor( window_size=window_size, band_settings=band_settings_params.BandSettings(lcs[0].expected_bands) ) mve = metadata_features.MetadataValueExtractor() data_gen = per_point_dataset.PerPointDatasetGenerator( extract_fcn=feature_combinators.combine([rve.extract, mve.extract]), batch_size=batch_size, ) def input_fn(): return data_gen.make_dataset_lcs(lcs) def per_band_model_fn(band_features, params): batch_size = params["batch_size"] window_size = params["window_size"] wf = dense_extracted_features.WindowFeatures( band_features, batch_size=batch_size, window_size=window_size, band_time_diff=band_time_diff) dflux_dt = wf.dflux_dt(clip_magnitude=None) init_layer = dense_extracted_features.initial_layer(wf, include_flux_and_time=True) init_layer_masked = wf.masked(init_layer, value_if_masked=0, expected_extra_dims=[3]) return { "initial_layer": init_layer_masked, "in_window": wf.in_window, } def model_fn(features, 
             labels,
             mode,
             params):
    band_settings = band_settings_params.BandSettings.from_params(params)
    per_band_data = band_settings.per_band_sub_model_fn(
        per_band_model_fn, features, params=params
    )
    predictions = {
        'band_{}.{}'.format(band, name): tensor
        for band, tensor_dict in zip(band_settings.bands, per_band_data)
        for name, tensor in tensor_dict.items()
    }
    predictions['time'] = features['time']
    predictions['object_id'] = features['object_id']
    return tf.estimator.EstimatorSpec(
        mode=mode,
        predictions=predictions,
        loss=tf.constant(0.0),
        train_op=tf.no_op()
    )

params = {
    'batch_size': batch_size,
    'window_size': window_size,
    'flux_scale_epsilon': 0.5,
    'lc_bands': lcs[0].expected_bands,
}
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    params=params
)
predictions = list(estimator.predict(input_fn=input_fn, yield_single_examples=True))
print(f"Got {len(predictions)} predictions.")
predictions[4]

def get_values_df(band):
    arrays = [x[f"band_{band}.initial_layer"]
              for x in predictions
              if x[f"band_{band}.in_window"]]
    return pd.DataFrame(np.concatenate(arrays, axis=0),
                        columns=["dflux_dt", "dflux", "dtime"])

df = get_values_df(lcs[0].expected_bands[0])
df.hist('dflux_dt', bins=32)
df.hist('dflux', bins=32)
df.hist('dtime', bins=32)
```

## Really messy code to get a histogram with mostly-unique bins.

Because we want fixed-size arrays for TensorFlow code, we want a set of e.g. 32 unique cutoff points that reflect a good distribution of cutoffs. However, this is really messy, because there tend to be strong peaks in the histogram which are repeated frequently.
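As a quick illustration of why naive percentiles collide, consider a synthetic sample with one strong repeated peak (a stand-in for the peaked feature histograms above): many evenly spaced quantiles land on the same value, so the set of unique cutoffs is much smaller than requested.

```python
import numpy as np

rng = np.random.RandomState(0)
# 90% of points sit exactly at the histogram peak (0.0).
sample = np.concatenate([np.zeros(900), rng.normal(size=100)])

q = np.linspace(0, 100, 32)
cutoffs = np.percentile(sample, q=q)
unique = np.unique(cutoffs)
print(len(cutoffs), len(unique))  # far fewer unique cutoffs than requested
```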
```
import collections

import scipy.optimize


def _some_duplicates(non_unique, unique, num_desired):
    to_duplicate_candidates = non_unique.tolist()
    for x in unique:
        to_duplicate_candidates.remove(x)
    unique = unique.tolist()
    while len(unique) < num_desired:
        assert len(unique) <= num_desired
        to_duplicate = random.choice(to_duplicate_candidates)
        unique.insert(unique.index(to_duplicate), to_duplicate)
    return unique


def unique_percentiles(array, num_desired):
    partition_size = 100.0 / num_desired
    epsilon = 0.05 * partition_size
    solution = None
    optimal_solution = None

    def _actual_unique(vals):
        nonlocal solution, optimal_solution
        if optimal_solution is not None:
            return 0  # stop optimization, or at least return quickly
        num_points_base, perturb = vals
        num_points = int(round(num_desired * num_points_base))
        perturb = abs(perturb)
        q = np.linspace(0, 100, int(num_points))
        rng = np.random.RandomState(int(1e6 * perturb))
        noise = rng.normal(loc=0, scale=min(1.0, 10 * perturb) * epsilon, size=q.shape)
        noise[0] = 0
        noise[-1] = 0
        q += noise
        non_unique = np.percentile(array, q=q, interpolation='linear')
        unique = np.unique(non_unique)
        result = abs(num_desired - len(unique))
        if num_desired == len(unique):
            optimal_solution = unique
        elif len(unique) <= num_desired <= len(unique) + 1:
            solution = _some_duplicates(non_unique, unique, num_desired)
        return (4 if len(unique) > num_desired else 1) * result + perturb

    res = scipy.optimize.minimize(
        _actual_unique,
        x0=[1.0, 0.1],
        options={'maxiter': 1000, 'rhobeg': 0.3},
        tol=1e-6,
        method='COBYLA')
    if optimal_solution is None and solution is None:
        raise ValueError(f"Could not find deduplicated percentiles!")
    return optimal_solution if optimal_solution is not None else solution


desired_num_cutoffs = 32
all_solutions = []
for band in lcs[0].expected_bands:
    df = get_values_df(band)
    for i, column in enumerate(df.columns):
        print(band, column)
        percentiles = np.array(unique_percentiles(df[column], desired_num_cutoffs), dtype=np.float32)
        median_scale = np.median(percentiles[1:] - percentiles[:-1])
        all_solutions.append({
            'band': band,
            'column_index': i,
            'column': column,
            'median_scale': float(median_scale),
            'cutoffs': percentiles,
        })

with_settings = {
    'window_size': window_size,
    'band_time_diff': band_time_diff,
    'desired_num_cutoffs': desired_num_cutoffs,
    'solutions': all_solutions
}
```

## Save to nicely-formatted JSON

Writes numpy arrays as strings, then rewrites those strings.

```
import datetime
import json

from justice import path_util


class ArrayPreEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return "<<<<{}>>>>".format(", ".join(f"{x:.8f}" for x in obj.tolist()))
        else:
            print(obj)
            return json.JSONEncoder.default(self, obj)


def _encode(x):
    result = json.dumps(x, indent=2, cls=ArrayPreEncoder).replace('"<<<<', '[').replace('>>>>"', ']')
    json.loads(result)  # error if not decodable
    return result


now = datetime.datetime.now()
path = path_util.data_dir / 'tf_align_model' / 'feature_extraction' / (
    f"cutoffs__window_sz-{window_size}__{now.year:04d}-{now.month:02d}-{now.day:02d}.json")
path.parent.mkdir(parents=True, exist_ok=True)
with open(str(path), 'w') as f:
    f.write(_encode(with_settings))
```
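Separately, the soft-onehot encoding described in this notebook's introduction can be sanity-checked numerically. This sketch assumes the intended cutoff list is `[-3, -1, 0, 2, 10]`, which is consistent with the worked values `[1, 1, 0.73, 0, 0]`:

```python
import numpy as np

def soft_onehot(x, cutoffs, scale):
    """sigma((x - cutoff) / scale) for each cutoff point."""
    cutoffs = np.asarray(cutoffs, dtype=float)
    return 1.0 / (1.0 + np.exp(-(x - cutoffs) / scale))

cutoffs = [-3, -1, 0, 2, 10]
sharp = soft_onehot(0.1, cutoffs, scale=0.1)
soft = soft_onehot(0.1, cutoffs, scale=2.0)
print(np.round(sharp, 2))  # ~[1, 1, 0.73, 0, 0]
print(np.round(soft, 2))   # ~[0.82, 0.63, 0.51, 0.28, 0.01]
```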
`BcForms` is a toolkit for concretely describing the primary structure of macromolecular complexes, including non-canonical monomeric forms and intra- and inter-subunit crosslinks. `BcForms` includes a textual grammar for describing complexes and a Python library, a command line program, and a REST API for validating and manipulating complexes described in this grammar.

`BcForms` represents complexes as sets of subunits, with their stoichiometries, and covalent crosslinks which link the subunits. DNA, RNA, and protein subunits can be represented using `BpForms`. Small molecule subunits can be represented using `openbabel.OBMol`, typically imported from SMILES or InChI.

This notebook illustrates how to use the `BcForms` Python library via some simple examples. Please see the second tutorial for more details and more examples. Please also see the [documentation](https://docs.karrlab.org/bcforms/) for more information about the `BcForms` grammar and more instructions for using the `BcForms` website, JSON REST API, and command line interface.
# Import BpForms and BcForms libraries ``` import bcforms import bpforms ``` # Create complexes from their string representations ``` form_1 = bcforms.BcForm().from_str('2 * subunit_a + 3 * subunit_b') form_1.set_subunit_attribute('subunit_a', 'structure', bpforms.ProteinForm().from_str('CAAAAAAAA')) form_1.set_subunit_attribute('subunit_b', 'structure', bpforms.ProteinForm().from_str('AAAAAAAAC')) form_2 = bcforms.BcForm().from_str( '2 * subunit_a' '| x-link: [type: disulfide | l: subunit_a(1)-1 | r: subunit_a(2)-1]') form_2.set_subunit_attribute('subunit_a', 'structure', bpforms.ProteinForm().from_str('CAAAAAAAA')) ``` # Create complexes programmatically ``` form_1_b = bcforms.BcForm() form_1_b.subunits.append(bcforms.core.Subunit('subunit_a', 2, bpforms.ProteinForm().from_str('CAAAAAAAA'))) form_1_b.subunits.append(bcforms.core.Subunit('subunit_b', 3, bpforms.ProteinForm().from_str('AAAAAAAAC'))) form_2_b = bcforms.BcForm() subunit = bcforms.core.Subunit('subunit_a', 2, bpforms.ProteinForm().from_str('CAAAAAAAA')) form_2_b.subunits.append(subunit) form_2_b.crosslinks.append(bcforms.core.OntologyCrosslink( 'disulfide', 'subunit_a', 1, 'subunit_a', 1, 1, 2)) ``` # Get properties of polymers ## Subunits ``` form_1.subunits ``` ## Crosslinks ``` form_2.crosslinks ``` # Get the string representation of a complex ``` str(form_1_b) ``` # Check equality of complexes ``` form_1_b.is_equal(form_1) ``` # Calculate properties of a complex ## Molecular structure ``` form_1.get_structure()[0] ``` ## SMILES representation ``` form_1.export('smiles') ``` ## Formula ``` form_1.get_formula() ``` ## Charge ``` form_1.get_charge() ``` ## Molecular weight ``` form_1.get_mol_wt() ```
##### Copyright 2018 The AdaNet Authors.

```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```

# AdaNet on TPU

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_tpu.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table>

AdaNet supports training on Google's custom machine learning accelerators known as Tensor Processing Units (TPU). Conveniently, we provide `adanet.TPUEstimator`, which handles TPU support behind the scenes. There are only a few minor changes needed to switch from `adanet.Estimator` to `adanet.TPUEstimator`. We highlight the necessary changes in this tutorial.

If the reader is not familiar with AdaNet, it is recommended they take a look at [The AdaNet Objective](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/adanet_objective.ipynb) and in particular [Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet.ipynb), as this tutorial builds upon the latter.
**NOTE: you must provide a valid GCS bucket to use TPUEstimator.** To begin, we import the necessary packages, obtain the Colab's TPU master address, and give the TPU permissions to write to our GCS Bucket. Follow the instructions [here](https://colab.sandbox.google.com/notebooks/tpu.ipynb#scrollTo=_pQCOmISAQBu) to connect to a Colab TPU runtime. ``` #@test {"skip": true} # If you're running this in Colab, first install the adanet package: !pip install adanet from __future__ import absolute_import from __future__ import division from __future__ import print_function import functools import json import os import six import time import adanet from google.colab import auth import tensorflow as tf BUCKET = '' #@param {type: 'string'} MODEL_DIR = 'gs://{}/{}'.format( BUCKET, time.strftime('adanet-tpu-estimator/%Y-%m-%d-%H-%M-%S')) MASTER = '' if 'COLAB_TPU_ADDR' in os.environ: auth.authenticate_user() MASTER = 'grpc://' + os.environ['COLAB_TPU_ADDR'] # Authenticate TPU to use GCS Bucket. with tf.Session(MASTER) as sess: with open('/content/adc.json', 'r') as file_: auth_info = json.load(file_) tf.contrib.cloud.configure_gcs(sess, credentials=auth_info) # The random seed to use. RANDOM_SEED = 42 ``` ## Fashion MNIST We focus again on the Fashion MNIST dataset and download the data via Keras. ``` (x_train, y_train), (x_test, y_test) = ( tf.keras.datasets.fashion_mnist.load_data()) ``` ## `input_fn` Changes There are two minor changes we must make to `input_fn` to support running on TPU: 1. TPUs dynamically shard the input data depending on the number of cores used. Because of this, we augment `input_fn` to take a dictionary `params` argument. When running on TPU, `params` contains a `batch_size` field with the appropriate batch size. 1. Once the input is batched, we drop the last batch if it is smaller than `batch_size`. 
This can simply be done by specifying `drop_remainder=True` to the [`tf.data.DataSet.batch()`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#batch) function. It is important to specify this option since TPUs do not support dynamic shapes. Note that we only drop the remainder batch during training since evaluation is still done on the CPU. ``` FEATURES_KEY = "images" def generator(images, labels): """Returns a generator that returns image-label pairs.""" def _gen(): for image, label in zip(images, labels): yield image, label return _gen def preprocess_image(image, label): """Preprocesses an image for an `Estimator`.""" image = image / 255. image = tf.reshape(image, [28, 28, 1]) features = {FEATURES_KEY: image} return features, label def input_fn(partition, training, batch_size): """Generate an input_fn for the Estimator.""" def _input_fn(params): # TPU: specify `params` argument. # TPU: get the TPU set `batch_size`, if available. batch_size_ = params.get("batch_size", batch_size) if partition == "train": dataset = tf.data.Dataset.from_generator( generator(x_train, y_train), (tf.float32, tf.int32), ((28, 28), ())) elif partition == "predict": dataset = tf.data.Dataset.from_generator( generator(x_test[:10], y_test[:10]), (tf.float32, tf.int32), ((28, 28), ())) else: dataset = tf.data.Dataset.from_generator( generator(x_test, y_test), (tf.float32, tf.int32), ((28, 28), ())) if training: dataset = dataset.shuffle(10 * batch_size_, seed=RANDOM_SEED).repeat() # TPU: drop the remainder batch when training on TPU. dataset = dataset.map(preprocess_image).batch( batch_size_, drop_remainder=training) iterator = dataset.make_one_shot_iterator() features, labels = iterator.get_next() return features, labels return _input_fn ``` ## `model_fn` Changes We use a similar CNN architecture as used in the [Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet.ipynb) tutorial. 
The only TPU specific change we need to make is wrap the `optimizer` in a [`tf.contrib.tpu.CrossShardOptimizer`](https://www.google.com/search?q=cross+shard+optimizer&oq=cross+shard+optimizer&aqs=chrome.0.0j69i57.2391j0j7&sourceid=chrome&ie=UTF-8). ``` #@title Define the Builder and Generator class SimpleCNNBuilder(adanet.subnetwork.Builder): """Builds a CNN subnetwork for AdaNet.""" def __init__(self, learning_rate, max_iteration_steps, seed): """Initializes a `SimpleCNNBuilder`. Args: learning_rate: The float learning rate to use. max_iteration_steps: The number of steps per iteration. seed: The random seed. Returns: An instance of `SimpleCNNBuilder`. """ self._learning_rate = learning_rate self._max_iteration_steps = max_iteration_steps self._seed = seed def build_subnetwork(self, features, logits_dimension, training, iteration_step, summary, previous_ensemble=None): """See `adanet.subnetwork.Builder`.""" images = list(features.values())[0] kernel_initializer = tf.keras.initializers.he_normal(seed=self._seed) x = tf.keras.layers.Conv2D( filters=16, kernel_size=3, padding="same", activation="relu", kernel_initializer=kernel_initializer)( images) x = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)(x) x = tf.keras.layers.Flatten()(x) x = tf.keras.layers.Dense( units=64, activation="relu", kernel_initializer=kernel_initializer)( x) logits = tf.keras.layers.Dense( units=10, activation=None, kernel_initializer=kernel_initializer)( x) complexity = tf.constant(1) return adanet.Subnetwork( last_layer=x, logits=logits, complexity=complexity, persisted_tensors={}) def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels, iteration_step, summary, previous_ensemble=None): """See `adanet.subnetwork.Builder`.""" learning_rate = tf.train.cosine_decay( learning_rate=self._learning_rate, global_step=iteration_step, decay_steps=self._max_iteration_steps) optimizer = tf.train.MomentumOptimizer(learning_rate, .9) # TPU: wrap the optimizer in a CrossShardOptimizer. 
    optimizer = tf.contrib.tpu.CrossShardOptimizer(optimizer)
    return optimizer.minimize(loss=loss, var_list=var_list)

  def build_mixture_weights_train_op(self, loss, var_list, logits, labels,
                                     iteration_step, summary):
    """See `adanet.subnetwork.Builder`."""
    return tf.no_op("mixture_weights_train_op")

  @property
  def name(self):
    """See `adanet.subnetwork.Builder`."""
    return "simple_cnn"


class SimpleCNNGenerator(adanet.subnetwork.Generator):
  """Generates a `SimpleCNN` at each iteration."""

  def __init__(self, learning_rate, max_iteration_steps, seed=None):
    """Initializes a `Generator` that builds `SimpleCNNs`.

    Args:
      learning_rate: The float learning rate to use.
      max_iteration_steps: The number of steps per iteration.
      seed: The random seed.

    Returns:
      An instance of `Generator`.
    """
    self._seed = seed
    self._dnn_builder_fn = functools.partial(
        SimpleCNNBuilder,
        learning_rate=learning_rate,
        max_iteration_steps=max_iteration_steps)

  def generate_candidates(self, previous_ensemble, iteration_number,
                          previous_ensemble_reports, all_reports):
    """See `adanet.subnetwork.Generator`."""
    seed = self._seed
    # Change the seed according to the iteration so that each subnetwork
    # learns something different.
    if seed is not None:
      seed += iteration_number
    return [self._dnn_builder_fn(seed=seed)]
```

## Launch TensorBoard

Let's run [TensorBoard](https://www.tensorflow.org/guide/summaries_and_tensorboard) to visualize model training over time. We'll use [ngrok](https://ngrok.com/) to tunnel traffic to localhost.

*The instructions for setting up TensorBoard were obtained from https://www.dlology.com/blog/quick-guide-to-run-tensorboard-in-google-colab/*

Run the next cells and follow the link to see the TensorBoard in a new tab.

```
#@test {"skip": true}
get_ipython().system_raw(
    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
    .format(MODEL_DIR)
)

# Install ngrok binary.
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
! unzip ngrok-stable-linux-amd64.zip

print("Follow this link to open TensorBoard in a new tab.")
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
```

## Using `adanet.TPUEstimator` to Train and Evaluate

Finally, we switch from `adanet.Estimator` to `adanet.TPUEstimator`. There are two last changes needed:

1. Update the `RunConfig` to be a [`tf.contrib.tpu.RunConfig`](https://www.tensorflow.org/api_docs/python/tf/contrib/tpu/RunConfig). We supply the TPU `master` address and set `iterations_per_loop=200`. This choice is fairly arbitrary in our case. A good practice is to set it to the number of steps in between summary writes and metric evals.
1. Finally, we specify the `use_tpu` and `batch_size` parameters of `adanet.TPUEstimator`.

There is an important thing to note about the `batch_size`: each TPU chip consists of 2 cores with 4 shards each. In the [Customizing AdaNet](https://colab.sandbox.google.com/github/tensorflow/adanet/blob/master/adanet/examples/tutorials/customizing_adanet.ipynb) tutorial, a `batch_size` of 64 was used. To be consistent we use the same `batch_size` per shard and drop the number of training steps accordingly. In other words, since we're running on one TPU we set `batch_size=64*8=512` and `train_steps=1000`. In the ideal case, since we drop the `train_steps` by 5x, this means we're **training 5x faster!**

```
#@title AdaNet Parameters
LEARNING_RATE = 0.25  #@param {type:"number"}
TRAIN_STEPS = 1000  #@param {type:"integer"}
BATCH_SIZE = 512  #@param {type:"integer"}
ADANET_ITERATIONS = 2  #@param {type:"integer"}

# TPU: switch `tf.estimator.RunConfig` to `tf.contrib.tpu.RunConfig`.
# The main required changes are specifying `tpu_config` and `master`.
config = tf.contrib.tpu.RunConfig( tpu_config=tf.contrib.tpu.TPUConfig(iterations_per_loop=200), master=MASTER, save_checkpoints_steps=200, save_summary_steps=200, tf_random_seed=RANDOM_SEED) head = tf.contrib.estimator.multi_class_head( n_classes=10, loss_reduction=tf.losses.Reduction.SUM_OVER_BATCH_SIZE) max_iteration_steps = TRAIN_STEPS // ADANET_ITERATIONS # TPU: switch `adanet.Estimator` to `adanet.TPUEstimator`. try: estimator = adanet.TPUEstimator( head=head, subnetwork_generator=SimpleCNNGenerator( learning_rate=LEARNING_RATE, max_iteration_steps=max_iteration_steps, seed=RANDOM_SEED), max_iteration_steps=max_iteration_steps, evaluator=adanet.Evaluator( input_fn=input_fn("train", training=False, batch_size=BATCH_SIZE), steps=None), adanet_loss_decay=.99, config=config, model_dir=MODEL_DIR, # TPU: specify `use_tpu` and the batch_size parameters. use_tpu=True, train_batch_size=BATCH_SIZE, eval_batch_size=32) except tf.errors.InvalidArgumentError as e: six.raise_from( Exception( "Invalid GCS Bucket: you must provide a valid GCS bucket in the " "`BUCKET` form field of the first cell."), e) results, _ = tf.estimator.train_and_evaluate( estimator, train_spec=tf.estimator.TrainSpec( input_fn=input_fn("train", training=True, batch_size=BATCH_SIZE), max_steps=TRAIN_STEPS), eval_spec=tf.estimator.EvalSpec( input_fn=input_fn("test", training=False, batch_size=BATCH_SIZE), steps=None, start_delay_secs=1, throttle_secs=1, )) print("Accuracy:", results["accuracy"]) print("Loss:", results["average_loss"]) ``` ## Conclusion That was easy! With very few changes we were able to transform our original estimator into one which can harness the power of TPUs.
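As a framework-free footnote to the `input_fn` changes above, the effect of `drop_remainder` can be illustrated with a plain-Python sketch (the `batch` helper here is illustrative, not part of `tf.data`): with `drop_remainder=True`, a dataset whose size is not a multiple of the batch size yields only full, fixed-shape batches, which is what the TPU graph requires.

```python
def batch(examples, batch_size, drop_remainder):
    """Mimics the batching behavior of tf.data.Dataset.batch() on a plain list."""
    batches = [examples[i:i + batch_size]
               for i in range(0, len(examples), batch_size)]
    if drop_remainder and batches and len(batches[-1]) < batch_size:
        batches.pop()
    return batches

examples = list(range(60000))  # Fashion MNIST training-set size
kept = batch(examples, 512, drop_remainder=True)   # training on TPU
full = batch(examples, 512, drop_remainder=False)  # evaluation on CPU
print(len(kept), len(full))  # 117 118
```

With 60000 examples and a global batch size of 512, the final partial batch of 96 examples is dropped during training but kept during evaluation.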
## Data Processing

```
import pandas as pd
import h2o
from h2o.estimators.random_forest import H2ORandomForestEstimator

h2o.init()

%%time
train_new = pd.read_csv("D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\titanic_train.csv")
test = pd.read_csv("D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\titanic_test.csv")
subs = pd.read_csv('D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\gender_submission.csv')
#train_new.to_pickle("D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\train_set.pkl")

%%time
train = pd.read_pickle("D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\train_set.pkl")

drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp', 'Parch']
train = train.drop(drop_elements, axis = 1)
test = test.drop(drop_elements, axis = 1)

def checkNull_fillData(df):
    for col in df.columns:
        if len(df.loc[df[col].isnull() == True]) != 0:
            if df[col].dtype == "float64" or df[col].dtype == "int64":
                df.loc[df[col].isnull() == True, col] = df[col].mean()
            else:
                df.loc[df[col].isnull() == True, col] = df[col].mode()[0]

checkNull_fillData(train)
checkNull_fillData(test)

str_list = []
num_list = []
for colname, colvalue in train.iteritems():
    if type(colvalue[1]) == str:
        str_list.append(colname)
    else:
        num_list.append(colname)

train = pd.get_dummies(train, columns=str_list)
test = pd.get_dummies(test, columns=str_list)

train = h2o.H2OFrame(train)
test = h2o.H2OFrame(test)

train.describe()
test.describe()

train1, valid1, new_data1 = train.split_frame(ratios = [.7, .15], seed = 1234)

#train.columns
test.columns

predictors = ["Age","Embarked_C","Pclass","Embarked_Q","Sex_male"]
response = "Fare"

titanic = H2ORandomForestEstimator(model_id="titanic", ntrees = 1, seed = 1234)
titanic.train(x = predictors, y = response, training_frame = train1, validation_frame = valid1)

model_path = h2o.save_model(model=titanic, path="D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\", force=True)
print(model_path)

titanic = h2o.load_model("D:\\DC Universe\\Ucsc\\Third Year\\ENH 3201 Industrial Placements\\H20 Applications\\H20 ML Notebooks\\H20Csv\\Titanic\\titanic")
titanic

train2, valid2, new_data2 = test.split_frame(ratios = [.7, .15], seed = 1234)

# Checkpoint on the same dataset. This shows how to train an additional
# 3 trees on top of the first 1. To do this, set ntrees equal to 4.
titanic_continued = H2ORandomForestEstimator(model_id = 'titanic_new', checkpoint = titanic, ntrees = 4, seed = 1234)
titanic_continued.train(x = predictors, y = response, training_frame = train2, validation_frame = valid2)

col_count = len(train.columns)
col_count

row_count = len(train)

# split_ratio = int(input("Enter Split Ratio >>>"))
#split_ratio = int(input("Enter Split Ratio >>>"))
#import modin.pandas as mpd
```
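The `checkNull_fillData` helper above can also be written with `fillna`. This is a sketch of the same rule (mean-impute numeric columns, mode-impute everything else), shown on a small hypothetical frame rather than the Titanic data:

```python
import numpy as np
import pandas as pd

def fill_missing(df):
    """Mean-impute numeric columns, mode-impute all other columns."""
    out = df.copy()
    for col in out.columns:
        if out[col].isnull().any():
            if np.issubdtype(out[col].dtype, np.number):
                out[col] = out[col].fillna(out[col].mean())
            else:
                out[col] = out[col].fillna(out[col].mode()[0])
    return out

demo = pd.DataFrame({
    "Age": [22.0, np.nan, 26.0],     # numeric -> filled with mean (24.0)
    "Embarked": ["S", "S", None],    # categorical -> filled with mode ("S")
})
filled = fill_missing(demo)
print(filled)
```

Unlike the in-place version in the notebook, this variant returns a new frame and leaves the input untouched, which makes it easier to apply identically to the train and test sets.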
# MNIST Classification with CNN -- Customized training loops --

In this notebook, I describe how to implement a CNN using tf.keras. Training is driven by a customized, step-by-step training loop.

```
import tensorflow as tf
import time
import numpy as np
import sklearn
from sklearn.model_selection import train_test_split
import matplotlib
import matplotlib.pyplot as plt
import os

%matplotlib inline

print("tensorflow version: ", tf.__version__)
print("numpy version: ", np.__version__)
print("scikit learn version: ", sklearn.__version__)
print("matplotlib version: ", matplotlib.__version__)
```

## 1. Load data & preprocessing

In this notebook, I use the pre-defined MNIST dataset.

```
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()

# Split original training dataset into train/validation dataset
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, shuffle=True)

# Normalize Intensity
X_train = X_train / 255.
X_val = X_val / 255.
X_test = X_test / 255.

# Convert into 4d tensor shape
X_train = X_train.reshape((*X_train.shape, 1))
X_val = X_val.reshape((*X_val.shape, 1))
X_test = X_test.reshape((*X_test.shape, 1))

# Convert into one-hot
y_train = tf.keras.utils.to_categorical(y_train)
y_val = tf.keras.utils.to_categorical(y_val)
y_test = tf.keras.utils.to_categorical(y_test)
```

## 2. Create tf.data.Dataset

```
train_batch_size = 64
test_batch_size = 1

# Build source dataset for training
X_train_dataset = tf.data.Dataset.from_tensor_slices(X_train)
y_train_dataset = tf.data.Dataset.from_tensor_slices(y_train)
train_dataset = tf.data.Dataset.zip((X_train_dataset, y_train_dataset)).batch(train_batch_size)

# Build source dataset for validation
X_valid_dataset = tf.data.Dataset.from_tensor_slices(X_val)
y_valid_dataset = tf.data.Dataset.from_tensor_slices(y_val)
validation_dataset = tf.data.Dataset.zip((X_valid_dataset, y_valid_dataset)).batch(train_batch_size)

# Build source dataset for test
X_test_dataset = tf.data.Dataset.from_tensor_slices(X_test)
y_test_dataset = tf.data.Dataset.from_tensor_slices(y_test)
test_dataset = tf.data.Dataset.zip((X_test_dataset, y_test_dataset)).batch(test_batch_size)
```

Debug dataset

```
def visualize_batch(X_batch, y_batch, y_pred=None):
    assert len(X_batch) == len(y_batch)
    n_col = 10
    if len(X_batch) % n_col == 0:
        n_row = len(X_batch) // n_col
    else:
        n_row = len(X_batch) // n_col + 1
    fig = plt.figure(figsize=(20, 15))
    for idx in range(len(y_batch)):
        if y_pred is not None:
            ax = fig.add_subplot(n_row, n_col, idx+1,
                                 title="gt={}, pred={}".format(np.argmax(y_batch[idx]), y_pred[idx]))
        else:
            ax = fig.add_subplot(n_row, n_col, idx+1,
                                 title="gt={}".format(np.argmax(y_batch[idx])))
        ax.imshow(X_batch[idx].reshape(28, 28), cmap='gray')
        ax.axes.xaxis.set_visible(False)
        ax.axes.yaxis.set_visible(False)
    plt.show()

for X_batch, y_batch in train_dataset:
    visualize_batch(X_batch.numpy(), y_batch.numpy())
    break
```

## 3. Create CNN model

Network structure: [CONV(32) - BN - RELU] - MAXPOOL - [CONV(64) - BN - RELU] - MAXPOOL - [FC(1024) - BN - RELU] - DROPOUT - FC(10) - SOFTMAX

The weight initialization rules are as follows:
- Layer with relu activation: He initialization
- Others: Xavier initialization

### Define network

By creating the model as a subclass of tf.keras.Model, we can also easily train the model with the fit() method.
``` class ConvNet(tf.keras.Model): def __init__(self, conv_filters=[32, 64], units=1024, num_class=10, dropout_rate=0.2): super(ConvNet, self).__init__(name="ConvNet") self.conv1 = tf.keras.layers.Conv2D(conv_filters[0], 3, 1, 'same', kernel_initializer=tf.keras.initializers.he_normal()) self.bn1 = tf.keras.layers.BatchNormalization() self.relu1 = tf.keras.layers.Activation("relu") self.pool1 = tf.keras.layers.MaxPool2D(2, 2, padding="same") self.conv2 = tf.keras.layers.Conv2D(conv_filters[1], 3, 1, 'same', kernel_initializer=tf.keras.initializers.he_normal()) self.bn2 = tf.keras.layers.BatchNormalization() self.relu2 = tf.keras.layers.Activation("relu") self.pool2 = tf.keras.layers.MaxPool2D(2, 2, padding="same") self.flattern = tf.keras.layers.Flatten() self.dense3 = tf.keras.layers.Dense(units, kernel_initializer=tf.keras.initializers.he_normal()) self.bn3 = tf.keras.layers.BatchNormalization() self.relu3 = tf.keras.layers.Activation("relu") self.dropout = tf.keras.layers.Dropout(rate=dropout_rate) self.dense4 = tf.keras.layers.Dense(num_class, kernel_initializer=tf.keras.initializers.glorot_normal()) def call(self, inputs, training=True): x = self.conv1(inputs) x = self.bn1(x, training) x = self.relu1(x) x = self.pool1(x) x = self.conv2(x) x = self.bn2(x, training) x = self.relu2(x) x = self.pool2(x) x = self.flattern(x) x = self.dense3(x) x = self.bn3(x, training) x = self.relu3(x) x = self.dropout(x, training) x = self.dense4(x) return x ``` ## 4. 
Training ``` lr = 0.0001 epochs = 10 # ------------------------------------------------------------- # Use categorical cross-entropy: the labels are one-hot vectors over 10 classes loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True) accuracy = tf.keras.metrics.CategoricalAccuracy() optimizer = tf.keras.optimizers.Adam(lr) # Define model model = ConvNet() @tf.function def train_on_batch(model, X, y, accuracy_i): # Open a GradientTape with tf.GradientTape() as tape: logits = model(X) loss_value = loss(y, logits) # Compute gradients gradients = tape.gradient(loss_value, model.trainable_weights) # Apply back propagation optimizer.apply_gradients(zip(gradients, model.trainable_weights)) # Update the running accuracy accuracy_i.update_state(y, logits) return loss_value @tf.function def eval_on_batch(model, X, y, accuracy_i): logits = model(X, training=False) loss_value = loss(y, logits) accuracy_i.update_state(y, logits) return loss_value for epoch in range(epochs): # Training train_losses = [] accuracy.reset_states() for step, (X, y) in enumerate(train_dataset): train_loss = train_on_batch(model, X, y, accuracy) train_losses.append(train_loss) train_accuracy = accuracy.result() train_loss = np.average(train_losses) # Validation accuracy.reset_states() validation_losses = [] for X, y in validation_dataset: val_loss = eval_on_batch(model, X, y, accuracy) validation_losses.append(val_loss) validation_accuracy = accuracy.result() validation_loss = np.average(validation_losses) print('Epoch: {}, loss: {:.5f}, accuracy: {:.5f}, val_loss: {:.5f}, val_accuracy: {:.5f}'.format( epoch, train_loss, train_accuracy, validation_loss, validation_accuracy )) model.save_weights('./checkpoints_2/model_{:04d}.ckpt'.format(epoch)) ``` ## 5.
Test ``` accuracy.reset_states() test_losses = [] for X, y in test_dataset: test_loss = eval_on_batch(model, X, y, accuracy) test_losses.append(test_loss) test_accuracy = accuracy.result() test_loss = np.average(test_losses) print('loss: {:.5f}, accuracy: {:.5f}'.format(test_loss, test_accuracy)) index = np.random.choice(np.arange(0, len(y_test)), size=60) test_input = X_test[index] y_true = y_test[index] predicted = model.predict(test_input) predicted_label = np.argmax(predicted, axis=1) visualize_batch(test_input, y_true, predicted_label) ```
``` import pandas as pd import numpy as np import datetime as datetime #read in last_played_wars csv last_played_wars = pd.read_csv("updated_last_played_wars.csv") # last_played_wars["Participation"] = last_played_wars["Joined Wars"] / last_played_wars["Total Wars"] last_played_wars = last_played_wars[["Name", "Tag", "Last War", "Town Hall", "Clan", "Total Wars"]] last_played_wars.head() #Split Last Played Wars by clan #Sheer Force #Create a copy of Sheer Force only to manipulate sf_lpw = last_played_wars.loc[last_played_wars.Clan == "Sheer Force"].copy() #Joined Wars of clan member sf_lpw["Joined Wars"] = sf_lpw["Total Wars"] #get maximum number of wars this season sf_max = sf_lpw["Total Wars"].max() sf_lpw["Total Wars"] = sf_max #find participation of members sf_lpw["Participation"] = sf_lpw["Joined Wars"]/sf_max #get < 50% participation members and add to slackers sheer_force_slackers = sf_lpw.loc[sf_lpw.Participation < .5].copy() sheer_force_slackers.head() sf = sheer_force_slackers.to_csv(r"sf.csv", index = True, header = True) #Dark Matter dm_lpw = last_played_wars.loc[last_played_wars.Clan == "Dark Matter"].copy() dm_lpw["Joined Wars"] = dm_lpw["Total Wars"] dm_max = dm_lpw["Total Wars"].max() dm_lpw["Total Wars"] = dm_max dm_lpw["Participation"] = dm_lpw["Joined Wars"]/dm_max dark_matter_slackers = dm_lpw.loc[dm_lpw.Participation < .5].copy() dark_matter_slackers.head() dm = dark_matter_slackers.to_csv(r"dm.csv", index = True, header = True) #Mini Matter mm_lpw = last_played_wars.loc[last_played_wars.Clan == "Mini Matter"].copy() mm_lpw["Joined Wars"] = mm_lpw["Total Wars"] mm_max = mm_lpw["Total Wars"].max() mm_lpw["Total Wars"] = mm_max mm_lpw["Participation"] = mm_lpw["Joined Wars"]/mm_max mini_matter_slackers = mm_lpw.loc[mm_lpw.Participation < .5].copy() mini_matter_slackers.head() mm = mini_matter_slackers.to_csv(r"mm.csv", index = True, header = True) #Legendary Monks lm_lpw = last_played_wars.loc[last_played_wars.Clan == "Legendary Monks"].copy() 
lm_lpw["Joined Wars"] = lm_lpw["Total Wars"] lm_max = lm_lpw["Total Wars"].max() lm_lpw["Total Wars"] = lm_max lm_lpw["Participation"] = lm_lpw["Joined Wars"]/lm_max legendary_monks_slackers = lm_lpw.loc[lm_lpw.Participation < .5].copy() legendary_monks_slackers.head() lm = legendary_monks_slackers.to_csv(r"lm.csv", index = True, header = True) #Golden Clan kbwf_lpw = last_played_wars.loc[last_played_wars.Clan == "Golden Clan"].copy() kbwf_lpw["Joined Wars"] = kbwf_lpw["Total Wars"] kbwf_max = kbwf_lpw["Total Wars"].max() kbwf_lpw["Total Wars"] = kbwf_max kbwf_lpw["Participation"] = kbwf_lpw["Joined Wars"]/kbwf_max killer_black_slackers = kbwf_lpw.loc[kbwf_lpw.Participation < .5].copy() killer_black_slackers.head() kbwf = killer_black_slackers.to_csv(r"gc.csv", index = True, header = True) ```
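The five per-clan blocks above repeat the same computation with only the clan name changed; a loop keeps the logic in one place. This is a sketch with inline sample data standing in for `updated_last_played_wars.csv` (the data values here are hypothetical):

```python
import pandas as pd

# Inline sample data standing in for updated_last_played_wars.csv
last_played_wars = pd.DataFrame({
    "Name": ["A", "B", "C", "D"],
    "Clan": ["Sheer Force", "Sheer Force", "Dark Matter", "Dark Matter"],
    "Total Wars": [10, 4, 8, 8],
})

slackers = {}
for clan in last_played_wars["Clan"].unique():
    clan_df = last_played_wars.loc[last_played_wars.Clan == clan].copy()
    # Each member's "Total Wars" is how many wars they joined;
    # the clan maximum is the number of wars held this season
    clan_df["Joined Wars"] = clan_df["Total Wars"]
    clan_df["Participation"] = clan_df["Joined Wars"] / clan_df["Total Wars"].max()
    # Members below 50% participation are that clan's slackers
    slackers[clan] = clan_df.loc[clan_df.Participation < 0.5]

print(slackers["Sheer Force"]["Name"].tolist())  # member B joined 4 of 10 wars
```

Each `slackers[clan]` frame can then be written out with `to_csv` exactly as the per-clan cells above do.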
``` import os import re import collections import copy import numpy as np import matplotlib.pyplot as plt %matplotlib inline from IPython.display import clear_output from random import sample import dlnlputils from dlnlputils.data import tokenize_corpus, build_vocabulary import torch device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # If you run this notebook on Colab or Kaggle, prepend ./stepik-dl-nlp to the paths with open('datasets/russian_names.txt') as input_file: names = input_file.read()[:-1].split('\n') names = [' ' + line for line in names] handled_text = names #all unique characters go here tokens = list(set(''.join(handled_text))) num_tokens = len(tokens) print ('num_tokens =', num_tokens) token_to_id = {token: idx for idx, token in enumerate(tokens)} assert len(tokens) == len(token_to_id), "dictionaries must have same size" for i in range(num_tokens): assert token_to_id[tokens[i]] == i, "token identifier must be its position in tokens list" print("Seems alright!") def to_matrix(data, token_to_id, max_len=None, dtype='int32', batch_first = True): """Casts a list of names into an rnn-digestible matrix""" max_len = max_len or max(map(len, data)) data_ix = np.zeros([len(data), max_len], dtype) + token_to_id[' '] for i in range(len(data)): line_ix = [token_to_id[c] for c in data[i]] data_ix[i, :len(line_ix)] = line_ix if not batch_first: # convert [batch, time] into [time, batch] data_ix = np.transpose(data_ix) return data_ix print(handled_text[3]) print(to_matrix(handled_text[3:5], token_to_id)[0]) print(len(handled_text[3])) import torch, torch.nn as nn import torch.nn.functional as F from IPython.display import clear_output from random import sample class CharLSTMLoop(nn.Module): def __init__(self, num_tokens=num_tokens, emb_size=32, hidden_size=128): super(self.__class__, self).__init__() self.emb = nn.Embedding(num_tokens, emb_size) self.LSTM = nn.LSTM(input_size=emb_size, hidden_size=hidden_size, num_layers=1, batch_first=True)
self.hid_to_logits = nn.Linear(hidden_size, num_tokens) def forward(self, x, h=None, c=None): if h is not None and c is not None: out_put, (h_new, c_new) = self.LSTM(self.emb(x), (h, c)) else: out_put, (h_new, c_new) = self.LSTM(self.emb(x)) next_logits = self.hid_to_logits(out_put) next_logp = F.log_softmax(next_logits, dim=-1) return next_logp, h_new, c_new MAX_LENGTH = max(map(len, handled_text)) model = CharLSTMLoop() model = model.to(device) opt = torch.optim.Adam(model.parameters()) history = [] best_loss = 6 best_model_wts = copy.deepcopy(model.state_dict()) for i in range(4000): batch_ix = to_matrix(sample(handled_text, 32), token_to_id, max_len=MAX_LENGTH) batch_ix = torch.tensor(batch_ix, dtype=torch.int64).to(device) logp_seq, _, _ = model(batch_ix) # compute loss predictions_logp = logp_seq[:, :-1] actual_next_tokens = batch_ix[:, 1:] loss = -torch.mean(torch.gather(predictions_logp, dim=2, index=actual_next_tokens[:,:,None])) if loss < best_loss: best_loss = loss best_model_wts = copy.deepcopy(model.state_dict()) # train with backprop loss.backward() opt.step() opt.zero_grad() history.append(loss.cpu().data.numpy()) if (i + 1) % 20 == 0: clear_output(True) plt.plot(history, label='loss') plt.legend() plt.show() assert np.mean(history[:25]) > np.mean(history[-25:]), "RNN didn't converge." model.load_state_dict(best_model_wts) def generate_sample(char_rnn, seed_phrase=' ', max_length=MAX_LENGTH, temperature=1.0): ''' The function generates text given a phrase of length at least SEQ_LENGTH. :param seed_phrase: prefix characters. The RNN is asked to continue the phrase :param max_length: maximum output length, including seed_phrase :param temperature: coefficient for sampling. 
higher temperature produces more chaotic outputs, smaller temperature converges to the single most likely output ''' x_sequence = [[token_to_id[token] for token in seed_phrase]] x_sequence = torch.tensor(x_sequence, dtype=torch.int64) h_t = None c_t = None if len(seed_phrase) > 1: # use the model passed in, not the global one _, h_t, c_t = char_rnn.forward(x_sequence[:, :-1], h_t) for _ in range(max_length - len(seed_phrase)): logp_next, h_t, c_t = char_rnn.forward(x_sequence[:, -1].unsqueeze(-1), h_t, c_t) p_next = F.softmax(logp_next / temperature, dim=-1).data.numpy()[0] next_ix = np.random.choice(len(tokens), p=p_next[0]) next_ix = torch.tensor([[next_ix]], dtype=torch.int64) x_sequence = torch.cat([x_sequence, next_ix], dim=1) return ''.join([tokens[ix] for ix in x_sequence[0].data.numpy()]) model = model.to('cpu') for _ in range(100): print(generate_sample(model, seed_phrase=' ', temperature=0.5)) torch.save(model, 'Names_generation.pt') ```
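The `temperature` parameter in `generate_sample` rescales the logits before sampling; its effect can be sketched in plain numpy (the logit values here are illustrative):

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before normalising:
    # T < 1 sharpens the distribution, T > 1 flattens it
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)

# Lower temperature concentrates probability mass on the most likely token
print(cold[0] > hot[0])  # prints True
```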
# "Thresholding" # Recap This is the Lab on Thresholding for Classical Image Segmentation in CE6003. You should complete the tasks in this lab as part of the Thresholding section of the lesson. Please remember this lab must be completed before taking the quiz at the end of this lesson. First, if we haven't already done so, we need to clone the various images and resources needed to run these labs into our workspace. ``` #!git clone https://github.com/EmdaloTechnologies/CE6003.git ``` # Introduction In this lab we introduce our first image segmentation project where we will use thresholding operations to segment a relatively simple image. We will work through this project using the types of image processing techniques such projects typically need and then segment an image. At the end of the lab we'll review the work we've done and assess what types of images and projects this approach is effective for. # Goal In this tutorial we will learn about three key items: * The general image processing algorithms that are required for most image processing projects; e.g. denoising, * Using our first classical segmentation technique on images (thresholding); * How to use Otsu's Method to automatically find a suitable threshold level to segment an image. # Background Image segmentation is the process of partitioning a digital image into multiple segments to make the image easier to analyze. Often we are looking to locate objects and boundaries in the original image. A more precise way of looking at it is to say that image segmentation's goal is to assign a label to every pixel in an image such that pixels with the same label share certain characteristics. For example, these images show a typical road scene on the left and a segmented version of the image on the right where the cars have been separated from the road, the buildings, the people in the scene, etc.
<p float="center"> <img src="https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/bg-road.png?raw=1" width="450" /> </p> # Our First Segmenter So, now that we've seen what is possible, let's start by solving our first segmentation problem. Let's look at this image of a starfish. Let's examine it in its original color, in grayscale and in black and white. Colour | Grayscale | Black & White :--------------------------------:|:------------------------------------:|:---------------------------: ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_resize.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_grey_resize.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_bw_resize.png?raw=1) We are searching these images for clues as to how we might be able to segment them into two regions - the 'starfish' region and the 'not starfish' region. It turns out we can segment this image into a region approximating the starfish and a background region (the 'not starfish' region) using thresholding, along with general purpose image processing techniques such as denoising, morphological operations, and some contour detection and drawing. Finally, once we've established a boundary for the starfish, we can fill our starfish shape. After that we'll use some bitwise operations to overlay our segmented image over the original image. First, let's use OpenCV's fastNlMeansDenoisingColored routine to denoise the image. We're using larger 'h' and 'hColor' values than typically used, as the image is noisier than images typically used with this technique. (This should make more sense as we go forward into the CNN segmentation examples).
``` # First import OpenCV, NumPY and MatPlotLib as we will use these libraries import cv2 import numpy as np import matplotlib.pyplot as plt %matplotlib inline # Load a color image #img = cv2.imread("/content/CE6003/images/lab2/starfish.png") img = cv2.imread("./images/lab2/starfish.png") #plt.imshow(img, cmap='gray') #plt.imshow(img) plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) # Apply some blurring to reduce noise # h is the Parameter regulating filter strength for luminance component. # Bigger h value perfectly removes noise but also removes image details, # smaller h value preserves details but also preserves some noise # Hint: I recommend using larger h and hColor values than typical to remove noise at the # expense of losing image details # YOUR CODE HERE # Experiment with setting h and hColor to a suitable value. # Exercise: Insert code here to set values for h and hColor. # https://docs.opencv.org/2.4/modules/photo/doc/denoising.html # # h – Parameter regulating filter strength for luminance component. Bigger h value perfectly # removes noise but also removes image details, smaller h value preserves details but also # preserves some noise # # hColor – The same as h but for color components. For most images a value of 10 will be enough # to remove colored noise and not distort colors # h = 20 hColor = 10 # END YOUR CODE HERE # Default values templateWindowSize = 7 searchWindowSize = 21 blur = cv2.fastNlMeansDenoisingColored(img, None, h, hColor, templateWindowSize, searchWindowSize) plt.imshow(cv2.cvtColor(blur, cv2.COLOR_BGR2RGB)) ``` After applying the fastNlMeansDenoisingColored routine above, you should end up with an image similar to the one on the right here. You may need to vary the h and hColor parameters to observe the effect of changing them on the blurred image. Your blurred image should look like this one.
<img src="https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_blur.png?raw=1" alt="Blurred Starfish" align="left" style="width: 300px;"/> Now, let's run a morphological operation on the blurred image. For this example, we are going to generate a gradient. This builds on dilation and erosion. You can read more about erosion and dilation in the 'Basics' section of Lesson 2. Today we are going to use them to generate an outline for our starfish. # Edge Detection Instead of using a gradient, you could use an edge detector such as Sobel, Laplacian or Canny here, in combination with adjusting the image denoising step above. I'll leave those as an exercise for the reader for now! ``` # Apply a morphological gradient (dilate the image, erode the image, and take the difference) elKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (13,13)) # YOUR CODE HERE # Exercise: Use openCV's morphologyEx to generate a gradient using the kernel above gradient = cv2.morphologyEx(blur, cv2.MORPH_GRADIENT, elKernel) # END YOUR CODE HERE plt.imshow(cv2.cvtColor(gradient, cv2.COLOR_BGR2RGB)) ``` After applying the gradient morphology routine above, you should end up with an image similar to the one shown here. The outline of the starfish should be starting to emerge at this point. <img src="https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_grad.png?raw=1" alt="Gradient Starfish" align="left" style="width: 300px;"/> We now have completed the pre-processing of our image. From this point onwards, we are concerning ourselves with: a) filling the region of interest, and b) removing artefacts from the image which we are not interested in. There are quite a few approaches we can take to this (including not doing them at all), but today let's apply OTSU's threshold to convert the image to black and white, and perform a closing operation to 'fill in' the starfish and then perform some erosion to remove parts of the image that we consider noise.
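The morphological gradient used above is just dilation minus erosion. On a tiny binary image this can be sketched in plain numpy (a square structuring element implemented as a sliding max/min window, no OpenCV required):

```python
import numpy as np

def morpho(img, k, op):
    """Apply op (np.max for dilation, np.min for erosion) over k-by-k windows."""
    p = k // 2
    # Pad so the borders don't spuriously dilate (pad 0s) or erode (pad 1s)
    pad_val = 0 if op is np.max else 1
    padded = np.pad(img, p, constant_values=pad_val)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = op(padded[i:i + k, j:j + k])
    return out

img = np.zeros((7, 7), dtype=int)
img[2:5, 2:5] = 1  # a 3x3 white square

# Morphological gradient = dilation - erosion: only the object's outline remains
gradient = morpho(img, 3, np.max) - morpho(img, 3, np.min)
print(gradient[3, 3], gradient[2, 2])  # prints 0 1 (interior suppressed, edge kept)
```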
## OTSU Thresholding When converting from a grayscale image to a black and white image, selecting a good threshold value can be a time-consuming and manual process. There are a number of automatic thresholding techniques available - and Otsu's method is one of the better-known techniques. Conceptually simple, and relatively low cost computationally, Otsu's method iterates through all the possible threshold values to find the threshold value where the sum of foreground and background spreads is at its minimum. ``` # Apply Otsu's method - or you can adjust the level at which thresholding occurs # and see what the effect of this is # Convert gradient to grayscale gray_gradient = cv2.cvtColor(gradient, cv2.COLOR_BGR2GRAY) # YOUR CODE HERE # Exercise: Generate a matrix called otsu using OpenCV's threshold() function. Use # Otsu's method. # Otsu's thresholding ret, otsu = cv2.threshold(gray_gradient,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU) plt.imshow(otsu, cmap='gray') # END YOUR CODE HERE # Apply a closing operation - we're using a large kernel here. By all means adjust the size of this kernel # and observe the effects closingKernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (33,33)) close = cv2.morphologyEx(otsu, cv2.MORPH_CLOSE, closingKernel) plt.imshow(close, cmap='gray') # Erode smaller artefacts out of the image - play with iterations to see how it works # YOUR CODE HERE # Exercise: Generate a matrix called eroded using cv2.erode() function over the 'close' matrix.
# Experiment until your output image is similar to the image below # Taking a matrix of size n as the kernel n = 5 kernel = np.ones((n,n), np.uint8) eroded = cv2.erode(close,kernel,iterations = 5) # END YOUR CODE HERE plt.imshow(eroded, cmap='gray') ``` After switching to black and white and applying our closing and erosion operations, our simplified starfish is starting to emerge. Original Image | B&W Image | After Closing | After Erosion :------------------------:|:------------------------------:|:-------------------------------:|:-------------- ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_otsu.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_closed.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_eroded.png?raw=1) So, now we've effectively segmented our image. Now, let's post-process the image to find the contours that represent the edge of the starfish. We'll just use the intuition that the starfish is the largest object in the scene. Then we'll do a little image manipulation to generate a colour representing the starfish, another colour representing the background (i.e. not the starfish) and then merge those colours with the original image. You'll notice the closing and erosion steps are not perfect - they're not supposed to be. They are good enough to feed into the findContours routine. By all means, tune them further to get better quality input into findContours. In the findContours routine we're using cv2.RETR_EXTERNAL. This is to reduce the complexity of post-processing by only reporting 'external' contours (i.e. we'll attempt to suppress contours that are inside other contours).
``` p = int(img.shape[1] * 0.05) eroded[:, 0:p] = 0 eroded[:, img.shape[1] - p:] = 0 plt.imshow(eroded, cmap='gray') # from https://www.programcreek.com/python/example/70440/cv2.findContours major = cv2.__version__.split('.')[0] print(major) # YOUR CODE HERE # Exercise: Find the contours - just external contours to keep post-processing simple contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) # END YOUR CODE HERE # Sort the candidates by size, and just keep the largest one contours = sorted(contours, key=cv2.contourArea, reverse=True)[:1] # Let's create two images, initially all zeros (i.e. black) # One image will be filled with 'Blue' wherever we think there's some starfish # The other image will be filled with 'Green' wherever we think there's not some starfish h, w, num_c = img.shape segmask = np.zeros((h, w, num_c), np.uint8) stencil = np.zeros((h, w, num_c), np.uint8) # I know we've only one contour, but - in general - we'd expect to have more contours to deal with for c in contours: # Fill in the starfish shape into segmask cv2.drawContours(segmask, [c], 0, (255, 0, 0), -1) # Let's fill in the starfish shape into stencil as well # and then re-arrange the colors using numpy cv2.drawContours(stencil, [c], 0, (255, 0, 0), -1) stencil[np.where((stencil==[0,0,0]).all(axis=2))] = [0, 255, 0] stencil[np.where((stencil==[255,0,0]).all(axis=2))] = [0, 0, 0] # Now, let's create a mask image by bitwise ORing segmask and stencil together mask = cv2.bitwise_or(stencil, segmask) plt.imshow(cv2.cvtColor(mask, cv2.COLOR_BGR2RGB)) ``` You should have generated a reasonable mask representing our image as having two parts - a 'starfish' and 'not a starfish'. It should look like the final mask in the image below.
Starfish Mask | Not Starfish Mask | Final Mask :-------------------------------:|:--------------------------------:|:------------------------------- ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_segmask.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_stencil.png?raw=1) | ![](https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_mask.png?raw=1) ``` # Now, let's just blend our original image with our mask # YOUR CODE HERE # Exercise: Blend the original image 'img' and our mask 'mask' # in any way you see fit, and store it in a variable called output # Hint: You'll find answers at the bottom of the lab. output = cv2.bitwise_or(mask, img) # END YOUR CODE HERE plt.imshow(cv2.cvtColor(output, cv2.COLOR_BGR2RGB)) ``` After you blend the original image with your mask you should see an image similar to the image shown here. <img src="https://github.com/EmdaloTechnologies/CE6003/blob/master/images/lab2/starfish_segmented.png?raw=1" alt="Segmented Starfish" align="left" style="width: 300px;"/> # Conclusion So, that completes the first of the four labs to this module. To summarise, we've learned some basic image processing techniques, such as morphological operations (erosion and dilation) and contour detection, and we've used these techniques in combination with Otsu's thresholding method to segment an image.
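As a recap of how Otsu's method actually picks the threshold: it scans every candidate value and keeps the one maximising the between-class variance (equivalent to minimising the within-class spreads described above). A minimal numpy sketch, with a toy two-cluster image standing in for a real photo:

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising between-class variance for an 8-bit image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var = 0, -1.0
    w_b = 0.0    # background pixel count so far
    sum_b = 0.0  # background intensity sum so far
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        between = w_b * w_f * (mean_b - mean_f) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t

# Two well-separated intensity clusters: the threshold lands between them
img = np.array([10] * 50 + [200] * 50, dtype=np.uint8).reshape(10, 10)
t = otsu_threshold(img)
print(10 <= t < 200)  # prints True
```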
# Week 1 Assignment: Housing Prices In this exercise you'll try to build a neural network that predicts the price of a house according to a simple formula. Imagine that house pricing is as easy as: A house has a base cost of 50k, and every additional bedroom adds a cost of 50k. This will make a 1 bedroom house cost 100k, a 2 bedroom house cost 150k etc. How would you create a neural network that learns this relationship so that it would predict a 7 bedroom house as costing close to 400k? Hint: Your network might work better if you scale the house price down. You don't have to give the answer 400...it might be better to create something that predicts the number 4, and then your answer is in the 'hundreds of thousands' etc. ``` import keras import numpy as np # GRADED FUNCTION: house_model def normalize(xs,ys): return xs,ys*0.01 def un_normalize(xs,ys): return xs,ys*100. def house_model(): ### START CODE HERE # Define input and output tensors with the values for houses with 1 up to 6 bedrooms # Hint: Remember to explicitly set the dtype as float #xs:number of bedrooms xs = np.arange(6,dtype=float) #ys:cost of house ys = 50+50*xs print(xs) print(ys) xs,ys=normalize(xs,ys) print(xs) print(ys) # Define your model (should be a model with 1 dense layer and 1 unit) model = keras.Sequential([keras.layers.Dense(units=1,input_shape=[1])]) # Compile your model # Set the optimizer to Stochastic Gradient Descent # and use Mean Squared Error as the loss function model.compile(optimizer='sgd',loss='mean_squared_error') # Train your model for 1000 epochs by feeding the i/o tensors model.fit(xs,ys,epochs=1000) ### END CODE HERE return model ``` Now that you have a function that returns a compiled and trained model when invoked, use it to get the model to predict the price of houses: ``` # Get your trained model model = house_model() ``` Now that your model has finished training it is time to test it out! You can do so by running the next cell.
``` new_x = 7.0 prediction = model.predict([new_x])[0]*100. print(50+7*50) print(prediction) ``` If everything went as expected you should see a prediction value very close to 400 (the model itself predicts close to 4, which the cell above scales back up by 100). **If not, try adjusting your code before submitting the assignment.** Notice that you can play around with the value of `new_x` to get different predictions. In general you should see that the network was able to learn the linear relationship between `x` and `y`, so if you use a value of 8.0 you should get a prediction close to 450 (4.5 before un-scaling) and so on. **Congratulations on finishing this week's assignment!** You have successfully coded a neural network that learned the linear relationship between two variables. Nice job! **Keep it up!**
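The relationship the network is asked to learn is exactly linear, so an ordinary least-squares fit recovers it in closed form. A numpy sanity check on the same scaled data (no Keras involved):

```python
import numpy as np

xs = np.arange(6, dtype=float)  # bedrooms 0..5, as in the notebook
ys = (50 + 50 * xs) * 0.01      # prices scaled down by 100, as in normalize()

# Fit y = w*x + b by least squares; the data is exactly linear, so the fit is exact
w, b = np.polyfit(xs, ys, deg=1)
print(round(w, 6), round(b, 6))  # prints 0.5 0.5

# Predict a 7-bedroom house, then undo the scaling
price = (w * 7.0 + b) * 100.0
print(round(price))  # prints 400
```

This is the target the trained network should approach after 1000 epochs of SGD.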
# Handwritten Number Recognition with TFLearn and MNIST In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the **MNIST** data set, which consists of images of handwritten numbers and their correct labels 0-9. We'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network. ``` # Import Numpy, TensorFlow, TFLearn, and MNIST data import numpy as np import tensorflow as tf import tflearn import tflearn.datasets.mnist as mnist ``` ## Retrieving training and test data The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data. Each MNIST data point has: 1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image) We'll call the images, which will be the input to our neural network, **X** and their corresponding labels **Y**. We're going to want our labels as *one-hot vectors*, which are vectors that hold mostly 0's and a single 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. ### Flattened data For this example, we'll be using *flattened* data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
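Both transformations described above are one-liners in numpy; a sketch with a toy batch of four labels and four blank 28x28 images standing in for MNIST:

```python
import numpy as np

labels = np.array([0, 4, 9, 4])

# One-hot encoding: pick one identity-matrix row per label
one_hot = np.eye(10)[labels]
print(one_hot[1])  # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]

# Flattening: collapse each 28x28 image into a 784-long row
images = np.zeros((4, 28, 28))
flat = images.reshape(len(images), -1)
print(flat.shape)  # (4, 784)
```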
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network. ``` # Retrieve the training and test data trainX, trainY, testX, testY = mnist.load_data(one_hot=True) ``` ## Visualize the training data Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function `show_digit` will display that training image along with its corresponding label in the title. ``` # Visualizing the data import matplotlib.pyplot as plt %matplotlib inline # Function for displaying a training image by its index in the MNIST set def show_digit(index): label = trainY[index].argmax(axis=0) # Reshape 784 array into 28x28 image image = trainX[index].reshape([28,28]) plt.title('Training data, index: %d, Label: %d' % (index, label)) plt.imshow(image, cmap='gray_r') plt.show() # Display the first (index 0) training image show_digit(0) ``` ## Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: 1. The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. 2. Hidden layers, which recognize patterns in data and connect the input to the output layer, and 3. The output layer, which defines how the network learns and outputs a label for a given image. Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example, ``` net = tflearn.input_data([None, 100]) ``` would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data.
For this example, we're using 784 element long vectors to encode our input data, so we need **784 input units**. ### Adding layers To add new hidden layers, you use ``` net = tflearn.fully_connected(net, n_units, activation='ReLU') ``` This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call; it designates the input to the hidden layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeatedly calling `tflearn.fully_connected(net, n_units)`. Then, to set how you train the network, use: ``` net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') ``` Again, this is passing in the network you've been building. The keywords: * `optimizer` sets the training method, here stochastic gradient descent * `learning_rate` is the learning rate * `loss` determines how the network error is calculated. In this example, with categorical cross-entropy. Finally, you put all this together to create the model with `tflearn.DNN(net)`. **Exercise:** Below in the `build_model()` function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc. **Hint:** The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a `softmax` activation layer as your final output layer.
``` # Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model net = tflearn.input_data([None, trainX.shape[1]]) net = tflearn.fully_connected(net, 128, activation='ReLU') # Output layer: 10 softmax units, one per digit net = tflearn.fully_connected(net, 10, activation='softmax') net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy') # This model assumes that your network is named "net" model = tflearn.DNN(net) return model # Build the model model = build_model() ``` ## Training the network Now that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1` which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively. Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely! ``` # Training model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20) ``` ## Testing After you're satisfied with the training output and accuracy, you can then run the network on the **test data set** to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be **higher than 95% accuracy**. Some simple models have been known to get up to 99.7% accuracy! ``` # Compare the labels that our model predicts with the actual labels # Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)

# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)

# Print out the result
print("Test accuracy: ", test_accuracy)
```
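The argmax-based accuracy computation above can be illustrated with plain NumPy on a tiny made-up batch (the prediction matrix and labels here are invented for illustration; they are not MNIST data):

```python
import numpy as np

# Fake softmax outputs for 3 samples over 4 classes (each row sums to 1).
predictions = np.array([[0.1, 0.7, 0.1, 0.1],   # most confident in class 1
                        [0.6, 0.2, 0.1, 0.1],   # most confident in class 0
                        [0.2, 0.2, 0.2, 0.4]])  # most confident in class 3

# One-hot encoded true labels for the same samples.
labels = np.array([[0, 1, 0, 0],   # true class 1
                   [0, 0, 0, 1],   # true class 3
                   [0, 0, 0, 1]])  # true class 3

pred_classes = predictions.argmax(axis=1)  # index of the max per row
true_classes = labels.argmax(axis=1)       # recover the class id from one-hot
accuracy = np.mean(pred_classes == true_classes)
print(accuracy)  # 2 of 3 correct -> 0.666...
```

`axis=1` is essential: it takes the argmax across each row (one sample), not down each column.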
# Pandas

```
import numpy as np
import pandas as pd
```

Pandas provides three data structures: `Series`, `DataFrame`, and `Panel`.

* `Series` holds one-dimensional data
* `DataFrame` holds two-dimensional data
* `Panel` holds three-dimensional or variable-dimensional data

## The Series data structure

A `Series` is essentially a one-dimensional array with an index.

With an explicit index:

```
s = pd.Series([1,3,2,4], index=['a', 'b', 'c', 'd'])
s.index
s.values
```

With the default index:

```
s = pd.Series([1, 3, 2, 4])
s.index
s.values
```

## The DataFrame data structure

### Creating a DataFrame

```
df = pd.DataFrame({'x': ['a', 'b', 'c'],
                   'y': range(1, 4),
                   'z': [2, 5, 3]})
df
df.columns
df.values
```

### Inspecting the data

* `df.info()` shows the DataFrame's summary information
* `df.head()` shows the first five rows
* `df.tail()` shows the last five rows

### Selecting multiple columns

* df.loc
* df.iloc

```
df[['x', 'y']]
df.loc[:, ['x', 'y']]
df.iloc[:, [0, 1]]
```

### Filtering rows

```
df[df.z>=3]
```

### Renaming columns

```
df.rename(columns={'x': 'X'}, inplace=True)
df
df.columns = ['X', 'Y', 'Z']
df
```

### Multi-level indexing

```
df = pd.DataFrame({
    'X': list('ABCABC'),
    'year': [2010] * 3 + [2011] * 3,
    'Value': [1, 3, 4, 3, 5, 2]
})
df
df.set_index(['X', 'year'])
```

## Reshaping tables

```
df = pd.DataFrame({
    'X': list('ABC'),
    '2010': [1, 3, 4],
    '2011': [3, 5, 2]
})
df
df_melt = pd.melt(df, id_vars='X', var_name='year', value_name='value')
df_melt
```

* `id_vars='X'` specifies the identifier variables that label each observation
* `var_name='year'` is the name of the column that will store the original column names
* `value_name='value'` is the name of the column that will store the original values

```
df_pivot = df_melt.pivot_table(index='X', columns='year', values='value')
df_pivot.reset_index(inplace=True)
df_pivot
```

## Transforming variables

* `apply` operates on a whole column (`axis=0`) or a whole row (`axis=1`) of a `DataFrame`
* `applymap` operates element-wise, acting on every value of a `DataFrame`

## Sorting tables

`df.sort_values(by, ascending=True)`

```
df
df.sort_values('2010', ascending=False)
df.sort_values('2011', ascending=True)
df.sort_values(by=['X', '2010'], ascending=False)
```

## Concatenating tables

```
df1 = pd.DataFrame({
    'x': ['a', 'b', 'c'],
    'y': range(1, 4),
})
df2 = pd.DataFrame({
    'z': ['B', 'D', 'H'],
    'g': [2, 5, 3]
})
df3 = pd.DataFrame({
    'x': ['g', 'd'],
    'y': [2, 5]
})
```

Concatenating along the column axis:

```
pd.concat([df1, df2], axis=1)
```

Concatenating along the row axis:

```
pd.concat([df1, df3], axis=0).reset_index()
```

## Merging tables

```
df1 = pd.DataFrame({
    'x': list('abc'),
    'y': range(1, 4)
})
df2 = pd.DataFrame({
    'x': list('abd'),
    'z': [2, 5, 3]
})
df3 = pd.DataFrame({
    'g': list('abd'),
    'z': [2, 5, 3]
})
df1
df2
df3
```

Keep all rows of the left table:

```
pd.merge(df1, df2, how='left', on='x')
```

Keep all rows of the right table:

```
pd.merge(df1, df2, how='right', on='x')
```

Keep only the rows the two tables have in common:

```
pd.merge(df1, df2, how='inner', on='x')
```

Keep all rows from both tables:

```
pd.merge(df1, df2, how='outer', on='x')
```

## Grouped operations

```
df = pd.DataFrame({
    'X': list('ABC'),
    '2010': [1, 3, 4],
    '2011': [3, 5, 2]
})
df
```

Row- and column-wise operations.

Summing across each row:

```
df[['2010', '2011']].apply(lambda x: x.sum(), axis=1)
```

Summing down each column:

```
df[['2010', '2011']].apply(lambda x: x.sum(), axis=0)
```

Combining multiple columns:

```
df['2010_2011'] = df[['2010', '2011']].apply(lambda x: x['2010'] + 2 * x['2011'], axis=1)
df
```

Grouping:

```
df = pd.DataFrame({
    'X': list('ABC'),
    '2010': [1, 3, 4],
    '2011': [3, 5, 2]
})
df_melt = pd.melt(df, id_vars=['X'], var_name='year', value_name='value')
df_melt
```

Group by `year` and take the mean:

```
df_melt.groupby('year').mean()
```

Group by both `year` and `X` and take the mean:

```
df_melt.groupby(['year', 'X']).mean()
df_melt.groupby(['year', 'X'], as_index=False).mean()
```

Grouped aggregation:

```
df_melt.groupby(['X', 'year']).aggregate([np.mean, np.median])
```

Grouped transformation with the `transform()` function (each group's values are divided by that group's own total):

```
df_melt['percentage'] = df_melt.groupby('X')['value'].transform(lambda x: x/x.sum())
df_melt
```

Grouped filtering with the `filter()` function:

```
df_melt.groupby('X').filter(lambda x: x['value'].mean()>2)
```
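The `apply`/`applymap` distinction described above, which has no code cell of its own, can be shown with a small sketch (the DataFrame here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})

# apply with axis=0 passes each COLUMN to the function as a Series
col_sums = df.apply(lambda col: col.sum(), axis=0)   # a -> 6, b -> 60

# apply with axis=1 passes each ROW to the function as a Series
row_sums = df.apply(lambda row: row.sum(), axis=1)   # 11, 22, 33

# applymap works element-wise, on every single cell
squared = df.applymap(lambda v: v * v)

print(col_sums['a'], row_sums[0], squared.loc[0, 'b'])  # 6 11 100
```

Note that in recent pandas versions `DataFrame.applymap` has been renamed to `DataFrame.map`; `applymap` still works but may emit a deprecation warning.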
``` from numpy.random import Generator import numpy as np import pandas as pd import matplotlib.pyplot as plt import random from scipy.optimize import minimize from scipy.stats import gaussian_kde from scipy.stats import ks_2samp from scipy.special import rel_entr from scipy.stats import entropy import scipy from optimparallel import minimize_parallel config = { 'nbase': 500, 'neca': 2000, 'nwaypoints': 20, 'optimizeon': 'entropy' } ``` # Toy Datasets ``` # For Base cohort age = 45 lab1 = 3.4 lab2 = 5000 pcond1 = 0.3 pcat1 = 0.5 mean = [age, lab1, lab2] cov = [[14, 3, 100], [3, 2, 50], [100, 50, 25000]] np.random.seed(42) x = np.random.multivariate_normal(mean, cov, config['nbase']) x[x< 0] = 0 lab3 = np.random.beta(1, 2, size=config['nbase']) x = np.concatenate((x, lab3.reshape(config['nbase'], -1)), axis=1) cond1 = np.random.binomial(1, pcond1, size=config['nbase']).reshape(config['nbase'], -1) x = np.concatenate((x, cond1), axis=1) cat1 = np.random.binomial(3, pcat1, size=config['nbase']).reshape(config['nbase'], -1) x = np.concatenate((x, cat1), axis=1) data_base = pd.DataFrame(x, columns=['age', 'lab1', 'lab2', 'lab3', 'cond1', 'cat1']) # For External cohort age = 50 lab1 = 5.5 lab2 = 4800 pcond1 = 0.5 pcat1 = 0.7 factor = 1.5 mean = [age, lab1, lab2] cov = [[20, 5, 150], [5, 4, 100], [150, 100, 55000]] x = np.random.multivariate_normal(mean, cov, config['neca']) x[x< 0] = 0 lab3 = factor*np.random.beta(1, 2, size=config['neca']) x = np.concatenate((x, lab3.reshape(config['neca'], -1)), axis=1) cond1 = np.random.binomial(1, pcond1, size=config['neca']).reshape(config['neca'], -1) x = np.concatenate((x, cond1), axis=1) cat1 = np.random.binomial(3, pcat1, size=config['neca']).reshape(config['neca'], -1) x = np.concatenate((x, cat1), axis=1) data_eca = pd.DataFrame(x, columns=['age', 'lab1', 'lab2', 'lab3', 'cond1', 'cat1']) # Define Cohen's d: Standardized Mean Difference def cohensd(d1, d2): num = np.mean(d1) - np.mean(d2) den = np.sqrt((np.std(d1) ** 2 + 
np.std(d2) ** 2) / 2)
    cohens_d = num / den
    return cohens_d

# Plot data distribution
def plot_dist(data1, data2, col, ax=None):
    if not ax:
        _, ax = plt.subplots(1,1)
    ax.hist(data1[col], density=True, color='black', histtype='stepfilled',
            edgecolor='black', bins=20, linewidth=1.2, label=data1.name)
    ax.hist(data2[col], density=True, bins=20, color='blue', histtype='stepfilled',
            edgecolor='blue', linewidth=1.2, label=data2.name, alpha=0.7)
    ax.legend(loc='best')
    ax.set_title(col)
    return

# View the starting distributions
_, ax = plt.subplots(3,2, figsize=(15, 10))
data_base.name = 'RCT Cohort'
data_eca.name = 'ECA Cohort'
plot_dist(data_base, data_eca, 'age', ax[0][0])
plot_dist(data_base, data_eca, 'lab1', ax[0][1])
plot_dist(data_base, data_eca, 'lab2', ax[1][0])
plot_dist(data_base, data_eca, 'lab3', ax[1][1])
plot_dist(data_base, data_eca, 'cond1', ax[2][0])
plot_dist(data_base, data_eca, 'cat1', ax[2][1])

# Define main function here
def genetic_matching(df_base, df_eca, optimize=True, pweights=None):
    # Set the plotting style
    try:
        plt.style.use('neuroblu')
    except:
        print('neuroblu style not found.
Using default style!')

    # Define loss function
    def calc_loss(w):
        est_den = {}
        for c in df_base.columns:
            est_den[c] = gaussian_kde(df_eca[c], weights=w)
        if config['optimizeon'] == 'least_squares':
            loss = 0
            for c in df_base.columns:
                loss += (kde_base[c] - est_den[c](waypoints[c])*barwidth[c])**2
            return sum(loss)
        elif config['optimizeon'] == 'rel_entr':
            loss = 0
            for c in df_base.columns:
                loss += rel_entr(kde_base[c], est_den[c](waypoints[c])*barwidth[c])
            return sum(loss)
        elif config['optimizeon'] == 'entropy':
            loss = 0
            for c in df_base.columns:
                loss += entropy(kde_base[c], est_den[c](waypoints[c])*barwidth[c])
            return loss
        else:
            raise NotImplementedError(config['optimizeon'])

    if optimize:
        # Precompute parameters required for KDE estimation
        dist_density, waypoints, barwidth, kde_base = {}, {}, {}, {}
        for c in df_base.columns:
            dist_density[c] = gaussian_kde(df_base[c])
            b = max(df_base[c].max(), df_eca[c].max())
            a = min(df_base[c].min(), df_eca[c].min())
            waypoints[c] = np.linspace(a, b, config['nwaypoints'])
            barwidth[c] = np.diff(waypoints[c])[0]
            kde_base[c] = dist_density[c](waypoints[c])*barwidth[c]

        # Optimization
        bounds = list(zip(np.zeros(config['neca']), np.ones(config['neca'])))
        weights = np.ones(config['neca'])/config['neca']
        wopt = minimize(calc_loss, weights, bounds=bounds, method="L-BFGS-B",
                        options={'gtol': 1e-3, 'maxiter': 200, 'disp': True})
        p = wopt.x/sum(wopt.x)

        # Plot weights
        fig = plt.figure()
        ax = fig.add_subplot(1, 1, 1)
        ax.hist(p, bins=20)
        ax.set_title('Distribution of weights')
        ax.set_yscale('log')

        # Results of optimization
        print('Total Loss:', wopt.fun)
        print('Has optimization converged:', wopt.success)
    else:
        # fall back to user-supplied sampling weights
        p = pweights

    print('\nSampling from distribution ..')
    df_eca_study = df_eca.sample(n=config['nbase'], replace=True, weights=p, random_state=42)
    df_eca_study.name = 'Optimized ECA Cohort'

    f, ax = plt.subplots(len(df_base.columns), 2, figsize=(15, 20))
    for i, col in enumerate(df_base.columns):
        plot_dist(df_base, df_eca, col, ax[i][0])
        plot_dist(df_base, df_eca_study, col, ax[i][1])
    f.suptitle('Before (left) and after (right) matching')
    return df_eca_study

data_eca_study = genetic_matching(data_base, data_eca)

for col in data_base.columns:
    print(f'Distribution of {col}')
    print('Before balancing', ks_2samp(data_base[col], data_eca[col]))
    print('After balancing', ks_2samp(data_base[col], data_eca_study[col]))
    print('\n')

for col in data_base.columns:
    print(f'Distribution of {col}')
    print('Before balancing', cohensd(data_base[col], data_eca[col]))
    print('After balancing', cohensd(data_base[col], data_eca_study[col]))
    print('\n')
```
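The three loss options in `calc_loss` compare discretized densities; the relationship between `scipy.special.rel_entr` and `scipy.stats.entropy` can be sketched on two small discrete distributions (the probability vectors here are invented for illustration):

```python
import numpy as np
from scipy.special import rel_entr
from scipy.stats import entropy

p = np.array([0.5, 0.3, 0.2])   # "base" distribution
q = np.array([0.4, 0.4, 0.2])   # candidate distribution to be matched to p

# rel_entr returns the element-wise terms p * log(p / q);
# summing them gives the KL divergence KL(p || q)
kl_elementwise = rel_entr(p, q)
kl_sum = kl_elementwise.sum()

# scipy.stats.entropy(pk, qk) normalizes its inputs and returns KL(pk || qk)
# directly; since p and q already sum to 1, the two agree here
kl_direct = entropy(p, q)

print(np.isclose(kl_sum, kl_direct))  # True
```

This is why the `'rel_entr'` branch sums the result while the `'entropy'` branch does not: `entropy` already returns a scalar per column.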
## Table of Contents

### 1 - Sentiment Analysis: an introduction
### 2 - Loading the required packages
### 3 - Connecting to the Twitter API
#### 3.1 - Authentication
#### 3.2 - Fetching the tweets
### 4 - Text Mining
### 5 - Data wrangling
#### 5.1 - Tokenization
#### 5.2 - Word cloud
### 6 - Removing the stopwords
### 7 - Sentiment lexicon
### 8 - Results
#### 8.1 - Top 10 "positive" tweets
#### 8.2 - Top 10 "negative" tweets
#### 8.3 - Sentiment analysis overview
#### 8.4 - Sentiment analysis as a time series
### 9 - Final remarks

### 1 - Sentiment Analysis: an introduction

Sentiment analysis is the task of quantitatively analysing and identifying the emotional and affective state of an author through their text. There are many kinds of text (academic, literary, journalistic, etc.), and in this article we will analyse what people say on their Twitter accounts about the Clube de Regatas do Flamengo football team.

### 2 - Loading the required packages

The first step is to load the packages that will help us on this journey!

```
#Install
install.packages("twitteR")
install.packages("rvest")
install.packages("tidyverse")
install.packages("tidytext")
install.packages("tm")           # for text mining
install.packages("SnowballC")    # for text stemming
install.packages("wordcloud")    # word-cloud generator
install.packages("RColorBrewer") # color palettes
install.packages("stopwords")
install.packages("lexiconPT")

library(twitteR)
library(rvest)
library(tidyverse)
library(tidytext)
library(tm)
library(SnowballC)
library(wordcloud)
library(RColorBrewer)
library(stopwords)
library(lexiconPT)
```

### 3 - Connecting to the Twitter API

#### 3.1 - Authentication

After installing and loading all the required packages, we will connect to Twitter through the "twitteR" package. For that, we need the Twitter API authentication keys.
```
#API's and Token's keys
API_key = "hhrTn58RfcBoxjpGdvQbLTf6v"
API_secret_key = "r6zON2bTvFv8qlaEDWWaIowRknxRxc97EGENdSBfdXZlFmkmrm"
Access_token = "4928132291-OFnrrwGy5FKHLPePkUsxLA5exJwrbJ7l51azMWV"
Access_token_secret = "q7inW05npl3S70yRihMkJdasF0roZyrlzZW46D1XTSbDd"

#OAuth
setup_twitter_oauth(API_key, API_secret_key, Access_token, Access_token_secret)
```

#### 3.2 - Fetching the tweets

Next, we will create a function that asks the user for a keyword or hashtag to search, the number of tweets per day, the language, the locale, and the start and end dates of the search. With this information, the function fetches tweets according to those parameters and stores them in a table named "tweets_df".

```
#Search tweets
search_twiter <- function(){
  #This function searches for a specific number of tweets per day in a range of dates
  #Returns a tibble with 3 columns: 1- tweetID; 2- date; 3- tweet

  #Search string
  search_string <- paste(readline(prompt = "A keyword or a hashtag: "), " -filter:retweets")
  #An example: "flamengo -filter:retweets"

  #Number of tweets per day
  n_tweets <- as.integer(readline(prompt = "The number of tweets per day: "))

  #Language
  language <- readline(prompt = "Language (ex.: pt): ")

  #Locale
  loc <- readline(prompt = "Tweets locale (ex.: br): ")

  #Date range: yyyy-mm-dd
  ini_date <- as.Date(readline(prompt = "The initial date (yyyy-mm-dd): "), format = "%Y-%m-%d")   #Initial date
  final_date <- as.Date(readline(prompt = "The final date (yyyy-mm-dd): "), format = "%Y-%m-%d")   #Final date
  n_days <- as.integer(final_date - ini_date)

  #Searching
  tweets <- c()
  date <- c()
  for(day in 0:n_days){
    tweets <- c(tweets, searchTwitter(searchString = search_string, n = n_tweets,
                                      lang = language, locale = loc,
                                      since = as.character(ini_date + day),
                                      until = as.character(ini_date + day + 1)))
    date <- c(date, rep(ini_date + day, n_tweets))
  }

  tweets_df <- tibble(tweetID = 1:length(tweets), date =
format(as.Date(date, origin = "1970-01-01"), "%d-%m-%Y"), tweets)
  return(tweets_df)
}

tweets_df <- search_twiter()
```

Let's take a look at the tweets we collected!

```
head(tweets_df)
```

### 4 - Text Mining

Note that our table has 3500 observations and 3 columns. However, the third column is a list that contains all the information about the collected tweets. Keep in mind that a tweet is not represented only by its text but by other information as well, such as its ID, its author, its publication date, and so on. We are only interested in the text of each tweet. So we will create another function that extracts the text and, through text mining, "cleans" it. That is, we will remove hashtags, links, punctuation, digits, and everything else that does not matter for our analysis.

```
#Cleaning tweets
clean_tweets <- function(tweets){
  #Getting tweets texts
  tweets = sapply(tweets, function(x) x$text)
  #Remove retweet entities
  tweets = gsub('(RT|via)((?:\\b\\W*@\\w+)+)', "", tweets)
  #Remove hashtags
  tweets = gsub('#\\w+', "", tweets)
  #Remove links
  tweets = gsub('http\\w+', "", tweets)
  #Remove punctuation
  tweets = gsub('[[:punct:]]', "", tweets)
  #Remove numbers
  tweets = gsub('[[:digit:]]', "", tweets)
  #Remove line breaks
  tweets = gsub('\n', "", tweets)
  #Lower case
  tweets = tolower(tweets)
}

tweets_df$tweets <- clean_tweets(tweets_df$tweets)
```

Although the tweets still show some strange characters, we can say that our texts have been properly collected. These strange characters will be handled later.

```
head(tweets_df)
```

### 5 - Data wrangling

#### 5.1 - Tokenization

In this section we will reshape the data. The goal is a table in which each record holds a single word, always keeping a reference to the tweet that word belongs to. This will be very useful for the analyses that follow.
```
#Tokenization
tweets_tokens <- tweets_df %>%
  unnest_tokens(words, tweets)

head(tweets_tokens, 25)
```

#### 5.2 - Word cloud

To visualize the most common words among the tweets, let's create another function that generates a word cloud.

```
wc <- function(tweets_tokens){
  plot <- tweets_tokens %>%
    group_by(words) %>%
    summarise(freq = n()) %>%
    filter(words != "ifood")

  wordcloud(words = plot$words, freq = plot$freq, min.freq = 1,
            max.words=200, random.order=FALSE, rot.per=0.35,
            colors=brewer.pal(8, "Dark2"))
}

wc(tweets_tokens)
```

### 6 - Stopwords

Note that the most common word is "flamengo", since it is the topic of our search, followed by several words that carry little semantic weight. These words are known as "stopwords": commonly used words that are not very relevant to the semantics of the text, such as "a", "o", "onde", "de", and so on. So we will remove them. To do that, we will use a dataset from the "stopwords" package that contains Portuguese stopwords. After loading this data, we will use the "anti_join" function to exclude every word the tweets have in common with the stopword set.

```
#Stopwords
stopwords <- stopwords(language = "pt", source = "stopwords-iso")
stopwords <- tibble(stopwords)
head(stopwords)

tweets_tokens <- tweets_tokens %>%
  anti_join(stopwords, by = c("words" = "stopwords"))
```

### 7 - Sentiment lexicon

So far, we have only cleaned and reshaped the data. In this section we start the sentiment analysis itself. The first step is to load the sentiment lexicon. This lexicon contains a column that indicates the polarity of each word, i.e., whether a word is considered positive (+1), neutral (0), or negative (-1). The goal is to quantify the semantic value of each word and, consequently, of each tweet.
```
#LexiconPT
lex_pt <- oplexicon_v3.0
head(lex_pt)
```

Next, we will join the words from the tweets with the words in the lexicon, assigning a polarity to each tweet word. For that, we use the "inner_join" function.

```
tweets_tokens <- tweets_tokens %>%
  inner_join(lex_pt, by = c("words" = "term")) %>%
  select(tweetID, date, words, polarity)

head(tweets_tokens)
```

Now let's look at a new word cloud to understand how removing the stopwords and joining with the lexicon affected our tweets.

```
wc(tweets_tokens)
```

### 8 - Results

With the words in the tweets polarized, let's use this data to extract better insights from the texts.

#### 8.1 - Top 10 "positive" tweets

```
#Most positive tweets
top_10_pos <- tweets_tokens %>%
  group_by(tweetID) %>%
  summarise(polarity = sum(polarity)) %>%
  arrange(desc(polarity)) %>%
  head(10)

#Ordering
top_10_pos$tweetID <- factor(top_10_pos$tweetID, levels = unique(top_10_pos$tweetID[order(top_10_pos$polarity, decreasing = FALSE)]))

#Plot
ggplot(top_10_pos, aes(tweetID, polarity)) +
  geom_col(fill = "cornflowerblue") +
  coord_flip() +
  xlab("Tweet ID") + ylab("Polarity") +
  ggtitle("Top 10 positive Polarity x Tweet ID")
```

#### 8.2 - Top 10 "negative" tweets

```
#Most negative tweets
top_10_neg <- tweets_tokens %>%
  group_by(tweetID) %>%
  summarise(polarity = sum(polarity)) %>%
  arrange(polarity) %>%
  head(10)

#Ordering
top_10_neg$tweetID <- factor(top_10_neg$tweetID, levels = unique(top_10_neg$tweetID)[order(top_10_neg$polarity, decreasing = TRUE)])

#Plot
ggplot(top_10_neg, aes(tweetID, polarity)) +
  geom_col(fill = "lightcoral") +
  coord_flip() +
  xlab("Tweet ID") + ylab("Polarity") +
  ggtitle("Top 10 negative Polarity x Tweet ID")
```

#### 8.3 - Sentiment analysis overview

The following chart categorizes the tweets as "positive", "negative", and "neutral" and shows how many tweets fall into each category.
```
#Sentiment Analysis Overview
tweets_tokens %>%
  group_by(tweetID) %>%
  summarise(polarity = sum(polarity)) %>%
  mutate(class = ifelse(polarity > 0, "positive", ifelse(polarity < 0, "negative", "neutral"))) %>%
  count(class) %>%
  ggplot(aes(factor(class), n, fill = class)) +
  geom_col() +
  xlab("Class") + ylab("Tweet ID") +
  ggtitle("Class x Tweet ID")
```

#### 8.4 - Sentiment analysis as a time series

This chart shows how users felt about the topic over the period set in the search.

```
#Time series
tweets_tokens %>%
  group_by(date) %>%
  summarise(polarity = sum(polarity)) %>%
  ggplot(aes(date, polarity, group = 1)) +
  geom_line() +
  ggtitle("Polarity x Date")
```

### 9 - Final remarks

Finally, we managed to extract relevant insights into the mood of the red-and-black fans during the week of 16/07 to 23/07, 2019. Let's look at the chart in section 8.4. Between the 16th and the 17th there is a slight rise in the polarity index, since fans were looking forward to the decisive Copa do Brasil match between Flamengo and Atlético Paranaense (https://globoesporte.globo.com/pr/futebol/copa-do-brasil/noticia/veja-os-melhores-momentos-gols-e-os-penaltis-de-flamengo-x-athletico-pela-copa-do-brasil.ghtml). That expectation did not come true, which explains the strong dissatisfaction of the fans in the following days. On top of the loss in the penalty shoot-out, on the 21st the team drew with Corinthians in a sluggish match, leaving the demanding supporters even more unhappy. The next day, however, the club's board announced a major signing: Felipe Luiz became a Flamengo player (https://esportes.r7.com/prisma/cosme-rimoli/por-copa-do-qatar-e-salario-europeu-filipe-luis-e-do-flamengo-22072019). The left-back arrived to fill a position on the pitch that the fans had long pointed to as a weakness, which explains the sharp rise in the mood of the red-and-black supporters.
It is worth noting that the method adopted here for sentiment analysis may not be the most accurate, since it depends heavily on the quality of the sentiment lexicon. Still, we can see that the results match the events of a hectic week at the Ninho do Urubu. I leave it to the reader to guess which team the author supports. I hope you won't need to run a sentiment analysis to find out! Thanks!
# Linear Discriminant Analysis (LDA)

```
import numpy as np
import matplotlib.pyplot as plt

# data generation
mean1=np.array([0,0])
mean2=np.array([4,5])
var=np.array([[1,0.1],[0.1,1]])
np.random.seed(0)
data1=np.random.multivariate_normal(mean1,var,500)
data2=np.random.multivariate_normal(mean2,var,500)
data=np.concatenate((data1,data2))
label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0])))

plt.figure()
plt.scatter(data[:,0],data[:,1],c=label)
plt.title('Data visualization')
plt.figure()
plt.scatter(data[:,0],np.zeros(data.shape[0]),c=label)
plt.title('distribution in x direction')
plt.figure()
plt.scatter(data[:,1],np.zeros(data.shape[0]),c=label)
plt.title('distribution in y direction')

# perform 2-class and m-class LDA
def LDA(data,label):
    id={}
    data_l={}
    mean_l={}
    cov_l={}
    S_w=np.zeros((data.shape[1],data.shape[1]))
    cls=np.unique(label)
    for i in cls:
        id[i]=np.where(label==i)[0]
        data_l[i]=data[id[i],:]
        mean_l[i]=np.mean(data_l[i],axis=0)
        cov_l[i]=(data_l[i]-mean_l[i]).T @ (data_l[i]-mean_l[i])
        S_w=S_w+cov_l[i]
    S_w=S_w/len(data_l)
    if len(data_l)==2:
        # between-class scatter is the OUTER product of the mean difference
        S_b=np.outer(mean_l[0]-mean_l[1], mean_l[0]-mean_l[1])
        w=np.linalg.inv(S_w) @ (mean_l[0]-mean_l[1])
    else:
        S_t=np.cov(data,rowvar=False)  # total scatter (S_b could also be obtained as S_t - S_w)
        S_b=np.zeros((data.shape[1],data.shape[1]))
        for i in cls:
            diff=mean_l[i]-np.mean(data,axis=0)
            S_b += len(data_l[i])*np.outer(diff,diff)
        u,_,_= np.linalg.svd(np.linalg.inv(S_w)@S_b)
        w=u[:,:len(data_l)-1]
    return w

# after LDA projection
w=LDA(data,label)
plt.figure()
plt.scatter(data @ w,np.zeros(data.shape[0]),c=label)

#classification using LDA
#use k-nearest neighbour classifier after dimensionality reduction
from sklearn.neighbors import KNeighborsClassifier
LDA_data= data @ w[:,np.newaxis]
k=5
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(LDA_data, label)
print('KNN Training accuracy =',knn.score(LDA_data,label)*100)

# test data
np.random.seed(0)
data1=np.random.multivariate_normal(mean1,var,50)
data2=np.random.multivariate_normal(mean2,var,50)
data_tst=np.concatenate((data1,data2))
tst_label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0])))
print('KNN Testing accuracy =',knn.score(data_tst@ w[:,np.newaxis],tst_label)*100)
```

## LDA multiclass

1. 3-class synthetic data
2. Homework: MNIST 3-class and 10-class

```
import numpy as np
import matplotlib.pyplot as plt

mean1=np.array([0,0])
mean2=np.array([4,5])
mean3=np.array([-5,-4])
var=np.array([[1,0.1],[0.1,1]])
np.random.seed(0)
data1=np.random.multivariate_normal(mean1,var,500)
data2=np.random.multivariate_normal(mean2,var,500)
data3=np.random.multivariate_normal(mean3,var,500)
data=np.concatenate((data1,data2,data3))
label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0]),np.ones(data3.shape[0])+1))

plt.figure()
plt.scatter(data[:,0],data[:,1],c=label)
plt.title('Data visualization')
plt.figure()
plt.scatter(data[:,0],np.zeros(data.shape[0]),c=label)
plt.title('distribution in x direction')
plt.figure()
plt.scatter(data[:,1],np.zeros(data.shape[0]),c=label)
plt.title('distribution in y direction')

# after projection
w=LDA(data,label)
print(w.shape)
plt.figure()
plt.scatter(data @ w[:,0],np.zeros(data.shape[0]),c=label)  # 1D projection onto the first discriminant

# testing (using KNN)
from sklearn.neighbors import KNeighborsClassifier
LDA_data= data @ w
k=5
knn = KNeighborsClassifier(n_neighbors=k)
knn.fit(LDA_data, label)
print('KNN Training accuracy =',knn.score(LDA_data,label)*100)

# test data
np.random.seed(0)
data1=np.random.multivariate_normal(mean1,var,50)
data2=np.random.multivariate_normal(mean2,var,50)
data3=np.random.multivariate_normal(mean3,var,50)
data_tst=np.concatenate((data1,data2,data3))
tst_label=np.concatenate((np.zeros(data1.shape[0]),np.ones(data2.shape[0]),np.ones(data3.shape[0])+1))
print('KNN Testing accuracy =',knn.score(data_tst@ w,tst_label)*100)
```
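As a minimal check of the two-class rule `w = S_w^{-1}(m1 - m2)`, the sketch below (with made-up Gaussian blobs matching the ones above) confirms that the projection separates the class means well relative to the projected spread, measured by the Fisher criterion:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.multivariate_normal([0, 0], [[1, 0.1], [0.1, 1]], 500)
x2 = rng.multivariate_normal([4, 5], [[1, 0.1], [0.1, 1]], 500)

# Pooled within-class scatter and the two-class LDA direction
S_w = (x1 - x1.mean(0)).T @ (x1 - x1.mean(0)) + (x2 - x2.mean(0)).T @ (x2 - x2.mean(0))
w = np.linalg.solve(S_w, x1.mean(0) - x2.mean(0))  # solves S_w w = (m1 - m2)

# Fisher criterion: between-class separation over within-class variance,
# evaluated on the 1D projections
p1, p2 = x1 @ w, x2 @ w
fisher = (p1.mean() - p2.mean()) ** 2 / (p1.var() + p2.var())
print(fisher > 1)  # well-separated classes give a large criterion value
```

Using `np.linalg.solve` instead of forming `np.linalg.inv(S_w)` explicitly is numerically preferable when only the product with a vector is needed.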
## AirBnB Data Analysis

### 1. Business Understanding

This project follows CRISP-DM to address three questions related to business or real-world applications. The dataset comes from Kaggle, contributed by AirBnB, and contains rental data about Seattle. I would like to process the whole dataset and try to find valuable signals that help owners understand which features could improve their listings and, moreover, whether we can train a predictive model. The three questions are:

1. What features influence the rating of a listing?
2. When is the most popular time for this area?
3. Can we create a model to predict the price?

### 2. Data Understanding

The dataset comes from Kaggle.

```
# import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline

#load data
listing = pd.read_csv('listings.csv')

## check the features of the dataset, then filter
listing.columns.values
```

### Q1. What features influence the rating of a listing?

### 2.1 Pre-processing data - the null-value columns

```
## check the missing values
listing.isnull().mean().sort_values()

## count how many columns have missing values
miss_num = np.sum(listing.isnull().mean() != 0)
miss_num

# visualise the missing data
plt.figure(figsize=(20,8))
(listing.isnull().sum(axis = 0).sort_values(ascending=False)[:miss_num]/len(listing)*100).plot(kind='bar', facecolor='b');
plt.title('Proportion of missing values per feature')
plt.ylabel('% of missing values')
plt.xlabel("Features");
```

As we can see above, the features "security_deposit", "monthly_price", "square_feet" and "license" are missing more than 50% of their values, and these features are unnecessary for this project, so I will drop them. First, I will make a copy of the original data.
```
#back up the dataset
listing_backup = listing.copy()

# drop the columns with more than 50% missing values
listing = listing.drop(['security_deposit', 'monthly_price', 'square_feet', 'license'], axis = 1)

## check again that they have all been dropped
listing.isnull().mean().sort_values()

# check and summarise the columns with missing data
miss_num = np.sum(listing.isnull().mean() != 0)
miss_num

# visualise the missing data to check
plt.figure(figsize=(20,8))
(listing.isnull().sum(axis = 0).sort_values(ascending=False)[:miss_num]/len(listing)*100).plot(kind='bar', facecolor='b');
plt.title('Proportion of missing values per feature')
plt.ylabel('% of missing values')
plt.xlabel("Features");

# check the total features
listing.shape
```

Currently, the dataset has 88 features. Although the 'weekly_price' column is quite close to 50% missing, it may still be used, so I will keep it for now. However, I think the dataset still contains some outliers and useless data, so I will continue processing.

### 2.2 Processing the outliers and useless data

#### 2.2.1 Since the final target is prediction, the "id" and "notes" columns don't help, so drop them.

```
def drop_columns(m_columns):
    listing.drop(columns = m_columns, inplace = True)

# use a separate name for the list so it doesn't shadow the drop_columns function
cols_to_drop = ['id', 'notes']
drop_columns(cols_to_drop)
```

#### 2.2.2 Checking the features, I found that the values of some columns are links, which can also be treated as useless data.

```
## the columns whose names contain the keyword 'url'
url_col=[col for col in listing.columns.values if 'url' in col]
url_col
drop_columns(url_col)
```

#### 2.2.3 Furthermore, some columns contain only a single value across the whole dataset. Those columns won't help prediction either, so drop them.

```
unique_col=[col for col in listing.columns.values if listing[col].nunique()==1]
unique_col
drop_columns(unique_col)
```

#### 2.2.4 Next, after checking again, I think some of the columns that contain 'host' may not be relevant for the prediction.
For instance, 'host_id', 'host_name', 'host_since', 'host_location', 'host_about', 'host_verifications', 'host_neighbourhood'. Same operation as before: drop them.

```
drop_cols=['host_id', 'host_name', 'host_since', 'host_location', 'host_about', 'host_verifications', 'host_neighbourhood']
drop_columns(drop_cols)
```

#### 2.2.5 Lastly, the unstructured and redundant data. Some columns, such as city, state, street, smart_location, latitude and longitude, can be represented by another value - the zipcode. Therefore, they can be dropped.

```
drop_cols=['city','street','state','smart_location','latitude','longitude', 'neighbourhood_cleansed']
drop_cols   ## show which columns will be dropped
drop_columns(drop_cols)

#drop columns containing unstructured data
drop_cols=['name','summary', 'space', 'description', 'neighborhood_overview','transit', 'first_review', 'last_review', 'calendar_updated']
drop_cols
drop_columns(drop_cols)

# review_scores_rating is likely a combination of the other review scores, so no need to keep them all.
# furthermore, I want to find which features have a strong relationship with the rating, and I assume
# the other review score columns would have an outsized influence, so drop them
listing.drop(['review_scores_accuracy', 'review_scores_cleanliness', 'review_scores_checkin','reviews_per_month',
              'review_scores_communication', 'review_scores_location', 'review_scores_value' ], axis=1, inplace=True)

## Now we have 38 features after dropping columns.
## Next, I will transform the categorical data and fill in the missing values of some columns
listing.shape

# check a sample
listing.head()
```

### 2.3 Filling in missing values and transforming categorical data

#### 2.3.1 Transforming categorical data

```
# Creating a table to view the data type and a sample value for each column
pd.merge(pd.DataFrame(listing.dtypes,columns=['datatype']).reset_index(),
         pd.DataFrame(pd.DataFrame(listing.iloc[0]).reset_index()),on='index')

# get the names of the columns that contain categorical data
listing.select_dtypes(include=['object']).columns

# check the unique values of the zipcode column
listing.zipcode.unique()

# there is an outlier in zipcode, which needs fixing
listing.zipcode.replace('99\n98122','98122',inplace=True)

# summarise the categorical columns and check their values
categorical_col = ['host_response_time', 'host_response_rate', 'host_acceptance_rate', 'host_is_superhost', 'host_has_profile_pic', 'host_identity_verified', 'neighbourhood', 'neighbourhood_group_cleansed', 'zipcode', 'is_location_exact', 'property_type', 'room_type', 'bed_type', 'amenities', 'price', 'weekly_price', 'cleaning_fee', 'extra_people', 'instant_bookable', 'cancellation_policy', 'require_guest_profile_picture', 'require_guest_phone_verification']

for i in categorical_col:
    print(i,":", listing[i].unique(), "\n")

## convert the columns that contain 'f'/'t' values to 0/1
Binary_cols = ['host_is_superhost','host_has_profile_pic','host_identity_verified','is_location_exact','instant_bookable','require_guest_profile_picture','require_guest_phone_verification']
for i in Binary_cols:
listing[i] = listing[i].map(lambda x: 1 if x == 't' else 0) # check whether the values have changed listing['host_is_superhost'].unique() # as in the last step, one-hot encode to dummy variables encode_cols=['host_response_time','neighbourhood_group_cleansed','zipcode', 'neighbourhood' ,'property_type','room_type','bed_type','cancellation_policy'] listing=pd.get_dummies(data=listing, columns=encode_cols) listing.shape # check the shape after converting to dummy variables # check a sample listing.head() ``` #### 2.3.2 Engineering the 'amenities' column Next I will engineer the 'amenities' column to extract categorical variables. Since the amenities column contains a list of amenities, I will extract each amenity so that it becomes its own categorical feature for each listing. ``` # process the amenities column: strip the braces first (regex=True because the pattern is a regex) listing.amenities = listing.amenities.str.replace(r"[{}]", "", regex=True) amenities_col = listing.amenities.str.get_dummies(sep = ",") listing_cleaned = pd.merge(listing, amenities_col, left_index=True, right_index=True) # drop the amenities column now that it has been one-hot encoded listing_cleaned.drop(['amenities'], axis=1, inplace = True) # check that the amenities column has been split listing_cleaned.shape listing_cleaned.head(3) ``` #### 2.3.3 Checking and filling in missing values ``` # check which columns have missing values listing_cleaned.isnull().mean().sort_values() # list the columns with missing values and inspect their values before filling them in missing_cols = ['beds','host_listings_count','host_total_listings_count','bedrooms','bathrooms', 'host_response_rate','review_scores_rating', 'cleaning_fee'] for x in missing_cols: print(x,":", listing_cleaned[x].unique(), "\n") ``` As we can see, the host_acceptance_rate column has only 3 unique values, including NaN. Hence, I will drop this column as it does not offer useful information.
``` #drop the host_acceptance_rate column listing_cleaned.drop(columns='host_acceptance_rate',inplace=True) ``` Weekly price will be dropped as well. Firstly, almost 50% of its values are missing, so it is not reliable data. Secondly, it is more naturally a target value to be predicted than an independent variable. ``` # drop the weekly_price column listing_cleaned.drop(columns='weekly_price',inplace=True) # convert strings to floats dollar_cols = ['cleaning_fee', 'extra_people', 'price'] listing_cleaned[dollar_cols]=listing_cleaned[dollar_cols].replace(r'[\$,]', '', regex=True).astype(float) percent_cols = ['host_response_rate'] listing_cleaned[percent_cols]=listing_cleaned[percent_cols].replace('%', '', regex=True).astype(float) listing_cleaned[percent_cols]=listing_cleaned[percent_cols]/100 # check that the strings were converted to floats listing_cleaned.cleaning_fee.head() # check that the strings were converted to floats listing_cleaned.host_response_rate.head() ``` There are many missing values in review-related features because these are new listings that have not been reviewed yet. In reality, travellers usually prefer listings with a high number of reviews and high review scores. It might be possible to use a form of clustering to make a better estimate for this feature by looking at the other review scores for the same listing in the cases where these aren't missing. Considering this, I will replace missing values in review-related features with the mean value of each column. ``` # fill missing values with the mean of each column for x in missing_cols: listing_cleaned[x] = listing_cleaned[x].fillna(listing_cleaned[x].mean()) # check that no missing values remain listing_cleaned.isnull().mean().sort_values() ``` ### 3. Modelling Based on Q1 of the business understanding, I will check which features influence the rating of the house. Thus, 'review_scores_rating' will be the target (dependent) variable.
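As a quick illustration of the mean-imputation applied above, here is `fillna` with the column mean on a toy series (the numbers are made up):

```python
import pandas as pd
import numpy as np

# Toy review-score column with one missing value (values are made up)
scores = pd.Series([80.0, np.nan, 100.0])
# pandas mean() skips NaN, so the fill value is (80 + 100) / 2 = 90
filled = scores.fillna(scores.mean())
print(filled.tolist())  # [80.0, 90.0, 100.0]
```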
``` # split the data into target and features, ready to create a model y = listing_cleaned['review_scores_rating'] X = listing_cleaned.drop(['review_scores_rating'], axis = 1) from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeRegressor ### I use a decision tree as there are a lot of features from sklearn.model_selection import GridSearchCV # sklearn.grid_search was removed in newer scikit-learn versions # finding the best parameters dt = DecisionTreeRegressor(random_state=42) parameters = { 'max_depth': [20, 30, 40], 'min_samples_split': [2, 3], 'min_samples_leaf': [1, 2]} gridCV = GridSearchCV(estimator = dt, param_grid = parameters, cv = 5) # train the model on the dataset gridCV.fit(X,y) # show the best parameters found gridCV.best_params_ # list the important features importance_t = pd.DataFrame(data=[X.columns,gridCV.best_estimator_.feature_importances_]).T importance_t.columns = ['feature','importance'] importance_t.sort_values(by='importance',ascending=False)[:5] ``` Apparently, the number of reviews has an extremely important relationship with the rating! More reviews means the house tends to be a popular choice, and people follow this trend when choosing. Besides, we can see that price is an important factor for the rating (price always plays a key role, which is no surprise). For most of the properties, the cleaning fee is included in the price, so the higher the cleaning fee, the higher the price. Most people select a house on Airbnb because they want a convenient and clean house. Therefore, it makes sense that both the price and the cleaning fee are on the top-5 list. 365-day availability and whether the host is a superhost also play key roles in the rating, which directly reflects current customers' expectations: they hope they can rent the house at any time they need. ### Q2. When is the most popular time for this area? #### 4.1 Loading data We are mainly looking at the calendar dataset.
As seen during the Data Exploration stage, the dataset contains the listings, dates, availability and the price. ``` # loading data calendar = pd.read_csv('calendar.csv') calendar.head() ``` #### 4.2 Pre-processing data ``` # there are a lot of missing values in 'price', but we are not interested in it :) calendar.isnull().sum() # drop the column that I don't use calendar.drop(columns = 'price').head(5) # find out which days are busy popular_time=calendar[calendar['available']=='f'] # group the dataset by date popular_time=popular_time.groupby('date')['listing_id'].count() # visualise the data df=pd.DataFrame({'date': popular_time.index, 'count': popular_time.values }) df # plot the trend of unavailable listings import matplotlib.pyplot as plt plt.figure(figsize=(30,20)) plt.plot(df['count'], 'o-') plt.title('Unavailable days') plt.show() ``` It cannot be seen clearly from this graph, so we still need to sort the values ``` # list the busiest dates popular_time.sort_values(ascending=False).head(50) ``` As we can see, January and July are the popular times for this area. Combining the results of question 1 and question 2, whether one is a host or a customer, they can build a strategy for improving the rating or attracting customers. ### Q3. Could we create a model to predict the price? As in Q1, I will use a decision tree model to train on the dataset, reusing the parameters tuned in Q1.
``` # We use the data after cleaning y_st = listing_cleaned.price X_st = listing_cleaned.drop(['price'], axis = 1) X_train_st, X_test_st, y_train_st, y_test_st = train_test_split(X_st, y_st, test_size = .20, random_state=42) # Check the datasets print(X_train_st.shape, y_train_st.shape) print( X_test_st.shape, y_test_st.shape) # import the library from sklearn.metrics import r2_score # initialise and train the model regressor = DecisionTreeRegressor(max_depth=30, min_samples_leaf=2, min_samples_split= 2, random_state=42) regressor = regressor.fit(X_train_st, y_train_st) pred = regressor.predict(X_test_st) # Check how good the model is by calculating r-squared on the test set print("The r-squared for the model is:", r2_score(y_test_st, pred)) ``` As shown above, a model can indeed be created with a decision tree, whose depth and leaf-size constraints help avoid overfitting. The R² score is 0.41, which means 41% of the variance in Y can be predicted from X. Furthermore, the dataset only went through basic processing and the model is not heavily tuned; with more time spent adjusting the model, I think the score would increase. ### 5. Conclusion In this project, I have analysed the house-rental dataset from Kaggle, provided by Airbnb. The main motivation was to find insights for Airbnb business owners and customers in Seattle. Through the analysis, we can see that a house's rating is quite important. To improve the rating, the number of reviews, cleaning fee, price, available days and whether the host is a superhost are the top 5 features with the biggest influence on attracting customers. The popular times for customers are January and July. Furthermore, we could use data science tools to mine more valuable information.
github_jupyter
## Import Libraries ``` import pandas as pd import numpy as np ``` ## Loading Data ``` data = pd.read_csv( 'datasets/kc_house_data.csv' ) # Preview the data data.head() ``` # Business questions: ## 1. How many homes are available for purchase? ``` # Counting the number of rows gives the number of houses # len(data['id'].unique()) # Or using the .drop_duplicates function (remove the duplicates and count again) # Strategy: # 1. Select the "id" column; # 2. Remove repeated values; # 3. Count the number of unique values. houses = len(data['id'].drop_duplicates()) print('The number of available houses is {}'.format(houses)) ``` ## 2. How many attributes do houses have? ``` # Strategy: # 1. Count the number of columns; # 2. Remove "id" and "date" (they are not attributes): attributes = len(data.drop(['id', 'date'], axis = 1).columns) print('The houses have {} attributes'.format(attributes)) ``` ## 3. What are the attributes of houses? ``` ## Strategy: # 1. Print the columns. attributes2 = data.drop(['id', 'date'], axis=1).columns print('The attributes are: {}'.format(attributes2)) ``` ## 4. What is the most expensive house (house with the highest sale price)? ``` # Strategy: # 1. Sort the price column from highest to lowest; # 2. Apply reset_index to renumber the rows and ensure a correct result; # 3. Collect the value and id from the first row. data[['id','price']].sort_values('price', ascending=False).head() # 1 exphouse = data[['id','price']].sort_values('price', ascending=False).reset_index(drop=True)['id'][0] # 2 and 3 highestprice = data[['id','price']].sort_values('price', ascending=False).reset_index(drop=True)['price'][0] print('The most expensive house is: id {} price U${}'.format(exphouse, highestprice)) ``` ## 5. Which house has the most bedrooms? ``` # Strategy: # 1. Sort the bedrooms column from highest to lowest; # 2. Apply reset_index to renumber the rows and ensure a correct result; # 3. Collect the value and id from the first row.
data[["id","bedrooms"]].sort_values("bedrooms", ascending=False).head() # 1 bedroomsid = data[['id','bedrooms']].sort_values('bedrooms', ascending=False).reset_index(drop=True)['id'][0] # 2 and 3 bedrooms1 = data[['id','bedrooms']].sort_values('bedrooms', ascending=False).reset_index(drop=True)['bedrooms'][0] print('The house with the most bedrooms is {} and it has {} bedrooms'.format(bedroomsid, bedrooms1)) ``` ## 6. What is the sum total of bedrooms in the dataset? ``` # Strategy: # 1. Select the "bedrooms" column and sum its values. bedroomsnumber = data['bedrooms'].sum() print('There are a total of {} bedrooms'.format(bedroomsnumber)) ``` ## 7. How many houses have 2 bathrooms? ``` # Strategy: # 1. Find the houses with 2 bathrooms; # 2. Select the columns "id" and "bathrooms"; # 3. Count the number of houses. data.loc[data['bathrooms'] == 2, ['id', 'bathrooms']] # 1 bathroomsum = data.loc[data['bathrooms'] == 2, ['id', 'bathrooms']].shape[0] # 2 and 3 print('There are {} houses with 2 bathrooms'.format(bathroomsum)) ``` ## 8. What is the average price of all houses in the dataset? ``` # Strategy: # 1. Find the average price of the houses. # PS: use numpy's round() function to keep only 2 digits after the decimal point. averageprice = np.round(data['price'].mean(), 2) print('The average price of the houses is: U${}'.format(averageprice )) # the attribute data.dtypes shows us which variable types we have. data.dtypes ``` ## 9. What is the average price of houses with 2 bathrooms? ``` # Strategy: # 1. Find the average price of houses with 2 bathrooms only. avg_bath = np.round(data.loc[data['bathrooms'] == 2, 'price'].mean(), 2) print('The average price for houses with 2 bathrooms is U${}'.format(avg_bath)) ``` ## 10. What is the minimum price among 3-bedroom homes? ``` # Strategy: # 1.
Select only 3-bedroom houses and take the minimum price min_price_bed = data.loc[data['bedrooms'] == 3, 'price'].min() print('The minimum price for houses with 3 bedrooms is U${}'.format(min_price_bed)) ``` ## 11. How many homes have more than 300 square meters of living room? ``` # Strategy: # 1. Convert sqft_living to m², select only houses with more than 300 m² of living room and count the rows. data['m2'] = data['sqft_living'] * 0.093 sqft_300 = len(data.loc[data['m2'] > 300, 'id']) print('There are {} houses with more than 300 m² of living room.'.format(sqft_300)) ``` #### To convert from ft² to m², use the following reasoning (1 ft² = 0.093 m²): #### data['m²'] = data['sqft_living'] * 0.093 (here we replace the sqft_living variable with m², converting the value) #### len(data.loc[data['m²'] > 300, 'id']) ## 12. How many homes have more than 2 floors? ``` # Strategy: # 1. Select only houses with more than 2 floors and count the rows. floor_2 = data.loc[data['floors'] > 2, 'id'].size print('There are {} houses with more than 2 floors.'.format(floor_2)) ``` ## 13. How many houses have a waterfront view? ``` # Strategy: # 1. Select only houses with a waterfront view and count the rows. waterfront_view = len(data.loc[data['waterfront'] != 0, 'id']) print('There are {} houses with a waterfront view.'.format(waterfront_view)) ``` ## 14. Of the houses with a waterfront view, how many have 3 bedrooms? ``` # Strategy: # 1. Select only houses with a waterfront view and count how many have 3 bedrooms. data.columns waterfront_bed = data.loc[(data['waterfront'] != 0) & (data['bedrooms'] == 3), "id"].size print('Of the houses with a waterfront view, {} have 3 bedrooms.'.format(waterfront_bed)) ``` ## 15. Of the houses with more than 300 square meters of living room, how many have more than 2 bathrooms? ``` # Strategy: # 1. Select only houses with more than 300 m² of living room and more than 2 bathrooms.
house_300m_2bat = data[(data['m2']>300) & (data['bathrooms']>2)].shape[0] print('Of the houses with more than 300 m² of living room, {} have more than 2 bathrooms.'.format(house_300m_2bat)) ```
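The filter-and-count pattern used throughout these questions can be sketched on a toy frame (the values are made up):

```python
import pandas as pd

# Toy frame mirroring the boolean-mask pattern used above
toy = pd.DataFrame({'m2': [250, 310, 400], 'bathrooms': [2, 3, 1]})
# combine conditions with & (each wrapped in parentheses), then count matching rows
count = toy[(toy['m2'] > 300) & (toy['bathrooms'] > 2)].shape[0]
print(count)  # 1 (only the 310 m², 3-bathroom row passes both filters)
```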
github_jupyter
<img align="right" src="tf-small.png"/> # Search from MQL These are examples of [MQL](https://shebanq.ancient-data.org/static/docs/MQL-Query-Guide.pdf) queries on [SHEBANQ](https://shebanq.ancient-data.org/hebrew/queries), now expressed as Text-Fabric search templates. For more basic examples, see [searchTutorial](https://github.com/etcbc/text-fabric/blob/master/docs/searchTutorial.ipynb). *Search* in Text-Fabric is a template based way of looking for structural patterns in your dataset. ``` %load_ext autoreload %autoreload 2 from tf.fabric import Fabric ETCBC = 'hebrew/etcbc4c' TF = Fabric( modules=ETCBC ) api = TF.load(''' rela function pdp ''') api.makeAvailableIn(globals()) ``` # By Oliver Glanz [Oliver Glanz: PP with adjective followed by noun](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=547) ``` select all objects where [phrase FOCUS typ = PP [word sp= prep] [word sp=adjv] [word sp=subs] ] ``` 64 results having 251 words. ``` query = ''' phrase typ=PP word sp=prep <: word sp=adjv <: word sp=subs ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in S.fetch(amount=10): print(S.glean(r)) ``` The number of results is right. The number of words that SHEBANQ reports is the number of words in the phrases of the result. Let us count them: ``` print(sum([len(L.d(r[0], otype='word')) for r in S.fetch()])) ``` # By Martijn Naaijer [Martijn Naaijer: Object clauses with >CR](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=997) ``` Select all objects where [clause rela = Objc [word focus first lex = '>CR'] ] ``` 157 results ``` query = ''' verse clause rela=Objc =: word lex=>CR ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]: print(S.glean(r)) ``` We have fewer cases: 96 instead of 157. We are working on the ETCBC version 4c, and the query has been executed against 4b. There have been coding updates that are relevant to this query, e.g. 
in Genesis 43:27, which is in the results on SHEBANQ, but not here. In 4c the `rela` is `Attr`, and not `Objc`. ``` query = ''' verse book=Genesis chapter=43 verse=27 clause =: word lex=>CR ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) results = sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1]) for r in results: print(r[1], F.rela.v(r[1]), S.glean(r)) ``` # By Cody Kingham [Cody Kingham: MI Hierarchies. p.18n49. First Person Verbs in Narrative](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1050) ``` SELECT ALL OBJECTS WHERE [book [clause txt = 'N' [word FOCUS sp = verb [word ps = p1 ] ] ] ] OR [book [clause txt = '?N' [word FOCUS sp = verb [word ps = p1 ] ] ] ] ``` 273 results. ``` query = ''' book clause txt=N|?N word sp=verb ps=p1 ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]: print(S.glean(r)) ``` # By Reinoud Oosting [Reinoud Oosting: to go + object marker](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=755) ``` Select all objects where [clause [phrase function = Pred OR function = PreC [word FOCUS sp = verb AND vs = qal AND lex = "HLK[" ] ] .. [phrase FOCUS [word First lex = ">T"] ] ] OR [clause [phrase FOCUS [word First lex = ">T" ] ] .. [phrase function = Pred OR function = PreC [word FOCUS sp = verb AND vs = qal AND lex = "HLK["] ] ] ``` 4 results. This is a case where we can simplify greatly because we are not hampered by automatic constraints on the order of the phrases. 
``` query = ''' clause p1:phrase function=Pred|PreC word sp=verb vs=qal lex=HLK[ p2:phrase =: word lex=>T p1 # p2 ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]: print(S.glean(r)) ``` # By Reinoud Oosting (ii) [Reinoud Oosting: To establish covenant](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1485) ``` select all objects where [clause [phrase function = Pred OR function = PreC [word FOCUS sp = verb AND vs = hif AND lex = "QWM[" ] ] .. [phrase function = Objc [word FOCUS lex = "BRJT/" ] ] ] OR [clause [phrase function = Objc [word FOCUS lex = "BRJT/" ] ] .. [phrase function = Pred OR function = PreC [word FOCUS sp = verb AND vs = hif AND lex = "QWM["] ] ] ``` 13 results ``` query = ''' clause phrase function=Pred|PreC word sp=verb vs=hif lex=QWM[ phrase function=Objc word lex=BRJT/ ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) resultsx = sorted((L.u(r[0], otype='verse')+r for r in S.fetch()), key=lambda r: sortKey(r[0])) for r in resultsx: print(S.glean(r)) ``` # By Reinoud Oosting (iii) [Reinoud Oosting: To find grace in sight of](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=1484) ``` select all objects where [clause [phrase FOCUS function = Pred OR function = PreC [word sp = verb AND vs = qal AND lex = "MY>[" ] ] .. [phrase function = Objc [word FOCUS lex = "XN/" ] ] [phrase function = Cmpl [word FOCUS lex = "B"] [word FOCUS lex = "<JN/"] ] ] OR [clause [phrase function = Objc [word FOCUS lex = "XN/" ] ] [phrase function = Cmpl [word FOCUS lex = "B"] [word FOCUS lex = "<JN/"] .. 
[phrase function = Pred OR function = PreC [word FOCUS sp = verb AND vs = qal AND lex = "MY>["] ] ] ] ``` 38 results ``` query = ''' clause p1:phrase function=Pred|PreC word sp=verb vs=qal lex=MY>[ p2:phrase function=Objc word lex=XN/ p3:phrase function=Cmpl word lex=B <: word lex=<JN/ p2 << p3 ''' S.study(query) S.showPlan(details=True) S.count(progress=1000, limit=-1) ``` # By Stephen Ku [Stephen Ku: Verbless Clauses](https://shebanq.ancient-data.org/hebrew/query?version=4&id=1314) ``` SELECT ALL OBJECTS WHERE [clause [phrase function IN (Subj) [phrase_atom NOT rela IN (Appo,Para,Spec) [word FOCUS pdp IN (subs,nmpr,prps,prde,prin,adjv) ] ] ] NOTEXIST [phrase function IN (Pred)] .. NOTEXIST [phrase function IN (Pred)] [phrase function IN (PreC) NOTEXIST [word pdp IN (prep)] [word FOCUS pdp IN (subs,nmpr,prin,adjv) AND ls IN (card,ordn)] ] ] ``` 1441 results with 1244 words in those results. We do not have the `NOTEXIST` operator, and we cannot say `NOT rela IN`, so we are at a disadvantage here. Let's see what we can do. We can use additional processing to furnish the template and weed out results. The first thing is: we have to fetch all possible values of the `rela` feature, in order to see what other values than `Appo`, `Para`, `Spec` it can take. The function `freqList()` gives us a frequency list of values, we only need the values other than the indicated ones, separated by a `|`. We also need to consult the relation legend to pick the proper ordering between the two phrases. 
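The "all values except these" alternation built from the frequency list is a generic pattern; in isolation it looks like this (the (value, frequency) pairs are made up, standing in for `F.rela.freqList()`):

```python
# Build an alternation of every value NOT in the excluded set
excluded = {'Appo', 'Para', 'Spec'}
# made-up (value, frequency) pairs in place of F.rela.freqList()
freq_list = [('NA', 500), ('Appo', 120), ('Attr', 80), ('Para', 40)]
alternation = '|'.join(value for value, _ in freq_list if value not in excluded)
print(alternation)  # NA|Attr
```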
``` excludedRela = {'Appo', 'Para', 'Spec'} '|'.join(x[0] for x in F.rela.freqList() if x[0] not in excludedRela) print(S.relationLegend) query = ''' clause p1:phrase function=Subj phrase_atom rela=NA|rec|par|Adju|Attr|adj|Coor|atr|dem|Resu|Objc|Link|mod|Subj|RgRc|ReVo|Cmpl|PrAd|PreC|Sfxs word pdp=subs|nmpr|prps|prde|prin|adjv p2:phrase function=PreC word pdp=subs|nmpr|prin|adjv ls=card|ordn p1 << p2 ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]: print(S.glean(r)) ``` We have too many results, because we have not imposed the restrictions expressed by the `NOTEXIST` operator. Let's weed out the results that do not satisfy those criteria. That is, essentially, throwing away those clauses * that have a phrase with `function=Pred` after the phrase with `function=Subj` * where the `PreC` phrase has a preposition ``` indent(reset=True) properResults = [] resultWords = set() for r in S.fetch(): clause = r[0] phrase1 = r[1] phrase2 = r[4] word1 = r[3] word2 = r[5] phrases = [p for p in L.d(clause, otype='phrase') if sortKey(p) > sortKey(phrase1)] words2 = L.d(phrase2, otype='word') if any(F.function.v(phrase) == 'Pred' for phrase in phrases): continue if any(F.pdp.v(word) == 'prep' for word in words2): continue resultWords |= {word1, word2} properResults.append(r) info('Found {} proper results with {} words in it'.format(len(properResults), len(resultWords))) ``` We still have many more results than the MQL query on SHEBANQ. Let us have a look at some result words and compare them with the result words on SHEBANQ. It is handy to fetch from SHEBANQ the csv file with query results.
``` resultsx = sorted((L.u(r[0], otype='verse')+r for r in properResults), key=lambda r: sortKey(r[0])) resultWordsx = [(L.u(w, otype='verse')[0], w) for w in sortNodes(resultWords)] for r in resultWordsx[0:30]: print(S.glean(r)) ``` In the list from SHEBANQ we see this: The first thing we miss in the SHEBANQ output is ``` Genesis 5:14 עֶ֣שֶׂר ``` and in SHEBANQ we see that this word has not been marked with `ls=card|ordn`, while in the newer ETCBC4c it is! I have conducted a SHEBANQ query for numerals here [Dirk Roorda: numerals](https://shebanq.ancient-data.org/hebrew/query?id=1487), in versions 4 and 4b, and quite something happened with the encoding of numerals between those versions. Let us also find the numerals in 4c: ``` S.study(''' word ls=card|ordn ''') ``` So we have for the amount of numerals in the ETCBC versions: 4|4b|4c ---|---|--- 6839|7014|7013 On the basis of these numbers, this cannot be the sole cause of the discrepancy. # By Dirk Roorda [Dirk Roorda: Yesh](https://shebanq.ancient-data.org/hebrew/query?version=4b&id=556) ``` select all objects where [book [chapter [verse [clause [clause_atom [phrase [phrase_atom [word focus lex="JC/" OR lex=">JN/"] ] ] ] ] ]]] ``` 926 results ``` query = ''' verse clause clause_atom phrase phrase_atom word lex=JC/|>JN/ ''' S.study(query) S.showPlan() S.count(progress=1000, limit=-1) for r in sorted(S.fetch(), key=lambda x: C.rank.data[x[0]-1])[0:10]: print(S.glean(r)) ```
github_jupyter
## Question and Answer Chatbot ---- ------ ## Loading the Data Using the Babi Data Set from Facebook Research. https://research.fb.com/downloads/babi/ ## Paper Reference - Jason Weston, Antoine Bordes, Sumit Chopra, Tomas Mikolov, Alexander M. Rush, "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks", http://arxiv.org/abs/1502.05698 ``` import pickle import numpy as np with open("train_qa.txt", "rb") as fp: # Unpickling train_data = pickle.load(fp) with open("test_qa.txt", "rb") as fp: # Unpickling test_data = pickle.load(fp) ``` ---- ## Exploring the Format of the Data ``` type(test_data) type(train_data) len(test_data) len(train_data) train_data[0] ' '.join(train_data[0][0]) ' '.join(train_data[0][1]) train_data[0][2] ``` ----- ## Setting up Vocabulary of All Words ``` # Create a set that holds the vocab words vocab = set() all_data = test_data + train_data for story, question , answer in all_data: # In case you don't know what a union of sets is: # https://www.programiz.com/python-programming/methods/set/union vocab = vocab.union(set(story)) vocab = vocab.union(set(question)) vocab.add('no') vocab.add('yes') vocab vocab_len = len(vocab) + 1 #we add an extra space to hold a 0 for Keras's pad_sequences max_story_len = max([len(data[0]) for data in all_data]) max_story_len max_question_len = max([len(data[1]) for data in all_data]) max_question_len ``` ## Vectorizing the Data ``` vocab # Reserve 0 for pad_sequences vocab_size = len(vocab) + 1 ``` ----------- ``` from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer # integer encode sequences of words tokenizer = Tokenizer(filters=[]) tokenizer.fit_on_texts(vocab) tokenizer.word_index train_story_text = [] train_question_text = [] train_answers = [] for story,question,answer in train_data: train_story_text.append(story) train_question_text.append(question) train_story_seq = tokenizer.texts_to_sequences(train_story_text) len(train_story_text) 
len(train_story_seq) # word_index = tokenizer.word_index ``` ### Functionalize Vectorization ``` def vectorize_stories(data, word_index=tokenizer.word_index, max_story_len=max_story_len,max_question_len=max_question_len): ''' INPUT: data: consisting of Stories, Queries, and Answers word_index: word index dictionary from tokenizer max_story_len: the length of the longest story (used for pad_sequences function) max_question_len: length of the longest question (used for pad_sequences function) OUTPUT: Vectorizes the stories, questions, and answers into padded sequences. We first loop over every story, query, and answer in the data. Then we convert the raw words to a word index value. Then we append each set to its appropriate output list. Once we have converted the words to numbers, we pad the sequences so they are all of equal length. Returns this in the form of a tuple (X,Xq,Y) (padded based on max lengths) ''' # X = STORIES X = [] # Xq = QUERY/QUESTION Xq = [] # Y = CORRECT ANSWER Y = [] for story, query, answer in data: # Grab the word index for every word in story x = [word_index[word.lower()] for word in story] # Grab the word index for every word in query xq = [word_index[word.lower()] for word in query] # Grab the Answers (either Yes/No so we don't need to use list comprehension here) # Index 0 is reserved so we're going to use + 1 y = np.zeros(len(word_index) + 1) # Now that y is all zeros and we know it's just Yes/No, we can use numpy logic to create this assignment # y[word_index[answer]] = 1 # Append each set of story, query, and answer to their respective holding lists X.append(x) Xq.append(xq) Y.append(y) # Finally, pad the sequences based on their max length so the RNN can be trained on uniformly long sequences.
# RETURN TUPLE FOR UNPACKING return (pad_sequences(X, maxlen=max_story_len),pad_sequences(Xq, maxlen=max_question_len), np.array(Y)) inputs_train, queries_train, answers_train = vectorize_stories(train_data) inputs_test, queries_test, answers_test = vectorize_stories(test_data) inputs_test queries_test answers_test sum(answers_test) tokenizer.word_index['yes'] tokenizer.word_index['no'] ``` ## Creating the Model ``` from keras.models import Sequential, Model from keras.layers.embeddings import Embedding from keras.layers import Input, Activation, Dense, Permute, Dropout from keras.layers import add, dot, concatenate from keras.layers import LSTM ``` ### Placeholders for Inputs Recall we technically have two inputs, stories and questions. So we need to use placeholders. `Input()` is used to instantiate a Keras tensor. ``` input_sequence = Input((max_story_len,)) question = Input((max_question_len,)) ``` ### Building the Networks To understand why we chose this setup, make sure to read the paper we are using: * Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, Rob Fergus, "End-To-End Memory Networks", http://arxiv.org/abs/1503.08895 ## Encoders ### Input Encoder m ``` # Input gets embedded to a sequence of vectors input_encoder_m = Sequential() input_encoder_m.add(Embedding(input_dim=vocab_size,output_dim=64)) input_encoder_m.add(Dropout(0.3)) # This encoder will output: # (samples, story_maxlen, embedding_dim) ``` ### Input Encoder c ``` # embed the input into a sequence of vectors of size query_maxlen input_encoder_c = Sequential() input_encoder_c.add(Embedding(input_dim=vocab_size,output_dim=max_question_len)) input_encoder_c.add(Dropout(0.3)) # output: (samples, story_maxlen, query_maxlen) ``` ### Question Encoder ``` # embed the question into a sequence of vectors question_encoder = Sequential() question_encoder.add(Embedding(input_dim=vocab_size, output_dim=64, input_length=max_question_len)) question_encoder.add(Dropout(0.3)) # output: (samples, query_maxlen, 
embedding_dim) ``` ### Encode the Sequences ``` # encode input sequence and questions (which are indices) # to sequences of dense vectors input_encoded_m = input_encoder_m(input_sequence) input_encoded_c = input_encoder_c(input_sequence) question_encoded = question_encoder(question) ``` ##### Use dot product to compute the match between first input vector seq and the query ``` # shape: `(samples, story_maxlen, query_maxlen)` match = dot([input_encoded_m, question_encoded], axes=(2, 2)) match = Activation('softmax')(match) ``` #### Add this match matrix with the second input vector sequence ``` # add the match matrix with the second input vector sequence response = add([match, input_encoded_c]) # (samples, story_maxlen, query_maxlen) response = Permute((2, 1))(response) # (samples, query_maxlen, story_maxlen) ``` #### Concatenate ``` # concatenate the match matrix with the question vector sequence answer = concatenate([response, question_encoded]) answer # Reduce with RNN (LSTM) answer = LSTM(32)(answer) # (samples, 32) # Regularization with Dropout answer = Dropout(0.5)(answer) answer = Dense(vocab_size)(answer) # (samples, vocab_size) # we output a probability distribution over the vocabulary answer = Activation('softmax')(answer) # build the final model model = Model([input_sequence, question], answer) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) model.summary() # train history = model.fit([inputs_train, queries_train], answers_train,batch_size=32,epochs=120,validation_data=([inputs_test, queries_test], answers_test)) ``` ### Saving the Model ``` filename = 'chatbot_120_epochs.h5' model.save(filename) ``` ## Evaluating the Model ### Plotting Out Training History ``` import matplotlib.pyplot as plt %matplotlib inline print(history.history.keys()) # summarize history for accuracy plt.plot(history.history['acc']) plt.plot(history.history['val_acc']) plt.title('model accuracy') plt.ylabel('accuracy') plt.xlabel('epoch') 
plt.legend(['train', 'test'], loc='upper left') plt.show() ``` ### Evaluating on Given Test Set ``` model.load_weights(filename) pred_results = model.predict(([inputs_test, queries_test])) test_data[0][0] story =' '.join(word for word in test_data[0][0]) print(story) query = ' '.join(word for word in test_data[0][1]) print(query) print("True Test Answer from Data is:",test_data[0][2]) #Generate prediction from model val_max = np.argmax(pred_results[0]) for key, val in tokenizer.word_index.items(): if val == val_max: k = key print("Predicted answer is: ", k) print("Probability of certainty was: ", pred_results[0][val_max]) ``` ## Writing Own Stories and Questions Remember you can only use words from the existing vocab ``` vocab # Note the whitespace of the periods my_story = "John left the kitchen . Sandra dropped the football in the garden ." my_story.split() my_question = "Is the football in the garden ?" my_question.split() mydata = [(my_story.split(),my_question.split(),'yes')] my_story,my_ques,my_ans = vectorize_stories(mydata) pred_results = model.predict(([ my_story, my_ques])) #Generate prediction from model val_max = np.argmax(pred_results[0]) for key, val in tokenizer.word_index.items(): if val == val_max: k = key print("Predicted answer is: ", k) print("Probability of certainty was: ", pred_results[0][val_max]) ```
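As a recap of the model's core mechanic, the match step (a dot product between story and question embeddings over the embedding axis, followed by a softmax) can be sketched in plain NumPy for a single sample. Shapes and values here are illustrative stand-ins, not taken from the trained model:

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the row max for numerical stability
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
story_maxlen, query_maxlen, embed_dim = 10, 6, 64

# stand-ins for one sample's embedded story and question
input_encoded_m = rng.normal(size=(story_maxlen, embed_dim))
question_encoded = rng.normal(size=(query_maxlen, embed_dim))

# dot over the embedding axis, as dot(..., axes=(2, 2)) does per sample
match = softmax(input_encoded_m @ question_encoded.T)

print(match.shape)  # (10, 6): one attention row per story position
```

Each row of `match` is a probability distribution saying how strongly that story position attends to each question position, which is exactly what gets added to the second input encoding in the model above.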
# Monitoring Data Drift Over time, models can become less effective at predicting accurately due to changing trends in feature data. This phenomenon is known as *data drift*, and it's important to monitor your machine learning solution to detect it so you can retrain your models if necessary. In this lab, you'll configure data drift monitoring for datasets. ## Before you start In addition to the latest version of the **azureml-sdk** and **azureml-widgets** packages, you'll need the **azureml-datadrift** package to run the code in this notebook. Run the cell below to verify that it is installed. ``` !pip show azureml-datadrift ``` ## Connect to your workspace With the required SDK packages installed, now you're ready to connect to your workspace. > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure. ``` from azureml.core import Workspace # Load the workspace from the saved config file ws = Workspace.from_config() print('Ready to work with', ws.name) ``` ## Create a *baseline* dataset To monitor a dataset for data drift, you must register a *baseline* dataset (usually the dataset used to train your model) to use as a point of comparison with data collected in the future. 
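Before configuring the service, it can help to see what a drift measure looks like at its simplest. The sketch below is an illustrative single-feature check only; the Azure data drift monitor uses its own set of statistical measures, not this one:

```python
import numpy as np

def mean_shift_in_sds(baseline, target):
    # distance the target mean has moved, in baseline standard deviations
    return abs(target.mean() - baseline.mean()) / baseline.std()

rng = np.random.default_rng(42)
baseline_age = rng.normal(50, 10, size=1000)  # simulated baseline feature
drifted_age = rng.normal(60, 10, size=1000)   # simulated later collection

shift = mean_shift_in_sds(baseline_age, drifted_age)
print('mean shift: {:.2f} baseline SDs'.format(shift))
```

A shift of around one standard deviation, as in this simulation, is the kind of change a drift monitor is designed to surface before model accuracy quietly degrades.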
``` from azureml.core import Datastore, Dataset # Upload the baseline data default_ds = ws.get_default_datastore() default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], target_path='diabetes-baseline', overwrite=True, show_progress=True) # Create and register the baseline dataset print('Registering baseline dataset...') baseline_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-baseline/*.csv')) baseline_data_set = baseline_data_set.register(workspace=ws, name='diabetes baseline', description='diabetes baseline data', tags = {'format':'CSV'}, create_new_version=True) print('Baseline dataset registered!') ``` ## Create a *target* dataset Over time, you can collect new data with the same features as your baseline training data. To compare this new data to the baseline data, you must define a target dataset that includes the features you want to analyze for data drift as well as a timestamp field that indicates the point in time when the new data was current; this enables you to measure data drift over temporal intervals. The timestamp can either be a field in the dataset itself, or derived from the folder and filename pattern used to store the data.
For example, you might store new data in a folder hierarchy that consists of a folder for the year, containing a folder for the month, which in turn contains a folder for the day; or you might just encode the year, month, and day in the file name like this: *data_2020-01-29.csv*, which is the approach taken in the following code: ``` import datetime as dt import pandas as pd print('Generating simulated data...') # Load the smaller of the two data files data = pd.read_csv('data/diabetes2.csv') # We'll generate data for the past 6 weeks weeknos = reversed(range(6)) file_paths = [] for weekno in weeknos: # Get the date X weeks ago data_date = dt.date.today() - dt.timedelta(weeks=weekno) # Modify data to create some drift data['Pregnancies'] = data['Pregnancies'] + 1 data['Age'] = round(data['Age'] * 1.2).astype(int) data['BMI'] = data['BMI'] * 1.1 # Save the file with the date encoded in the filename file_path = 'data/diabetes_{}.csv'.format(data_date.strftime("%Y-%m-%d")) data.to_csv(file_path) file_paths.append(file_path) # Upload the files path_on_datastore = 'diabetes-target' default_ds.upload_files(files=file_paths, target_path=path_on_datastore, overwrite=True, show_progress=True) # Use the folder partition format to define a dataset with a 'date' timestamp column partition_format = path_on_datastore + '/diabetes_{date:yyyy-MM-dd}.csv' target_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, path_on_datastore + '/*.csv'), partition_format=partition_format) # Register the target dataset print('Registering target dataset...') target_data_set = target_data_set.with_timestamp_columns('date').register(workspace=ws, name='diabetes target', description='diabetes target data', tags = {'format':'CSV'}, create_new_version=True) print('Target dataset registered!') ``` ## Create a data drift monitor Now you're ready to create a data drift monitor for the diabetes data.
The data drift monitor will run periodically or on-demand to compare the baseline dataset with the target dataset, to which new data will be added over time. ### Create a compute target To run the data drift monitor, you'll need a compute target. Run the following cell to specify a compute cluster (if it doesn't exist, it will be created). > **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character. ``` from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException cluster_name = "anhldt-compute2" try: # Check for existing compute target training_cluster = ComputeTarget(workspace=ws, name=cluster_name) print('Found existing cluster, use it.') except ComputeTargetException: # If it doesn't already exist, create it try: compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2) training_cluster = ComputeTarget.create(ws, cluster_name, compute_config) training_cluster.wait_for_completion(show_output=True) except Exception as ex: print(ex) ``` > **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota. ### Define the data drift monitor Now you're ready to use a **DataDriftDetector** class to define the data drift monitor for your data.
You can specify the features you want to monitor for data drift, the name of the compute target to be used to run the monitoring process, the frequency at which the data should be compared, the data drift threshold above which an alert should be triggered, and the latency (in hours) to allow for data collection. ``` from azureml.datadrift import DataDriftDetector # set up feature list features = ['Pregnancies', 'Age', 'BMI'] # set up data drift detector monitor = DataDriftDetector.create_from_datasets(ws, 'mslearn-diabates-drift', baseline_data_set, target_data_set, compute_target=cluster_name, frequency='Week', feature_list=features, drift_threshold=.3, latency=24) monitor ``` ## Backfill the data drift monitor You have a baseline dataset and a target dataset that includes simulated weekly data collection for six weeks. You can use this to backfill the monitor so that it can analyze data drift between the original baseline and the target data. > **Note** This may take some time to run, as the compute target must be started to run the backfill analysis. The widget may not always update to show the status, so click the link to observe the experiment status in Azure Machine Learning studio! ``` from azureml.widgets import RunDetails backfill = monitor.backfill(dt.datetime.now() - dt.timedelta(weeks=6), dt.datetime.now()) RunDetails(backfill).show() backfill.wait_for_completion() ``` ## Analyze data drift You can use the following code to examine data drift for the points in time collected in the backfill run. ``` drift_metrics = backfill.get_metrics() for metric in drift_metrics: print(metric, drift_metrics[metric]) ``` You can also visualize the data drift metrics in [Azure Machine Learning studio](https://ml.azure.com) by following these steps: 1. On the **Datasets** page, view the **Dataset monitors** tab. 2. Click the data drift monitor you want to view. 3. 
Select the date range over which you want to view data drift metrics (if the column chart does not show multiple weeks of data, wait a minute or so and click **Refresh**). 4. Examine the charts in the **Drift overview** section at the top, which show overall drift magnitude and the drift contribution per feature. 5. Explore the charts in the **Feature detail** section at the bottom, which enable you to see various measures of drift for individual features. > **Note**: For help understanding the data drift metrics, see the [How to monitor datasets](https://docs.microsoft.com/azure/machine-learning/how-to-monitor-datasets#understanding-data-drift-results) article in the Azure Machine Learning documentation. ## Explore further This lab is designed to introduce you to the concepts and principles of data drift monitoring. To learn more about monitoring data drift using datasets, see the [Detect data drift on datasets](https://docs.microsoft.com/azure/machine-learning/how-to-monitor-datasets) article in the Azure Machine Learning documentation. You can also collect data from published services and use it as a target dataset for data drift monitoring. See [Collect data from models in production](https://docs.microsoft.com/azure/machine-learning/how-to-enable-data-collection) for details.
# Scaling up ML using Cloud AI Platform In this notebook, we take a previously developed TensorFlow model to predict taxifare rides and package it up so that it can be run in Cloud AI Platform. For now, we'll run this on a small dataset. The model that was developed is rather simplistic, and therefore, the accuracy of the model is not great either. However, this notebook illustrates *how* to package up a TensorFlow model to run it within Cloud AI Platform. Later in the course, we will look at ways to make a more effective machine learning model. ## Environment variables for project and bucket Note that: <ol> <li> Your project id is the *unique* string that identifies your project (not the project name). You can find this from the GCP Console dashboard's Home page. My dashboard reads: <b>Project ID:</b> cloud-training-demos </li> <li> Cloud training often involves saving and restoring model files. If you don't have a bucket already, I suggest that you create one from the GCP console (because it will dynamically check whether the bucket name you want is available). A common pattern is to prefix the bucket name by the project id, so that it is unique. Also, for cost reasons, you might want to use a single region bucket. </li> </ol> <b>Change the cell below</b> to reflect your Project ID and bucket name. ``` !sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst import os PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID BUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. 
us-central1 # For Python Code # Model Info MODEL_NAME = 'taxifare' # Model Version MODEL_VERSION = 'v1' # Training Directory name TRAINING_DIR = 'taxi_trained' # For Bash Code os.environ['PROJECT'] = PROJECT os.environ['BUCKET'] = BUCKET os.environ['REGION'] = REGION os.environ['MODEL_NAME'] = MODEL_NAME os.environ['MODEL_VERSION'] = MODEL_VERSION os.environ['TRAINING_DIR'] = TRAINING_DIR os.environ['TFVERSION'] = '2.5' # Tensorflow version %%bash gcloud config set project $PROJECT gcloud config set compute/region $REGION ``` ## Packaging up the code Take your code and put it into a standard Python package structure. <a href="taxifare/trainer/model.py">model.py</a> and <a href="taxifare/trainer/task.py">task.py</a> contain the TensorFlow code from earlier (explore the <a href="taxifare/trainer/">directory structure</a>). ``` %%bash find ${MODEL_NAME} %%bash cat ${MODEL_NAME}/trainer/model.py ``` ## Find absolute paths to your data Note the absolute paths below. ``` %%bash echo "Working Directory: ${PWD}" echo "Head of taxi-train.csv" head -1 $PWD/taxi-train.csv echo "Head of taxi-valid.csv" head -1 $PWD/taxi-valid.csv ``` ## Running the Python module from the command-line #### Clean model training dir/output dir ``` %%bash # This is so that the trained model is started fresh each time. However, this needs to be done before rm -rf $PWD/${TRAINING_DIR} %%bash # Setup python so it sees the task module which controls the model.py export PYTHONPATH=${PYTHONPATH}:${PWD}/${MODEL_NAME} # Currently set for python 2. To run with python 3 # 1. Replace 'python' with 'python3' in the following command # 2.
Edit trainer/task.py to reflect proper module import method python -m trainer.task \ --train_data_paths="${PWD}/taxi-train*" \ --eval_data_paths=${PWD}/taxi-valid.csv \ --output_dir=${PWD}/${TRAINING_DIR} \ --train_steps=1000 --job-dir=./tmp %%bash ls $PWD/${TRAINING_DIR}/export/exporter/ %%writefile ./test.json {"pickuplon": -73.885262,"pickuplat": 40.773008,"dropofflon": -73.987232,"dropofflat": 40.732403,"passengers": 2} %%bash sudo find "/usr/lib/google-cloud-sdk/lib/googlecloudsdk/command_lib/ml_engine" -name '*.pyc' -delete %%bash # This model dir is the model exported after training and is used for prediction # model_dir=$(ls ${PWD}/${TRAINING_DIR}/export/exporter | tail -1) # predict using the trained model gcloud ai-platform local predict \ --model-dir=${PWD}/${TRAINING_DIR}/export/exporter/${model_dir} \ --json-instances=./test.json ``` #### Clean model training dir/output dir ``` %%bash # This is so that the trained model is started fresh each time. However, this needs to be done before rm -rf $PWD/${TRAINING_DIR} ``` ## Running locally using gcloud ``` %%bash # Use Cloud Machine Learning Engine to train the model in local file system gcloud ai-platform local train \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ -- \ --train_data_paths=${PWD}/taxi-train.csv \ --eval_data_paths=${PWD}/taxi-valid.csv \ --train_steps=1000 \ --output_dir=${PWD}/${TRAINING_DIR} %%bash ls $PWD/${TRAINING_DIR} ``` ## Submit training job using gcloud First copy the training data to the cloud. Then, launch a training job. After you submit the job, go to the cloud console (http://console.cloud.google.com) and select <b>AI Platform | Jobs</b> to monitor progress. <b>Note:</b> Don't be concerned if the notebook stalls (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. Use the Cloud Console link (above) to monitor the job. 
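The `trainer.task` module invoked above is expected to parse these command-line flags and pass them on to the model code. Below is a minimal, hypothetical sketch of such an argument parser; the actual `task.py` in the lab repository will differ:

```python
import argparse

def parse_args(argv=None):
    # Flags mirror those passed to `python -m trainer.task` above
    parser = argparse.ArgumentParser()
    parser.add_argument('--train_data_paths', required=True)
    parser.add_argument('--eval_data_paths', required=True)
    parser.add_argument('--output_dir', required=True)
    parser.add_argument('--train_steps', type=int, default=1000)
    parser.add_argument('--job-dir', dest='job_dir', default='./tmp')
    return parser.parse_args(argv)

# Demonstrate parsing the same flags used in the local run above
args = parse_args(['--train_data_paths', 'taxi-train*',
                   '--eval_data_paths', 'taxi-valid.csv',
                   '--output_dir', 'taxi_trained',
                   '--train_steps', '1000'])
print(args.train_steps, args.output_dir, args.job_dir)
```

Keeping all hyperparameters behind `argparse` like this is what lets the same package run unchanged locally, via `gcloud ai-platform local train`, and as a submitted cloud job.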
``` %%bash # Clear Cloud Storage bucket and copy the CSV files to Cloud Storage bucket echo $BUCKET gsutil -m rm -rf gs://${BUCKET}/${MODEL_NAME}/smallinput/ gsutil -m cp ${PWD}/*.csv gs://${BUCKET}/${MODEL_NAME}/smallinput/ %%bash OUTDIR=gs://${BUCKET}/${MODEL_NAME}/smallinput/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) echo $OUTDIR $REGION $JOBNAME # Clear the Cloud Storage Bucket used for the training job gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=BASIC \ --runtime-version 2.3 \ --python-version 3.5 \ -- \ --train_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-train*" \ --eval_data_paths="gs://${BUCKET}/${MODEL_NAME}/smallinput/taxi-valid*" \ --output_dir=$OUTDIR \ --train_steps=10000 ``` Don't be concerned if the notebook appears stalled (with a blue progress bar) or returns with an error about being unable to refresh auth tokens. This is a long-lived Cloud job and work is going on in the cloud. <b>Use the Cloud Console link to monitor the job and do NOT proceed until the job is done.</b> ``` %%bash gsutil ls gs://${BUCKET}/${MODEL_NAME}/smallinput ``` ## Train on larger dataset I have already followed the steps below and the files are already available. <b> You don't need to do the steps in this comment. </b> In the next chapter (on feature engineering), we will avoid all this manual processing by using Cloud Dataflow. 
Go to http://bigquery.cloud.google.com/ and type the query: <pre> SELECT (tolls_amount + fare_amount) AS fare_amount, pickup_longitude AS pickuplon, pickup_latitude AS pickuplat, dropoff_longitude AS dropofflon, dropoff_latitude AS dropofflat, passenger_count*1.0 AS passengers, 'nokeyindata' AS key FROM [nyc-tlc:yellow.trips] WHERE trip_distance > 0 AND fare_amount >= 2.5 AND pickup_longitude > -78 AND pickup_longitude < -70 AND dropoff_longitude > -78 AND dropoff_longitude < -70 AND pickup_latitude > 37 AND pickup_latitude < 45 AND dropoff_latitude > 37 AND dropoff_latitude < 45 AND passenger_count > 0 AND ABS(HASH(pickup_datetime)) % 1000 == 1 </pre> Note that this is now 1,000,000 rows (i.e. 100x the original dataset). Export this to CSV using the following steps (Note that <b>I have already done this and made the resulting GCS data publicly available</b>, so you don't need to do it.): <ol> <li> Click on the "Save As Table" button and note down the name of the dataset and table. <li> On the BigQuery console, find the newly exported table in the left-hand-side menu, and click on the name. <li> Click on "Export Table" <li> Supply your bucket name and give it the name train.csv (for example: gs://cloud-training-demos-ml/taxifare/ch3/train.csv). Note down what this is. Wait for the job to finish (look at the "Job History" on the left-hand-side menu) <li> In the query above, change the final "== 1" to "== 2" and export this to Cloud Storage as valid.csv (e.g. gs://cloud-training-demos-ml/taxifare/ch3/valid.csv) <li> Download the two files, remove the header line and upload it back to GCS. </ol> <p/> <p/> ## Run Cloud training on 1-million row dataset This took 60 minutes and uses as input 1-million rows. The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). 
At the end of the training the loss was 32, but the RMSE (calculated on the validation dataset) was stubbornly at 9.03. So, simply adding more data doesn't help. ``` %%bash OUTDIR=gs://${BUCKET}/${MODEL_NAME}/${TRAINING_DIR} JOBNAME=${MODEL_NAME}_$(date -u +%y%m%d_%H%M%S) CRS_BUCKET=cloud-training-demos # use the already exported data echo $OUTDIR $REGION $JOBNAME gsutil -m rm -rf $OUTDIR gcloud ai-platform jobs submit training $JOBNAME \ --region=$REGION \ --module-name=trainer.task \ --package-path=${PWD}/${MODEL_NAME}/trainer \ --job-dir=$OUTDIR \ --staging-bucket=gs://$BUCKET \ --scale-tier=STANDARD_1 \ --runtime-version 2.3 \ --python-version 3.5 \ -- \ --train_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/train.csv" \ --eval_data_paths="gs://${CRS_BUCKET}/${MODEL_NAME}/ch3/valid.csv" \ --output_dir=$OUTDIR \ --train_steps=100000 ``` ## Challenge Exercise Modify your solution to the challenge exercise in d_trainandevaluate.ipynb appropriately. Make sure that you implement training and deployment. Increase the size of your dataset by 10x since you are running on the cloud. Does your accuracy improve? Copyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Machine Learning Engineer Nanodegree ## Supervised Learning ## Project: Finding Donors for *CharityML* Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook you will be given some example code, and it will be your job to implement the additional functionality required to complete the project. Sections whose header begins with **'Implementation'** indicate that the code block that follows requires additional functionality which you must develop. For each part of the project you will be given instructions, and the implementation guidelines will be marked in the code block with a `'TODO'` statement. Please read the instructions carefully! Besides code implementations, you will also have to answer questions related to the project and your implementation. Each section where you will answer a question has a header with the term **'Question X'**. Read the questions carefully and provide complete answers in the text boxes that begin with the term **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions as well as the implementations you provide. >**Note:** Please specify WHICH PYTHON VERSION you used when submitting this notebook. "Code" and "Markdown" cells can be executed using the keyboard shortcut **Shift + Enter**. In addition, "Markdown" cells can be edited by double-clicking on them. >**Python 2.7** ## Getting Started In this project, you will employ several supervised learning algorithms to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and optimize it to model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000.
This type of task can arise in non-profit organizations that survive on donations. Understanding an individual's income can help the organization determine the most appropriate donation amount to request, or whether they should reach out to that person at all. While it can be a difficult task to determine a person's income bracket directly, we can infer these values from other publicly available features. The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income) and was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find Ron Kohavi's article [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here has some small modifications compared to the original data, such as the removal of the `'fnlwgt'` feature and the removal of inconsistent records. ---- ## Exploring the Data Run the code cell below to load the necessary Python libraries and load the census data. Note that the last column of this dataset, `'income'`, will be our target label (whether an individual makes $50,000 or more annually). All other columns are features about each individual in the census database. ``` # Import libraries necessary for this project. import numpy as np import pandas as pd from time import time from IPython.display import display # Allows the use of display() for DataFrames.
# Import the supplementary visualization code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the Census data data = pd.read_csv("census.csv") # Success - Display the first record display(data.head(n=1)) ``` ### Implementation: Data Exploration A cursory investigation of the dataset will determine how many individuals fall into each group, and will tell us about the percentage of these individuals making more than \$50,000 annually. In the code below, you will need to compute the following: - The total number of records, `'n_records'` - The number of individuals making more than \$50,000 annually, `'n_greater_50k'`. - The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`. - The percentage of individuals making more than \$50,000 annually, `'greater_percent'`. **HINT:** You may need to look at the table above to understand how the `'income'` entries are formatted. ``` # Used so that division returns a float from __future__ import division # TODO: Total number of records. n_records = data['age'].count() # TODO: Number of records with annual income above $50,000 n_greater_50k = data[data['income'] == '>50K']['age'].count() # TODO: Number of records with annual income at most $50,000 n_at_most_50k = data[data['income'] == '<=50K']['age'].count() # TODO: Percentage of individuals with annual income above $50,000 greater_percent = (n_greater_50k * 100) / n_records # Display the results print "Total number of records: {}".format(n_records) print "Individuals making more than $50,000: {}".format(n_greater_50k) print "Individuals making at most $50,000: {}".format(n_at_most_50k) print "Percentage of individuals making more than $50,000: {:.2f}%".format(greater_percent) ``` **Exploring the columns** * **age**: continuous.
* **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked. * **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool. * **education-num**: continuous. * **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse. * **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces. * **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried. * **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other. * **sex**: Female, Male. * **capital-gain**: continuous. * **capital-loss**: continuous. * **hours-per-week**: continuous. * **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands. ---- ## Preparing the Data Before data can be used as input for machine learning algorithms, it often has to be cleaned, formatted, and restructured; this process is known as **preprocessing**. Fortunately, this dataset contains no inconsistent records to handle; however, some columns must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms.
### Transforming Skewed Continuous Features A dataset may sometimes contain at least one column whose values tend to lie near a single number, but will also have records with that same attribute taking a much larger or much smaller value than that tendency. Algorithms can be sensitive to such distributions of values and can underperform if the distribution is not properly normalized. With the census dataset, two features fit this description: `'capital-gain'` and `'capital-loss'`. Run the code cell below to plot a histogram of these two features. Note how the values are distributed. ``` # Split the data into features and target column income_raw = data['income'] features_raw = data.drop('income', axis = 1) # Visualize skewed continuous features of the original data vs.distribution(data) ``` For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a <a href="https://en.wikipedia.org/wiki/Data_transformation_(statistics)">logarithmic transformation</a> on the data so that very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values affected by outliers (very large or very small values). Care must be taken when applying this transformation, however: the logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the logarithm successfully. Run the code cell below to perform the transformation on the data and visualize the results. Again, note the range of values and how they are distributed. ``` # Log-transform the skewed features.
skewed = ['capital-gain', 'capital-loss'] features_log_transformed = pd.DataFrame(data = features_raw) features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1)) # Visualize the new distributions after the transformation vs.distribution(features_log_transformed, transformed = True) ``` ### Normalizing Numerical Features In addition to the transformations performed on skewed features, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exemplified below. Run the code cell below to normalize each numerical feature; we will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this. ``` # Import sklearn.preprocessing.MinMaxScaler from sklearn.preprocessing import MinMaxScaler # Initialize a scaler, then apply it to the features scaler = MinMaxScaler() # default=(0, 1) numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week'] features_log_minmax_transform = pd.DataFrame(data = features_log_transformed) features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical]) # Show an example of a record with scaling applied display(features_log_minmax_transform.head(n=5)) ``` ### Implementation: Data Preprocessing From the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric.
Learning algorithms typically expect numeric input, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is the **one-hot encoding** scheme. One-hot encoding creates one new variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible values: `A`, `B`, or `C`. We then encode this feature into three new features: `someFeature_A`, `someFeature_B` and `someFeature_C`.

| | someFeature | | someFeature_A | someFeature_B | someFeature_C |
| :-: | :-: | | :-: | :-: | :-: |
| 0 | B | | 0 | 1 | 0 |
| 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 |
| 2 | A | | 1 | 0 | 0 |

Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'`, to numeric values for the learning algorithm to work. Since there are only two possible categories for this label ("<=50K" and ">50K"), we can avoid one-hot encoding and simply encode these two categories as `0` and `1`, respectively.

In the code cell below, you will need to implement the following:
- Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data.
- Convert the target label `'income_raw'` to numeric entries.
  - Set records with "<=50K" to `0` and records with ">50K" to `1`.
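Before using `pandas.get_dummies()`, it can help to see what one-hot encoding does by hand. Below is a minimal pure-Python sketch using the hypothetical `someFeature` column from the table above (these names are illustrative, not census features):

```python
def one_hot(values, categories):
    # Encode each value as a row of 0/1 indicator columns, one per category
    return [[1 if v == c else 0 for c in categories] for v in values]

some_feature = ['B', 'C', 'A']   # the hypothetical column from the table above
categories = ['A', 'B', 'C']

encoded = one_hot(some_feature, categories)
# encoded == [[0, 1, 0], [0, 0, 1], [1, 0, 0]], matching the table row by row
```

`pandas.get_dummies()` performs the same expansion for every non-numeric column at once, inferring the category set from the data.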
```
# TODO: One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)

# TODO: Encode the 'income_raw' column to numerical values
income = income_raw.apply(lambda d: 0 if d == '<=50K' else 1)

# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print "{} total features after one-hot encoding.".format(len(encoded))

# Uncomment the line below to see the encoded feature names
# print encoded
```

### Shuffle and split the data

Now all _categorical variables_ have been converted into numerical features, and all numerical features have been normalised. As always, we will now split the data into training and test sets: 80% of the data will be used for training and 20% for testing. Run the code cell below to perform the split.

```
# Import train_test_split
from sklearn.cross_validation import train_test_split

# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final, income, test_size = 0.2, random_state = 0)

# Show the results of the split
print "Training set has {} samples.".format(X_train.shape[0])
print "Testing set has {} samples.".format(X_test.shape[0])
```

----
## Evaluating model performance

In this section we will investigate four different algorithms and determine which one best models the data. Three of these algorithms will be supervised learning algorithms of your choice, and the fourth is known as a *naive predictor*.

### Metrics and the naive predictor

*CharityML*, equipped with their research, knows that individuals who make more than \$50,000 are most likely to donate to their charity campaign.
Because of this, *CharityML* is particularly interested in accurately predicting which individuals make more than \$50,000. It would appear that using **accuracy** as a metric for evaluating a model's performance is appropriate. Additionally, identifying someone who *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking for individuals willing to donate. Therefore, a model's ability to *precisely* predict those who make more than \$50,000 is *more important* than its ability to **recall** those individuals. We can use the **F-beta score** as a metric that considers both precision and recall:

$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$

In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for short).

Looking at the class distribution (those who make at most \$50,000, and those who make more), it is clear that most individuals do not make more than \$50,000. This can greatly affect **accuracy**, since we could simply say *"this person does not make more than \$50,000"* and be right most of the time, without ever looking at the data! Making such a statement would be called **naive**, since we have not considered any information to substantiate the claim. It is always important to consider the *naive prediction* for your dataset, to help establish a benchmark for judging model performance.
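The F-beta formula above translates directly into a few lines of code. This is a standalone sketch of the formula, not the `sklearn.metrics.fbeta_score` implementation used later:

```python
def f_beta(precision, recall, beta):
    # F_beta = (1 + beta^2) * (precision * recall) / (beta^2 * precision + recall)
    return (1 + beta ** 2) * (precision * recall) / ((beta ** 2) * precision + recall)

# With beta = 0.5, a drop in precision hurts the score more than an equal drop in recall:
low_precision = f_beta(0.5, 1.0, 0.5)  # ~0.556
low_recall = f_beta(1.0, 0.5, 0.5)     # ~0.833
```

This asymmetry is exactly why $\beta = 0.5$ fits the problem: CharityML cares more about precision than recall.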
That said, we know a naive prediction would accomplish nothing: if we predicted that everyone made less than \$50,000, *CharityML* would identify no one as a potential donor.

#### Note: recap of accuracy, precision and recall

**Accuracy** measures how often the classifier makes the correct prediction. It is the ratio of the number of correct predictions to the total number of predictions (the number of records tested).

**Precision** tells us what proportion of the messages we classified as spam actually were spam. It is the ratio of true positives (messages classified as spam that really were spam) to all positives (all messages classified as spam, regardless of whether the classification was correct); in other words, `[True positives / (True positives + False positives)]`.

**Recall (sensitivity)** tells us what proportion of the messages that actually were spam were classified by us as spam. It is the ratio of true positives (classified as spam, and really spam) to all messages that really were spam; in other words, `[True positives / (True positives + False negatives)]`.

For classification problems with skewed class distributions, as in our case, accuracy by itself is not a very good metric. For example, if we had 100 text messages and only 2 were spam, we could classify 90 messages as "not spam" (including the 2 that were spam but would be classified as not spam, and therefore be false negatives) and 10 messages as spam (all 10 false positives) and still get a good accuracy score. For such cases, precision and recall are very useful.
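Plugging the counts from the hypothetical 100-message example above into these definitions shows why accuracy alone misleads:

```python
# Hypothetical counts from the example: 0 true positives, 10 false positives,
# 88 true negatives, 2 false negatives (the 2 real spam messages were missed)
TP, FP, TN, FN = 0, 10, 88, 2

accuracy = (TP + TN) / float(TP + FP + TN + FN)  # 0.88 - looks good...
precision = TP / float(TP + FP)                  # 0.0  - ...but no flagged message was spam
recall = TP / float(TP + FN)                     # 0.0  - ...and no real spam was caught
```

Despite 88% accuracy, this classifier is useless at the task it was built for, which is what precision and recall expose.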
These two metrics can be combined into the F1 score, computed as the (harmonic) mean of precision and recall. This score ranges from 0 to 1, with 1 being the best possible F1 score (we use the harmonic mean because we are dealing with ratios).

```
TP = np.sum(income) # Counting, since this is the "naive" case. Note that 'income' is the 'income_raw' data encoded to numerical values during the preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case

# TODO: Calculate accuracy, precision and recall
# (cast to float to avoid integer division under Python 2)
accuracy = float(TP) / income.count()
recall = float(TP) / (TP + FN)
precision = float(TP) / (TP + FP)

# TODO: Calculate the F-score using the formula above for beta = 0.5 and the correct values of precision and recall.
fscore = (1 + (0.5 ** 2)) * ((precision * recall) / (((0.5 ** 2) * precision) + recall))

# Show the results
print "Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore)
```

### Question 1 - Naive predictor performance

* If we chose a model that always predicted an individual made more than \$50,000, what would its accuracy and F-score be on this dataset? You must use the code cell above and assign your results to the variables `'accuracy'` and `'fscore'`, which will be used later.

**Please note** that the purpose of generating a naive predictor is simply to show what a model without any intelligence would look like. In the real world, your base model would ideally be either the result of a previous model or based on a paper you would build upon. When there is no model benchmark at all, a naive predictor is still better than a random choice.
**HINT:**

* When we have a model that always predicts '1' (i.e. the individual makes more than 50k), the model has no True Negatives or False Negatives, because we never predict a negative (or '0') value. Thus our accuracy in this case becomes the same as our precision (True positives / (True positives + False positives)), since every prediction of '1' that should have been '0' becomes a false positive; the denominator in this case is the total number of records.
* Our recall score (True positives / (True positives + False negatives)) will be 1, since we have no False Negatives.

### Supervised learning models

**These are some of the supervised learning models available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html)
- Gaussian Naive Bayes (GaussianNB)
- Decision Trees
- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)
- K-Nearest Neighbors (KNeighbors)
- Stochastic Gradient Descent Classifier (SGDC)
- Support Vector Machines (SVM)
- Logistic Regression

### Question 2 - Model application

List three of the supervised learning models above that are appropriate for this problem and that you will test on the census data. For each model chosen:
- Describe a real-world situation where the model can be applied.
- What are the model's strengths; when does it perform well?
- What are the model's weaknesses; when does it perform poorly?
- What makes this model a good candidate for the problem, given what you know about the data?

**HINT:** Structure your answer in the same format as above, with 4 parts for each of the models you choose. Please include references with each of your answers.
**Answer:**

**Gaussian Naive Bayes**

- **Describe a real-world situation where the model can be applied.** Spam detection.
- **What are the model's strengths; when does it perform well?** It is easy to implement, and it is fast.
- **What are the model's weaknesses; when does it perform poorly?** It may not work as well on data with many features, and it assumes the features are independent.
- **What makes this model a good candidate for the problem, given what you know about the data?** It is a well-known model that can be used on many types of data, and it is also fast.

**References:** http://scikit-learn.org/stable/modules/naive_bayes.html , https://en.wikipedia.org/wiki/Naive_Bayes_classifier , https://pt.wikipedia.org/wiki/Teorema_de_Bayes

**Decision Trees**

- **Describe a real-world situation where the model can be applied.** Deciding whether to play football based on data such as whether it is cold or hot, raining or sunny, and whatever other attributes are available.
- **What are the model's strengths; when does it perform well?** Simple to understand, interpret and analyse. Trees can be visualised graphically. Requires little (or no) data preparation and transformation. Able to handle both numerical and categorical data.
- **What are the model's weaknesses; when does it perform poorly?** Decision trees can become complex and fail to generalise the data well. They can also be unstable: small variations in the data can result in a completely different tree being generated.
- **What makes this model a good candidate for the problem, given what you know about the data?** I really like decision trees, and I believe that, given the amount and types of data, they can be an excellent choice for this project.
**References:** http://scikit-learn.org/stable/modules/tree.html , https://en.wikipedia.org/wiki/Decision_tree

**Logistic Regression**

- **Describe a real-world situation where the model can be applied.** Predicting whether you will gain or lose money on stocks.
- **What are the model's strengths; when does it perform well?** It is simple and fast, it works with both large and small amounts of data, and it is easy to understand.
- **What are the model's weaknesses; when does it perform poorly?** It cannot be used when the outcome is not binary.
- **What makes this model a good candidate for the problem, given what you know about the data?** It is a relatively simpler model than the others, and it performs very well.

**References:** http://scikit-learn.org/stable/modules/linear_model.html#logistic-regression , https://en.wikipedia.org/wiki/Logistic_regression , https://pt.wikipedia.org/wiki/Regress%C3%A3o_log%C3%ADstica

### Implementation - Creating a training and prediction pipeline

To properly evaluate the performance of each model you have chosen, it is important that you create a training and prediction pipeline that allows you to quickly and efficiently train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.

In the code block below, you will need to implement the following:
- Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics).
- Fit the learner to the sampled training data and record the training time.
- Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`.
- Record the total prediction time.
- Compute accuracy for both the training subset and the testing set.
- Compute the F-score for both the training subset and the testing set.
- Make sure that you set the `beta` parameter!

```
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
from time import time  # time() is used below to measure training and prediction time

def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
    '''
    inputs:
       - learner: the learning algorithm to be trained and predicted on
       - sample_size: the size of samples (number) to be drawn from training set
       - X_train: features training set
       - y_train: income training set
       - X_test: features testing set
       - y_test: income testing set
    '''

    results = {}

    # TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
    start = time() # Get start time
    learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
    end = time() # Get end time

    # TODO: Calculate the training time
    results['train_time'] = end - start

    # TODO: Get the predictions on the test set(X_test),
    #       then get predictions on the first 300 training samples(X_train) using .predict()
    start = time() # Get start time
    predictions_test = learner.predict(X_test)
    predictions_train = learner.predict(X_train[:300])
    end = time() # Get end time

    # TODO: Calculate the total prediction time
    results['pred_time'] = end - start

    # TODO: Compute accuracy on the first 300 training samples which is y_train[:300]
    results['acc_train'] = accuracy_score(y_train[:300], predictions_train)

    # TODO: Compute accuracy on test set using accuracy_score()
    results['acc_test'] = accuracy_score(y_test, predictions_test)

    # TODO: Compute F-score on the first 300 training samples using fbeta_score()
    results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta = 0.5)

    # TODO: Compute F-score on the test set which is y_test
    results['f_test'] = fbeta_score(y_test, predictions_test, beta = 0.5)

    # Success
    print "{} trained on {} samples.".format(learner.__class__.__name__, sample_size)

    # Return the results
    return results
```

### Implementation: Initial model evaluation

In the code cell below, you will need to implement the following:
- Import the three supervised learning models you chose in the previous section.
- Initialise the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`.
  - Use a `'random_state'` for each model that accepts one.
  - **Note:** Use each model's default settings; you will tune one specific model in a later section.
- Compute the number of records equal to 1%, 10%, and 100% of the training data.
  - Store these values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.

**Note:** Depending on the algorithms chosen, the implementation below may take some time to run!

```
# TODO: Import the three supervised learning models from sklearn
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

# TODO: Initialise the three models
clf_A = GaussianNB()
clf_B = DecisionTreeClassifier(random_state=8)
clf_C = LogisticRegression(random_state=8)

# TODO: Compute the number of samples for 1%, 10%, and 100% of the training data
# HINT: samples_100 is the entire training set, i.e. len(y_train)
# HINT: samples_10 is 10% of samples_100
# HINT: samples_1 is 1% of samples_100
samples_100 = len(y_train)
samples_10 = int(samples_100 * 0.1)
samples_1 = int(samples_100 * 0.01)

# Collect results from the learning algorithms
results = {}
for clf in [clf_A, clf_B, clf_C]:
    clf_name = clf.__class__.__name__
    results[clf_name] = {}
    for i, samples in enumerate([samples_1, samples_10, samples_100]):
        results[clf_name][i] = \
        train_predict(clf, samples, X_train, y_train, X_test, y_test)

# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
```

----
## Improving results

In this final section, you will choose the best
of the three supervised learning models to use on the census data. You will then perform a grid search optimisation over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve the model's previous F-score.

### Question 3 - Choosing the best model

* Based on the evaluation above, in one or two paragraphs explain to *CharityML* which of the three models you believe is most appropriate for the task of identifying individuals who make more than \$50,000 a year.

**HINT:** Look at the bottom-left plot in the cell above (the visualisation created by `vs.evaluate(results, accuracy, fscore)`) and check the F score for the testing set when 100% of the training set is used. Which model has the highest score? Your answer should cover the following points:
* metrics - the F score on the testing set when 100% of the training data is used,
* prediction/training time,
* the algorithm's suitability for this dataset.

**Answer:**

The GaussianNB model performed faster, and the DecisionTreeClassifier model was more precise, both on the 1% and 10% training subsets. But LogisticRegression, on the full training set, was both more precise and better performing than the other two. Of the models tested, the one with the best performance and precision for identifying potential donors is LogisticRegression: it is the best on the full set in both accuracy and F-score.

### Question 4 - Describing the model in layman's terms

* In one or two paragraphs, explain to *CharityML*, in layman's terms, how the final chosen model works. Make sure you describe the model's major qualities, such as how it is trained and how it makes a prediction.
Avoid advanced mathematical jargon, such as describing equations.

**HINT:** When explaining your model, cite any external sources you used.

**Answer:**

The model we chose is Logistic Regression, widely used in Machine Learning for binary classification and originating in the field of statistics. Logistic Regression is a statistical binary-classification model that returns a binary outcome (0 or 1) by measuring the relationship between the dependent variable and the predictor variables, and in this way estimating their probabilities. More specifically for this analysis, the Logistic Regression model takes all the features and their values in our data (age, education, occupation, and the remaining attributes), analyses them, and tries to find a combination among them, labelling each record with 0 for a non-potential donor and 1 for a potential donor. Doing this for every row in our data, it returns as the result of the analysis how many, and which, potential donors make more than \$50,000 a year.

**References:** https://en.wikipedia.org/wiki/Logistic_regression , https://pt.wikipedia.org/wiki/Regress%C3%A3o_log%C3%ADstica

### Implementation: Model tuning

Fine-tune the chosen model. Use a grid search (`GridSearchCV`) with at least one important parameter tuned over at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:
- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).
- Initialise the classifier you have chosen and store it in `clf`.
- Set a `random_state`, if one is available, to the same state you set before.
- Create a dictionary of the parameters you wish to tune for the chosen model.
  - Example: `parameters = {'parameter' : [list of values]}`.
  - **Note:** Avoid tuning the `max_features` parameter if it is available!
- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).
- Perform a grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.
- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.

**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!

```
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import fbeta_score, make_scorer

# TODO: Initialise the classifier
clf = LogisticRegression(random_state=8)

# TODO: Create the list of parameters you wish to tune, using a dictionary if needed.
# HINT: parameters = {'parameter_1': [value1, value2], 'parameter_2': [value1, value2]}
parameters = {'penalty': ['l2'], 'C': [0.01, 0.1, 1, 10, 100], 'max_iter': [500, 1000, 1500], 'verbose': [1, 5, 10]}

# TODO: Create an fbeta_score scoring object using make_scorer()
scorer = make_scorer(fbeta_score, beta=0.5)

# TODO: Perform a grid search on the classifier using the 'scorer' as the scoring method in GridSearchCV()
grid_obj = GridSearchCV(clf, parameters, scorer)

# TODO: Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = grid_obj.fit(X_train, y_train)

# Get the best estimator
best_clf = grid_fit.best_estimator_

# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)

# Report the before-and-after scores
print "Unoptimized model\n------"
print "Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions))
print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5))
print "\nOptimized Model\n------"
print "Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions))
print "Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5))
```

### Question 5 - Final model evaluation

* What are the accuracy and F-score of the optimised model on the testing data?
* Are these scores better or worse than the model before the optimisation?
* How do the optimised model's results compare to the naive predictor benchmarks you found in **Question 1**?

**Note:** Fill in the table below with your results, then answer the questions in the **Answer** box.

#### Results:

| Metric | Unoptimized Model | Optimized Model |
| :------------: | :---------------: | :-------------: |
| Accuracy Score | 0.8419 | 0.8420 |
| F-score | 0.6832 | 0.6842 |

**Answer:**

- The accuracy and F-score of the optimised model on the testing data are 0.8420 and 0.6842, respectively.
- The scores are better with the optimised model; there was a small increase in both accuracy and F-score.
- The accuracy and F-score found in Question 1 are far below those of the optimised model: the optimised model more than doubles the Question 1 scores, a very significant improvement.

----
## Feature importance

An important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label, we greatly simplify our understanding of the phenomenon, which is the most important thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes more than \$50,000 a year.

Choose a scikit-learn classifier (e.g. adaboost, random forests) that has a `feature_importances_` attribute, which ranks the importance of features according to the chosen classifier. In the next python cell, fit this classifier to the training set and use the attribute to determine the top 5 most important features of the census dataset.
### Question 6 - Feature relevance observation

When we **explored the data**, we saw that there are thirteen features available for each record in the census data. Of these thirteen, which five features do you believe are most important for prediction, and in what order would you rank them? Why?

**Answer:**

- 1) capital-gain - To achieve a high income, a large capital gain is very important.
- 2) capital-loss - To achieve a high income you must not lose money; a large loss can badly hurt your income.
- 3) education-num - A numerical representation of education: the higher the number, the higher the education level and the salary, for example Bachelors or Doctorate.
- 4) occupation - Occupations vary widely, and so do their salaries; some occupations pay very well because they require more knowledge, while others that require little knowledge pay less.
- 5) age - People starting their professional lives tend to earn less, and they earn more as the years go by.

### Implementation - Extracting feature importance

Choose a supervised learning algorithm from `scikit-learn` that has a `feature_importances_` attribute available. This attribute is a function that ranks the importance of each feature of the dataset's records when making predictions based on the chosen algorithm.

In the code cell below, you will need to implement the following:
- Import a supervised learning model from sklearn if it is different from the three used earlier.
- Train the supervised model on the entire training set.
- Extract the feature importances using `'.feature_importances_'`.
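Once the importances are extracted, ranking them boils down to an `argsort`. Below is a minimal sketch with made-up importance scores and feature names (not the real census columns), showing the same top-k selection idiom used later for feature reduction:

```python
import numpy as np

importances = np.array([0.05, 0.40, 0.10, 0.30, 0.15])  # hypothetical scores
feature_names = ['f0', 'f1', 'f2', 'f3', 'f4']          # hypothetical names

# Indices sorted by importance, largest first; keep the top 3
top = np.argsort(importances)[::-1][:3]
top_names = [feature_names[i] for i in top]
# top_names == ['f1', 'f3', 'f4']
```

With the real model, `importances` would be `model.feature_importances_` and the names would come from `X_train.columns`.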
```
# TODO: Import a supervised learning model that has 'feature_importances_'
from sklearn.tree import DecisionTreeClassifier

model = DecisionTreeClassifier(random_state=8)

# TODO: Train the model on the training set using .fit(X_train, y_train)
model = model.fit(X_train, y_train)

# TODO: Extract the feature importances using .feature_importances_
importances = model.feature_importances_

# Plot
vs.feature_plot(importances, X_train, y_train)
```

### Question 7 - Extracting feature importance

Observe the visualisation created above, which shows the five features deemed most relevant for predicting whether an individual makes more than \$50,000 a year.

* How do these five features compare to the five you discussed in **Question 6**?
* If you were close to the same answer, how does this visualisation confirm your reasoning?
* If you were not close, why do you think these features are more relevant?

**Answer:**

Of the five features I listed and the five most relevant ones, 3 matched: capital-gain, education-num and age; the other two are marital-status and hours-per-week. My answer is close to what the chart above shows, as these are very important features for achieving an income above \$50,000. The marriage-related feature was a surprise, but on reflection it is an important one, since people tend to marry when they are already in a stronger financial situation, which makes a high annual income more likely. The age feature reflects that people earn less at the start of their professional life, and few young people save and/or invest money; that tends to happen once they have a few years of professional experience. The education feature is one of the main ones, because the more you study, the better your chances of earning more and achieving a high income.
The capital-gain feature, which may come from investments, makes a high annual income much easier to achieve, since it is extra money not tied to the potential donor's job. The hours-per-week feature shows that the more hours you work, the more you earn, which results in a higher salary and therefore an excellent annual income.

### Feature selection

How does a model perform if we only use a subset of all the available features in the data? With fewer features required to train, the expectation is that training and prediction will run in a much shorter time, at the cost of reduced performance metrics. From the visualisation above, we see that the top five most important features contribute more than 50% of the importance of **all** the features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code below will use the same optimised model you found earlier and train it on the same training set, but with only *the five most important features*.

```
# Import the functionality for cloning a model
from sklearn.base import clone

# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]

# Train the best model found from the earlier grid search
clf = (clone(best_clf)).fit(X_train_reduced, y_train)

# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)

# Report scores from the final model using both versions of the data.
print "Final Model trained on full data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)) print "\nFinal Model trained on reduced data\n------" print "Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)) print "F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)) ``` ### Questão 8 - Efeitos da seleção de atributos * Como o F-score do modelo final e o accuracy score do conjunto de dados reduzido utilizando apenas cinco atributos se compara aos mesmos indicadores utilizando todos os atributos? * Se o tempo de treinamento é uma variável importante, você consideraria utilizar os dados enxutos como seu conjunto de treinamento? **Resposta:** Os dois modelo são parecidos em comparação aos valores, pois a diferença é muito pequena, para Accuracy, é de menos de 0.02, enquanto F-score, é de menos de 0.04, uma redução aceitável se o tempo for muito importante. Mas se o tempo não é problema, ainda é melhor continuar usando todos os atributos, pois uma diferença pequena pode ser importante no resultado final. > **Nota**: Uma vez que você tenha concluído toda a implementação de código e respondido cada uma das questões acima, você poderá finalizar o seu trabalho exportando o iPython Notebook como um documento HTML. Você pode fazer isso utilizando o menu acima navegando para **File -> Download as -> HTML (.html)**. Inclua este documento junto do seu notebook como sua submissão.
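The column-selection idiom used above — `np.argsort(importances)[::-1][:5]` — is worth unpacking. A minimal sketch on made-up importances (the feature names and values here are illustrative, not results from the census data):

```python
import numpy as np

# Hypothetical feature names and importances (illustrative values only,
# not taken from the census data)
feature_names = np.array(["age", "education-num", "capital-gain",
                          "hours-per-week", "marital-status"])
importances = np.array([0.15, 0.20, 0.40, 0.10, 0.05])

# np.argsort gives indices that sort ascending; [::-1] reverses to descending
top_idx = np.argsort(importances)[::-1]

# Keep only the top 3 feature names (the notebook keeps the top 5 columns)
top_features = feature_names[top_idx[:3]]
print(top_features)  # ['capital-gain' 'education-num' 'age']
```

The same index array is what selects the reduced columns of `X_train` in the notebook's code.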
# Going deeper with Tensorflow In this seminar, we're going to play with [Tensorflow](https://www.tensorflow.org/) and see how it helps us build deep learning models. If you're running this notebook outside the course environment, you'll need to install tensorflow: * `pip install tensorflow` should install cpu-only TF on Linux & Mac OS * If you want GPU support from the outset, see [TF install page](https://www.tensorflow.org/install/) ``` import tensorflow as tf gpu_options = tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=0.1) s = tf.InteractiveSession(config=tf.ConfigProto(gpu_options=gpu_options)) ``` # Warming up For starters, let's implement a python function that computes the sum of squares of numbers from 0 to N-1. * Use numpy or python * An array of numbers 0 to N - numpy.arange(N) ``` import numpy as np def sum_squares(N): return <student.Implement_me()> %%time sum_squares(10**8) ``` # Tensorflow teaser Doing the very same thing ``` # I'm gonna be your function parameter N = tf.placeholder('int64', name="input_to_your_function") # I'm a recipe on how to produce the sum of squares of arange(N), given N result = tf.reduce_sum((tf.range(N)**2)) %%time # example of computing the same as sum_squares print(result.eval({N:10**8})) ``` # How does it work? 1. define placeholders where you'll send inputs; 2. make a symbolic graph: a recipe for mathematical transformation of those placeholders; 3. compute outputs of your graph with particular values for each placeholder * output.eval({placeholder:value}) * s.run(output, {placeholder:value}) * So far there are two main entities: "placeholder" and "transformation" * Both can be numbers, vectors, matrices, tensors, etc. * Both can be int32/64, floats or booleans (uint8) of various sizes.
* You can define new transformations as an arbitrary operation on placeholders and other transformations * tf.reduce_sum(tf.range(N)\**2) are 3 sequential transformations of placeholder N * There's a tensorflow symbolic version for every numpy function * `a+b, a/b, a**b, ...` behave just like in numpy * np.mean -> tf.reduce_mean * np.arange -> tf.range * np.cumsum -> tf.cumsum * If you can't find the op you need, see the [docs](https://www.tensorflow.org/api_docs/python). Still confused? We're gonna fix that. ``` #Default placeholder that can be arbitrary float32 scalar, vector, matrix, etc. arbitrary_input = tf.placeholder('float32') #Input vector of arbitrary length input_vector = tf.placeholder('float32',shape=(None,)) #Input vector that _must_ have 10 elements and integer type fixed_vector = tf.placeholder('int32',shape=(10,)) #Matrix of arbitrary n_rows and 15 columns (e.g. a minibatch of your data table) input_matrix = tf.placeholder('float32',shape=(None,15)) #You can generally use None whenever you don't need a specific shape input1 = tf.placeholder('float64',shape=(None,100,None)) input2 = tf.placeholder('int32',shape=(None,None,3,224,224)) #elementwise multiplication double_the_vector = input_vector*2 #elementwise cosine elementwise_cosine = tf.cos(input_vector) #difference between squared vector and vector itself vector_squares = input_vector**2 - input_vector #Practice time: create two vectors of type float32 my_vector = <student.init_float32_vector()> my_vector2 = <student.init_one_more_such_vector()> #Write a transformation (recipe): #(vec1)*(vec2) / (sin(vec1) +1) my_transformation = <student.implementwhatwaswrittenabove()> print(my_transformation) #it's okay, it's a symbolic graph # dummy = np.arange(5).astype('float32') my_transformation.eval({my_vector:dummy,my_vector2:dummy[::-1]}) ``` ### Visualizing graphs It's often useful to visualize the computation graph when debugging or optimizing.
Interactive visualization is where tensorflow really shines as compared to other frameworks. There's a special instrument for that, called Tensorboard. You can launch it from console: ```tensorboard --logdir=/tmp/tboard --port=7007``` If you're pathologically afraid of consoles, try this: ```os.system("tensorboard --logdir=/tmp/tboard --port=7007 &")``` _(but don't tell anyone we taught you that)_ ``` # launch tensorboard the ugly way, uncomment if you need that import os port = 6000 + os.getuid() print("Port: %d" % port) #!killall tensorboard os.system("tensorboard --logdir=./tboard --port=%d &" % port) # show graph to tensorboard writer = tf.summary.FileWriter("./tboard", graph=tf.get_default_graph()) writer.close() ``` One basic functionality of tensorboard is drawing graphs. Once you've run the cell above, go to `localhost:7007` (or the port printed above) in your browser and switch to the _graphs_ tab in the topbar. Here's what you should see: <img src="https://s12.postimg.org/a374bmffx/tensorboard.png" width=480> Tensorboard also allows you to plot charts (e.g. learning curves), record images & audio ~~and play flash games~~. This is useful when monitoring learning progress and catching some training issues. One researcher said: ``` If you've spent the last four hours of your worktime watching your algorithm print numbers and draw figures, you're probably doing deep learning wrong.
``` You can read more on tensorboard usage [here](https://www.tensorflow.org/get_started/graph_viz) # Do It Yourself __[2 points max]__ ``` # Quest #1 - implement a function that computes a mean squared error of two input vectors # Your function has to take 2 vectors and return a single number <student.define_inputs_and_transformations()> mse = <student.define_transformation()> compute_mse = lambda vector1, vector2: <how to run your graph?> # Tests from sklearn.metrics import mean_squared_error for n in [1,5,10,10**3]: elems = [np.arange(n),np.arange(n,0,-1), np.zeros(n), np.ones(n),np.random.random(n),np.random.randint(100,size=n)] for el in elems: for el_2 in elems: true_mse = np.array(mean_squared_error(el,el_2)) my_mse = compute_mse(el,el_2) if not np.allclose(true_mse,my_mse): print('Wrong result:') print('mse(%s,%s)' % (el,el_2)) print("should be: %f, but your function returned %f" % (true_mse,my_mse)) raise ValueError("Something is wrong") print("All tests passed") ``` # variables The inputs and transformations have no value outside a function call. This isn't too convenient if you want your model to have parameters (e.g. network weights) that are always present, but can change their value over time. Tensorflow solves this with `tf.Variable` objects.
* You can assign a variable a value at any time in your graph * Unlike placeholders, there's no need to explicitly pass values to variables when `s.run(...)`-ing * You can use variables the same way you use transformations ``` #creating shared variable shared_vector_1 = tf.Variable(initial_value=np.ones(5)) #initialize variable(s) with initial values s.run(tf.global_variables_initializer()) #evaluating shared variable (outside symbolic graph) print("initial value", s.run(shared_vector_1)) # within the symbolic graph you use them just like any other input or transformation, no "get value" needed #setting new value s.run(shared_vector_1.assign(np.arange(5))) #getting that new value print("new value", s.run(shared_vector_1)) ``` # tf.gradients - why graphs matter * Tensorflow can compute derivatives and gradients automatically using the computation graph * Gradients are computed as a product of elementary derivatives via chain rule: $$ {\partial f(g(x)) \over \partial x} = {\partial f(g(x)) \over \partial g(x)}\cdot {\partial g(x) \over \partial x} $$ It can get you the derivative of any graph as long as it knows how to differentiate elementary operations ``` my_scalar = tf.placeholder('float32') scalar_squared = my_scalar**2 #a derivative of scalar_squared by my_scalar derivative = tf.gradients(scalar_squared, my_scalar)[0] import matplotlib.pyplot as plt %matplotlib inline x = np.linspace(-3,3) x_squared, x_squared_der = s.run([scalar_squared,derivative], {my_scalar:x}) plt.plot(x, x_squared,label="x^2") plt.plot(x, x_squared_der, label="derivative") plt.legend(); ``` # Why that rocks ``` my_vector = tf.placeholder('float32',[None]) #Compute the gradient of the next weird function over my_scalar and my_vector #warning!
Trying to understand the meaning of that function may result in permanent brain damage weird_psychotic_function = tf.reduce_mean((my_vector+my_scalar)**(1+tf.nn.moments(my_vector,[0])[1]) + 1./ tf.atan(my_scalar))/(my_scalar**2 + 1) + 0.01*tf.sin(2*my_scalar**1.5)*(tf.reduce_sum(my_vector)* my_scalar**2)*tf.exp((my_scalar-4)**2)/(1+tf.exp((my_scalar-4)**2))*(1.-(tf.exp(-(my_scalar-4)**2))/(1+tf.exp(-(my_scalar-4)**2)))**2 der_by_scalar = <student.compute_grad_over_scalar()> der_by_vector = <student.compute_grad_over_vector()> #Plotting your derivative scalar_space = np.linspace(1, 7, 100) y = [s.run(weird_psychotic_function, {my_scalar:x, my_vector:[1, 2, 3]}) for x in scalar_space] plt.plot(scalar_space, y, label='function') y_der_by_scalar = [s.run(der_by_scalar, {my_scalar:x, my_vector:[1, 2, 3]}) for x in scalar_space] plt.plot(scalar_space, y_der_by_scalar, label='derivative') plt.grid() plt.legend(); ``` # Almost done - optimizers While you can perform gradient descent by hand with automatic grads from above, tensorflow also has some optimization methods implemented for you. Recall momentum & rmsprop? 
``` y_guess = tf.Variable(np.zeros(2,dtype='float32')) y_true = tf.range(1,3,dtype='float32') loss = tf.reduce_mean((y_guess - y_true + tf.random_normal([2]))**2) optimizer = tf.train.MomentumOptimizer(0.01,0.9).minimize(loss,var_list=[y_guess]) #same, but more detailed: #updates = [[tf.gradients(loss,y_guess)[0], y_guess]] #optimizer = tf.train.MomentumOptimizer(0.01,0.9).apply_gradients(updates) from IPython.display import clear_output s.run(tf.global_variables_initializer()) guesses = [s.run(y_guess)] for _ in range(100): s.run(optimizer) guesses.append(s.run(y_guess)) clear_output(True) plt.plot(*zip(*guesses),marker='.') plt.scatter(*s.run(y_true),c='red') plt.show() ``` # Logistic regression example Implement the regular logistic regression training algorithm Tips: * Use a shared variable for weights * X and y are potential inputs * Compile 2 functions: * `train_function(X, y)` - returns error and computes weights' new values __(through updates)__ * `predict_fun(X)` - just computes probabilities ("y") given data We shall train on a two-class MNIST dataset * please note that the targets `y` are `{0,1}` and not `{-1,1}` as in some formulae ``` from sklearn.datasets import load_digits mnist = load_digits(n_class=2) X,y = mnist.data, mnist.target print("y [shape - %s]:" % (str(y.shape)), y[:10]) print("X [shape - %s]:" % (str(X.shape))) print('X:\n',X[:3,:10]) print('y:\n',y[:10]) plt.imshow(X[0].reshape([8,8])) # inputs and shareds weights = <student.code_variable()> input_X = <student.code_placeholder()> input_y = <student.code_placeholder()> predicted_y = <predicted probabilities for input_X> loss = <logistic loss (scalar, mean over sample)> optimizer = <optimizer that minimizes loss> train_function = <compile function that takes X and y, returns log loss and updates weights> predict_function = <compile function that takes X and computes probabilities of y> from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y) from
sklearn.metrics import roc_auc_score for i in range(5): <run optimizer operation> loss_i = <compute loss at iteration i> print("loss at iter %i:%.4f" % (i, loss_i)) print("train auc:",roc_auc_score(y_train, predict_function(X_train))) print("test auc:",roc_auc_score(y_test, predict_function(X_test))) print ("resulting weights:") plt.imshow(s.run(weights).reshape(8, -1)) plt.colorbar(); ``` # Bonus: my1stNN Your ultimate task for this week is to build your first neural network [almost] from scratch in pure tensorflow. This time you will tackle the same digit recognition problem, but at a larger scale * images are now 28x28 * 10 different digits * 50k samples Note that you are not required to build 152-layer monsters here. A 2-layer (one hidden, one output) NN should already give you an edge over logistic regression. __[bonus score]__ If you've already beaten logistic regression with a two-layer net, but enthusiasm still ain't gone, you can try improving the test accuracy even further! The milestones would be 95%/97.5%/98.5% accuracy on the test set. __SPOILER!__ At the end of the notebook you will find a few tips and frequently made mistakes. If you feel mighty enough to shoot yourself in the foot without external assistance, we encourage you to do so, but if you encounter any unsurpassable issues, please do look there before mailing us. ``` from mnist import load_dataset #[down]loading the original MNIST dataset. #Please note that you should only train your NN on _train sample, # _val can be used to evaluate out-of-sample error, compare models or perform early-stopping # _test should be hidden under a rock until final evaluation... But we both know it is near impossible to catch you evaluating on it.
X_train,y_train,X_val,y_val,X_test,y_test = load_dataset() print (X_train.shape,y_train.shape) plt.imshow(X_train[0,0]) <here you could just as well create the computation graph> <this may or may not be a good place to define loss and optimizer> <this may be a perfect cell to write a training&evaluation loop in> <predict & evaluate on test here, right? No cheating pls.> # SPOILERS! Recommended pipeline * Adapt logistic regression from the previous assignment to classify some number against the others (e.g. zero vs nonzero) * Generalize it to multiclass logistic regression. - Either try to remember lecture 0 or google it. - Instead of a weight vector you'll have to use a matrix (feature_id x class_id) - softmax (exp over sum of exps) can be implemented manually or as tf.nn.softmax (stable) - probably better to use STOCHASTIC gradient descent (minibatch) - in which case the sample should probably be shuffled (or use random subsamples on each iteration) * Add a hidden layer. Now your logistic regression uses hidden neurons instead of inputs. - The hidden layer uses the same math as the output layer (ex-logistic regression), but uses some nonlinearity (sigmoid) instead of softmax - You need to train both layers, not just the output layer :) - Do not initialize layers with zeros (due to symmetry effects). Gaussian noise with a small sigma will do. - 50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve. - In the ideal case this totals to 2 .dot's, 1 softmax and 1 sigmoid - __make sure this neural network works better than logistic regression__ * Now's the time to try improving the network. Consider layers (size, neuron count), nonlinearities, optimization methods, initialization - whatever you want, but please avoid convolutions for now.
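The recommended pipeline above boils down to two matrix products with a sigmoid in between and a softmax on top. As a reference for the math only (the tensorflow version is left as the exercise), here is the forward pass in numpy, with made-up shapes and random weights:

```python
import numpy as np

rng = np.random.RandomState(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(logits):
    # subtract the row-wise max for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(shifted)
    return e / e.sum(axis=1, keepdims=True)

# toy shapes: batch of 4 flattened 28x28 images, 50 hidden units, 10 classes
X = rng.rand(4, 784).astype('float32')
W1 = rng.normal(scale=0.01, size=(784, 50))   # small Gaussian init, not zeros
b1 = np.zeros(50)
W2 = rng.normal(scale=0.01, size=(50, 10))
b2 = np.zeros(10)

hidden = sigmoid(X.dot(W1) + b1)        # first .dot + sigmoid
probs = softmax(hidden.dot(W2) + b2)    # second .dot + softmax

print(probs.shape)  # (4, 10); each row sums to 1
```

These are exactly the "2 .dot's, 1 softmax and 1 sigmoid" the spoiler mentions; training the weights is what the tensorflow graph is for.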
# VAE for MNIST clustering and generation The goal of this notebook is to explore some recent work on variational auto-encoders (VAEs). We will use the MNIST dataset and a basic VAE architecture. ``` import os import torch import torch.nn as nn import torch.nn.functional as F import torchvision from torchvision import transforms from torchvision.utils import save_image import numpy as np import matplotlib.pyplot as plt %matplotlib inline from sklearn.metrics.cluster import normalized_mutual_info_score def show(img): npimg = img.numpy() plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest') # Device configuration device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # Create a directory if it does not exist sample_dir = 'samples' if not os.path.exists(sample_dir): os.makedirs(sample_dir) batch_size = 128 #to be modified data_dir = '/home/mlelarge/data' # MNIST dataset dataset = torchvision.datasets.MNIST(root=data_dir, train=True, transform=transforms.ToTensor(), download=True) # Data loader data_loader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True) test_loader = torch.utils.data.DataLoader( torchvision.datasets.MNIST(data_dir, train=False, download=True, transform=transforms.ToTensor()), batch_size=10, shuffle=False) ``` # Variational Autoencoders Consider a latent variable model with a data variable $x\in \mathcal{X}$ and a latent variable $z\in \mathcal{Z}$, $p(z,x) = p(z)p_\theta(x|z)$. Given the data $x_1,\dots, x_n$, we want to train the model by maximizing the marginal log-likelihood: \begin{eqnarray*} \mathcal{L} = \mathbf{E}_{p_d(x)}\left[\log p_\theta(x)\right]=\mathbf{E}_{p_d(x)}\left[\log \int_{\mathcal{Z}}p_{\theta}(x|z)p(z)dz\right], \end{eqnarray*} where $p_d$ denotes the empirical distribution of $X$: $p_d(x) =\frac{1}{n}\sum_{i=1}^n \delta_{x_i}(x)$.
To avoid the (often) difficult computation of the integral above, the idea behind variational methods is to instead maximize a lower bound on the log-likelihood: \begin{eqnarray*} \mathcal{L} \geq L(p_\theta(x|z),q(z|x)) =\mathbf{E}_{p_d(x)}\left[\mathbf{E}_{q(z|x)}\left[\log p_\theta(x|z)\right]-\mathrm{KL}\left( q(z|x)||p(z)\right)\right]. \end{eqnarray*} Any choice of $q(z|x)$ gives a valid lower bound. Variational autoencoders replace the variational posterior $q(z|x)$ by an inference network $q_{\phi}(z|x)$ that is trained together with $p_{\theta}(x|z)$ to jointly maximize $L(p_\theta,q_\phi)$. The variational posterior $q_{\phi}(z|x)$ is also called the encoder and the generative model $p_{\theta}(x|z)$, the decoder or generator. The first term $\mathbf{E}_{q(z|x)}\left[\log p_\theta(x|z)\right]$ is the negative reconstruction error. Indeed, under a Gaussian assumption, i.e. $p_{\theta}(x|z) = \mathcal{N}(\mu_{\theta}(z), 1)$, the term $\log p_\theta(x|z)$ reduces, up to an additive constant, to $-\frac{1}{2}\|x-\mu_\theta(z)\|^2$, which is often used in practice. The term $\mathrm{KL}\left( q(z|x)||p(z)\right)$ can be seen as a regularization term, where the variational posterior $q_\phi(z|x)$ should be matched to the prior $p(z)= \mathcal{N}(0,1)$. Variational Autoencoders were introduced by [Kingma and Welling](https://arxiv.org/abs/1312.6114); see also [Doersch](https://arxiv.org/abs/1606.05908) for a tutorial. There are various examples of VAEs in pytorch available [here](https://github.com/pytorch/examples/tree/master/vae) or [here](https://github.com/yunjey/pytorch-tutorial/blob/master/tutorials/03-advanced/variational_autoencoder/main.py#L38-L65). The code below is taken from this last source.
``` # Hyper-parameters image_size = 784 h_dim = 400 z_dim = 20 num_epochs = 15 learning_rate = 1e-3 # VAE model class VAE(nn.Module): def __init__(self, image_size=784, h_dim=400, z_dim=20): super(VAE, self).__init__() self.fc1 = nn.Linear(image_size, h_dim) self.fc2 = nn.Linear(h_dim, z_dim) self.fc3 = nn.Linear(h_dim, z_dim) self.fc4 = nn.Linear(z_dim, h_dim) self.fc5 = nn.Linear(h_dim, image_size) def encode(self, x): h = F.relu(self.fc1(x)) return self.fc2(h), self.fc3(h) def reparameterize(self, mu, log_var): std = torch.exp(log_var/2) eps = torch.randn_like(std) return mu + eps * std def decode(self, z): h = F.relu(self.fc4(z)) return torch.sigmoid(self.fc5(h)) def forward(self, x): mu, log_var = self.encode(x) z = self.reparameterize(mu, log_var) x_reconst = self.decode(z) return x_reconst, mu, log_var model = VAE().to(device) optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate) ``` Here, for the reconstruction loss, we take BCE instead of MSE. The code below is still from the pytorch tutorial (with minor modifications to avoid warnings!). ``` # Start training for epoch in range(num_epochs): for i, (x, _) in enumerate(data_loader): # Forward pass x = x.to(device).view(-1, image_size) x_reconst, mu, log_var = model(x) # Compute reconstruction loss and kl divergence # For KL divergence, see Appendix B in VAE paper reconst_loss = F.binary_cross_entropy(x_reconst, x, reduction='sum') kl_div = - 0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp()) # Backprop and optimize loss = reconst_loss + kl_div optimizer.zero_grad() loss.backward() optimizer.step() if (i+1) % 10 == 0: print ("Epoch[{}/{}], Step [{}/{}], Reconst Loss: {:.4f}, KL Div: {:.4f}" .format(epoch+1, num_epochs, i+1, len(data_loader), reconst_loss.item()/batch_size, kl_div.item()/batch_size)) ``` Let's see how our network reconstructs our last batch. We display pairs of original digits and their reconstructed versions.
``` mu, _ = model.encode(x) out = model.decode(mu) x_concat = torch.cat([x.view(-1, 1, 28, 28), out.view(-1, 1, 28, 28)], dim=3) out_grid = torchvision.utils.make_grid(x_concat).cpu().data show(out_grid) ``` Let's see now how our network generates new samples. ``` with torch.no_grad(): z = torch.randn(16, z_dim).to(device) out = model.decode(z).view(-1, 1, 28, 28) out_grid = torchvision.utils.make_grid(out).cpu() show(out_grid) ``` Not great, but we did not train our network for long... That being said, we have no control over the generated digits. In the rest of this notebook, we explore ways to generate zeros, ones, twos and so on. As a by-product, we show how our VAE will allow us to do clustering. The main idea is to build what we call a Gumbel VAE as described below. # Gumbel VAE Implement a VAE where you add a categorical variable $c\in \{0,\dots 9\}$ so that your latent variable model is $p(c,z,x) = p(c)p(z)p_{\theta}(x|c,z)$ and your variational posterior is $q_{\phi}(c|x)q_{\phi}(z|x)$ as described in this NIPS [paper](https://arxiv.org/abs/1804.00104). Make minimal modifications to the previous architecture... The idea is that you incorporate a categorical variable in your latent space. You hope that this categorical variable will encode the class of the digit, so that your network can use it for a better reconstruction. Moreover, if things work as planned, you will then be able to generate digits conditionally on the class, i.e. you can choose the class thanks to the latent categorical variable $c$ and then generate digits from this class. As noticed above, sampling random variables while still being able to use backpropagation requires the reparameterization trick, which is easy for Gaussian random variables. For categorical random variables, the reparameterization trick is explained in this [paper](https://arxiv.org/abs/1611.01144).
This is implemented in pytorch thanks to [F.gumbel_softmax](https://pytorch.org/docs/stable/nn.html?highlight=gumbel_softmax#torch.nn.functional.gumbel_softmax) ``` n_classes = 10 class VAE_Gumbel(nn.Module): def __init__(self, image_size=784, h_dim=400, z_dim=20, n_classes = 10): super(VAE_Gumbel, self).__init__() # # your code here # def encode(self, x): # # your code here / use F.log_softmax # def reparameterize(self, mu, log_var): std = torch.exp(log_var/2) eps = torch.randn_like(std) return mu + eps * std def decode(self, z, y_onehot): # # your code here / use torch.cat # def forward(self, x): # # your code here / use F.gumbel_softmax # model_G = VAE_Gumbel().to(device) optimizer = torch.optim.Adam(model_G.parameters(), lr=learning_rate) ``` You need to modify the loss to take into account the categorical random variable with a uniform prior on $\{0,\dots 9\}$; see Appendix A.2 in the NIPS [paper](https://arxiv.org/abs/1804.00104) ``` def train_G(model, data_loader=data_loader,num_epochs=num_epochs, beta = 1., verbose=True): nmi_scores = [] model.train(True) for epoch in range(num_epochs): all_labels = [] all_labels_est = [] for i, (x, labels) in enumerate(data_loader): # Forward pass x = x.to(device).view(-1, image_size) # # your code here # reconst_loss = F.binary_cross_entropy(x_reconst, x, reduction='sum') # # your code here # # Backprop and optimize loss = # your code here optimizer.zero_grad() loss.backward() optimizer.step() if verbose: if (i+1) % 10 == 0: print ("Epoch[{}/{}], Step [{}/{}], Reconst Loss: {:.4f}, KL Div: {:.4f}, Entropy: {:.4f}" .format(epoch+1, num_epochs, i+1, len(data_loader), reconst_loss.item()/batch_size, kl_div.item()/batch_size, H_cat.item()/batch_size)) train_G(model_G,num_epochs=10,verbose=True) x,_ = next(iter(data_loader)) x = x[:24,:,:,:].to(device) out, _, _, log_p = model_G(x.view(-1, image_size)) x_concat = torch.cat([x.view(-1, 1, 28, 28), out.view(-1, 1, 28, 28)], dim=3) out_grid =
torchvision.utils.make_grid(x_concat).cpu().data show(out_grid) ``` This was for reconstruction, but we care more about generation. For each category, we generate 8 samples thanks to the following matrix, so that in the end, each line should contain only one digit. ``` matrix = np.zeros((8,n_classes)) matrix[:,0] = 1 final = matrix[:] for i in range(1,n_classes): final = np.vstack((final,np.roll(matrix,i))) with torch.no_grad(): z = torch.randn(8*n_classes, z_dim).to(device) y_onehot = torch.tensor(final).type(torch.FloatTensor).to(device) out = model_G.decode(z,y_onehot).view(-1, 1, 28, 28) out_grid = torchvision.utils.make_grid(out).cpu() show(out_grid) ``` It does not look like our original idea is working... To check that our network is not using the categorical variable, we can track the [normalized mutual information](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html) between the true labels and the labels 'predicted' by our network (just by taking the category with maximal probability). Change your training loop to return the normalized mutual information (NMI) for each epoch. Plot the curve to check that the NMI is actually decreasing. In order to force our network to use the categorical variable, we will change the loss following this ICLR [paper](https://openreview.net/forum?id=Sy2fzU9gl). Implement this change in the training loop and plot the new NMI curve after 10 epochs. For $\beta = 20$, you should see that NMI increases. But reconstruction starts to be bad and generation is still poor. This is explained in this [paper](https://arxiv.org/abs/1804.03599), and a solution is proposed (see Section 5).
Implement the solution described in Section 3, equation (7), of the NIPS [paper](https://arxiv.org/abs/1804.00104) ``` model_G = VAE_Gumbel().to(device) optimizer = torch.optim.Adam(model_G.parameters(), lr=learning_rate) def train_G_modified_loss(model, beta, C_z_fin, C_c_fin, data_loader=data_loader, num_epochs=num_epochs, verbose=True): # # your code here # with torch.no_grad(): z = torch.randn(8*n_classes, z_dim).to(device) y_onehot = torch.tensor(final).type(torch.FloatTensor).to(device) out = model_G.decode(z,y_onehot).view(-1, 1, 28, 28) out_grid = torchvision.utils.make_grid(out).cpu() show(out_grid) i = 1 with torch.no_grad(): plt.plot() z = torch.randn(8, z_dim).to(device) y_onehot = torch.tensor(np.roll(matrix,i)).type(torch.FloatTensor).to(device) out = model_G.decode(z,y_onehot).view(-1, 1, 28, 28) out_grid = torchvision.utils.make_grid(out).cpu() show(out_grid) x,_ = next(iter(data_loader)) x = x[:24,:,:,:].to(device) out, _, _, log_p = model_G(x.view(-1, image_size)) x_concat = torch.cat([x.view(-1, 1, 28, 28), out.view(-1, 1, 28, 28)], dim=3) out_grid = torchvision.utils.make_grid(x_concat).cpu().data show(out_grid) ```
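For intuition about what `F.gumbel_softmax` is doing under the hood, the Gumbel-softmax trick can be sketched in a few lines of numpy: sample standard Gumbel noise, add it to the log-probabilities, divide by a temperature, and take a softmax. This is only an illustration of the trick (the probabilities and temperature here are made up), not the notebook's solution or pytorch's exact implementation:

```python
import numpy as np

rng = np.random.RandomState(0)

def gumbel_softmax_sample(log_probs, tau=0.5):
    # Standard Gumbel noise: -log(-log(U)), with U ~ Uniform(0, 1)
    u = rng.uniform(1e-9, 1.0, size=log_probs.shape)
    gumbel = -np.log(-np.log(u))
    # Softmax of (log p + Gumbel) / tau: a differentiable relaxation
    # of sampling from the categorical distribution p
    y = (log_probs + gumbel) / tau
    y = y - y.max()  # for numerical stability
    e = np.exp(y)
    return e / e.sum()

log_p = np.log(np.array([0.7, 0.2, 0.1]))  # a 3-class categorical
sample = gumbel_softmax_sample(log_p, tau=0.5)
print(round(float(sample.sum()), 6))  # 1.0: the sample lives on the simplex
```

As the temperature `tau` goes to 0, the samples approach one-hot vectors; larger `tau` gives smoother, more uniform samples — the trade-off the annealing schedule in the papers above is about.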
# Data Science Ex 00 - Preparation 23.02.2021, Lukas Kretschmar (lukas.kretschmar@ost.ch) ## Let's have some Fun with Data Science! Welcome to Data Science. We will use an interactive environment where you can mix text and code, with the awesome feature that you can execute the code. ## Pre-Installation We will work with Anaconda. You can download the package at the following location: https://www.anaconda.com/products/individual Install the distribution for your operating system that uses **Python 3.8**. **Note:** Anaconda needs up to **6 GB** of disk space. So, make sure you have this amount of storage available on your machine. ## Installation Please follow the installation instructions provided under the following link: https://docs.anaconda.com/anaconda/install/ If you don't want to install Anaconda just for your user profile, you can use the instructions under https://docs.anaconda.com/anaconda/install/multi-user/ ### Known Issues #### Non-ASCII characters in user name (e.g., ä, ö, ü, etc.) We have encountered problems with students having non-ASCII characters in their installation path (e.g., ä, ö, ü, é, etc.). This might be a problem for you if you use the default location, which points to your user profile (e.g., C:\Users\\[your user name]). Please choose a location that only contains ASCII characters. **Solution:** - Choose a location that contains only ASCII characters - Install Anaconda for multiple users (https://docs.anaconda.com/anaconda/install/multi-user/) If you've installed Anaconda to an unsuitable location anyway, there is a simple workaround. In this case you have to change the default security settings on your notebook server and open the website by hand every time (or find the URL your notebook server is hosting). You'll find the instructions at the end of this document. ## Post-Installation ### Update After the installation is complete, you should also run an update to ensure that all packages are up-to-date.
To do so, open an **Anaconda Prompt with elevated privileges (administrator rights)** and enter the following ``` conda update --all ``` ### Configuration Jupyter Notebooks opens the file browser in a specific directory. By default, it's your *My Documents* folder. You can change the starting location to a different path by editing the configuration. So, the [following](https://stackoverflow.com/questions/35254852/how-to-change-the-jupyter-start-up-folder) steps are only necessary if you want Jupyter Notebooks to start from a specific location. Open an **Anaconda Prompt** and enter the following ``` jupyter notebook --generate-config ``` This command will generate a configuration for your Jupyter installation at *C:\Users\yourusername\\.jupyter\jupyter_notebook_config.py* (for the nerds of you - yeah, it's a python code file). On a Mac, the file is probably in a similar location. Open the file with a text editor and search for the line ``` python #c.NotebookApp.notebook_dir = '' ``` Remove the \# at the beginning (this is the character for code comments) and enter the path where you want Jupyter to start by default. Use / within your path and not \\ (even though \\ is common on Windows systems); otherwise the path might not work. Your entry should now look like ``` python c.NotebookApp.notebook_dir = 'path/to/your/folder' ``` ### Change the security settings **PLEASE NOTE: This step is only necessary if your notebooks won't start properly (e.g., installation at a location with non-ASCII characters).** If your Jupyter Lab or Jupyter Notebook starts, you do not need to change the security settings. Within the configuration, you'll find the following line ``` python # c.NotebookApp.token = '<generated>' ``` By default, a new token is generated every time you start a new server. Now, you can either set the token to a fixed value, like ``` python c.NotebookApp.token = 'ffed3a68-f5b2-47a3-bb11-df8711c5aab3' ``` *Note: This is just an example.
You can choose your own token value.* or to none (security is disabled) ``` python c.NotebookApp.token = '' ``` In the first case, your server will always run at - **JupyterLab:** http://localhost:8888/lab?token=ffed3a68-f5b2-47a3-bb11-df8711c5aab3 - **Jupyter Notebook:** http://localhost:8888/tree?token=ffed3a68-f5b2-47a3-bb11-df8711c5aab3 In the second case, your server will always run at - **JupyterLab:** http://localhost:8888/lab - **Jupyter Notebook:** http://localhost:8888/tree Please note: The port (8888) might be incremented by 1 if 8888 is already blocked. Thus, if http://localhost:8888/lab is already used, the next server will be hosted at http://localhost:8889/lab ### Run Anaconda Check that your installation is running by starting **Anaconda**. You should be able to get to the following screen. <img src="./AnacondaNavigator.png" style="height:600px" /> And then try to start either **JupyterLab** or **Jupyter Notebook**. Both tools will open a new browser tab.
<h1>Table of Contents<span class="tocSkip"></span></h1> <div class="toc"><ul class="toc-item"><li><span><a href="#My-ideia:" data-toc-modified-id="My-ideia:-1">My ideia:</a></span><ul class="toc-item"><li><span><a href="#1" data-toc-modified-id="1-1.1">1</a></span></li><li><span><a href="#2" data-toc-modified-id="2-1.2">2</a></span></li><li><span><a href="#1" data-toc-modified-id="1-1.3">1</a></span></li></ul></li><li><span><a href="#Train-model" data-toc-modified-id="Train-model-2">Train model</a></span><ul class="toc-item"><li><span><a href="#--First-normal-xgb" data-toc-modified-id="--First-normal-xgb-2.1">- First normal xgb</a></span></li></ul></li></ul></div> ``` import matplotlib.pyplot as plt import seaborn as sns sns.set() import numpy as np import pandas as pd pd.set_option('display.max_columns', 50) import sys from utils import read_data, process_time, merge_data infos, items, orders = read_data(data_dir='../../main/datasets/') infos.head(20) items.head(20) print(orders.shape) orders.head(20) df = merge_data(orders, items, infos) process_time(df) # Modifies in place print(df.shape) df.head(20) # Since data is ordered by time, this works: first_pair_week_item = df.groupby(["itemID"])["group_backwards"].first() first_pair_week_item.head(20) for pair_week in range(13, 0, -1): group = df.query("group_backwards == @pair_week") total = group["order"].sum() total_revenue = (group["salesPrice"]*group["order"]).sum() new_items = first_pair_week_item[first_pair_week_item == pair_week].index new_items_total = group.loc[group["itemID"].isin(new_items), "order"].sum() new_items_revenue = group.loc[group["itemID"].isin(new_items), ["order", "salesPrice"]] new_items_revenue = (new_items_revenue["order"]*new_items_revenue["salesPrice"]).sum() print("group backwards:", pair_week, "\tnumber of new items", len(new_items), # "\n", "\t% of sales new items:", f"{100*new_items_total/total:.2f} %", "\t% of revenue new items:", f"{100*new_items_revenue/total_revenue:.2f} %", 
#"\n\n" ) print("New items in DMC test", len(items)-df["itemID"].nunique()) for pair_week in range(13, 0, -1): group = df.query("group_backwards == @pair_week") total = group["order"].sum() new_items = first_pair_week_item[first_pair_week_item == pair_week].index new_items_counts = group.loc[group["itemID"].isin(new_items)] new_items_counts = new_items_counts.groupby("itemID")["order"].sum().values pl = sns.violinplot(new_items_counts) plt.title(f"pair_week {pair_week}") plt.savefig(f'pair_week_{pair_week}.png', bbox_inches='tight') plt.show() from tabulate import tabulate for pair_week in range(13, 0, -1): group = df.query("group_backwards == @pair_week") total = group["order"].sum() new_items = first_pair_week_item[first_pair_week_item == pair_week].index new_items_counts = group.loc[group["itemID"].isin(new_items)] new_items_counts = new_items_counts.groupby("itemID")["order"].sum() print(f"pair_week {pair_week}") print(tabulate(pd.DataFrame(new_items_counts.describe()))) print("\n\n") ``` Therefore, around 35% of a fortnight sales is NEW ITEMS. That is, items that never appeared in the dataset until now. Aproximately the same thing should happen for DMC (maybe a bit less) --- <br> <br> <br> <br> <br> <br> # My ideia: - Divide into TWO seperate problems: - 1) Predict the amount of items already seen with tradional approach - 2) Predict new items with a model dedicated to new items ONLY for our test fortnight. - This is further broken down into: ## 1 - Use a binary classifier to first find out if a item sold or not (given that it's not completely new) - If it sold, use normal model with good features (Tobias, Sasaki, Dora) - Otherwise, predict 0. We will start from group backwards 11 (first data will be 10, since for 11 all null) ## 2 - Make it work somehow: start off with easy stats. Let's go! 
--- <br> ## 1 ``` def create_dataset(target_week, final_test=False): if final_test: train_items = df.query('(@target_week + 1) <= group_backwards <= 13')["itemID"].unique() full = df.query('group_backwards >= (@target_week)').reset_index(drop = True) else: train_items = df.query('(@target_week + 2) <= group_backwards <= 13')["itemID"].unique() full = df.query('group_backwards >= (@target_week)').reset_index(drop = True) # only return instances where that item has already appeared. full = full.query("itemID in @train_items") return full #full_val = create_dataset(2) full_sub = create_dataset(1, final_test=True) # the call above is the proper way, but it turns out the version below # works well in practice full_sub = create_dataset(0, final_test=True) X_full_val = pd.DataFrame( index=pd.MultiIndex.from_product([ full_sub["itemID"].unique(), range(13, 0, -1) ], names=["itemID", "group_backwards"] )).reset_index() X_full_val = pd.merge(X_full_val, items, left_on="itemID", right_on="itemID", validate="m:1") X_full_val.head() ``` **OBS**: From now on, salesPrice and order in X_train mean the SUM **OBS**: We remove it when defining x/y later on. ``` cols = ["order", "salesPrice"] extra = full_sub.groupby(["group_backwards", "itemID"], as_index=False)[cols].sum() X_full_val = pd.merge(X_full_val, extra, on=["group_backwards", "itemID"], how="left", validate="1:1") # 0 so the features generated will make sense X_full_val.fillna(0, inplace=True) X_full_val.tail() # X_full_val.loc[X_full_val["group_backwards"] == 1, "order"].sum() ``` --- <br> Brute force some features ``` def feat_amount_sold(groupby): return groupby["order"].sum() def feat_value(groupby): return groupby["salesPrice"].sum() def amount_of_transactions(groupby): return groupby.size() func_feats_past_time = { "quantidade_vend_sum_{}": feat_amount_sold, "valor_vend_sum_{}": feat_value, } # These are features generated only from the last 14 days. Currently it's the same, # but who knows. 
func_feats_past_fortnight = { "quantidade_vend_sum_{}": feat_amount_sold, "valor_vend_sum_{}": feat_value, } # TODO #func_feats_other = { # "quantidade_trans_sum": amount_of_transactions, #} def apply_all_item_feats(X, past_time, func_feats): """Only works for data that hasn't been grouped yet, i.e., "orders"-like data. Calculates features that depend on the item ID and its derivatives """ cols = ["itemID", "brand", "manufacturer", "category1", "category2", "category3"] # All these columns depend only on the item, so we can just take the first feats = X.groupby(["itemID"]).first() for col in cols: groupby = X.loc[past_time].groupby(col) for name, func in func_feats.items(): feat = func(groupby) feats = pd.merge(feats, feat.rename(name.format(col)), left_on=col, right_index=True) return feats.drop(columns=X.loc[past_time].columns.drop("itemID"), errors="ignore") # Test the features: temp = X_full_val.query("group_backwards >= 12").index temp_results = apply_all_item_feats(X_full_val, temp, func_feats_past_time) # temp_results.head() temp_results.loc[[450, 108, 10224]] def generate_all(X, stop_week, start_fortnight): new_X = X.copy() new_past_cols = None # Start off with 12 because we don't generate data for the first fortnight due to time lag. for fortnight in range(start_fortnight, stop_week-1, -1): # this is the line that COULD LEAK if we used >= past_time = X["group_backwards"] > fortnight this_fortnight = X["group_backwards"] == fortnight feats = apply_all_item_feats(X, past_time, func_feats_past_time) # if fortnight == stop_week: return feats if new_past_cols is None: new_past_cols = feats.columns for col in new_past_cols: new_X[col] = np.nan # Kind of bad, I know. There is a better way, but I want to sleep. 
merged = pd.merge(X.loc[this_fortnight], feats, on="itemID", validate="m:1", how="left") new_X.loc[this_fortnight, new_past_cols] = merged[new_past_cols].values return new_X stop_week = 1 start_fortnight = 8 #start_fortnight = 5 X_full_val_feats = generate_all(X_full_val, stop_week, start_fortnight) X_full_val_feats = X_full_val_feats.query("group_backwards <= @start_fortnight") X_full_val_feats.head(6) X_full_val_feats.tail() # Since we filled missing orders/sales price with 0, impossible to have NaN X_full_val_feats.isna().sum().sum() ``` # Train model ``` full_train = X_full_val_feats.query("group_backwards > 1") train = X_full_val_feats.query("group_backwards > 2") val = X_full_val_feats.query("group_backwards == 2") sub = X_full_val_feats.query("group_backwards == 1") drop_cols = ["order", "salesPrice"] x_full_train = full_train.drop(columns=drop_cols) x_train = train.drop(columns=drop_cols) x_val = val.drop(columns=drop_cols) x_sub = sub.drop(columns=drop_cols) y_full_train = full_train["order"] y_train = train["order"] y_val = val["order"] y_sub = sub["order"] weights = infos.set_index('itemID')['simulationPrice'].to_dict() w_full_train = full_train['itemID'].map(weights) w_train = train['itemID'].map(weights) w_val = val['itemID'].map(weights) w_sub = sub['itemID'].map(weights) # Create binary targets y_full_train_bin = (y_full_train > 0).astype(int) y_train_bin = (y_train > 0).astype(int) y_val_bin = (y_val > 0).astype(int) y_sub_bin = (y_sub > 0).astype(int) # Not that good, but doable... 
maybe # for idx, (y_temp, name) in enumerate(zip([y_train_bin, y_val_bin, y_sub_bin], # ["train", "val", "sub"]), 1): # plt.subplot(1, 3, idx) # sns.countplot(y_temp) # plt.title(name) ``` <hr> ``` from sklearn.metrics import mean_squared_error as mse def evaluate(prediction, target, simulationPrice): return np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice) ``` ## - First normal xgb ``` import xgboost as xgb xgb.__version__ # custom objective def gradient(prediction, dtrain): y = dtrain.get_label() # prediction.astype(int) # prediction = np.minimum(prediction.astype(int), 1) return -2 * (prediction - np.maximum(prediction - y, 0) * 1.6) * (1 - (prediction > y) * 1.6) def hessian(prediction, dtrain): y = dtrain.get_label() # prediction.prediction(int) # prediction = np.minimum(prediction.astype(int), 1) return -2 * (1 - (prediction > y) * 1.6) ** 2 def objective(prediction, dtrain): w = dtrain.get_weight() grad = gradient(prediction, dtrain) * w hess = hessian(prediction, dtrain) * w return grad, hess # custom feval def feval(prediction, dtrain): prediction = prediction.astype(int) # predt = np.minimum(predt.astype(int), 1) target = dtrain.get_label() simulationPrice = dtrain.get_weight() return 'feval', np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulationPrice) dtrain = xgb.DMatrix(x_train, y_train, w_train, missing=0) dval = xgb.DMatrix(x_val, y_val, w_val, missing=0) dsub = xgb.DMatrix(x_sub, y_sub, w_sub, missing=0) dfulltrain = xgb.DMatrix(x_full_train, y_full_train, w_full_train, missing=0) # specify parameters via map param = { 'max_depth':10, 'eta':0.005, 'objective':'reg:squarederror', 'disable_default_eval_metric': 1, # 'tree_method' : 'gpu_hist', } num_round = 100 bst = xgb.train(param, dtrain, num_round, early_stopping_rounds = 30, evals = [(dtrain, 'train'), (dval, 'val')], # obj = objective, feval = feval, maximize = True, ) prediction = bst.predict(dsub, 
ntree_limit=bst.best_ntree_limit).astype(int) evaluate(prediction, y_sub, w_sub) # retrain! bst_sub = xgb.train(param, dfulltrain, num_boost_round = bst.best_ntree_limit, early_stopping_rounds=30, # obj = objective, feval = feval, maximize = True, evals = [(dfulltrain, 'ftrain')], verbose_eval = False, ) prediction = bst_sub.predict(dsub, ntree_limit=bst_sub.best_ntree_limit).astype(int) evaluate(prediction, y_sub, w_sub) ```
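As a sanity check, the profit-style metric used in `evaluate` above can be traced by hand on a tiny example. It is re-implemented here in plain Python (loop instead of numpy) so the snippet is self-contained; the constants and logic mirror the notebook's definition:

```python
# Self-contained copy of the notebook's profit metric, written in plain
# Python so it can be traced by hand: each unit sold earns its
# simulation price; each over-predicted unit incurs a 60% penalty.
def evaluate(prediction, target, simulation_price):
    total = 0.0
    for pred, tgt, price in zip(prediction, target, simulation_price):
        over = max(pred - tgt, 0)              # units predicted but not sold
        total += (pred - over * 1.6) * price   # revenue minus over-stock penalty
    return total

# item 1: (3 - 0*1.6)*2.0 = 6.0 ; item 2: (5 - 3*1.6)*1.0 = 0.2
print(evaluate([3, 5], [3, 2], [2.0, 1.0]))  # ≈ 6.2
```

Note how over-prediction is punished more than under-prediction: predicting 5 when only 2 sold nearly wipes out that item's contribution, which is exactly what the custom XGBoost objective above tries to optimize.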
``` import sys !{sys.executable} -m pip install numpy pandas matplotlib scikit-learn seaborn ``` ### 1 - Step **1.** Looking for missing values. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt #%matplotlib inline %matplotlib notebook df = pd.read_csv('./winequality-red.csv') df_missing_values = df.isna().sum().sum() print("Number of missing values: {}".format(df_missing_values)) print("Size: {} rows and {} features".format(df.shape[0], df.shape[1])) df.describe() ``` #### Dataframe info - The dataframe has 1599 records - Data columns: total 12 columns - There are no missing values - There are no categorical features ### 2 - Step **2.** Correlation between the features ``` import seaborn as sns %matplotlib inline plt.subplots(figsize=(10, 8)) sns.heatmap(df.corr(), annot=True, fmt=".3f", linewidths=.5); # Normalize the dataframe to return proportions rather than frequencies def normalize_data(dataframe, column, ascending): ''' Parameters ---------- dataframe: DataFrame Data structure with labeled axes column: str Column label to use for normalization ascending: bool Sort ascending (True) or descending (False) Returns ------- dataframe: DataFrame Dataframe normalized with proportions according to the column ''' return dataframe[column].value_counts(normalize=True).sort_values(ascending=ascending).to_frame() plt.figure(figsize=(15,6)) quality_count = normalize_data(df, 'quality', False) quality_count = quality_count.rename(columns={'quality':'Percentage'}) ax = sns.barplot(x=quality_count.index, y='Percentage', data=quality_count, palette="pastel") # Annotate the point xy with the number formatted as a percentage # For more details see https://matplotlib.org/3.3.3/api/_as_gen/matplotlib.axes.Axes.annotate.html#matplotlib.axes.Axes.annotate for p in ax.patches: ax.annotate('{:.1f} %'.format((p.get_height() * 100)), (p.get_x() + p.get_width() / 2., p.get_height()), ha='center', va='center', xytext = (0,9), textcoords='offset 
points') plt.xlabel("Quality", fontsize=15) plt.ylabel("Percentage", fontsize=15) plt.title("Chart of Quality", fontsize=20) plt.show() ``` ### 3 - Step **3.** Transform into percentage values ``` def convert_value_into_percentage(fraction_number): ''' Parameters ---------- fraction_number: float Number in decimal form Returns ------- float The calculated percentage ''' return fraction_number * 100 quality_count = normalize_data(df, 'quality', True) # sorted ascending so it looks better on the pie chart quality_count = quality_count.rename(columns={'quality':'Percentage'}).sort_values(by='Percentage').reindex([3, 5, 4, 7, 8, 6]) # Apply the function to transform quality_count['Percentage'] = quality_count['Percentage'].apply(convert_value_into_percentage) # Building a plot fig1, ax1 = plt.subplots(figsize=(6, 6)) wedges, texts, autotexts = ax1.pie(quality_count['Percentage'], autopct='%1.1f%%', startangle=0, textprops=dict(color="w")) ax1.legend(wedges, quality_count.index, title="Quality level", loc="center left", bbox_to_anchor=(1, 0, 0, 1)) plt.setp(autotexts, size=10, weight="bold") plt.show() ``` ### 4 - Step **4.** Looking at the alcohol distribution using a box plot ``` f, ax = plt.subplots(1, 1, figsize=(12, 6)) sns.despine(left=True) sns.boxplot(x=df['alcohol']) #sns.boxplot(x=df['citric acid'], ax=ax[1]) #sns.boxplot(x=df['sulphates'], ax=ax[2]) plt.show() ``` ### 5 - Step **5.** Looking at the alcohol distribution based on quality ``` sns.catplot(x="quality", y="alcohol", data=df) plt.show() ```
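The `normalize=True` behaviour that `normalize_data` relies on can be illustrated without pandas: proportions are simply counts divided by the total, and they always sum to 1. The helper `value_proportions` and the toy data are illustrative, not from the wine notebook:

```python
from collections import Counter

def value_proportions(values):
    """Mimic pandas' value_counts(normalize=True): share of each distinct value."""
    counts = Counter(values)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

quality = [5, 5, 6, 6, 6, 7, 5, 6]  # toy stand-in for df['quality']
props = value_proportions(quality)
print(props)  # {5: 0.375, 6: 0.5, 7: 0.125}
assert abs(sum(props.values()) - 1.0) < 1e-12  # proportions sum to 1
```

Multiplying these proportions by 100, as `convert_value_into_percentage` does above, turns them into the percentages shown in the pie chart.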
<h1 align="center">Diabetes Data Analysis</h1> All information regarding the features and dataset can be found in this research arcticle: Impact of HbA1c Measurement on Hospital Readmission Rates: Analysis of 70,000 Clinical Database Patient Records In this project we want to know how different features affect diabetes in general. For this kernel, we will be using a diabetes readmission dataset to explore the different frameworks for model explainability Machine learning models that can be used in the medical field should be interpretable. Humans should know why these models decided on a conclusion. The problem is the more complex an ML model gets the less interpretable it gets. In this kernel we will examine techniques and frameworks in interpreting ML models. ``` #import libraries import numpy as np import pandas as pd import matplotlib.pyplot as plt %matplotlib inline # read the file and create a pandas dataframe data = pd.read_csv('dataset/diabetic_data.csv') # check the dimensions of the data data.shape # first 5 rows of data data.head() ``` we get discriptive statistics of numerical variable ``` #discribtion of numerical data data[['time_in_hospital','num_lab_procedures','num_procedures','num_medications', 'number_outpatient','number_emergency','number_inpatient', 'number_diagnoses']].describe() #no of unique patient len(np.unique(data['patient_nbr'])) ``` + Remove duplicate recod based on patient_nbr column ``` #remove duplicate patient recods data = data.drop_duplicates(subset = 'patient_nbr', keep = 'first') ``` + Plot some column data ``` # the response variable 'readmitted' in the original dataset contains three categories. 
# 11% of patients were readmitted within 30 days (<30) # 35% of patients were readmitted after 30 days (>30) # 54% of patients were never readmitted (NO) data.groupby('readmitted').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Readmitted Data") plt.show() data['readmitted'] = pd.Series([0 if val in ['NO', '>30'] else val for val in data['readmitted']], index=data.index) data['readmitted'] = pd.Series([1 if val in ['<30'] else val for val in data['readmitted']], index=data.index) # value counts of readmitted column data.readmitted.value_counts() # after recoding, 'readmitted' is binary: # 1 = readmitted within 30 days, 0 = otherwise data.groupby('readmitted').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Readmitted Data") plt.show() ``` + Drop unnecessary columns from the data ``` # remove irrelevant features data.drop(['encounter_id','patient_nbr', 'weight', 'payer_code','max_glu_serum','A1Cresult'], axis=1, inplace=True) # remove rows that have NA in 'race', 'diag_1', 'diag_2', or 'diag_3' and 'gender' # remove rows that have invalid values in 'gender' data = data[data['race'] != '?'] data = data[data['diag_1'] != '?'] data = data[data['diag_2'] != '?'] data = data[data['diag_3'] != '?'] data = data[data['gender'] != 'Unknown/Invalid'] # check 'age' feature data.groupby('age').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Age") plt.show() ``` > From the graph above we see that the 60 to 100 age groups are large, so we make a single group for them ``` # Recategorize 'age' so that the population is more evenly distributed data['age'] = pd.Series(['[20-60)' if val in ['[20-30)', '[30-40)', '[40-50)', '[50-60)'] else val for val in data['age']], index=data.index) data['age'] = pd.Series(['[60-100)' if val in ['[60-70)','[70-80)','[80-90)', '[90-100)'] else 
val for val in data['age']], index=data.index) data.groupby('age').size().plot(kind='bar') plt.ylabel('Count') plt.title("Distribution of Age") plt.show() data.groupby('number_outpatient').size().plot(kind='bar') plt.ylabel('Count') plt.title("number_outpatient v/s Count ") plt.show() data.groupby('number_emergency').size().plot(kind='bar') plt.title("number_emergency v/s Count") plt.ylabel('Count') plt.show() data.groupby('number_inpatient').size().plot(kind='bar') plt.title("number_inpatient v/s Count") plt.ylabel('Count') plt.show() # remove the other medications data.drop(['metformin', 'repaglinide', 'nateglinide', 'chlorpropamide', 'glimepiride', 'acetohexamide', 'glipizide', 'glyburide', 'tolbutamide', 'pioglitazone', 'rosiglitazone', 'acarbose', 'miglitol', 'troglitazone', 'tolazamide', 'examide', 'citoglipton', 'glyburide-metformin', 'glipizide-metformin', 'glimepiride-pioglitazone', 'metformin-rosiglitazone', 'metformin-pioglitazone','insulin'], axis=1, inplace=True) # Recategorize 'discharge_disposition_id' into broader groups data['discharge_disposition_id'] = pd.Series(['Home' if val in [1] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['Another' if val in [2,3,4,5,6] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['Expired' if val in [11,19,20,21] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['NaN' if val in [18,25,26] else val for val in data['discharge_disposition_id']], index=data.index) data['discharge_disposition_id'] = pd.Series(['other' if val in [7,8,9,10,12,13,14,15,16,17,22,23,24,27,28,29,30] else val for val in data['discharge_disposition_id']], index=data.index) # original 'admission_source_id' contains 25 levels # reduce 'admission_source_id' into 3 categories data['admission_source_id'] = pd.Series(['Emergency Room' 
if val == 7 else 'Referral' if val in [1,2,3] else 'NaN' if val in [15,17,20,21] else 'Other source' for val in data['admission_source_id']], index=data.index) # original 'admission_type_id' contains 8 levels # reduce 'admission_type_id' into 2 categories data['admission_type_id'] = pd.Series(['Emergency' if val == 1 else 'Other type' for val in data['admission_type_id']], index=data.index) # Extract codes related to heart disease data = data.loc[data['diag_1'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430']) | data['diag_2'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430']) | data['diag_3'].isin(['410','411','412','413','414','415','420','421','422','423','424','425','426','427','428','429','430'])] data.shape import random # create a synthetic 'emergency_visits' variable with random values data['emergency_visits'] = [random.randint(0, 5) for _ in range(26703)] # create a synthetic 'acuity_of_admission' variable with random values data['acuity_of_admission'] = [random.randint(1, 5) for _ in range(26703)] # create a synthetic 'comorbidity' variable with random values data['comorbidity'] = [random.randint(1, 15) for _ in range(26703)] categorical_columns=["age","race","gender","medical_specialty","change","diabetesMed","discharge_disposition_id","admission_source_id","admission_type_id","diag_1","diag_2","diag_3"] dtypes = {c: 'category' for c in categorical_columns} data=data.astype(dtypes) # convert categorical variables into category codes for i in categorical_columns: data[i]=data[i].cat.codes # apply square root transformation on right-skewed count data to reduce the effects of extreme values. # here a log transformation is not appropriate because the data is Poisson distributed and contains many zero values. 
data['number_outpatient'] = data['number_outpatient'].apply(lambda x: np.sqrt(x + 0.5)) data['number_emergency'] = data['number_emergency'].apply(lambda x: np.sqrt(x + 0.5)) data['number_inpatient'] = data['number_inpatient'].apply(lambda x: np.sqrt(x + 0.5)) # feature scaling, features are standardized to have zero mean and unit variance feature_scale_cols = ['time_in_hospital', 'num_lab_procedures', 'num_procedures', 'num_medications', 'number_diagnoses', 'number_inpatient', 'number_emergency', 'number_outpatient'] from sklearn import preprocessing scaler = preprocessing.StandardScaler().fit(data[feature_scale_cols]) data_scaler = scaler.transform(data[feature_scale_cols]) data_scaler_df = pd.DataFrame(data=data_scaler, columns=feature_scale_cols, index=data.index) data.drop(feature_scale_cols, axis=1, inplace=True) data = pd.concat([data, data_scaler_df], axis=1) # create X (features) and y (response) X = data.drop(['readmitted'], axis=1) y = data['readmitted'] y.value_counts() ``` #### Find Top Features in data ``` # split X and y into cross-validation (75%) and testing (25%) data sets from sklearn.model_selection import train_test_split X_cv, X_test, y_cv, y_test = train_test_split(X, y, test_size=0.25) # fit Random Forest model to the cross-validation data from sklearn.ensemble import RandomForestClassifier forest = RandomForestClassifier() forest.fit(X_cv, y_cv) importances = forest.feature_importances_ # make importance relative to the max importance feature_importance = 100.0 * (importances / importances.max()) sorted_idx = np.argsort(feature_importance) feature_names = list(X_cv.columns.values) feature_names_sort = [feature_names[indice] for indice in sorted_idx] pos = np.arange(sorted_idx.shape[0]) + .5 print('Top 10 features are: ') for feature in feature_names_sort[::-1][:10]: print(feature) # plot the result plt.figure(figsize=(12, 10)) plt.barh(pos, feature_importance[sorted_idx], align='center') plt.yticks(pos, feature_names_sort) 
plt.title('Relative Feature Importance', fontsize=20) plt.show() ``` ### Use an oversampling method to handle imbalanced data ``` from imblearn.over_sampling import SMOTE from sklearn import metrics from collections import Counter oversample = SMOTE() X, y = oversample.fit_resample(X, y) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3) ``` ### Logistic Regression ``` from sklearn.linear_model import LogisticRegression clf=LogisticRegression(solver='saga') clf.fit(X_train, y_train) print(clf.score(X_test, y_test)) from sklearn.metrics import cohen_kappa_score y_pred=clf.predict(X_test) cohen_kappa_score(y_test, y_pred) # classification report print(classification_report(y_test, y_pred)) ``` ### Random Forest ``` rf = RandomForestClassifier() rf.fit(X_train, y_train) print(rf.score(X_test, y_test)) # cohen kappa score from sklearn.metrics import cohen_kappa_score y_pred=rf.predict(X_test) cohen_kappa_score(y_test, y_pred) # classification report from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) ```
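Cohen's kappa, reported above via `cohen_kappa_score`, corrects raw accuracy for chance agreement using the standard formula κ = (p_o − p_e) / (1 − p_e). The hand-rolled `cohen_kappa` below is a hypothetical helper on toy labels, shown only to make the formula concrete:

```python
def cohen_kappa(y_true, y_pred):
    """Cohen's kappa for discrete labels: agreement corrected for chance."""
    n = len(y_true)
    # observed agreement p_o: fraction of matching labels
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement p_e: probability both label sources agree at random
    expected = sum(
        (y_true.count(c) / n) * (y_pred.count(c) / n)
        for c in set(y_true) | set(y_pred)
    )
    return (observed - expected) / (1 - expected)

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
# p_o = 4/6, p_e = 0.5  ->  kappa = (4/6 - 0.5) / 0.5
print(round(cohen_kappa(y_true, y_pred), 3))  # 0.333
```

This is why kappa is a better fit than plain accuracy for the imbalanced readmission target: a classifier that always predicts the majority class gets high accuracy but a kappa near 0.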
``` import cv2 as cv from scipy.spatial import distance import numpy as np from collections import OrderedDict ``` ##### Object Tracking Class ``` class Tracker: def __init__(self, maxLost = 30): # maxLost: maximum number of frames an object may stay undetected before it is dropped self.nextObjectID = 0 # ID of next object self.objects = OrderedDict() # stores ID:Locations self.lost = OrderedDict() # stores ID:Lost_count self.maxLost = maxLost # maximum number of frames object was not detected. def addObject(self, new_object_location): self.objects[self.nextObjectID] = new_object_location # store new object location self.lost[self.nextObjectID] = 0 # initialize frame_counts for when new object is undetected self.nextObjectID += 1 def removeObject(self, objectID): # remove tracker data after object is lost del self.objects[objectID] del self.lost[objectID] @staticmethod def getLocation(bounding_box): xlt, ylt, xrb, yrb = bounding_box return (int((xlt + xrb) / 2.0), int((ylt + yrb) / 2.0)) def update(self, detections): if len(detections) == 0: # if no object detected in the frame lost_ids = list(self.lost.keys()) for objectID in lost_ids: self.lost[objectID] += 1 if self.lost[objectID] > self.maxLost: self.removeObject(objectID) return self.objects new_object_locations = np.zeros((len(detections), 2), dtype="int") # current object locations for (i, detection) in enumerate(detections): new_object_locations[i] = self.getLocation(detection) if len(self.objects)==0: for i in range(0, len(detections)): self.addObject(new_object_locations[i]) else: objectIDs = list(self.objects.keys()) previous_object_locations = np.array(list(self.objects.values())) D = distance.cdist(previous_object_locations, new_object_locations) # pairwise distance between previous and current row_idx = D.min(axis=1).argsort() # sort row indices by each previous object's minimum distance to a current detection cols_idx = D.argmin(axis=1)[row_idx] # for each row, the column (current detection) with minimum distance assignedRows, assignedCols = set(), 
set() for (row, col) in zip(row_idx, cols_idx): if row in assignedRows or col in assignedCols: continue objectID = objectIDs[row] self.objects[objectID] = new_object_locations[col] self.lost[objectID] = 0 assignedRows.add(row) assignedCols.add(col) unassignedRows = set(range(0, D.shape[0])).difference(assignedRows) unassignedCols = set(range(0, D.shape[1])).difference(assignedCols) if D.shape[0]>=D.shape[1]: for row in unassignedRows: objectID = objectIDs[row] self.lost[objectID] += 1 if self.lost[objectID] > self.maxLost: self.removeObject(objectID) else: for col in unassignedCols: self.addObject(new_object_locations[col]) return self.objects ``` #### Loading Object Detector Model ##### Tensorflow model for Object Detection and Tracking Here, the SSD Object Detection Model is used. For more details about single shot detection (SSD), refer the following: - **Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C. Y., & Berg, A. C. (2016, October). Ssd: Single shot multibox detector. In European conference on computer vision (pp. 21-37). 
Springer, Cham.** - Research paper link: https://arxiv.org/abs/1512.02325 - The pretrained model: https://github.com/opencv/opencv/wiki/TensorFlow-Object-Detection-API#use-existing-config-file-for-your-model ``` model_info = {"config_path":"./tensorflow_model_dir/ssd_mobilenet_v2_coco_2018_03_29.pbtxt", "model_weights_path":"./tensorflow_model_dir/ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb", "object_names": {0: 'background', 1: 'person', 2: 'bicycle', 3: 'car', 4: 'motorcycle', 5: 'airplane', 6: 'bus', 7: 'train', 8: 'truck', 9: 'boat', 10: 'traffic light', 11: 'fire hydrant', 13: 'stop sign', 14: 'parking meter', 15: 'bench', 16: 'bird', 17: 'cat', 18: 'dog', 19: 'horse', 20: 'sheep', 21: 'cow', 22: 'elephant', 23: 'bear', 24: 'zebra', 25: 'giraffe', 27: 'backpack', 28: 'umbrella', 31: 'handbag', 32: 'tie', 33: 'suitcase', 34: 'frisbee', 35: 'skis', 36: 'snowboard', 37: 'sports ball', 38: 'kite', 39: 'baseball bat', 40: 'baseball glove', 41: 'skateboard', 42: 'surfboard', 43: 'tennis racket', 44: 'bottle', 46: 'wine glass', 47: 'cup', 48: 'fork', 49: 'knife', 50: 'spoon', 51: 'bowl', 52: 'banana', 53: 'apple', 54: 'sandwich', 55: 'orange', 56: 'broccoli', 57: 'carrot', 58: 'hot dog', 59: 'pizza', 60: 'donut', 61: 'cake', 62: 'chair', 63: 'couch', 64: 'potted plant', 65: 'bed', 67: 'dining table', 70: 'toilet', 72: 'tv', 73: 'laptop', 74: 'mouse', 75: 'remote', 76: 'keyboard', 77: 'cell phone', 78: 'microwave', 79: 'oven', 80: 'toaster', 81: 'sink', 82: 'refrigerator', 84: 'book', 85: 'clock', 86: 'vase', 87: 'scissors', 88: 'teddy bear', 89: 'hair drier', 90: 'toothbrush'}, "confidence_threshold": 0.5, "threshold": 0.4 } net = cv.dnn.readNetFromTensorflow(model_info["model_weights_path"], model_info["config_path"]) np.random.seed(12345) bbox_colors = {key: np.random.randint(0, 255, size=(3,)).tolist() for key in model_info['object_names'].keys()} ``` ##### Instantiate the Tracker Class ``` maxLost = 5 # maximum number of frames an object may stay lost while 
the object is being tracked tracker = Tracker(maxLost = maxLost) ``` ##### Initiate opencv video capture object The `video_src` can take two values: 1. If `video_src=0`: OpenCV accesses the camera connected through USB 2. If `video_src='video_file_path'`: OpenCV will access the video file at the given path (can be MP4, AVI, etc format) ``` video_src = "./data/video_test5.mp4"#0 cap = cv.VideoCapture(video_src) ``` ##### Start object detection and tracking ``` (H, W) = (None, None) # input image height and width for the network writer = None while(True): ok, image = cap.read() if not ok: print("Cannot read the video feed.") break if W is None or H is None: (H, W) = image.shape[:2] blob = cv.dnn.blobFromImage(image, size=(300, 300), swapRB=True, crop=False) net.setInput(blob) detections = net.forward() detections_bbox = [] # bounding box for detections boxes, confidences, classIDs = [], [], [] for detection in detections[0, 0, :, :]: classID = detection[1] confidence = detection[2] if confidence > model_info['confidence_threshold']: box = detection[3:7] * np.array([W, H, W, H]) (left, top, right, bottom) = box.astype("int") width = right - left + 1 height = bottom - top + 1 boxes.append([int(left), int(top), int(width), int(height)]) confidences.append(float(confidence)) classIDs.append(int(classID)) indices = cv.dnn.NMSBoxes(boxes, confidences, model_info["confidence_threshold"], model_info["threshold"]) if len(indices)>0: for i in indices.flatten(): x, y, w, h = boxes[i][0], boxes[i][1], boxes[i][2], boxes[i][3] detections_bbox.append((x, y, x+w, y+h)) clr = [int(c) for c in bbox_colors[classIDs[i]]] cv.rectangle(image, (x, y), (x+w, y+h), clr, 2) label = "{}:{:.4f}".format(model_info["object_names"][classIDs[i]], confidences[i]) (label_width, label_height), baseLine = cv.getTextSize(label, cv.FONT_HERSHEY_SIMPLEX, 0.5, 2) y_label = max(y, label_height) cv.rectangle(image, (x, y_label-label_height), (x+label_width, y_label+baseLine), (255, 255, 255), cv.FILLED) 
cv.putText(image, label, (x, y_label), cv.FONT_HERSHEY_SIMPLEX, 0.5, clr, 2) objects = tracker.update(detections_bbox) # update tracker based on the newly detected objects for (objectID, centroid) in objects.items(): text = "ID {}".format(objectID) cv.putText(image, text, (centroid[0] - 10, centroid[1] - 10), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2) cv.circle(image, (centroid[0], centroid[1]), 4, (0, 255, 0), -1) cv.imshow("image", image) if cv.waitKey(1) & 0xFF == ord('q'): break if writer is None: fourcc = cv.VideoWriter_fourcc(*"MJPG") writer = cv.VideoWriter("output.avi", fourcc, 30, (W, H), True) writer.write(image) writer.release() cap.release() cv.destroyWindow("image") ```
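The `tracker.update(detections_bbox)` call above matches the freshly detected boxes to the objects already being tracked. The `Tracker` class itself is instantiated earlier in the notebook; as a hedged illustration of the usual centroid-tracking idea (not the actual `Tracker` implementation, whose code is outside this section), the matching step can be sketched with numpy alone:

```python
import numpy as np

def centroids_from_boxes(boxes):
    """Convert (x1, y1, x2, y2) boxes to integer (cx, cy) centroids."""
    boxes = np.asarray(boxes, dtype=float)
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0
    return np.stack([cx, cy], axis=1).astype(int)

def match_by_distance(old_centroids, new_centroids):
    """Greedily pair each tracked object with its nearest new detection."""
    # pairwise Euclidean distances, shape (n_old, n_new)
    d = np.linalg.norm(old_centroids[:, None, :] - new_centroids[None, :, :], axis=2)
    pairs = {}
    used = set()
    for old_idx in d.min(axis=1).argsort():   # objects with the closest match go first
        for new_idx in d[old_idx].argsort():
            if new_idx not in used:
                pairs[old_idx] = new_idx      # keep the same object ID for this detection
                used.add(new_idx)
                break
    return pairs
```

Objects left unmatched for more than `maxLost` consecutive frames would then be dropped, which is what the `maxLost = 5` parameter controls.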
``` import pandas as pd import numpy as np from datetime import datetime import pandas as pd from scipy import optimize from scipy import integrate %matplotlib inline import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns sns.set(style="darkgrid") mpl.rcParams['figure.figsize'] = (16, 9) pd.set_option('display.max_rows', 500) # try to parse the dates right at the beginning # it works out of the box if the date was stored ISO YYYY-MM-DD format df_analyse=pd.read_csv('../data/processed/COVID_small_flat_table.csv',sep=';') df_analyse.sort_values('date',ascending=True).head() def SIR_model(SIR,beta,gamma): ''' Simple SIR model S: susceptible population I: infected people R: recovered people beta: overall condition is that the sum of changes (differnces) sum up to 0 dS+dI+dR=0 S+I+R= N (constant size of population) ''' S,I,R=SIR dS_dt=-beta*S*I/N0 #S*I is the dI_dt=beta*S*I/N0-gamma*I dR_dt=gamma*I return([dS_dt,dI_dt,dR_dt]) ``` # Simulative approach to calculate SIR curves ``` # set some basic parameters # beta/gamma is denoted as 'basic reproduction number' def simulate_SIR(country='Germany'): N0=1000000 #max susceptible population beta=0.4 # infection spread dynamics gamma=0.1 # recovery rate # condition I0+S0+R0=N0 I0=df_analyse[country][35] S0=N0-I0 R0=0 SIR=np.array([S0,I0,R0]) propagation_rates_sumulation=pd.DataFrame(columns={'susceptible':S0, 'infected':I0, 'recoverd':R0}) for each_t in np.arange(100): new_delta_vec=SIR_model(SIR,beta,gamma) SIR=SIR+new_delta_vec propagation_rates_sumulation=propagation_rates_sumulation.append({'susceptible':SIR[0], 'infected':SIR[1], 'recovered':SIR[2]}, ignore_index=True) fig, ax1 = plt.subplots(1, 1) ax1.plot(propagation_rates_sumulation.index,propagation_rates_sumulation.infected,label='infected',color='k') ax1.plot(propagation_rates_sumulation.index,propagation_rates_sumulation.recovered,label='recovered') 
ax1.plot(propagation_rates_sumulation.index,propagation_rates_sumulation.susceptible,label='susceptible') ax1.set_ylim(10, 1000000) ax1.set_yscale('linear') ax1.set_title('Szenario SIR simulations (demonstration purposes only)',size=16) ax1.set_xlabel('time in days',size=16) ax1.legend(loc='best', prop={'size': 16}); simulate_SIR() simulate_SIR('Italy') ``` # Fitting the parameters of SIR model ``` # ensure re-initialization def dynamic_beda_SIR(country='Germany'): def SIR_model_t(SIR,t,beta,gamma): ''' Simple SIR model S: susceptible population t: time step, mandatory for integral.odeint I: infected people R: recovered people beta: overall condition is that the sum of changes (differnces) sum up to 0 dS+dI+dR=0 S+I+R= N (constant size of population) ''' S,I,R=SIR dS_dt=-beta*S*I/N0 #S*I is the dI_dt=beta*S*I/N0-gamma*I dR_dt=gamma*I return dS_dt,dI_dt,dR_dt def fit_odeint(x, beta, gamma): ''' helper function for the integration ''' return integrate.odeint(SIR_model_t, (S0, I0, R0), t, args=(beta, gamma))[:,1] # we only would like to get dI ydata = np.array(df_analyse[country][35:]) t=np.arange(len(ydata)) I0=ydata[0] S0=N0-I0 R0=0 beta=0.4 popt, pcov = optimize.curve_fit(fit_odeint, t, ydata) perr = np.sqrt(np.diag(pcov)) print('standard deviation errors : ',str(perr), ' start infect:',ydata[0]) print("Optimal parameters: beta =", popt[0], " and gamma = ", popt[1]) fitted=fit_odeint(t, *popt) plt.semilogy(t, ydata, 'o') plt.semilogy(t, fitted) plt.title("Fit of SIR model for Germany cases") plt.ylabel("Population infected") plt.xlabel("Days") plt.show() print("Optimal parameters: beta =", popt[0], " and gamma = ", popt[1]) print("Basic Reproduction Number R0 " , popt[0]/ popt[1]) print("This ratio is derived as the expected number of new infections (these new infections are sometimes called secondary infections from a single \ infection in a population where all subjects are susceptible. 
@wiki")

dynamic_beda_SIR('Italy')
dynamic_beda_SIR()
```

# Dynamic beta in SIR (infection rate)

```
def dynamic_beta_SIR(country='Germany'):

    t_initial=28
    t_intro_measures=14
    t_hold=21
    t_relax=21

    ydata = np.array(df_analyse[country][35:])
    I0=ydata[0]
    N0=1000000
    S0=N0-I0
    R0=0

    beta_max=0.4
    beta_min=0.11
    gamma=0.1

    # infection rate over the four phases: constant, ramp down, hold, ramp back up
    pd_beta=np.concatenate((np.array(t_initial*[beta_max]),
                            np.linspace(beta_max,beta_min,t_intro_measures),
                            np.array(t_hold*[beta_min]),
                            np.linspace(beta_min,beta_max,t_relax),
                           ))

    SIR=np.array([S0,I0,R0])
    propagation_rates_dynamic=pd.DataFrame(columns=['susceptible','infected','recovered'])

    for each_beta in pd_beta:
        new_delta_vec=SIR_model(SIR,each_beta,gamma)
        SIR=SIR+new_delta_vec
        propagation_rates_dynamic=propagation_rates_dynamic.append({'susceptible':SIR[0],
                                                                    'infected':SIR[1],
                                                                    'recovered':SIR[2]},
                                                                   ignore_index=True)

    fig, ax1 = plt.subplots(1, 1)

    ax1.plot(propagation_rates_dynamic.index,propagation_rates_dynamic.infected,label='infected',linewidth=3)

    t_phases=np.array([t_initial,t_intro_measures,t_hold,t_relax]).cumsum()
    ax1.bar(np.arange(len(ydata)),ydata, width=0.8,label='current infected '+country,color='r')
    ax1.axvspan(0,t_phases[0], facecolor='b', alpha=0.2,label='no measures')
    ax1.axvspan(t_phases[0],t_phases[1], facecolor='b', alpha=0.3,label='hard measures introduced')
    ax1.axvspan(t_phases[1],t_phases[2], facecolor='b', alpha=0.4,label='hold measures')
    ax1.axvspan(t_phases[2],t_phases[3], facecolor='b', alpha=0.5,label='relax measures')
    ax1.axvspan(t_phases[3],len(propagation_rates_dynamic.infected), facecolor='b', alpha=0.6,label='repeat hard measures')

    ax1.set_ylim(10, 1.5*max(propagation_rates_dynamic.infected))
    ax1.set_yscale('log')
    ax1.set_title('Scenario SIR simulations (demonstration purposes only)',size=16)
    ax1.set_xlabel('time in days',size=16)
    ax1.legend(loc='best', prop={'size': 16});

dynamic_beta_SIR()
dynamic_beta_SIR('Italy')
```
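Both simulation cells above advance the SIR state with a simple forward-Euler step of size one day. As a minimal, self-contained sketch of just that update (same N0, beta and gamma as above, numpy only), note that dS + dI + dR = 0 by construction, so the population size is conserved at every step:

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, N):
    """One forward-Euler step of the SIR model (dt = 1 day)."""
    dS = -beta * S * I / N             # newly infected leave S
    dI = beta * S * I / N - gamma * I  # infections in, recoveries out
    dR = gamma * I                     # recoveries accumulate
    return S + dS, I + dI, R + dR

# simulate 100 days with the parameters used in the cells above
N0, beta, gamma = 1_000_000, 0.4, 0.1
S, I, R = N0 - 100.0, 100.0, 0.0
history = [(S, I, R)]
for _ in range(100):
    S, I, R = sir_step(S, I, R, beta, gamma, N0)
    history.append((S, I, R))
```

Because beta/gamma = 4 > 1 here, the infected curve first grows, peaks, and then declines, which is exactly the shape the plots above display.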
# Step 2: Create Job Submission Script

The next step is to create our job submission script. In the cell below, you will need to complete the job submission script and run the cell to generate the file using the magic `%%writefile` command.

Your main task is to complete the following items of the script:

* Create a variable `MODEL` and assign it the value of the first argument passed to the job submission script.
* Create a variable `DEVICE` and assign it the value of the second argument passed to the job submission script.
* Create a variable `VIDEO` and assign it the value of the third argument passed to the job submission script.
* Create a variable `PEOPLE` and assign it the value of the sixth argument passed to the job submission script.

```
%%writefile queue_job.sh
#!/bin/bash

exec 1>/output/stdout.log 2>/output/stderr.log

# TODO: Create MODEL variable
MODEL=$1
# TODO: Create DEVICE variable
DEVICE=$2
# TODO: Create VIDEO variable
VIDEO=$3
QUEUE=$4
OUTPUT=$5
# TODO: Create PEOPLE variable
PEOPLE=$6

mkdir -p ${OUTPUT}

if echo "$DEVICE" | grep -q "FPGA"; then # if device passed in is FPGA, load bitstream to program FPGA
    # Environment variables and compilation for edge compute nodes with FPGAs
    export AOCL_BOARD_PACKAGE_ROOT=/opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/BSP/a10_1150_sg2
    source /opt/altera/aocl-pro-rte/aclrte-linux64/init_opencl.sh
    aocl program acl0 /opt/intel/openvino/bitstreams/a10_vision_design_sg2_bitstreams/2020-2_PL2_FP16_MobileNet_Clamp.aocx
    export CL_CONTEXT_COMPILER_MODE_INTELFPGA=3
fi

python3 person_detect.py --model ${MODEL} \
                         --device ${DEVICE} \
                         --video ${VIDEO} \
                         --queue_param ${QUEUE} \
                         --output_path ${OUTPUT} \
                         --max_people ${PEOPLE}

cd /output

tar zcvf output.tgz *
```

# Next Step

Now that you've run the above cell and created your job submission script, you will work through each
scenarios notebook in the next three workspaces. In each of these notebooks, you will submit jobs to Intel's DevCloud to load and run inference on each type of hardware and then review the results. **Note**: As a reminder, if you need to make any changes to the job submission script, you can come back to this workspace to edit and run the above cell to overwrite the file with your changes.
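`queue_job.sh` forwards its arguments to `person_detect.py` as the command-line flags shown above. As a hedged sketch of the argument parser such a script might use: the flag names come from the `python3 person_detect.py` call in the script, while the types and defaults below are illustrative assumptions, not the actual `person_detect.py` source.

```python
import argparse

def build_argparser():
    """Parser for the flags queue_job.sh passes to person_detect.py.
    Flag names match the script above; types/defaults are assumptions."""
    parser = argparse.ArgumentParser(description="Person detection job")
    parser.add_argument('--model', required=True, help='path to the model files')
    parser.add_argument('--device', default='CPU', help='target device, e.g. CPU, GPU, MYRIAD, FPGA')
    parser.add_argument('--video', required=True, help='input video file')
    parser.add_argument('--queue_param', help='file describing the queue regions')
    parser.add_argument('--output_path', default='/output', help='where results are written')
    parser.add_argument('--max_people', type=int, default=2, help='max people per queue before a warning')
    parser.add_argument('--threshold', type=float, default=0.6, help='detection confidence threshold')
    return parser

# parse a hypothetical command line like the one the job script builds
args = build_argparser().parse_args(
    '--model models/person --device CPU --video in.mp4 --max_people 3'.split())
```

Keeping the flag names in the script and in the parser identical is what lets the same `queue_job.sh` drive runs on CPU, GPU, VPU, and FPGA nodes without modification.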
``` import matplotlib.pyplot as plt import os import numpy as np import itertools from glob import glob import pandas as pd from itertools import product import os from annsa.model_classes import f1 from tensorflow.python.keras.models import load_model from pandas import read_csv from sklearn.metrics import auc from matplotlib.lines import Line2D ``` #### Import model, training function ``` def plot_learning_curve_points(sizes, errors, label='None', linestyle='-', color='k', marker='.', linewidth=2): ''' Plots the learning curve. Inputs: sizes : list, int List of traning dataset sizes errors : list, float List of final errors for some metric ''' average = np.average(errors, axis=1) std = np.var(errors) plt.plot(sizes, average, label=label, linestyle=linestyle, color=color, linewidth=linewidth,) plt.scatter(np.array([[size]*5 for size in sizes]).flatten(), np.array(errors).flatten(), color=color, marker=marker) import matplotlib.colors def categorical_cmap(nc, nsc, cmap="tab10", continuous=False): if nc > plt.get_cmap(cmap).N: raise ValueError("Too many categories for colormap.") if continuous: ccolors = plt.get_cmap(cmap)(np.linspace(0,1,nc)) else: ccolors = plt.get_cmap(cmap)(np.arange(nc, dtype=int)) cols = np.zeros((nc*nsc, 3)) for i, c in enumerate(ccolors): chsv = matplotlib.colors.rgb_to_hsv(c[:3]) arhsv = np.tile(chsv,nsc).reshape(nsc,3) arhsv[:,1] = np.linspace(chsv[1],0.25,nsc) arhsv[:,2] = np.linspace(chsv[2],1,nsc) rgb = matplotlib.colors.hsv_to_rgb(arhsv) cols[i*nsc:(i+1)*nsc,:] = rgb cmap = matplotlib.colors.ListedColormap(cols) return cmap c1 = categorical_cmap(5,1, cmap="tab10") plt.scatter(np.arange(5*1),np.ones(5*1)+1, c=np.arange(5*1), s=180, cmap=c1) line_colors = {'caednn' : c1.colors[0], 'daednn' : c1.colors[1], 'dnn' : c1.colors[2], 'cnn' : c1.colors[3], } line_styles = {'test' : '-', 'train' : '--',} marker_styles = {'test' : '', 'train' : '',} dependencies = {'f1' : f1} ``` # All models Full ``` plt.figure(figsize=(10,5)) 
matplotlib.rcParams.update({'font.size': 22}) dataset_modes = ['test', 'train'] models = [ 'dnn', 'cnn', 'caednn', 'daednn', ] model_modes = ['full'] train_sizes = [ '50', '100', '500', '1000', '5000', '10000', '15000', '20000', ] errors_all = {} for model, model_mode, dataset_mode in product (models, model_modes, dataset_modes): if dataset_mode == 'train': loss = 'f1' else: loss = 'val_f1' errors = [] for train_size in train_sizes: tmp_error = [] identifier = '-final_trainsize' if model == 'cnn': identifier = '-final-reluupdate_trainsize' file_path = os.path.join( '..', 'final_training_notebooks', 'final-models-keras', 'learningcurve-'+model+'-'+model_mode+identifier+train_size+'_'+'*.log',) for tmp_file_path in glob(file_path): history_temp = read_csv(tmp_file_path) tmp_error.append(history_temp.tail(1).iloc[0][loss]) errors.append(np.array(tmp_error)) errors = np.array(errors) errors_all[dataset_mode + '_' + model] = np.average(errors, axis=1) plot_learning_curve_points([int(train_size) for train_size in train_sizes], errors, label=model+' '+dataset_mode+'ing set', linestyle=line_styles[dataset_mode], color=line_colors[model], marker=marker_styles[dataset_mode], linewidth=2) custom_lines = [Line2D([0], [0], color=c1.colors[3], lw=4), Line2D([0], [0], color=c1.colors[2], lw=4), Line2D([0], [0], color=c1.colors[0], lw=4), Line2D([0], [0], color=c1.colors[1], lw=4), Line2D([0], [0], color='k', linestyle=line_styles['test'], marker=marker_styles['test'], markersize=15, lw=2), Line2D([0], [0], color='k', linestyle=line_styles['train'], marker=marker_styles['train'], markersize=15, lw=2), ] plt.legend(custom_lines, ['CNN', 'DNN', 'CAE', 'DAE', 'Validation', 'Training'], prop={'size': 15}) plt.ylim([0,1.1]) plt.xlabel('Number of Examples') plt.ylabel('F1 Score') plt.xticks([0, 5000, 10000, 15000, 20000], [0, 5000, 10000, 15000, 20000]) for item in errors_all: print(item, round((auc([int(train_size) for train_size in train_sizes], errors_all[item]))/20000., 2)) ``` # 
All models Easy ``` plt.figure(figsize=(10,5)) dataset_modes = ['test', 'train'] models = [ 'dnn', 'cnn', 'caednn', 'daednn', ] model_modes = ['easy'] train_sizes = [ '50', '100', '500', '1000', '5000', '10000', '15000', '20000', ] for model, model_mode, dataset_mode in product (models, model_modes, dataset_modes): if dataset_mode == 'train': loss = 'f1' else: loss = 'val_f1' errors = [] for train_size in train_sizes: tmp_error = [] identifier = '-final_trainsize' if model == 'cnn': identifier = '-final-reluupdate_trainsize' file_path = os.path.join( '..', 'final_training_notebooks', 'final-models-keras', 'learningcurve-'+model+'-'+model_mode+identifier+train_size+'_'+'*.log',) for tmp_file_path in glob(file_path): history_temp = read_csv(tmp_file_path) tmp_error.append(history_temp.tail(1).iloc[0][loss]) errors.append(np.array(tmp_error)) errors = np.array(errors) errors_all[dataset_mode + '_' + model] = np.average(errors, axis=1) plot_learning_curve_points([int(train_size) for train_size in train_sizes], errors, label=model+' '+dataset_mode+'ing set', linestyle=line_styles[dataset_mode], color=line_colors[model], marker=marker_styles[dataset_mode], linewidth=2) custom_lines = [Line2D([0], [0], color=c1.colors[3], lw=4), Line2D([0], [0], color=c1.colors[2], lw=4), Line2D([0], [0], color=c1.colors[0], lw=4), Line2D([0], [0], color=c1.colors[1], lw=4), Line2D([0], [0], color='k', linestyle=line_styles['test'], marker=marker_styles['test'], markersize=15, lw=2), Line2D([0], [0], color='k', linestyle=line_styles['train'], marker=marker_styles['train'], markersize=15, lw=2), ] plt.legend(custom_lines, ['CNN', 'DNN', 'CAE', 'DAE', 'validation', 'Training'], prop={'size': 15}) plt.ylim([0,1.1]) plt.xlabel('Number of Examples') plt.ylabel('F1 Score') plt.xticks([0, 5000, 10000, 15000, 20000], [0, 5000, 10000, 15000, 20000]) for item in errors_all: print(item, round((auc([int(train_size) for train_size in train_sizes], errors_all[item]))/20000., 2)) ```
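The final loop in each cell above summarizes a learning curve as `auc(sizes, scores) / 20000.`, i.e. the trapezoidal area under the curve scaled by the largest training-set size. A small stand-alone sketch of the same summary, using an explicit trapezoidal rule (the same rule `sklearn.metrics.auc` applies) and normalizing by the x-range rather than the raw 20000, a slight variant that makes a flat curve of constant score s evaluate to exactly s:

```python
import numpy as np

def normalized_learning_auc(train_sizes, scores):
    """Trapezoidal area under a learning curve, scaled by the x-range.

    sklearn.metrics.auc uses the same trapezoidal rule; dividing by
    (max size - min size) makes the result a size-weighted average score.
    """
    x = np.asarray(train_sizes, dtype=float)
    y = np.asarray(scores, dtype=float)
    area = ((y[1:] + y[:-1]) / 2 * np.diff(x)).sum()  # trapezoid per interval
    return float(area / (x[-1] - x[0]))
```

Because the x-spacing is uneven (50, 100, 500, ...), this weights the large-training-size end of the curve much more heavily than the small end, which is worth keeping in mind when comparing models with it.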
# Exercises

### 9. Adjust the learning rate. Try a value of 0.02. Does it make a difference?

**Solution**

This is the simplest exercise, and you have done it before. Find the line:

optimize = tf.train.AdamOptimizer(learning_rate=0.001).minimize(mean_loss)

and change the learning_rate to 0.02.

While Adam adapts to the problem, if the orders of magnitude are too different, it may not have time to adjust accordingly. We start overfitting before we can reach a neat solution. Therefore, for this problem, even 0.02 is a **HIGH** starting learning rate. It's good practice to try 0.001, 0.0001, and 0.00001. If it makes no difference, pick whichever; otherwise it makes sense to fiddle with the learning rate.

## Deep Neural Network for MNIST Classification

We'll apply all the knowledge from the lectures in this section to write a deep neural network. The problem we've chosen is referred to as the "Hello World" of machine learning because for most students it is their first example. The dataset is called MNIST and refers to handwritten digit recognition. You can find more about it on Yann LeCun's website (Director of AI Research, Facebook). He is one of the pioneers of what we've been talking about, and of more complex approaches that are widely used today, such as convolutional networks.

The dataset provides 28x28 images of handwritten digits (1 per image) and the goal is to write an algorithm that detects which digit is written. Since there are only 10 digits, this is a classification problem with 10 classes.

In order to exemplify what we've talked about in this section, we will build a network with 2 hidden layers between inputs and outputs.

## Import the relevant packages

```
import numpy as np
import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data

# TensorFlow includes a data provider for MNIST that we'll use.
# This function automatically downloads the MNIST dataset to the chosen directory.
# The dataset is already split into training, validation, and test subsets. # Furthermore, it preprocess it into a particularly simple and useful format. # Every 28x28 image is flattened into a vector of length 28x28=784, where every value # corresponds to the intensity of the color of the corresponding pixel. # The samples are grayscale (but standardized from 0 to 1), so a value close to 0 is almost white and a value close to # 1 is almost purely black. This representation (flattening the image row by row into # a vector) is slightly naive but as you'll see it works surprisingly well. # Since this is a classification problem, our targets are categorical. # Recall from the lecture on that topic that one way to deal with that is to use one-hot encoding. # With it, the target for each individual sample is a vector of length 10 # which has nine 0s and a single 1 at the position which corresponds to the correct answer. # For instance, if the true answer is "1", the target will be [0,0,0,1,0,0,0,0,0,0] (counting from 0). # Have in mind that the very first time you execute this command it might take a little while to run # because it has to download the whole dataset. Following commands only extract it so they're faster. mnist = input_data.read_data_sets("MNIST_data/", one_hot=True) ``` ## Outline the model The whole code is in one cell, so you can simply rerun this cell (instead of the whole notebook) and train a new model. The tf.reset_default_graph() function takes care of clearing the old parameters. From there on, a completely new training starts. ``` input_size = 784 output_size = 10 # Use same hidden layer size for both hidden layers. Not a necessity. hidden_layer_size = 50 # Reset any variables left in memory from previous runs. tf.reset_default_graph() # As in the previous example - declare placeholders where the data will be fed into. 
inputs = tf.placeholder(tf.float32, [None, input_size]) targets = tf.placeholder(tf.float32, [None, output_size]) # Weights and biases for the first linear combination between the inputs and the first hidden layer. # Use get_variable in order to make use of the default TensorFlow initializer which is Xavier. weights_1 = tf.get_variable("weights_1", [input_size, hidden_layer_size]) biases_1 = tf.get_variable("biases_1", [hidden_layer_size]) # Operation between the inputs and the first hidden layer. # We've chosen ReLu as our activation function. You can try playing with different non-linearities. outputs_1 = tf.nn.relu(tf.matmul(inputs, weights_1) + biases_1) # Weights and biases for the second linear combination. # This is between the first and second hidden layers. weights_2 = tf.get_variable("weights_2", [hidden_layer_size, hidden_layer_size]) biases_2 = tf.get_variable("biases_2", [hidden_layer_size]) # Operation between the first and the second hidden layers. Again, we use ReLu. outputs_2 = tf.nn.relu(tf.matmul(outputs_1, weights_2) + biases_2) # Weights and biases for the final linear combination. # That's between the second hidden layer and the output layer. weights_3 = tf.get_variable("weights_3", [hidden_layer_size, output_size]) biases_3 = tf.get_variable("biases_3", [output_size]) # Operation between the second hidden layer and the final output. # Notice we have not used an activation function because we'll use the trick to include it directly in # the loss function. This works for softmax and sigmoid with cross entropy. outputs = tf.matmul(outputs_2, weights_3) + biases_3 # Calculate the loss function for every output/target pair. # The function used is the same as applying softmax to the last layer and then calculating cross entropy # with the function we've seen in the lectures. This function, however, combines them in a clever way, # which makes it both faster and more numerically stable (when dealing with very small numbers). 
# Logits here means: unscaled probabilities (so, the outputs, before they are scaled by the softmax) # Naturally, the labels are the targets. loss = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=targets) # Get the average loss mean_loss = tf.reduce_mean(loss) # Define the optimization step. Using adaptive optimizers such as Adam in TensorFlow # is as simple as that. optimize = tf.train.AdamOptimizer(learning_rate=0.02).minimize(mean_loss) # Get a 0 or 1 for every input in the batch indicating whether it output the correct answer out of the 10. out_equals_target = tf.equal(tf.argmax(outputs, 1), tf.argmax(targets, 1)) # Get the average accuracy of the outputs. accuracy = tf.reduce_mean(tf.cast(out_equals_target, tf.float32)) # Declare the session variable. sess = tf.InteractiveSession() # Initialize the variables. Default initializer is Xavier. initializer = tf.global_variables_initializer() sess.run(initializer) # Batching batch_size = 100 # Calculate the number of batches per epoch for the training set. batches_number = mnist.train._num_examples // batch_size # Basic early stopping. Set a miximum number of epochs. max_epochs = 15 # Keep track of the validation loss of the previous epoch. # If the validation loss becomes increasing, we want to trigger early stopping. # We initially set it at some arbitrarily high number to make sure we don't trigger it # at the first epoch prev_validation_loss = 9999999. import time start_time = time.time() # Create a loop for the epochs. Epoch_counter is a variable which automatically starts from 0. for epoch_counter in range(max_epochs): # Keep track of the sum of batch losses in the epoch. curr_epoch_loss = 0. # Iterate over the batches in this epoch. 
for batch_counter in range(batches_number): # Input batch and target batch are assigned values from the train dataset, given a batch size input_batch, target_batch = mnist.train.next_batch(batch_size) # Run the optimization step and get the mean loss for this batch. # Feed it with the inputs and the targets we just got from the train dataset _, batch_loss = sess.run([optimize, mean_loss], feed_dict={inputs: input_batch, targets: target_batch}) # Increment the sum of batch losses. curr_epoch_loss += batch_loss # So far curr_epoch_loss contained the sum of all batches inside the epoch # We want to find the average batch losses over the whole epoch # The average batch loss is a good proxy for the current epoch loss curr_epoch_loss /= batches_number # At the end of each epoch, get the validation loss and accuracy # Get the input batch and the target batch from the validation dataset input_batch, target_batch = mnist.validation.next_batch(mnist.validation._num_examples) # Run without the optimization step (simply forward propagate) validation_loss, validation_accuracy = sess.run([mean_loss, accuracy], feed_dict={inputs: input_batch, targets: target_batch}) # Print statistics for the current epoch # Epoch counter + 1, because epoch_counter automatically starts from 0, instead of 1 # We format the losses with 3 digits after the dot # We format the accuracy in percentages for easier interpretation print('Epoch '+str(epoch_counter+1)+ '. Mean loss: '+'{0:.3f}'.format(curr_epoch_loss)+ '. Validation loss: '+'{0:.3f}'.format(validation_loss)+ '. Validation accuracy: '+'{0:.2f}'.format(validation_accuracy * 100.)+'%') # Trigger early stopping if validation loss begins increasing. if validation_loss > prev_validation_loss: break # Store this epoch's validation loss to be used as previous validation loss in the next iteration. 
    prev_validation_loss = validation_loss

# Not essential, but it is nice to see in the output section when the algorithm finished, rather than checking the kernel
print('End of training.')

# Add the time it took the algorithm to train
print("Training time: %s seconds" % (time.time() - start_time))
```

## Test the model

As we discussed in the lectures, after training on the training and validation sets, we test the final prediction power of our model by running it on the test dataset that the algorithm has not seen before.

It is very important to realize that fiddling with the hyperparameters overfits the validation dataset. The test is the absolute final instance. You should not test before you are completely done with adjusting your model.

```
input_batch, target_batch = mnist.test.next_batch(mnist.test._num_examples)
test_accuracy = sess.run([accuracy],
    feed_dict={inputs: input_batch, targets: target_batch})

# Test accuracy is a list with 1 value, so we want to extract the value from it, using x[0]
# Uncomment the print to see how it looks before the manipulation
# print (test_accuracy)
test_accuracy_percent = test_accuracy[0] * 100.

# Print the test accuracy formatted in percentages
print('Test accuracy: '+'{0:.2f}'.format(test_accuracy_percent)+'%')
```

Using the initial model and hyperparameters given in this notebook, the final test accuracy should be roughly between 97% and 98%. Each time the code is rerun, we get a different accuracy because the batches are shuffled, the weights are initialized differently, etc. Finally, we have intentionally reached a suboptimal solution, so you have space to build on it.
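The early-stopping rule used in the training loop above (halt at the first epoch whose validation loss exceeds the previous one, up to `max_epochs`) can be isolated into a tiny framework-free sketch. The loss sequence below is made up for illustration and stands in for the per-epoch validation losses a real session would produce:

```python
def epochs_until_early_stop(val_losses, max_epochs=15):
    """Return the number of epochs run under the stopping rule above."""
    prev_validation_loss = float('inf')   # arbitrarily high starting value
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        if loss > prev_validation_loss:   # validation loss increased -> stop
            return epoch
        prev_validation_loss = loss
    return min(len(val_losses), max_epochs)

# hypothetical per-epoch validation losses: falling, then rising at epoch 5
epochs_until_early_stop([0.9, 0.5, 0.3, 0.25, 0.4, 0.2])
```

Note that this simple rule stops at the first uptick; noisier validation curves are often handled with a "patience" counter that tolerates a few bad epochs before stopping.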
<a href="https://colab.research.google.com/github/jpgill86/python-for-neuroscientists/blob/master/notebooks/02.3-The-Core-Language-of-Python.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # The Core Language of Python: Part 3 ## For Loops and List Comprehensions A `for` loop can **iterate** over a list or tuple, performing calculations for each item in the sequence. Like `if` statements, `for` loops require a **colon** to terminate the first line and consistent **indentation** (typically four spaces) below it for the block of code that will be executed for each item in the sequence. Each item in the sequence is assigned a temporary variable name that can be used within the block. In the example below, this temporary variable is called `i`: ``` my_list = [0, 1, 2, 3, 4, 5] # print the square of each item in my_list # - colon and indentation are important! for i in my_list: print(i**2) ``` Be careful what name you give the iterator variable, since its value will be overwritten again and again with the items in the sequence. If the variable had a value before the `for` loop, it will be lost, which may not be what you intended. ``` i = 'abc' print(f'i at the start = {i}') for i in my_list: print(i**2) print(f'i at the end = {i}') ``` If you wanted to store a result from each step of the `for` loop in another list, one way you could do it is 1. Initialize another variable as an empty list (`another_list = []`), and then 2. Append a result to the new list in each step (`another_list.append`). For example: ``` another_list = [] for i in my_list: another_list.append(i**2) another_list ``` If the calculation of the result (in this example, squaring `i`) is fairly simple, you can perform the same work using a more concise notation called **list comprehension**. 
The simplest version of list comprehension takes the form `[f(i) for i in my_list]`, where `f(i)` is some function or transformation of the list item `i`. Notice list comprehensions are enclosed in square brackets (`[]`) because they create lists. Here is a list comprehension equivalent to the example above: ``` # basic list comprehension # - this means "square the item for each item in my_list" another_list = [i**2 for i in my_list] another_list ``` All of the work is completed in a single line of code. Elegant! List comprehensions can be more complex than this. Suppose we modified the `for` loop to append a result only if the list item is an even number (`i % 2 == 0` means that `i` divided by 2 must have a remainder of 0): ``` another_list = [] for i in my_list: if i % 2 == 0: # append only if i is even another_list.append(i**2) # the squares of 0, 2, 4 another_list ``` To do this with list comprehension, just add `if i % 2 == 0` to the end: ``` # list comprehension with conditional # - this means "square the item for each item in my_list if it is even (otherwise skip it)" another_list = [i**2 for i in my_list if i % 2 == 0] another_list ``` Suppose we modify the `for` loop further to perform a different calculation if the list item is an odd number: ``` another_list = [] for i in my_list: if i % 2 == 0: # square if i is even another_list.append(i**2) else: # add 100 if i is odd another_list.append(i+100) another_list ``` This can be done with list comprehension by moving the `if i % 2 == 0` to an earlier position, just after `i**2`, and adding `else i+100`: ``` # list comprehension with complex conditional # - this means "square the item if it is even, otherwise add 100 to it, for each item in my_list" another_list = [i**2 if i%2==0 else i+100 for i in my_list] another_list ``` The results stored in `another_list` could be something other than a calculation using `i`. 
For example, strings: ``` # this means "store the string 'less than 2' if the item is less than 2, otherwise store '2 or greater', for each item in my_list" another_list = ['less than 2' if i < 2 else '2 or greater' for i in my_list] another_list ``` ## While Loops A `while` loop is another way to repeatedly perform calculations. Whereas `for` loops execute code for each item in a sequence, `while` loops execute code for as long as a condition is true. For example: ``` x = 0 while x < 5: print(x) x = x + 1 print(f'final value of x = {x}') ``` Generally, this means that the code within the `while` loop should take steps toward making the condition no longer true, even if it is unknown ahead of time how many steps that may require. In the simple example above, `x` was incremented each step until `x` was no longer less than 5. A more practical example would be a piece of code that reads a text file of unknown length one line at a time using a `while` loop that continues until a request for the next line yields nothing. Be warned: **If the condition never ceases to be true, the `while` loop will never stop**, which is probably not what you want! Try executing the code cell below, which will start an infinite loop because `x` is never incremented. You will see the icon in the left margin of the code cell spin and spin endlessly as the computer keeps executing the code within the `while` loop again and again, never stopping because the condition `x < 5` never stops being true. For this to end, you must **manually interrupt the code execution**, which you can do two ways: 1. Click the spinning stop icon in the left margin, or 2. Use the "Runtime" menu at the top of the page and click "Interrupt execution". Colab executes cells one at a time, so until you interrupt the execution of this cell, you will not be able to run any other code! ``` # this will run forever until interrupted! 
x = 0 while x < 5: pass # do nothing ``` ## Dictionaries Dictionaries are another important data type in Python. Dictionaries store **key-value pairs**, where each piece of data (the **value**) is assigned a name (the **key**) for easy access. Dictionaries are created using curly braces (`{}`). (This is different from the use of curly braces in f-strings!) Inside of the curly braces, key-value pairs are separated from one another by commas, and colons separate each key from its value. For example: ``` # create a new dictionary using curly braces {} my_dict = {'genus': 'Aplysia', 'species': 'californica', 'mass': 150} my_dict ``` The syntax for extracting a piece of data from a dictionary is similar to indexing into lists. It uses square brackets after the dictionary name (not curly braces like you might guess), but instead of a number indicating position, a key should be provided. ``` # select items by key my_dict['species'] ``` Like changing the value of an item in a list via its index, the value of an item in a dictionary can be changed via its key: ``` # change values by key my_dict['mass'] = 300 my_dict ``` New key-value pairs can be added to a dictionary the same way. In fact, you can start with an empty dictionary and build it up one key-value pair at a time: ``` my_dict2 = {} my_dict2['genus'] = 'Aplysia' my_dict2['species'] = 'californica' my_dict2['mass'] = 300 my_dict2 ``` Values can have any data type. Most basic data types are valid for keys too, but an important exception is lists: **lists are not allowed to be dictionary keys**. Tuples, on the other hand, are allowed to be keys. This is because keys must be immutable (uneditable), which is a property that tuples have but lists do not. 
``` # lists cannot be keys, so this is NOT allowed # - the error "unhashable type" is a consequence of the fact that lists are not immutable (they can be changed) my_dict2[['x', 'y', 'z']] = [1, 2, 3] # tuples can be keys, so this IS allowed my_dict2[('x', 'y', 'z')] = [1, 2, 3] my_dict2 ``` You can delete a key-value pair from a dictionary using the `del` keyword: ``` del my_dict2['species'] my_dict2 ``` As a matter of fact, **`del` is how you unset any variable**: ``` del my_dict2 # now my_dict2 is not defined my_dict2 ``` Just like lists and tuples, `for` loops can iterate over a dictionary. In its simplest form, this actually iterates over the dictionary's keys. In the example below, we choose to use the name `k`, rather than `i`, for the temporary variable to reflect this. To access the value associated with key `k`, we must use `my_dict[k]`. ``` # iterate over keys for k in my_dict: print(f'key: {k} --> value: {my_dict[k]}') ``` The dictionary method `items()` returns a (special type of) list of tuples, where each tuple is a key-value pair: ``` # using list() here to simplify how the list of tuples is displayed list(my_dict.items()) ``` When using a `for` loop to iterate over any list of tuples (or a list of lists, or a tuple of lists, or a tuple of tuples...) such as this, you can assign a temporary variable name to each item in the inner tuple/list. This is an example of what is called **unpacking**. 
For example: ``` list_of_tuples = [ ('a', 1), ('b', 2), ('c', 3), ] for (letter, number) in list_of_tuples: print(letter, number) ``` In the example above, the parentheses around the iterator variables `(letter, number)` are actually optional, so the loop could be written without them: ``` for letter, number in list_of_tuples: print(letter, number) ``` Using the list of tuples produced by `my_dict.items()` and unpacking each key-value tuple into `k,v`, we can write the `for` loop this way: ``` # iterate over key-value pairs for k,v in my_dict.items(): print(f'key: {k} --> value: {v}') ``` Notice this does exactly the same thing as ```python for k in my_dict: print(f'key: {k} --> value: {my_dict[k]}') ``` seen earlier, except the version using `for k,v in my_dict.items()` conveniently assigns the name `v` to the value of each key-value pair, so that `my_dict[k]` does not need to be typed out. Like list comprehensions, there exists a concise way for constructing dictionaries from a sequence, called **dictionary comprehension**. The syntax is similar, but a key and a value must be computed for each iteration, separated by a colon. To set up an example, here is a long way of constructing a dictionary of squares using a `for` loop, where the keys are string versions of the numbers: ``` my_list = [0, 1, 2, 3, 4, 5] squares = {} for i in my_list: # store keys as strings and values as integers squares[str(i)] = i**2 squares ``` Here is a dictionary comprehension that does the same thing. Note it is enclosed with curly braces because it produces a dictionary, and a colon separates a key and its value. 
``` # basic dictionary comprehension # - this means "pair a string version of the item with its square for each item in my_list" squares = {str(i): i**2 for i in my_list} squares ``` Like list comprehension, conditionals are allowed for the values: ``` # dictionary comprehension with complex conditional # - this means "pair a string version of the item with its square if it is even, otherwise with the item plus 100, for each item in my_list" squares = {str(i): i**2 if i%2==0 else i+100 for i in my_list} squares ``` ## A Practical Example for Dictionaries and Comprehensions So how are dictionaries useful? There are countless ways, but let's look at one example. Previously we saw in the section titled "Lists vs. Tuples" that a list of tuples with consistent structure is useful because pieces of data have predictable indices. For example, with this definition of `species_data`, the genus of every entry always has index 0: ``` species_data = [ # genus, species, date named, common name ('Aplysia', 'californica', 1863, 'California sea hare'), ('Macrocystis', 'pyrifera', 1820, 'Giant kelp'), ('Pagurus', 'samuelis', 1857, 'Blueband hermit crab'), ] # print every genus for sp in species_data: print(sp[0]) ``` However, this approach requires that we memorize the meaning of each index (0 = genus, 1 = species, etc.). If we use dictionaries instead of tuples, we can access data by name, rather than arbitrary indices. To do this, we could convert every tuple into a dictionary, so that the whole data set is a list of dictionaries with identical keys. 
To demonstrate this, we could write everything out, like this: ``` species_dicts = [ {'genus': 'Aplysia', 'species': 'californica', 'year': 1863, 'common': 'California sea hare'}, {'genus': 'Macrocystis', 'species': 'pyrifera', 'year': 1820, 'common': 'Giant kelp'}, {'genus': 'Pagurus', 'species': 'samuelis', 'year': 1857, 'common': 'Blueband hermit crab'}, ] ``` However, if the `species_data` list already existed and was much longer than it is here, this would be a lot of extra work! Instead, we could use what we have learned to programmatically construct the list of dictionaries from the existing list of tuples using a `for` loop. Here's one way to do that which uses tuple unpacking for naming each of the four pieces of data in every tuple: ``` # create a new empty list species_dicts = [] # for each tuple, unpack the 4 pieces of data into 4 temporary variables for genus, species, year, common in species_data: # build a new dictionary for this species d = {'genus': genus, 'species': species, 'year': year, 'common': common} # append the dictionary to the new list species_dicts.append(d) # display the new list of dictionaries species_dicts ``` Now the genus of every entry can be accessed using the key `'genus'` rather than index 0: ``` # print every genus for sp in species_dicts: print(sp['genus']) ``` If we want to be *really* clever, we can do the entire conversion in a single step by *constructing a dictionary inside a list comprehension*. For this, we need to first introduce another built-in function. The `zip()` function takes two or more sequences (e.g., lists or tuples) as inputs and groups the elements from each sequence in order. For example: ``` list1 = ['a', 'b', 'c'] list2 = [1, 2, 3] # using list() here to simplify how the result is displayed list(zip(list1, list2)) ``` How can we use `zip()` to help us convert our list of tuples into a list of dictionaries? 
First, define a variable containing the dictionary keys: ``` keys = ('genus', 'species', 'year', 'common') ``` With this, it is possible to pair the values from one of the tuples with these keys. For example, if we just look at the first tuple: ``` values = species_data[0] list(zip(keys, values)) ``` Here we have a list of tuples, where each tuple is a key-value pair. This is just like what the `items()` function returns for a dictionary. From this, we could construct a dictionary from this first tuple using dictionary comprehension: ``` {k:v for k,v in zip(keys, values)} ``` Equivalently, because there is no extra manipulation of `k` or `v` here, we could use the built-in function `dict()` to convert the list of key-value pairs directly into a dictionary: ``` dict(zip(keys, values)) ``` All we have to do now is generalize this to all tuples in `species_data`. We can use a list comprehension for this, which gives us the final expression that does the entire conversion in one step, from a list of tuples to a list of dictionaries: ``` keys = ('genus', 'species', 'year', 'common') species_dicts = [dict(zip(keys, values)) for values in species_data] species_dicts ``` This is much more elegant than writing out the list of dictionaries by hand! As a final proof of success, we can once again access the genus of every entry using a key: ``` # print every genus for sp in species_dicts: print(sp['genus']) ``` For me, the amount of work we get out of this single expression, `[dict(zip(keys, values)) for values in species_data]`, is delightful! 
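As one last optional trick (not covered above), a dictionary comprehension can also build an *index* over a list of dictionaries, so that entries can be looked up by name instead of searched with a loop. Here we choose the genus as the key purely for illustration; any unique value would work:

```python
# rebuild the list of dictionaries from earlier in this lesson
keys = ('genus', 'species', 'year', 'common')
species_data = [
    ('Aplysia', 'californica', 1863, 'California sea hare'),
    ('Macrocystis', 'pyrifera', 1820, 'Giant kelp'),
    ('Pagurus', 'samuelis', 1857, 'Blueband hermit crab'),
]
species_dicts = [dict(zip(keys, values)) for values in species_data]

# index the entries by genus: a dictionary of dictionaries
by_genus = {sp['genus']: sp for sp in species_dicts}

# now any entry can be looked up directly by name
by_genus['Pagurus']['common']  # -> 'Blueband hermit crab'
```

This dictionary-of-dictionaries pattern is handy whenever the same entry must be retrieved many times: each lookup is a single indexing operation rather than a scan through the whole list.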
# Continue to the Next Lesson Return to home to continue to the next lesson: https://jpgill86.github.io/python-for-neuroscientists/ # External Resources The official language documentation: * [Python 3 documentation](https://docs.python.org/3/index.html) * [Built-in functions](https://docs.python.org/3/library/functions.html) * [Standard libraries](https://docs.python.org/3/library/index.html) * [Glossary of terms](https://docs.python.org/3/glossary.html) * [In-depth tutorial](https://docs.python.org/3/tutorial/index.html) Extended language documentation: * [IPython (Jupyter) vs. Python differences](https://ipython.readthedocs.io/en/stable/interactive/python-ipython-diff.html) * [IPython (Jupyter) "magic" (`%`) commands](https://ipython.readthedocs.io/en/stable/interactive/magics.html) Free interactive books created by Jake VanderPlas: * [A Whirlwind Tour of Python](https://colab.research.google.com/github/jakevdp/WhirlwindTourOfPython/blob/master/Index.ipynb) [[PDF version]](https://www.oreilly.com/programming/free/files/a-whirlwind-tour-of-python.pdf) * [Python Data Science Handbook](https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/Index.ipynb) # License [This work](https://github.com/jpgill86/python-for-neuroscientists) is licensed under a [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
# A glimpse into the inner workings of a 2-layer neural network ``` %load_ext autoreload %autoreload 2 import numpy as np from numpy import random as nprand from cs771 import plotData as pd, utils, genSyntheticData as gsd from keras.models import Sequential from keras.layers import Dense as dense from keras import optimizers d = 2 n = 20 r = 2 tmp1 = gsd.genSphericalData( d, n, [-5, -5], r ) tmp2 = gsd.genSphericalData( d, n, [5, 5], r ) XPos = np.vstack( (tmp1, tmp2) ) yPos = np.ones( (XPos.shape[0],) ) tmp1 = gsd.genSphericalData( d, n, [-5, 5], r ) tmp2 = gsd.genSphericalData( d, n, [5, -5], r ) XNeg = np.vstack( (tmp1, tmp2) ) yNeg = np.zeros( (XNeg.shape[0],) ) X = np.vstack( (XPos, XNeg) ) y = np.concatenate( (yPos, yNeg) ) n = X.shape[0] idx = nprand.permutation( n ) X = X[idx] y = y[idx] mu = np.mean( X, axis = 0 ) sigma = np.std( X, axis = 0 ) X -= mu X /= sigma # You may get deprecation warnings about tensorflow when you run # this cell for the first time. This is okay and not an error # It seems TF has disabled several functional APIs in its new version # and keras routines have not (yet) been upgraded to use them and # continue to use the old (deprecated) routines hence the warnings model = Sequential() model.add( dense( units = 2, activation = "sigmoid", input_dim = 2, use_bias = True ) ) model.add( dense( units = 1, activation = "sigmoid", use_bias = True ) ) # Setting a very large learning rate lr may make the NN temperamental and cause # it to converge to a local optimum. Keras supports "callbacks" which allow the # user to dynamically lower the learning rate if progress has stalled opt = optimizers.Adam( lr = 0.1, beta_1 = 0.9, beta_2 = 0.999, amsgrad = True ) # Metrics are just for the sake of display, not for the sake of training # Set verbose = 1 or 2 to see metrics reported for every epoch of training # Notice that whereas the loss value goes down almost monotonically, the accuracy # may fluctuate i.e.
go down a bit before finally going up again model.compile( loss = "binary_crossentropy", optimizer = opt, metrics = ["binary_accuracy"] ) history = model.fit( X, y, epochs = 50, batch_size = n//8, verbose = 0 ) fig0, ax0 = pd.getFigList( nrows = 1, ncols = 2, sizex = 5, sizey = 4 ) ax0[0].plot(history.history['loss']) ax0[1].plot(history.history['binary_accuracy']) ax0[0].set_xlabel( "Epochs" ) ax0[0].set_ylabel( "Binary Cross Entropy Loss" ) ax0[1].set_xlabel( "Epochs" ) ax0[1].set_ylabel( "Classification Accuracy" ) def ffpredict( X ): # Our shading code anyway converts predictions to [0,1] scores return model.predict_classes( X ) fig = pd.getFigure( 10, 10 ) (xlim, ylim) = np.max( np.abs( X ), axis = 0 ) * 1.1 pd.shade2D( ffpredict, fig, mode = "batch", xlim = xlim, ylim = ylim ) pd.plot2D( X[y == 1], fig, color = 'g', marker = '+' ) pd.plot2D( X[y == 0], fig, color = 'r', marker = 'x' ) def sigmoid( a ): return 1/(1 + np.exp( -a )) def getHiddenLayerActivations( X ): return sigmoid( X.dot( w ) + b ) # Our network learns a function of the form (s = sigmoid function) # s( u.T * s( P.T * x + q ) + v ) # Weights that go to the hidden layer P = model.layers[0].get_weights()[0] q = model.layers[0].get_weights()[1] # Weights that go to the output layer u = model.layers[1].get_weights()[0] v = model.layers[1].get_weights()[1] # Get the post activations of the first hidden layer neuron # The multiplication with sign(u[0]) is just to make sure # that the colors turn out nicely in the plots w = P[:,0] * np.sign( u[0] ) b = q[0] * np.sign( u[0] ) fig2 = pd.getFigure( 10, 10 ) pd.shade2DProb( getHiddenLayerActivations, fig2, mode = "batch", xlim = xlim, ylim = ylim ) pd.plot2D( X[y == 1], fig2, color = 'g', marker = '+' ) pd.plot2D( X[y == 0], fig2, color = 'r', marker = 'x' ) # Get the post activations of the second hidden layer neuron # The multiplication with sign(u[1]) is yet again just to make # sure that the colors turn out nicely in the plots w = P[:,1] * np.sign( 
u[1] ) b = q[1] * np.sign( u[1] ) fig3 = pd.getFigure( 10, 10 ) pd.shade2DProb( getHiddenLayerActivations, fig3, mode = "batch", xlim = xlim, ylim = ylim ) pd.plot2D( X[y == 1], fig3, color = 'g', marker = '+' ) pd.plot2D( X[y == 0], fig3, color = 'r', marker = 'x' ) # Note that the two nodes in the hidden layer cooperate to learn the classifier # Neither node can fully classify the red points from the green points on its own # so they share the burden. Each node takes up the responsibility of isolating # one red clump from the rest of the data. Together they make a perfect classifier :) # One can interpret these two nodes as learning two useful features such that the # learning problem becomes linearly separable when given these two new features print( model.layers[0].get_weights() ) print( model.layers[1].get_weights() ) # See the value of the weights below and verify that they indeed are of the form # that we saw in the toy code (that demonstrated universality of NN) ```
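To tie the pieces together, the function the network computes, s( u.T * s( P.T * x + q ) + v ), can be sketched in a few lines of plain numpy. The weights below are illustrative hand-picked values, *not* the ones Keras learned above; they are chosen so that, as in the trained network, each hidden unit isolates one off-diagonal ("red") corner of the XOR-like layout:

```python
import numpy as np

def sigmoid(a):
    return 1 / (1 + np.exp(-a))

# Hand-picked illustrative weights (NOT the trained values):
P = np.array([[ 6.0, -6.0],
              [-6.0,  6.0]])   # input -> hidden weights
q = np.array([-3.0, -3.0])     # hidden-layer biases
u = np.array([-8.0, -8.0])     # hidden -> output weights
v = 6.0                        # output bias

def forward(X):
    hidden = sigmoid(X.dot(P) + q)     # post-activations of the hidden layer
    return sigmoid(hidden.dot(u) + v)  # network output in (0, 1)

# the four corners of the (normalised) synthetic data, roughly at +-1
corners = np.array([[-1.0, -1.0], [1.0, 1.0], [-1.0, 1.0], [1.0, -1.0]])
preds = (forward(corners) > 0.5).astype(int)
print(preds)  # diagonal corners positive, off-diagonal corners negative
```

Each hidden unit saturates towards 1 on exactly one off-diagonal corner; the negative output weights then pull the output below 0.5 whenever either unit fires, reproducing the cooperative behaviour described above.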
<table> <tr><td align="right" style="background-color:#ffffff;"> <img src="../images/logo.jpg" width="20%" align="right"> </td></tr> <tr><td align="right" style="color:#777777;background-color:#ffffff;font-size:12px;"> Prepared by Abuzer Yakaryilmaz and Maksim Dimitrijev<br> Özlem Salehi | December 4, 2019 (updated) </td></tr> <tr><td align="right" style="color:#bbbbbb;background-color:#ffffff;font-size:11px;font-style:italic;"> This cell contains some macros. If there is a problem with displaying mathematical formulas, please run this cell to load these macros. </td></tr> </table> $ \newcommand{\bra}[1]{\langle #1|} $ $ \newcommand{\ket}[1]{|#1\rangle} $ $ \newcommand{\braket}[2]{\langle #1|#2\rangle} $ $ \newcommand{\dot}[2]{ #1 \cdot #2} $ $ \newcommand{\biginner}[2]{\left\langle #1,#2\right\rangle} $ $ \newcommand{\mymatrix}[2]{\left( \begin{array}{#1} #2\end{array} \right)} $ $ \newcommand{\myvector}[1]{\mymatrix{c}{#1}} $ $ \newcommand{\myrvector}[1]{\mymatrix{r}{#1}} $ $ \newcommand{\mypar}[1]{\left( #1 \right)} $ $ \newcommand{\mybigpar}[1]{ \Big( #1 \Big)} $ $ \newcommand{\sqrttwo}{\frac{1}{\sqrt{2}}} $ $ \newcommand{\dsqrttwo}{\dfrac{1}{\sqrt{2}}} $ $ \newcommand{\onehalf}{\frac{1}{2}} $ $ \newcommand{\donehalf}{\dfrac{1}{2}} $ $ \newcommand{\hadamard}{ \mymatrix{rr}{ \sqrttwo & \sqrttwo \\ \sqrttwo & -\sqrttwo }} $ $ \newcommand{\vzero}{\myvector{1\\0}} $ $ \newcommand{\vone}{\myvector{0\\1}} $ $ \newcommand{\vhadamardzero}{\myvector{ \sqrttwo \\ \sqrttwo } } $ $ \newcommand{\vhadamardone}{ \myrvector{ \sqrttwo \\ -\sqrttwo } } $ $ \newcommand{\myarray}[2]{ \begin{array}{#1}#2\end{array}} $ $ \newcommand{\X}{ \mymatrix{cc}{0 & 1 \\ 1 & 0} } $ $ \newcommand{\Z}{ \mymatrix{rr}{1 & 0 \\ 0 & -1} } $ $ \newcommand{\Htwo}{ \mymatrix{rrrr}{ \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} & -\frac{1}{2} &
\frac{1}{2} } } $ $ \newcommand{\CNOT}{ \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} } $ $ \newcommand{\norm}[1]{ \left\lVert #1 \right\rVert } $ <h2>Controlled Operations</h2> We are going to look at controlled operators acting on multiple qubits. <h3> CNOT operator </h3> CNOT is an operator defined on two qubits: $$ CNOT = \mymatrix{cccc}{1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0} . $$ Its effect is very simple: if the state of the first qubit is one, then the state of the second qubit is flipped. If the state of the first qubit is zero, then the state of the second qubit remains the same. In summary: <ul> <li>$ CNOT \ket{00} = \ket{00} $, </li> <li>$ CNOT \ket{01} = \ket{01} $, </li> <li>$ CNOT \ket{10} = \ket{11} $, </li> <li>$ CNOT \ket{11} = \ket{10} $. </li> </ul> CNOT is short for Controlled-NOT: the NOT operator is applied in a controlled way. <h3> cx-gate </h3> In Qiskit, the CNOT operator is represented by the cx-gate. It takes two arguments: a controller-qubit and a target-qubit. Its behavior is as follows: <i> the <b>x-gate</b> (NOT operator) is applied to <u>the target qubit</u>, <b>CONTROLLED</b> by <u>the controller qubit</u>.</i> <h3> Unitary backend</h3> The unitary_simulator backend gives the unitary matrix representing all the gates in the circuit up to this point. ``` python job = execute(circuit, Aer.get_backend('unitary_simulator')) current_unitary = job.result().get_unitary(circuit, decimals=3) print(current_unitary) ``` Let's check the unitary operator corresponding to the CNOT. We follow the qiskit order.
``` # draw the circuit from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer qreg1 = QuantumRegister(2) creg1 = ClassicalRegister(2) mycircuit1 = QuantumCircuit(qreg1,creg1) mycircuit1.cx(qreg1[1],qreg1[0]) job = execute(mycircuit1,Aer.get_backend('unitary_simulator')) u=job.result().get_unitary(mycircuit1,decimals=3) for i in range(len(u)): s="" for j in range(len(u)): val = str(u[i][j].real) while(len(val)<5): val = " "+val s = s + val print(s) mycircuit1.draw(output="mpl") ``` Now, let's apply CNOT to the states $ \ket{00}, \ket{01}, \ket{10}, \ket{11} $ iteratively where qreg[1] is the control and qreg[0] is the target. ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['00','01','10','11'] for input in all_inputs: qreg2 = QuantumRegister(2) # quantum register with 2 qubits creg2 = ClassicalRegister(2) # classical register with 2 bits mycircuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit2.x(qreg2[1]) # set the state of qreg[1] to |1> if input[1]=='1': mycircuit2.x(qreg2[0]) # set the state of qreg[0] to |1> # apply cx(first-qubit,second-qubit) mycircuit2.cx(qreg2[1],qreg2[0]) # measure both qubits mycircuit2.measure(qreg2,creg2) # execute the circuit 100 times in the local simulator job = execute(mycircuit2,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit2) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <h3>Task 1</h3> Our task is to learn the behavior of the following quantum circuit by doing experiments. Our circuit has two qubits. <ul> <li> Apply Hadamard to both qubits. <li> Apply CNOT(qreg[1] is the control,qreg[0] is the target). <li> Apply Hadamard to both qubits. <li> Measure the circuit. 
</ul> Iteratively initialize the qubits to $ \ket{00} $, $ \ket{01} $, $ \ket{10} $, and $ \ket{11} $. Execute your program 100 times for each iteration, and then check the outcomes for each iteration. Observe that the overall circuit implements CNOT(qreg[0] is the control, qreg[1] is the target). ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer # # your code is here from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['00','01','10','11'] for input in all_inputs: qreg1 = QuantumRegister(2) # quantum register with 2 qubits creg1 = ClassicalRegister(2) # classical register with 2 bits mycircuit1 = QuantumCircuit(qreg1,creg1) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit1.x(qreg1[1]) # set the state of the qubit to |1> if input[1]=='1': mycircuit1.x(qreg1[0]) # set the state of the qubit to |1> # apply h-gate to both qubits mycircuit1.h(qreg1[0]) mycircuit1.h(qreg1[1]) # apply cx mycircuit1.cx(qreg1[1],qreg1[0]) # apply h-gate to both qubits mycircuit1.h(qreg1[0]) mycircuit1.h(qreg1[1]) # measure both qubits mycircuit1.measure(qreg1,creg1) # execute the circuit 100 times in the local simulator job = execute(mycircuit1,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit1) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") # ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task1">click for our solution</a> <h3>Task 2</h3> Our task is to learn the behavior of the following quantum circuit by doing experiments. Our circuit has two qubits. <ul> <li> Apply CNOT(qreg[1] is the control, qreg[0] is the target). <li> Apply CNOT(qreg[0] is the control, qreg[1] is the target). <li> Apply CNOT(qreg[1] is the control, qreg[0] is the target).
</ul> Iteratively initialize the qubits to $ \ket{00} $, $ \ket{01} $, $ \ket{10} $, and $ \ket{11} $. Execute your program 100 times for each iteration, and then check the outcomes for each iteration. Observe that the overall circuit swaps the values of the two qubits: <ul> <li> $\ket{00} \rightarrow \ket{00} $ </li> <li> $\ket{01} \rightarrow \ket{10} $ </li> <li> $\ket{10} \rightarrow \ket{01} $ </li> <li> $\ket{11} \rightarrow \ket{11} $ </li> </ul> ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer # # your code is here from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['00','01','10','11'] for input in all_inputs: qreg2 = QuantumRegister(2) # quantum register with 2 qubits creg2 = ClassicalRegister(2) # classical register with 2 bits mycircuit2 = QuantumCircuit(qreg2,creg2) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit2.x(qreg2[1]) # set the value of the qubit to |1> if input[1]=='1': mycircuit2.x(qreg2[0]) # set the value of the qubit to |1> # apply cx(qreg2[0] is the target) mycircuit2.cx(qreg2[1],qreg2[0]) # apply cx(qreg2[1] is the target) mycircuit2.cx(qreg2[0],qreg2[1]) # apply cx(qreg2[0] is the target) mycircuit2.cx(qreg2[1],qreg2[0]) mycircuit2.measure(qreg2,creg2) # execute the circuit 100 times in the local simulator job = execute(mycircuit2,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit2) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") # ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task2">click for our solution</a> <h3> Task 3 [Extra] </h3> Create a quantum circuit with $ n=5 $ qubits. Set each qubit to $ \ket{1} $.
Repeat 4 times: <ul> <li>Randomly pick a pair of qubits, and apply cx-gate (CNOT operator) on the pair.</li> </ul> Draw your circuit, and execute your program 100 times. Verify your measurement results by checking the diagram of the circuit. ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer # import randrange for random choices from random import randrange # # your code is here # ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task3">click for our solution</a> <h3> Task 4 </h3> In this task, our aim is to create an operator which will apply the NOT operator to the target qubit qreg[0] when the control qubit qreg[1] is in state $\ket{0}$. In other words, we want to obtain the following operator: $\mymatrix{cccc}{0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1}$. We can summarize its effect as follows: <ul> <li>$ \ket{00} \rightarrow \ket{01} $, </li> <li>$ \ket{01} \rightarrow \ket{00} $, </li> <li>$ \ket{10} \rightarrow \ket{10} $, </li> <li>$ \ket{11} \rightarrow \ket{11} $. </li> </ul> Write a function named c0x which takes the circuit and the register as parameters and implements the operation. Check the corresponding unitary matrix using the code given below. <ul> <li>Apply NOT operator to qreg[1];</li> <li>Apply CNOT operator, where qreg[1] is control and qreg[0] is target;</li> <li>Apply NOT operator to qreg[1] - to revert it to the initial state.</li> </ul> <b>Idea:</b> We can use our regular CNOT operator, and to change the condition for the control qubit we can apply the NOT operator to it before the CNOT - this way the NOT operator will be applied to the target qubit when the initial state of the control qubit was $\ket{0}$. Although this trick is quite simple, this approach is important and will be very useful in our following implementations.
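Before building the circuit, the idea can be sanity-checked with plain matrix algebra (no qiskit needed). The numpy sketch below, following the $\ket{q_1 q_0}$ ordering of this notebook, verifies that sandwiching a CNOT between X gates on the control qubit produces exactly the operator above (the variable names are only illustrative):

```python
import numpy as np

# Pauli X (NOT) and the 2x2 identity
X = np.array([[0, 1],
              [1, 0]])
I = np.eye(2, dtype=int)

# CNOT in the |q1 q0> ordering used above (q1 is the control, q0 the target)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# X applied to the control qubit q1 acts as X tensor I on the pair
X_on_q1 = np.kron(X, I)

# sandwich: X on the control, then CNOT, then X on the control again
c0x_matrix = X_on_q1 @ CNOT @ X_on_q1
print(c0x_matrix)
```

The product swaps only the first two basis states, i.e. it maps $\ket{00} \leftrightarrow \ket{01}$ and leaves $\ket{10}$ and $\ket{11}$ untouched, which is precisely the zero-controlled NOT we are after.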
``` def c0x(mycircuit,qreg): # # Your code here # from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer qreg4 = QuantumRegister(2) creg4 = ClassicalRegister(2) mycircuit4 = QuantumCircuit(qreg4,creg4) #We apply the operator c0x by calling the function c0x(mycircuit4,qreg4) job = execute(mycircuit4,Aer.get_backend('unitary_simulator')) u=job.result().get_unitary(mycircuit4,decimals=3) for i in range(len(u)): s="" for j in range(len(u)): val = str(u[i][j].real) while(len(val)<5): val = " "+val s = s + val print(s) mycircuit4.draw(output="mpl") ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task4">click for our solution</a> <h3>CCNOT</h3> Now we will discuss the CNOT gate controlled by two qubits (also called the Toffoli gate). The idea behind this gate is simple - the NOT operator is applied to the target qubit when both control qubits are in state $\ket{1}$. Below you can see its matrix representation: $\mymatrix{cccccccc}{1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0}$. In summary: <ul> <li>$ CCNOT \ket{000} = \ket{000} $, </li> <li>$ CCNOT \ket{001} = \ket{001} $, </li> <li>$ CCNOT \ket{010} = \ket{010} $, </li> <li>$ CCNOT \ket{011} = \ket{011} $, </li> <li>$ CCNOT \ket{100} = \ket{100} $, </li> <li>$ CCNOT \ket{101} = \ket{101} $, </li> <li>$ CCNOT \ket{110} = \ket{111} $, </li> <li>$ CCNOT \ket{111} = \ket{110} $. </li> </ul> <h3> ccx-gate </h3> In Qiskit, the CCNOT operator is represented by the ccx-gate. It takes three arguments: two controller-qubits and a target-qubit. circuit.ccx(control-qubit1,control-qubit2,target-qubit) <i> the <b>x-gate</b> (NOT operator) is applied to <u>the target qubit</u>, <b>CONTROLLED</b> by <u>the controller qubits</u>.</i> Now, let's apply CCNOT iteratively to see its effect.
(Note that we follow the qiskit order.) ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['000','001','010','011','100','101','110','111'] for input in all_inputs: qreg3 = QuantumRegister(3) # quantum register with 3 qubits creg3 = ClassicalRegister(3) # classical register with 3 bits mycircuit3 = QuantumCircuit(qreg3,creg3) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit3.x(qreg3[2]) # set the state of the first qubit to |1> if input[1]=='1': mycircuit3.x(qreg3[1]) # set the state of the second qubit to |1> if input[2]=='1': mycircuit3.x(qreg3[0]) # set the state of the third qubit to |1> # apply ccx(first-qubit,second-qubit,third-qubit) mycircuit3.ccx(qreg3[2],qreg3[1],qreg3[0]) # measure the qubits mycircuit3.measure(qreg3,creg3) # execute the circuit 100 times in the local simulator job = execute(mycircuit3,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit3) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <hr> Recall Task 4. Similarly, we can create an operator which applies the NOT operator to a target qubit, controlled by two qubits which are in states different from $\ket{1}$. For example, the following implementation allows us to apply the NOT operator to the target qubit if both control qubits are in state $\ket{0}$. The matrix form of the operator is given as follows: $\mymatrix{cccccccc}{0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1}$.
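The same sandwich trick extends to two controls, and it can again be checked algebraically before we build the circuit: applying X to both control qubits before and after a CCNOT yields the matrix above. A short numpy sketch (with illustrative names, separate from the circuit code):

```python
import numpy as np

# CCNOT (Toffoli) in the |q2 q1 q0> ordering: swap the last two basis states
CCNOT = np.eye(8, dtype=int)
CCNOT[[6, 7]] = CCNOT[[7, 6]]

X = np.array([[0, 1],
              [1, 0]])
I = np.eye(2, dtype=int)

# X on both control qubits q2 and q1 acts as X tensor X tensor I
flip = np.kron(np.kron(X, X), I)

cc0x_matrix = flip @ CCNOT @ flip
print(cc0x_matrix)
```

The result is the identity with the first two basis states swapped: the target is flipped exactly when both controls are in state $\ket{0}$, matching the matrix above.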
``` def cc0x(mycircuit,qreg): mycircuit.x(qreg[2]) mycircuit.x(qreg[1]) mycircuit.ccx(qreg[2],qreg[1],qreg[0]) # Returning control qubits to the initial state mycircuit.x(qreg[1]) mycircuit.x(qreg[2]) from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer qreg4 = QuantumRegister(3) creg4 = ClassicalRegister(3) mycircuit4 = QuantumCircuit(qreg4,creg4) cc0x(mycircuit4,qreg4) job = execute(mycircuit4,Aer.get_backend('unitary_simulator')) u=job.result().get_unitary(mycircuit4,decimals=3) for i in range(len(u)): s="" for j in range(len(u)): val = str(u[i][j].real) while(len(val)<5): val = " "+val s = s + val print(s) mycircuit4.draw(output="mpl") ``` <h3>Task 5</h3> You have a circuit with three qubits. Apply the NOT operator to qreg[1] if qreg[0] is in state 0 and qreg[2] is in state 1. Check its effect on different inputs. ``` from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['000','001','010','011','100','101','110','111'] for input in all_inputs: qreg5 = QuantumRegister(3) # quantum register with 3 qubits creg5 = ClassicalRegister(3) # classical register with 3 bits mycircuit5 = QuantumCircuit(qreg5,creg5) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit5.x(qreg5[2]) # set the state of the first qubit to |1> if input[1]=='1': mycircuit5.x(qreg5[1]) # set the state of the second qubit to |1> if input[2]=='1': mycircuit5.x(qreg5[0]) # set the state of the third qubit to |1> # # Your code here mycircuit5.x(qreg5[0]) mycircuit5.ccx(qreg5[2],qreg5[0],qreg5[1]) #Set back to initial value mycircuit5.x(qreg5[0]) # # measure the qubits mycircuit5.measure(qreg5,creg5) # execute the circuit 100 times in the local simulator job = execute(mycircuit5,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit5) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is
observed",counts[outcome],"times") ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task5">click for our solution</a> <h3>More controls</h3> Suppose that you are given the ccx operator, which applies a NOT operator controlled by two qubits. You can use additional qubits to implement a NOT operator controlled by more than two qubits. The following code implements a NOT operator controlled by the three qubits qreg[1], qreg[2] and qreg[3]; qreg[4] is used as the additional qubit and qreg[0] is the target. We apply it iteratively. Note that the first qubit in the output is due to the additional qubit. ``` def cccx(mycircuit,qreg): #qreg[4] is set to 1 if qreg[1] and qreg[2] are 1 mycircuit.ccx(qreg[1],qreg[2],qreg[4]) #NOT operator is applied to qreg[0] if all three qubits are 1 mycircuit.ccx(qreg[3],qreg[4],qreg[0]) #We set qreg[4] back to its initial value mycircuit.ccx(qreg[1],qreg[2],qreg[4]) # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['0000','0001','0010','0011','0100','0101','0110','0111','1000','1001','1010','1011','1100', '1101','1110','1111'] for input in all_inputs: qreg5 = QuantumRegister(5) # quantum register with 5 qubits creg5 = ClassicalRegister(5) # classical register with 5 bits mycircuit5 = QuantumCircuit(qreg5,creg5) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit5.x(qreg5[3]) # set the state of the qubit to |1> if input[1]=='1': mycircuit5.x(qreg5[2]) # set the state of the qubit to |1> if input[2]=='1': mycircuit5.x(qreg5[1]) # set the state of the qubit to |1> if input[3]=='1': mycircuit5.x(qreg5[0]) # set the state of the qubit to |1> cccx(mycircuit5,qreg5) # measure the qubits mycircuit5.measure(qreg5,creg5) # execute the circuit 100 times in the local simulator job = execute(mycircuit5,Aer.get_backend('qasm_simulator'),shots=100) counts =
job.result().get_counts(mycircuit5) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <h3>Task 6</h3> Implement the NOT operator controlled by 4 qubits, where qreg[0] is the target, and apply it iteratively to all possible states. Note that you will need additional qubits. ``` def ccccx(mycircuit,qreg): # #Your code here mycircuit.ccx(qreg[4],qreg[3],qreg[5]) mycircuit.ccx(qreg[2],qreg[1],qreg[6]) mycircuit.ccx(qreg[5],qreg[6],qreg[0]) # Returning additional qubits to the initial state mycircuit.ccx(qreg[2],qreg[1],qreg[6]) mycircuit.ccx(qreg[4],qreg[3],qreg[5]) # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['00000','00001','00010','00011','00100','00101','00110','00111','01000', '01001','01010','01011','01100','01101','01110','01111','10000','10001', '10010','10011','10100','10101','10110','10111','11000','11001','11010', '11011','11100','11101','11110','11111'] for input in all_inputs: qreg6 = QuantumRegister(7) # quantum register with 7 qubits creg6 = ClassicalRegister(7) # classical register with 7 bits mycircuit6 = QuantumCircuit(qreg6,creg6) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit6.x(qreg6[4]) # set the state of the first qubit to |1> if input[1]=='1': mycircuit6.x(qreg6[3]) # set the state of the second qubit to |1> if input[2]=='1': mycircuit6.x(qreg6[2]) # set the state of the third qubit to |1> if input[3]=='1': mycircuit6.x(qreg6[1]) # set the state of the fourth qubit to |1> if input[4]=='1': mycircuit6.x(qreg6[0]) # set the state of the fifth qubit to |1> ccccx(mycircuit6,qreg6) mycircuit6.measure(qreg6,creg6) job = execute(mycircuit6,Aer.get_backend('qasm_simulator'),shots=10000) counts = job.result().get_counts(mycircuit6) for outcome in counts: # print the reverse of the outcomes
print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task6">click for our solution</a> <h3>Task 7</h3> Implement the following control: the NOT operator is applied to the target qubit qreg[0] if the 5 control qubits qreg[5] to qreg[1] are initially in the state $\ket{10101}$. Check your operator by trying different initial states. You may define a function or write your code directly. ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer #Try different initial states all_inputs=['101010','101011','100000','111111'] for input in all_inputs: qreg7 = QuantumRegister(9) # quantum register with 9 qubits creg7 = ClassicalRegister(9) # classical register with 9 bits mycircuit7 = QuantumCircuit(qreg7,creg7) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit7.x(qreg7[5]) # set the state of the first qubit to |1> if input[1]=='1': mycircuit7.x(qreg7[4]) # set the state of the second qubit to |1> if input[2]=='1': mycircuit7.x(qreg7[3]) # set the state of the third qubit to |1> if input[3]=='1': mycircuit7.x(qreg7[2]) # set the state of the fourth qubit to |1> if input[4]=='1': mycircuit7.x(qreg7[1]) # set the state of the fifth qubit to |1> if input[5]=='1': mycircuit7.x(qreg7[0]) # set the state of the sixth qubit to |1> # # Your code here # mycircuit7.measure(qreg7,creg7) job = execute(mycircuit7,Aer.get_backend('qasm_simulator'),shots=10000) counts = job.result().get_counts(mycircuit7) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task7">click for our solution</a> <h3>Task 8 (Optional)</h3> Implement the parametrized controlled NOT operator with 4 control qubits, where the parameter is the state of
control qubits for which the NOT operator is applied to the target qubit. As a result, you need to define the following function: <i>control(circuit,quantum_reg,state)</i>, where: <ul> <li><i>circuit</i> is the quantum circuit;</li> <li><i>quantum_reg</i> is the quantum register;</li> <li><i>state</i> is the state of the control qubits, between 0 and 15, where 0 corresponds to 0000 and 15 corresponds to 1111 (like binary numbers :) ).</li> </ul> ``` #state - the state of control qubits, between 0 and 15. def control(circuit,quantum_reg,state): # # your code is here # ``` You can try different inputs to see that your function is implementing the mentioned control operation. ``` #Try different initial states all_inputs=['01010','01011','10000','11111'] for input in all_inputs: qreg8 = QuantumRegister(7) # quantum register with 7 qubits creg8 = ClassicalRegister(7) # classical register with 7 bits mycircuit8 = QuantumCircuit(qreg8,creg8) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit8.x(qreg8[4]) # set the state of the first qubit to |1> if input[1]=='1': mycircuit8.x(qreg8[3]) # set the state of the second qubit to |1> if input[2]=='1': mycircuit8.x(qreg8[2]) # set the state of the third qubit to |1> if input[3]=='1': mycircuit8.x(qreg8[1]) # set the state of the fourth qubit to |1> if input[4]=='1': mycircuit8.x(qreg8[0]) # set the state of the fifth qubit to |1> control(mycircuit8,qreg8,5) mycircuit8.measure(qreg8,creg8) job = execute(mycircuit8,Aer.get_backend('qasm_simulator'),shots=10000) counts = job.result().get_counts(mycircuit8) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ``` <a href="B39_Controlled_Operations_Solutions.ipynb#task8">click for our solution</a> <h3> Multi-controlled Not Gate </h3> In Qiskit there is a multi-controlled NOT gate, known as the multi-controlled Toffoli gate.
It is represented by mct. circuit.mct(control_list,target_qubit,ancilla_list) <i> <b>x-gate</b> (NOT operator) is applied to <u>the target qubit</u> <b>CONTROLLED</b> by <u>the list of control qubits</u> using the <u>ancilla list</u> as <u>the additional qubits</u>.</i> If there are $n$ control qubits, how many additional qubits do you need? Let's apply a NOT operator controlled by the four qubits qreg[1], qreg[2], qreg[3] and qreg[4]. Let qreg[5] and qreg[6] be the additional qubits and let qreg[0] be the target. Let's check the inputs 11111 and 11110. ``` # import all necessary objects and methods for quantum circuits from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute, Aer all_inputs=['11110','11111'] for input in all_inputs: qreg = QuantumRegister(7) # quantum register with 7 qubits creg = ClassicalRegister(7) # classical register with 7 bits mycircuit = QuantumCircuit(qreg,creg) # quantum circuit with quantum and classical registers #initialize the inputs if input[0]=='1': mycircuit.x(qreg[4]) # set the state of the qubit to |1> if input[1]=='1': mycircuit.x(qreg[3]) # set the state of the qubit to |1> if input[2]=='1': mycircuit.x(qreg[2]) # set the state of the qubit to |1> if input[3]=='1': mycircuit.x(qreg[1]) # set the state of the qubit to |1> if input[4]=='1': mycircuit.x(qreg[0]) # set the state of the qubit to |1> control_list=[] for i in range(1,5): control_list.append(qreg[i]) mycircuit.mct(control_list,qreg[0],[qreg[5],qreg[6]]) # measure the qubits mycircuit.measure(qreg,creg) # execute the circuit 100 times in the local simulator job = execute(mycircuit,Aer.get_backend('qasm_simulator'),shots=100) counts = job.result().get_counts(mycircuit) for outcome in counts: # print the reverse of the outcomes print("our input is",input,": ",outcome,"is observed",counts[outcome],"times") ```
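As a plain-Python sanity check on the constructions above (a classical truth-table sketch only, not Qiskit code; the function names here are ours): an X controlled by $n$ qubits flips the target exactly when every control is 1, and the ccx-based decompositions in this notebook pair up controls on ancillas, so $n$ control qubits need $n-2$ additional qubits (1 for cccx with 3 controls, 2 for ccccx and mct with 4 controls).

```python
def multi_controlled_not(bits, controls, target):
    """Classically apply an X to `target` iff every control bit is 1.

    `bits` is a list of 0/1 values; the indices mimic the qubit
    indices used in the circuits above.
    """
    if all(bits[c] == 1 for c in controls):
        bits[target] ^= 1  # flip the target, as the X gate would
    return bits

def ancillas_needed(n_controls):
    """Ancilla count for the ccx-based decomposition used above."""
    return max(n_controls - 2, 0)

# 4 controls, as in the mct example: the target flips only when all controls are 1
print(multi_controlled_not([0, 1, 1, 1, 1], controls=[1, 2, 3, 4], target=0))  # -> [1, 1, 1, 1, 1]
print(multi_controlled_not([0, 0, 1, 1, 1], controls=[1, 2, 3, 4], target=0))  # -> [0, 0, 1, 1, 1]
print(ancillas_needed(4))  # -> 2, matching the two ancillas qreg[5] and qreg[6]
```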
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D2_DynamicNetworks/W3D2_Tutorial1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> # Neuromatch Academy: Week 3, Day 2, Tutorial 1 # Neuronal Network Dynamics: Neural Rate Models ## Background The brain is a complex system, not because it is composed of a large number of diverse types of neurons, but mainly because of how neurons are connected to each other. The brain is a very large network of densely interconnected neurons. The activity of neurons is constantly evolving in time. For this reason, neurons can be modeled as dynamical systems. The dynamical system approach is only one of the many modeling approaches that computational neuroscientists have developed (other points of views include information processing, network science, and statistical models). How the dynamics of neuronal networks affect the representation and processing of information in the brain is an open question. However, signatures of altered brain dynamics present in many brain diseases (e.g., in epilepsy or Parkinson's disease) tell us that it is crucial to study neuronal dynamics if we want to understand the brain. In this tutorial, we will simulate and study one of the simplest models of biological neuronal networks. Instead of modeling and simulating individual excitatory neurons (e.g., LIF models that you implemented yesterday), we will treat them as a single homogeneous population and approximate their dynamics using a single one-dimensional equation describing the evolution of their average spiking rate in time. ## Objectives In this tutorial we will learn how to build a firing rate model of a single population of excitatory neurons. Steps: - Write the equation for the firing rate dynamics of a 1D excitatory population. 
- Visualize the response of the population as a function of parameters such as threshold level and gain, using the frequency-current (F-I) curve. - Numerically simulate the dynamics of the excitatory population and find the fixed points of the system. - Investigate the stability of the fixed points by linearizing the dynamics around them. # Setup ``` # Imports import matplotlib.pyplot as plt # import matplotlib import numpy as np # import numpy import scipy.optimize as opt # import root-finding algorithm import ipywidgets as widgets # interactive display #@title Figure Settings %matplotlib inline fig_w, fig_h = 6, 4 my_fontsize = 16 my_params = {'axes.labelsize': my_fontsize, 'axes.titlesize': my_fontsize, 'figure.figsize': [fig_w, fig_h], 'font.size': my_fontsize, 'legend.fontsize': my_fontsize-4, 'lines.markersize': 8., 'lines.linewidth': 2., 'xtick.labelsize': my_fontsize-2, 'ytick.labelsize': my_fontsize-2} plt.rcParams.update(my_params) # @title Helper functions def plot_fI(x, f): plt.figure(figsize=(6,4)) # plot the figure plt.plot(x, f, 'k') plt.xlabel('x (a.u.)', fontsize=14.) plt.ylabel('F(x)', fontsize=14.) plt.show() #@title Helper functions def plot_dE_E(E, dEdt): plt.figure() plt.plot(E, dEdt, 'k') plt.plot(E, 0.*E, 'k--') plt.xlabel('E activity') plt.ylabel(r'$\frac{dE}{dt}$', fontsize=20) plt.ylim(-0.1, 0.1) def plot_dFdt(x,dFdt): plt.figure() plt.plot(x, dFdt, 'r') plt.xlabel('x (a.u.)', fontsize=14.) plt.ylabel('dF(x)', fontsize=14.) plt.show() ``` # Neuronal network dynamics ``` #@title Video: Dynamic networks from IPython.display import YouTubeVideo video = YouTubeVideo(id="ZSsAaeaG9ZM", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` ## Dynamics of a single excitatory population Individual neurons respond by spiking. When we average the spikes of neurons in a population, we can define the average firing activity of the population.
In this model, we are interested in how the population-averaged firing varies as a function of different network parameters. \begin{align} \tau_E \frac{dE}{dt} &= -E + F(w_{EE}E + I^{\text{ext}}_E) \quad\qquad (1) \end{align} $E(t)$ represents the average firing rate of the excitatory population at time $t$, $\tau_E$ controls the timescale of the evolution of the average firing rate, $w_{EE}$ denotes the strength (synaptic weight) of the recurrent excitatory input to the population, $I^{\text{ext}}_E$ represents the external input, and the transfer function $F(\cdot)$ (which can be related to f-I curve of individual neurons described in the next sections) represents the population activation function in response to all received inputs. To start building the model, please execute the cell below to initialize the simulation parameters. ``` #@title Default parameters for a single excitatory population model def default_parsE( **kwargs): pars = {} ### Excitatory parameters ### pars['tau_E'] = 1. # Timescale of the E population [ms] pars['a_E'] = 1.2 # Gain of the E population pars['theta_E'] = 2.8 # Threshold of the E population ### Connection strength ### pars['wEE'] = 0. # E to E, we first set it to 0 ### External input ### pars['I_ext_E'] = 0. ### simulation parameters ### pars['T'] = 20. # Total duration of simulation [ms] pars['dt'] = .1 # Simulation time step [ms] pars['E_init'] = 0.2 # Initial value of E ### External parameters if any ### for k in kwargs: pars[k] = kwargs[k] pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized time points [ms] return pars ``` You can use: - `pars = default_parsE()` to get all the parameters, and then you can execute `print(pars)` to check these parameters. 
- `pars = default_parsE(T=T_sim, dt=time_step)` to set new simulation time and time step - After `pars = default_parsE()`, use `pars['New_para'] = value` to add a new parameter with its value ## F-I curves In electrophysiology, a neuron is often characterized by its spike rate output in response to input currents. This is often called the **F-I** curve, denoting the spike frequency (**F**) in response to different injected currents (**I**). We estimated this for an LIF neuron in yesterday's tutorial. The transfer function $F(\cdot)$ in Equation (1) represents the gain of the population as a function of the total input. The gain is often modeled as a sigmoidal function, i.e., more input drive leads to a nonlinear increase in the population firing rate. The output firing rate will eventually saturate for high input values. A sigmoidal $F(\cdot)$ is parameterized by its gain $a$ and threshold $\theta$. $$ F(x;a,\theta) = \frac{1}{1+\text{e}^{-a(x-\theta)}} - \frac{1}{1+\text{e}^{a\theta}} \quad(2)$$ The argument $x$ represents the input to the population. Note that the second term is chosen so that $F(0;a,\theta)=0$. Many other transfer functions (generally monotonic) can also be used. Examples are the rectified linear function $ReLU(x)$ or the hyperbolic tangent $tanh(x)$. ### Exercise 1: Implement F-I curve Let's first investigate the activation functions before simulating the dynamics of the entire population. In this exercise, you will implement a sigmoidal **F-I** curve or transfer function $F(x)$, with gain $a$ and threshold level $\theta$ as parameters. ``` # Exercise 1 def F(x,a,theta): """ Population activation function. Args: x (float): the population input a (float): the gain of the function theta (float): the threshold of the function Returns: float: the population activation response F(x) for input x """ ################################################################################# ## TODO for students: compute f = F(x), remove the NotImplementedError once done# ################################################################################# # the exponential function: np.exp(.) # f = ... raise NotImplementedError("Student exercise: implement the f-I function") return f # Uncomment these lines when you've filled the function, then run the cell again # to plot the f-I curve. pars = default_parsE() # get default parameters # print(pars) # print out pars to get familiar with parameters x = np.arange(0,10,.1) # set the range of input # Uncomment this when you fill the exercise, and call the function # plot_fI(x, F(x,pars['a_E'],pars['theta_E'])) # to_remove solution def F(x,a,theta): """ Population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: the population activation response F(x) for input x """ # add the expression of f = F(x) f = (1+np.exp(-a*(x-theta)))**-1 - (1+np.exp(a*theta))**-1 return f pars = default_parsE() # get default parameters x = np.arange(0,10,.1) # set the range of input with plt.xkcd(): plot_fI(x, F(x,pars['a_E'],pars['theta_E'])) ``` ### Interactive Demo: Parameter exploration of F-I curve Here's an interactive demo that shows how the F-I curve changes for different values of the gain and threshold parameters. **Remember to enable the demo by running the cell.** ``` #@title F-I curve Explorer def interactive_plot_FI(a, theta): ''' Population activation function.
Expects: a : the gain of the function theta : the threshold of the function Returns: plot the F-I curve with the given parameters ''' # set the range of input x = np.arange(0,10,.1) plt.figure() plt.plot(x, F(x, a, theta), 'k') plt.xlabel('x (a.u.)', fontsize=14.) plt.ylabel('F(x)', fontsize=14.) plt.show() _ = widgets.interact(interactive_plot_FI, a = (0.3, 3., 0.3), \ theta = (2., 4., 0.2)) ``` ## Simulation scheme of E dynamics Because $F(\cdot)$ is a nonlinear function, the exact solution of Equation $1$ cannot be determined via analytical methods. Therefore, numerical methods must be used to find the solution. In practice, the derivative on the left-hand side of Equation (1) can be approximated using the Euler method on a time-grid of stepsize $\Delta t$: \begin{align} &\frac{dE}{dt} \approx \frac{E[k+1]-E[k]}{\Delta t} \end{align} where $E[k] = E(k\Delta t)$. Thus, $$\Delta E[k] = \frac{\Delta t}{\tau_E}[-E[k] + F(w_{EE}E[k] + I^{\text{ext}}_E[k];a_E,\theta_E)]$$ Hence, Equation (1) is updated at each time step by: $$E[k+1] = E[k] + \Delta E[k]$$ **_Please execute the following cell to enable the E population simulator_** ``` #@title E population simulator: `simulate_E` def simulate_E(pars): """ Simulate an excitatory population of neurons Args: pars : Parameter dictionary Returns: E : Activity of excitatory population (array) """ # Set parameters tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] wEE = pars['wEE'] I_ext_E = pars['I_ext_E'] E_init = pars['E_init'] dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size # Initialize activity E = np.zeros(Lt) E[0] = E_init I_ext_E = I_ext_E*np.ones(Lt) # Update the E activity for k in range(Lt-1): dE = dt/tau_E * (-E[k] + F(wEE*E[k]+I_ext_E[k], a_E, theta_E)) E[k+1] = E[k] + dE return E print(help(simulate_E)) ``` #### Interactive Demo: Parameter Exploration of single population dynamics Note that $w_{EE}=0$, as in the default setting, means no recurrent input to the excitatory population in Equation (1).
Hence, the dynamics is entirely determined by the external input $I_{E}^{\text{ext}}$. Try to explore how $E_{sim}(t)$ changes with different $I_{E}^{\text{ext}}$ and $\tau_E$ parameter values, and investigate the relationship between $F(I_{E}^{\text{ext}}; a_E, \theta_E)$ and the steady value of E. Note that, $E_{ana}(t)$ denotes the analytical solution. ``` #@title Mean-field model Explorer # get default parameters pars = default_parsE(T=20.) def Myplot_E_diffI_difftau(I_ext, tau_E): # set external input and time constant pars['I_ext_E'] = I_ext pars['tau_E'] = tau_E # simulation E = simulate_E(pars) # Analytical Solution E_ana = pars['E_init'] + (F(I_ext,pars['a_E'],pars['theta_E'])-pars['E_init'])*\ (1.-np.exp(-pars['range_t']/pars['tau_E'])) # plot plt.figure() plt.plot(pars['range_t'], E, 'b', label=r'$E_{\mathrm{sim}}$(t)', alpha=0.5, zorder=1) plt.plot(pars['range_t'], E_ana, 'b--', lw=5, dashes=(2,2),\ label=r'$E_{\mathrm{ana}}$(t)', zorder=2) plt.plot(pars['range_t'], F(I_ext,pars['a_E'],pars['theta_E'])\ *np.ones(pars['range_t'].size), 'k--', label=r'$F(I_E^{\mathrm{ext}})$') plt.xlabel('t (ms)', fontsize=16.) plt.ylabel('E activity', fontsize=16.) plt.legend(loc='best', fontsize=14.) plt.show() _ = widgets.interact(Myplot_E_diffI_difftau, I_ext = (0.0, 10., 1.),\ tau_E = (1., 5., 0.2)) ``` ### Think! Above, we have numerically solved a system driven by a positive input and that, if $w_{EE} \neq 0$, receives an excitatory recurrent input (**try changing the value of $w_{EE}$ to a positive number**). Yet, $E(t)$ either decays to zero or reaches a fixed non-zero value. - Why doesn't the solution of the system "explode" in a finite time? In other words, what guarantees that E(t) stays finite? - Which parameter would you change in order to increase the maximum value of the response? 
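The first Think! question can be probed numerically. Below is a standalone sketch (not part of the tutorial's exercise code) using Equation (2) with the default gain and threshold: the sigmoid transfer function saturates strictly below 1, so the drive $-E + F(\cdot)$ becomes negative as soon as $E$ exceeds 1, which keeps the activity finite.

```python
import numpy as np

def F(x, a, theta):
    """Sigmoid transfer function from Equation (2)."""
    return 1/(1 + np.exp(-a*(x - theta))) - 1/(1 + np.exp(a*theta))

a_E, theta_E = 1.2, 2.8           # default gain and threshold
x = np.linspace(-50, 50, 10001)
f = F(x, a_E, theta_E)

print(F(0, a_E, theta_E))         # the second term normalizes F so that F(0) = 0.0
print(f.max() < 1)                # F saturates strictly below 1 -> True

# Hence dE/dt = (-E + F(...)) / tau_E is negative whenever E > 1,
# e.g. with the recurrent drive used in the later exercises (wEE=5, I_ext=0.5):
print(-1.5 + F(5.0*1.5 + 0.5, a_E, theta_E) < 0)  # -> True
```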
## Fixed points of the E system ``` #@title Video: Fixed point from IPython.display import YouTubeVideo video = YouTubeVideo(id="B31fX6V0PZ4", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` As you varied the two parameters in the last Interactive Demo, you noticed that, while at first the system output quickly changes, with time it reaches its maximum/minimum value and does not change anymore. The value eventually reached by the system is called the **steady state** of the system, or the **fixed point**. Essentially, in the steady states the derivative of the activity ($E$) with respect to time is zero, i.e. $\frac{dE}{dt}=0$. We can find the steady state of Equation $1$ by setting $\displaystyle{\frac{dE}{dt}=0}$ and solving for $E$: $$-E_{\text{steady}} + F(w_{EE}E_{\text{steady}} + I^{\text{ext}}_E;a_E,\theta_E) = 0, \qquad (3)$$ When it exists, the solution of Equation $3$ defines a **fixed point** of the dynamics which satisfies $\displaystyle{\frac{dE}{dt}=0}$ (and determines the steady state of the system). Notice that the left-hand side of the last equation itself depends on $E_{\text{steady}}$. If $F(x)$ is nonlinear, it is not always possible to find an analytical solution; the fixed point can instead be found via numerical simulations, as we will do later. From the Interactive Demo one could also notice that the value of $\tau_E$ influences how quickly the activity will converge to the steady state from its initial value. In the specific case of $w_{EE}=0$, we can compute the analytical solution of Equation $1$ (i.e., the thick blue dashed line) and deduce the role of $\tau_E$ in determining the convergence to the fixed point: $$\displaystyle{E(t) = \big[F(I^{\text{ext}}_E;a_E,\theta_E) -E(t=0)\big] (1-\text{e}^{-\frac{t}{\tau_E}})} + E(t=0)$$ We can now numerically calculate the fixed point with the `scipy.optimize.root` function.
<font size=3><font color='gray'>_(note that at the very beginning, we `import scipy.optimize as opt` )_</font></font>. Please execute the cell below to define the functions `my_fpE`, `check_fpE`, and `plot_fpE`. ``` #@title Functions for calculating the fixed point def my_fpE(pars, E_init): # get the parameters a_E, theta_E = pars['a_E'], pars['theta_E'] wEE = pars['wEE'] I_ext_E = pars['I_ext_E'] # define the right-hand side of the E dynamics def my_WCr(x): E = x[0] dEdt=(-E + F(wEE*E+I_ext_E,a_E,theta_E)) y = np.array(dEdt) return y x0 = np.array(E_init) x_fp = opt.root(my_WCr, x0).x return x_fp def check_fpE(pars, x_fp): a_E, theta_E = pars['a_E'], pars['theta_E'] wEE = pars['wEE'] I_ext_E = pars['I_ext_E'] # calculate Equation(3) y = x_fp- F(wEE*x_fp+I_ext_E, a_E, theta_E) return np.abs(y)<1e-4 def plot_fpE(pars, x_fp, mycolor): wEE = pars['wEE'] I_ext_E = pars['I_ext_E'] plt.plot(wEE*x_fp+I_ext_E, x_fp, 'o', color=mycolor) ``` #### Exercise 2: Visualization of the fixed point When no analytical solution of Equation $3$ can be found, it is often useful to plot $\displaystyle{\frac{dE}{dt}}$ as a function of $E$. The values of $E$ for which the plotted function crosses zero on the y axis correspond to fixed points. Here, let us, for example, set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$. Define $\displaystyle{\frac{dE}{dt}}$ using Equation $1$, plot the result, and check for the presence of fixed points. We will now try to find the fixed points using the previously defined function `my_fpE(pars, E_init)` with different initial values ($E_{\text{init}}$). Use the previously defined function `check_fpE(pars, x_fp)` to verify that the values of $E$ for which $\displaystyle{\frac{dE}{dt}} = 0$ are the true fixed points.
``` # Exercise 2 pars = default_parsE() # get default parameters # set your external input and wEE pars['I_ext_E'] = 0.5 pars['wEE'] = 5.0 E_grid = np.linspace(0, 1., 1000)# give E_grid #figure, line (E, dEdt) ############################### ## TODO for students: # ## Calculate dEdt = -E + F(.) # ## Then plot the lines # ############################### # Calculate dEdt # dEdt = ... # Uncomment this to plot the dEdt across E # plot_dE_E(E_grid, dEdt) # Add fixed point ##################################################### ## TODO for students: # # Calculate the fixed point with your initial value # # verify your fixed point and plot the correct ones # ##################################################### # Calculate the fixed point with your initial value x_fp_1 = my_fpE(pars, 1) #check if x_fp_1 is a true fixed point with the given function check_fpE(pars, x_fp_1) #vary different initial values to find the correct fixed points (Should be 3) # Use blue, red and yellow colors, respectively ('b', 'r', 'y' codenames) # if check_fpE(pars, x_fp_1): # plt.plot(x_fp_1, 0, 'bo', ms=8) # Replicate the code above (lines 35-36) for all fixed points. # to_remove solution pars = default_parsE() # get default parameters #set your external input and wEE pars['I_ext_E'] = 0.5 pars['wEE'] = 5.0 # give E_grid E_grid = np.linspace(0, 1., 1000) # Calculate dEdt dEdt = -E_grid + F(pars['wEE']*E_grid+pars['I_ext_E'], pars['a_E'], pars['theta_E']) with plt.xkcd(): plot_dE_E(E_grid, dEdt) #Calculate the fixed point with your initial value x_fp_1 = my_fpE(pars, 0.) if check_fpE(pars, x_fp_1): plt.plot(x_fp_1, 0, 'bo', ms=8) x_fp_2 = my_fpE(pars, 0.4) if check_fpE(pars, x_fp_2): plt.plot(x_fp_2, 0, 'ro', ms=8) x_fp_3 = my_fpE(pars, 0.9) if check_fpE(pars, x_fp_3): plt.plot(x_fp_3, 0, 'yo', ms=8) plt.show() ``` #### Interactive Demo: fixed points as a function of recurrent and external inputs.
You can now explore how the previous plot changes when the recurrent coupling $w_{\text{EE}}$ and the external input $I_E^{\text{ext}}$ take different values. ``` #@title Fixed point Explorer def plot_intersection_E(wEE, I_ext_E): #set your parameters pars['wEE'] = wEE pars['I_ext_E'] = I_ext_E #note that wEE !=0 if wEE>0: # find fixed point x_fp_1 = my_fpE(pars, 0.) x_fp_2 = my_fpE(pars, 0.4) x_fp_3 = my_fpE(pars, 0.9) plt.figure() E_grid = np.linspace(0, 1., 1000) dEdt = -E_grid + F(wEE*E_grid+I_ext_E, pars['a_E'], pars['theta_E']) plt.plot(E_grid, dEdt, 'k') plt.plot(E_grid, 0.*E_grid, 'k--') if check_fpE(pars, x_fp_1): plt.plot(x_fp_1, 0, 'bo', ms=8) if check_fpE(pars, x_fp_2): plt.plot(x_fp_2, 0, 'bo', ms=8) if check_fpE(pars, x_fp_3): plt.plot(x_fp_3, 0, 'bo', ms=8) plt.xlabel('E activity', fontsize=14.) plt.ylabel(r'$\frac{dE}{dt}$', fontsize=18.) plt.show() _ = widgets.interact(plot_intersection_E, wEE = (1., 7., 0.2), \ I_ext_E = (0., 3., 0.1)) ``` ## Summary In this tutorial, we have investigated the dynamics of a rate-based single excitatory population of neurons. We learned about: - The effect of the input parameters and the time constant of the network on the dynamics of the population. - How to find the fixed point(s) of the system. Next, we cover two bonus, but important, concepts in dynamical system analysis and simulation. If you have time left, watch the next video and proceed to solve the exercises. You will learn: - How to determine the stability of a fixed point by linearizing the system. - How to add realistic inputs to our model.
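The fixed-point hunt of Exercise 2 can also be reproduced in a few self-contained lines. Below is a standalone sketch with the same parameter values; a hand-rolled bisection stands in for the tutorial's `opt.root`, the bracket endpoints are read off the $dE/dt$ plot, and it redefines `F` with fixed parameters, so run it in a fresh session rather than inside the notebook.

```python
from math import exp

a_E, theta_E, wEE, I_ext = 1.2, 2.8, 5.0, 0.5   # values from Exercise 2

def F(x):
    """Sigmoid transfer function of Equation (2), parameters baked in."""
    return 1/(1 + exp(-a_E*(x - theta_E))) - 1/(1 + exp(a_E*theta_E))

def dEdt(E):
    """Right-hand side of Equation (1), with tau_E = 1."""
    return -E + F(wEE*E + I_ext)

def bisect(g, lo, hi, tol=1e-10):
    """Find a root of g in [lo, hi], assuming g(lo) and g(hi) differ in sign."""
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

# dEdt changes sign once in each bracket, giving the three fixed points
fixed_points = [bisect(dEdt, lo, hi) for lo, hi in [(0, 0.2), (0.2, 0.6), (0.6, 1.0)]]
print([round(fp, 3) for fp in fixed_points])
```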
## Bonus 1: Stability of a fixed point ``` #@title Video: Stability of fixed points from IPython.display import YouTubeVideo video = YouTubeVideo(id="nvxxf59w2EA", width=854, height=480, fs=1) print("Video available at https://youtube.com/watch?v=" + video.id) video ``` #### Initial values and trajectories Here, let us first set $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, and investigate the dynamics of $E(t)$ starting with different initial values $E(0) \equiv E_{\text{init}}$. We will plot the trajectories of $E(t)$ with $E_{\text{init}} = 0.0, 0.1, 0.2,..., 0.9$. ``` #@title Initial values pars = default_parsE() pars['wEE'] = 5.0 pars['I_ext_E'] = 0.5 plt.figure(figsize=(10,6)) for ie in range(10): pars['E_init'] = 0.1*ie # set the initial value E = simulate_E(pars) # run the simulation # plot the activity with given initial plt.plot(pars['range_t'], E, 'b', alpha=0.1 + 0.1*ie, label= r'E$_{\mathrm{init}}$=%.1f' % (0.1*ie)) plt.xlabel('t (ms)') plt.title('Two steady states?') plt.ylabel('E(t)') plt.legend(loc=[0.72, 0.13], fontsize=14) plt.show() ``` #### Interactive Demo: dynamics as a function of the initial value. Let's now set $E_{init}$ to a value of your choice in this demo. How does the solution change? What do you observe? ``` #@title Initial value Explorer pars = default_parsE() pars['wEE'] = 5.0 pars['I_ext_E'] = 0.5 def plot_E_diffEinit(E_init): pars['E_init'] = E_init E = simulate_E(pars) plt.figure() plt.plot(pars['range_t'], E, 'b', label='E(t)') plt.xlabel('t (ms)', fontsize=16.) plt.ylabel('E activity', fontsize=16.) plt.show() _ = widgets.interact(plot_E_diffEinit, E_init = (0., 1., 0.02)) ``` ### Stability analysis via linearization of the dynamics Just like Equation $1$ in the case ($w_{EE}=0$) discussed above, a generic linear system $$\frac{dx}{dt} = \lambda (x - b),$$ has a fixed point for $x=b$. 
The analytical solution of such a system can be found to be: $$x(t) = b + \big{(} x(0) - b \big{)} \text{e}^{\lambda t}.$$ Now consider a small perturbation of the activity around the fixed point: $x(0) = b+ \epsilon$, where $|\epsilon| \ll 1$. Will the perturbation $\epsilon(t)$ grow with time or will it decay to the fixed point? The evolution of the perturbation with time can be written, using the analytical solution for $x(t)$, as: $$\epsilon (t) = x(t) - b = \epsilon \text{e}^{\lambda t}$$ - if $\lambda < 0$, $\epsilon(t)$ decays to zero, $x(t)$ will still converge to $b$ and the fixed point is "**stable**". - if $\lambda > 0$, $\epsilon(t)$ grows with time, $x(t)$ will leave the fixed point $b$ exponentially and the fixed point is, therefore, "**unstable**" . ### Compute the stability of Equation (1) Similar to what we did in the linear system above, in order to determine the stability of a fixed point $E_{\rm fp}$ of the excitatory population dynamics, we perturb Equation $1$ around $E_{\rm fp}$ by $\epsilon$, i.e. $E = E_{\rm fp} + \epsilon$. We can plug in Equation $1$ and obtain the equation determining the time evolution of the perturbation $\epsilon(t)$: \begin{align} \tau_E \frac{d\epsilon}{dt} \approx -\epsilon + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E) \epsilon \end{align} where $F'(\cdot)$ is the derivative of the transfer function $F(\cdot)$. We can rewrite the above equation as: \begin{align} \frac{d\epsilon}{dt} \approx \frac{\epsilon}{\tau_E }[-1 + w_{EE} F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)] \end{align} That is, as in the linear system above, the value of $\lambda = [-1+ w_{EE}F'(w_{EE}E_{\text{fp}} + I^{\text{ext}}_E;a_E,\theta_E)]/\tau_E$ determines whether the perturbation will grow or decay to zero, i.e., $\lambda$ defines the stability of the fixed point. This value is called the **eigenvalue** of the dynamical system. 
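The role of the sign of $\lambda$ can be checked directly on the generic linear system above with a standalone Euler sketch (not part of the exercises; the function name is ours):

```python
def perturbation_after(lam, eps0=0.01, dt=0.01, T=5.0):
    """Euler-integrate d(eps)/dt = lam * eps and return eps(T)."""
    eps = eps0
    for _ in range(round(T/dt)):
        eps += dt*lam*eps
    return eps

# lam < 0: the perturbation decays back toward the fixed point -> stable
print(abs(perturbation_after(-1.0)) < 0.01)   # -> True
# lam > 0: the perturbation grows away from the fixed point -> unstable
print(abs(perturbation_after(+1.0)) > 0.01)   # -> True
```

For small `dt` the numerical value approaches the analytical solution $\epsilon(T) = \epsilon(0)\,\text{e}^{\lambda T}$.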
### Exercise 4: Compute $dF$ and Eigenvalue The derivative of the sigmoid transfer function is: \begin{align} \frac{dF}{dx} & = \frac{d}{dx} (1+\exp\{-a(x-\theta)\})^{-1} \\ & = a\exp\{-a(x-\theta)\} (1+\exp\{-a(x-\theta)\})^{-2}. \end{align} Let's now find the expression for the derivative $\displaystyle{\frac{dF}{dx}}$ in the following cell and plot it. ``` # Exercise 4 def dF(x,a,theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : the derivative of the population activation function at input x """ ##################################################################### ## TODO for students: compute dFdx, then remove NotImplementedError # ##################################################################### # dFdx = ... raise NotImplementedError("Student exercise: compute the derivative of F(x)") return dFdx pars = default_parsE() # get default parameters x = np.arange(0,10,.1) # set the range of input # Uncomment below lines after completing the dF function # plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E'])) # to_remove solution def dF(x,a,theta): """ Derivative of the population activation function. Args: x : the population input a : the gain of the function theta : the threshold of the function Returns: dFdx : the derivative of the population activation function at input x """ dFdx = a*np.exp(-a*(x-theta))*(1+np.exp(-a*(x-theta)))**-2 return dFdx # get default parameters pars = default_parsE() # set the range of input x = np.arange(0,10,.1) # plot figure with plt.xkcd(): plot_dFdt(x,dF(x,pars['a_E'],pars['theta_E'])) ``` ### Exercise 5: Compute eigenvalues As discussed above, for the case with $w_{EE}=5.0$ and $I^{\text{ext}}_E=0.5$, the system displays **3** fixed points. However, when we simulated the dynamics and varied the initial conditions $E_{\rm init}$, we could only obtain **two** steady states.
In this exercise, we will now check the stability of each of the $3$ fixed points by calculating the corresponding eigenvalues with the function `eig_E` defined below. Check the sign of each eigenvalue (i.e., the stability of each fixed point). How many of the fixed points are stable? ``` # Exercise 5 pars = default_parsE() pars['wEE'] = 5.0 pars['I_ext_E'] = 0.5 def eig_E(pars, fp): """ Args: pars : Parameter dictionary fp : fixed point E Returns: eig : eigenvalue of the linearized system """ # get the parameters tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] wEE, I_ext_E = pars['wEE'], pars['I_ext_E'] # fixed point E = fp ####################################################################### ## TODO for students: compute eigenvalue, remove NotImplementedError # ####################################################################### # eig = ... raise NotImplementedError("Student exercise: compute the eigenvalue") return eig # Uncomment the lines below after completing the eig_E function. # x_fp_1 = my_fpE(pars, 0.) # eig_fp_1 = eig_E(pars, x_fp_1) # print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_fp_1)) # Continue by finding the eigenvalues for all fixed points of Exercise 2 # to_remove solution pars = default_parsE() pars['wEE'] = 5.0 pars['I_ext_E'] = 0.5 def eig_E(pars, fp): """ Args: pars : Parameter dictionary fp : fixed point E Returns: eig : eigenvalue of the linearized system """ # get the parameters tau_E, a_E, theta_E = pars['tau_E'], pars['a_E'], pars['theta_E'] wEE, I_ext_E = pars['wEE'], pars['I_ext_E'] # fixed point E = fp eig = (-1. + wEE*dF(wEE*E + I_ext_E, a_E, theta_E)) / tau_E return eig # Uncomment the lines below after completing the eig_E function x_fp_1 = my_fpE(pars, 0.)
eig_E1 = eig_E(pars, x_fp_1) print('Fixed point1=%.3f, Eigenvalue=%.3f' % (x_fp_1, eig_E1)) # Continue by finding the eigenvalues for all fixed points of Exercise 2 x_fp_2 = my_fpE(pars, 0.4) eig_E2 = eig_E(pars, x_fp_2) print('Fixed point2=%.3f, Eigenvalue=%.3f' % (x_fp_2, eig_E2)) x_fp_3 = my_fpE(pars, 0.9) eig_E3 = eig_E(pars, x_fp_3) print('Fixed point3=%.3f, Eigenvalue=%.3f' % (x_fp_3, eig_E3)) ``` ### Think! Throughout the tutorial, we have assumed $w_{\rm EE}> 0 $, i.e., we considered a single population of **excitatory** neurons. What do you think will be the behavior of a population of inhibitory neurons, i.e., where $w_{\rm EE}> 0$ is replaced by $w_{\rm II}< 0$? ## Bonus 2: Noisy input drives transition between two stable states ### Ornstein-Uhlenbeck (OU) process As discussed in several previous tutorials, the OU process is usually used to generate a noisy input into the neuron. The OU input $\eta(t)$ follows: $$\tau_\eta \frac{d}{dt}\eta(t) = -\eta (t) + \sigma_\eta\sqrt{2\tau_\eta}\xi(t)$$ Execute the following function `my_OU(pars, sig, myseed=False)` to generate an OU process. ``` #@title OU process `my_OU(pars, sig, myseed=False)` def my_OU(pars, sig, myseed=False): """ A function that generates an Ornstein-Uhlenbeck process Args: pars : parameter dictionary sig : noise amplitude myseed : random seed. int or boolean Returns: I : Ornstein-Uhlenbeck input current """ # Retrieve simulation parameters dt, range_t = pars['dt'], pars['range_t'] Lt = range_t.size tau_ou = pars['tau_ou'] # [ms] # set random seed if myseed: np.random.seed(seed=myseed) else: np.random.seed() # Initialize noise = np.random.randn(Lt) I = np.zeros(Lt) I[0] = noise[0] * sig # generate OU for it in range(Lt-1): I[it+1] = I[it] + dt/tau_ou*(0.-I[it]) + np.sqrt(2.*dt/tau_ou) * sig * noise[it+1] return I pars = default_parsE(T=100) pars['tau_ou'] = 1.
# [ms] sig_ou = 0.1 I_ou = my_OU(pars, sig=sig_ou, myseed=1998) plt.figure(figsize=(10, 4)) plt.plot(pars['range_t'], I_ou, 'b') plt.xlabel('Time (ms)') plt.ylabel(r'$I_{\mathrm{OU}}$'); ``` ### Bonus Example: Up-Down transition In the presence of two or more fixed points, noisy inputs can drive a transition between the fixed points! Here, we stimulate an E population for 1,000 ms by applying OU inputs. ``` #@title Simulation of an E population with OU inputs pars = default_parsE(T = 1000) pars['wEE'] = 5.0 sig_ou = 0.7 pars['tau_ou'] = 1. # [ms] pars['I_ext_E'] = 0.56 + my_OU(pars, sig=sig_ou, myseed=2020) E = simulate_E(pars) plt.figure(figsize=(10, 4)) plt.plot(pars['range_t'], E, 'r', alpha=0.8) plt.xlabel('t (ms)') plt.ylabel('E activity') plt.show() ```
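As a quick sanity check on the OU update rule used in `my_OU`: with the noise amplitude set to zero, the update reduces to pure exponential relaxation toward the zero mean. Below is a standalone plain-Python sketch with assumed `dt` and `tau` values:

```python
# With sig = 0 the update I[t+1] = I[t] + dt/tau * (0 - I[t]) just relaxes
# I toward the zero mean by a factor (1 - dt/tau) per step.
dt, tau = 0.1, 1.0   # assumed step size and time constant, for illustration
I = [1.0]            # start away from the mean
for _ in range(100):
    I.append(I[-1] + dt / tau * (0.0 - I[-1]))

# The trace decays monotonically toward zero (roughly 0.9**100 after 100 steps)
print(I[0], I[-1])
```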
``` import numpy as np import pandas as pd yelp = pd.read_csv('https://raw.githubusercontent.com/shrikumarp/shrikumarpp1/master/yelp.csv') yelp.head() yelp.info() yelp.describe() yelp['text length'] = yelp['text'].apply(len) import matplotlib.pyplot as plt import seaborn as sns sns.set_style('white') %matplotlib inline # visualising average text length for star reviews g = sns.FacetGrid(yelp,col='stars') g.map(plt.hist,'text length') sns.boxplot(x='stars',y='text length',data=yelp,palette='rainbow') sns.countplot(x='stars',data=yelp,palette='rainbow') stars = yelp.groupby('stars').mean() stars stars.corr() sns.heatmap(stars.corr(),cmap='rainbow',annot=True) ``` Machine learning models for text classification: ``` yelp_class = yelp[(yelp.stars==1) | (yelp.stars==5)] X = yelp_class['text'] y = yelp_class['stars'] from sklearn.feature_extraction.text import CountVectorizer cv = CountVectorizer() # count vectorization converts the text into integer count vectors corresponding to the occurrence of each word in the sentence. X = cv.fit_transform(X) from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.3,random_state=101) # First we use the Naive Bayes algorithm. from sklearn.naive_bayes import MultinomialNB nb = MultinomialNB() nb.fit(X_train,y_train) predictions = nb.predict(X_test) from sklearn.metrics import confusion_matrix,classification_report print(confusion_matrix(y_test,predictions)) print('\n') print(classification_report(y_test,predictions)) ```
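To make the count-vectorization step concrete, here is a minimal pure-Python sketch of what `CountVectorizer` does conceptually (a toy illustration, not the scikit-learn implementation, which also handles tokenization, lowercasing, and sparse storage):

```python
from collections import Counter

docs = ["good food good service", "bad food"]

# Build the vocabulary: one column per unique word, in sorted order
vocab = sorted({word for doc in docs for word in doc.split()})

# Each document becomes a row of word counts over the vocabulary
matrix = [[Counter(doc.split())[word] for word in vocab] for doc in docs]

print(vocab)   # ['bad', 'food', 'good', 'service']
print(matrix)  # [[0, 1, 2, 1], [1, 1, 0, 0]]
```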
Support Vector Classifier ``` from sklearn.model_selection import GridSearchCV parameters = [{'C': [1, 10, 100], 'kernel': ['linear']}, {'C': [1, 10, 100], 'kernel': ['rbf']}] from sklearn.svm import SVC svmmodel = SVC() svmmodel.fit(X_train,y_train) predsvm= svmmodel.predict(X_test) print(confusion_matrix(y_test,predsvm)) print('\n') print(classification_report(y_test,predsvm)) ``` Now we use the parameters list defined above to try to tune the parameter 'C' to see if we can squeeze more accuracy out of this classifier. ``` grid_search_svm = GridSearchCV(estimator = svmmodel, param_grid = parameters, scoring = 'accuracy', cv = 10, n_jobs = -1) # GridSearchCV(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score='raise', return_train_score='warn') # scoring and comparison within models is based on accuracy. The most accurate model will be chosen after training and cross-validation. grid_search_svm = grid_search_svm.fit(X_train, y_train) grid_search_svm.best_score_ grid_search_svm.best_params_ ``` As we can see, the best parameters found by the grid search turned out to be C=1 and kernel='linear', and this model performed with an accuracy of 91.57%. Using the Random Forest Classifier algorithm ``` from sklearn.ensemble import RandomForestClassifier rforest = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0) rforest.fit(X_train, y_train) rfpred= rforest.predict(X_test) print(confusion_matrix(y_test,rfpred)) print('\n') print(classification_report(y_test,rfpred)) ``` Using Logistic Regression ``` from sklearn.linear_model import LogisticRegression from sklearn.model_selection import GridSearchCV ``` We set the parameter grid again with a random space of values for the parameter C, and we try to get the maximum accuracy by tuning the hyperparameters.
``` c_space = np.logspace(-5, 8, 15) param_grid = {'C': c_space} logreg = LogisticRegression() logreg_cv = GridSearchCV(logreg, param_grid, cv= 10) # here we pass the logistic regression model, the array of parameter values and the number of cross-validation folds (10) to the GridSearchCV function. logreg_cv.fit(X_train,y_train) print("Tuned Logistic Regression Parameters: {}".format(logreg_cv.best_params_)) print("Best score is {}".format(logreg_cv.best_score_)) # we predict on the test set pred = logreg_cv.predict(X_test) print(classification_report(y_test,pred)) ``` As we can see, logistic regression with hyperparameter tuning performs best, with an accuracy of 94%.
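Conceptually, both grid searches above do the same thing: evaluate every parameter combination with cross-validation and keep the best one. Here is a minimal pure-Python sketch of that idea, using a made-up stand-in for the cross-validated score rather than an actual model fit:

```python
from itertools import product

param_grid = {'C': [1, 10, 100], 'kernel': ['linear', 'rbf']}

def cv_score(C, kernel):
    # Stand-in for "fit the model and return its mean cross-validated
    # accuracy"; the values below are made up for illustration.
    scores = {('linear', 1): 0.92, ('linear', 10): 0.90, ('linear', 100): 0.89,
              ('rbf', 1): 0.85, ('rbf', 10): 0.88, ('rbf', 100): 0.87}
    return scores[(kernel, C)]

best_params, best_score = None, -1.0
for C, kernel in product(param_grid['C'], param_grid['kernel']):
    score = cv_score(C, kernel)
    if score > best_score:
        best_params, best_score = {'C': C, 'kernel': kernel}, score

print(best_params, best_score)  # {'C': 1, 'kernel': 'linear'} 0.92
```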
##### Copyright 2018 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Distributed Training in TensorFlow <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/distribute_strategy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. ## Overview `tf.distribute.Strategy` is a TensorFlow API to distribute training across multiple GPUs, multiple machines or TPUs. Using this API, users can distribute their existing models and training code with minimal code changes. `tf.distribute.Strategy` has been designed with these key goals in mind: * Easy to use and support multiple user segments, including researchers, ML engineers, etc. * Provide good performance out of the box. * Easy switching between strategies.
`tf.distribute.Strategy` can be used with TensorFlow's high level APIs, [tf.keras](https://www.tensorflow.org/r1/guide/keras) and [tf.estimator](https://www.tensorflow.org/r1/guide/estimators), with just a couple of lines of code change. It also provides an API that can be used to distribute custom training loops (and in general any computation using TensorFlow). In TensorFlow 2.0, users can execute their programs eagerly, or in a graph using [`tf.function`](../tutorials/eager/tf_function.ipynb). `tf.distribute.Strategy` intends to support both these modes of execution. Note that we may talk about training most of the time in this guide, but this API can also be used for distributing evaluation and prediction on different platforms. As you will see in a bit, very few changes are needed to use `tf.distribute.Strategy` with your code. This is because we have changed the underlying components of TensorFlow to become strategy-aware. This includes variables, layers, models, optimizers, metrics, summaries, and checkpoints. In this guide, we will talk about various types of strategies and how one can use them in different situations. Note: For a deeper understanding of the concepts, please watch [this deep-dive presentation](https://youtu.be/jKV53r9-H14). This is especially recommended if you plan to write your own training loop. ``` import tensorflow.compat.v1 as tf tf.disable_v2_behavior() ``` ## Types of strategies `tf.distribute.Strategy` intends to cover a number of use cases along different axes. Some of these combinations are currently supported and others will be added in the future. Some of these axes are: * Synchronous vs asynchronous training: These are two common ways of distributing training with data parallelism. In sync training, all workers train over different slices of input data in sync, aggregating gradients at each step. In async training, all workers are independently training over the input data and updating variables asynchronously.
Typically sync training is supported via all-reduce and async through parameter server architecture. * Hardware platform: Users may want to scale their training onto multiple GPUs on one machine, or multiple machines in a network (with 0 or more GPUs each), or on Cloud TPUs. In order to support these use cases, we have 4 strategies available. In the next section we will talk about which of these are supported in which scenarios in TF. ### MirroredStrategy `tf.distribute.MirroredStrategy` supports synchronous distributed training on multiple GPUs on one machine. It creates one model replica per GPU device. Each variable in the model is mirrored across all the replicas. Together, these variables form a single conceptual variable called `MirroredVariable`. These variables are kept in sync with each other by applying identical updates. Efficient all-reduce algorithms are used to communicate the variable updates across the devices. All-reduce aggregates tensors across all the devices by adding them up, and makes them available on each device. It's a fused algorithm that is very efficient and can reduce the overhead of synchronization significantly. There are many all-reduce algorithms and implementations available, depending on the type of communication available between devices. By default, it uses NVIDIA NCCL as the all-reduce implementation. The user can also choose between a few other options we provide, or write their own. Here is the simplest way of creating `MirroredStrategy`: ``` mirrored_strategy = tf.distribute.MirroredStrategy() ``` This will create a `MirroredStrategy` instance which will use all the GPUs that are visible to TensorFlow, and use NCCL as the cross device communication.
If you wish to use only some of the GPUs on your machine, you can do so like this: ``` mirrored_strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"]) ``` If you wish to override the cross device communication, you can do so using the `cross_device_ops` argument by supplying an instance of `tf.distribute.CrossDeviceOps`. Currently we provide `tf.distribute.HierarchicalCopyAllReduce` and `tf.distribute.ReductionToOneDevice` as two other options besides `tf.distribute.NcclAllReduce`, which is the default. ``` mirrored_strategy = tf.distribute.MirroredStrategy( cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()) ``` ### CentralStorageStrategy `tf.distribute.experimental.CentralStorageStrategy` does synchronous training as well. Variables are not mirrored; instead they are placed on the CPU and operations are replicated across all local GPUs. If there is only one GPU, all variables and operations will be placed on that GPU. Create a `CentralStorageStrategy` by: ``` central_storage_strategy = tf.distribute.experimental.CentralStorageStrategy() ``` This will create a `CentralStorageStrategy` instance which will use all visible GPUs and the CPU. Updates to variables on replicas will be aggregated before being applied to the variables. Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future. ### MultiWorkerMirroredStrategy `tf.distribute.experimental.MultiWorkerMirroredStrategy` is very similar to `MirroredStrategy`. It implements synchronous distributed training across multiple workers, each with potentially multiple GPUs. Similar to `MirroredStrategy`, it creates copies of all variables in the model on each device across all workers.
It uses [CollectiveOps](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/collective_ops.py) as the multi-worker all-reduce communication method used to keep variables in sync. A collective op is a single op in the TensorFlow graph which can automatically choose an all-reduce algorithm in the TensorFlow runtime according to hardware, network topology and tensor sizes. It also implements additional performance optimizations. For example, it includes a static optimization that converts multiple all-reductions on small tensors into fewer all-reductions on larger tensors. In addition, we are designing it to have a plugin architecture - so that in the future, users will be able to plug in algorithms that are better tuned for their hardware. Note that collective ops also implement other collective operations such as broadcast and all-gather. Here is the simplest way of creating `MultiWorkerMirroredStrategy`: ``` multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy() ``` `MultiWorkerMirroredStrategy` currently allows you to choose between two different implementations of collective ops. `CollectiveCommunication.RING` implements ring-based collectives using gRPC as the communication layer. `CollectiveCommunication.NCCL` uses [Nvidia's NCCL](https://developer.nvidia.com/nccl) to implement collectives. `CollectiveCommunication.AUTO` defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. You can specify them like so: ``` multiworker_strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy( tf.distribute.experimental.CollectiveCommunication.NCCL) ``` One of the key differences to get multi worker training going, as compared to multi-GPU training, is the multi-worker setup.
The "TF_CONFIG" environment variable is the standard way in TensorFlow to specify the cluster configuration to each worker that is part of the cluster. See the section on ["TF_CONFIG" below](#TF_CONFIG) for more details on how this can be done. Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future. ### TPUStrategy `tf.distribute.experimental.TPUStrategy` lets users run their TensorFlow training on Tensor Processing Units (TPUs). TPUs are Google's specialized ASICs designed to dramatically accelerate machine learning workloads. They are available on Google Colab, the [TensorFlow Research Cloud](https://www.tensorflow.org/tfrc) and [Google Compute Engine](https://cloud.google.com/tpu). In terms of distributed training architecture, TPUStrategy is the same as `MirroredStrategy` - it implements synchronous distributed training. TPUs provide their own implementation of efficient all-reduce and other collective operations across multiple TPU cores, which are used in `TPUStrategy`. Here is how you would instantiate `TPUStrategy`. Note: To run this code in Colab, you should select TPU as the Colab runtime. See the [Using TPUs]( tpu.ipynb) guide for a runnable version. ``` resolver = tf.distribute.cluster_resolver.TPUClusterResolver() tf.tpu.experimental.initialize_tpu_system(resolver) tpu_strategy = tf.distribute.experimental.TPUStrategy(resolver) ``` The `TPUClusterResolver` instance helps locate the TPUs. In Colab, you don't need to specify any arguments to it. If you want to use this for Cloud TPUs, you will need to specify the name of your TPU resource in the `tpu` argument. We also need to initialize the TPU system explicitly at the start of the program.
This is required before TPUs can be used for computation and should ideally be done at the beginning because it also wipes out the TPU memory, so all state will be lost. Note: This strategy is [`experimental`](https://www.tensorflow.org/r1/guide/version_compat#what_is_not_covered) as we are currently improving it and making it work for more scenarios. As part of this, please expect the APIs to change in the future. ### ParameterServerStrategy `tf.distribute.experimental.ParameterServerStrategy` supports parameter server training on multiple machines. In this setup, some machines are designated as workers and some as parameter servers. Each variable of the model is placed on one parameter server. Computation is replicated across all GPUs of all the workers. In terms of code, it looks similar to other strategies: ``` ps_strategy = tf.distribute.experimental.ParameterServerStrategy() ``` For multi worker training, "TF_CONFIG" needs to specify the configuration of parameter servers and workers in your cluster, which you can read more about in [TF_CONFIG](#TF_CONFIG) below. So far we've talked about the different strategies available and how you can instantiate them. In the next few sections, we will talk about the different ways in which you can use them to distribute your training. We will show short code snippets in this guide and link off to full tutorials which you can run end to end. ## Using `tf.distribute.Strategy` with Keras We've integrated `tf.distribute.Strategy` into `tf.keras`, which is TensorFlow's implementation of the [Keras API specification](https://keras.io). `tf.keras` is a high-level API to build and train models. By integrating into the `tf.keras` backend, we've made it seamless for Keras users to distribute their training written in the Keras training framework.
The only things that need to change in a user's program are: (1) Create an instance of the appropriate `tf.distribute.Strategy` and (2) Move the creation and compiling of the Keras model inside `strategy.scope`. Here is a snippet of code to do this for a very simple Keras model with one dense layer: ``` mirrored_strategy = tf.distribute.MirroredStrategy() with mirrored_strategy.scope(): model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))]) model.compile(loss='mse', optimizer='sgd') ``` In this example we used `MirroredStrategy` so we can run this on a machine with multiple GPUs. `strategy.scope()` indicates which parts of the code to run distributed. Creating a model inside this scope allows us to create mirrored variables instead of regular variables. Compiling under the scope allows us to know that the user intends to train this model using this strategy. Once this is set up, you can fit your model like you would normally. `MirroredStrategy` takes care of replicating the model's training on the available GPUs, aggregating gradients, etc. ``` dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100).batch(10) model.fit(dataset, epochs=2) model.evaluate(dataset) ``` Here we used a `tf.data.Dataset` to provide the training and eval input. You can also use numpy arrays: ``` import numpy as np inputs, targets = np.ones((100, 1)), np.ones((100, 1)) model.fit(inputs, targets, epochs=2, batch_size=10) ``` In both cases (dataset or numpy), each batch of the given input is divided equally among the multiple replicas. For instance, if using `MirroredStrategy` with 2 GPUs, each batch of size 10 will get divided among the 2 GPUs, with each receiving 5 input examples in each step. Each epoch will then train faster as you add more GPUs. Typically, you would want to increase your batch size as you add more accelerators so as to make effective use of the extra computing power. You will also need to re-tune your learning rate, depending on the model.
You can use `strategy.num_replicas_in_sync` to get the number of replicas. ``` # Compute global batch size using number of replicas. BATCH_SIZE_PER_REPLICA = 5 global_batch_size = (BATCH_SIZE_PER_REPLICA * mirrored_strategy.num_replicas_in_sync) dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(100) dataset = dataset.batch(global_batch_size) LEARNING_RATES_BY_BATCH_SIZE = {5: 0.1, 10: 0.15} learning_rate = LEARNING_RATES_BY_BATCH_SIZE[global_batch_size] ``` ### What's supported now? In the [TF nightly release](https://pypi.org/project/tf-nightly-gpu/), we now support training with Keras using all strategies. Note: When using `TPUStrategy` with TPU pods with Keras, currently the user will have to explicitly shard or shuffle the data for different workers, but we will change this in the future to automatically shard the input data intelligently. ### Examples and Tutorials Here is a list of tutorials and examples that illustrate the above integration end to end with Keras: 1. [Tutorial](../tutorials/distribute/keras.ipynb) to train MNIST with `MirroredStrategy`. 2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/vision/image_classification/resnet_imagenet_main.py) training with ImageNet data using `MirroredStrategy`. 3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/resnet50_keras/resnet50.py) trained with ImageNet data on Cloud TPUs with `TPUStrategy`. ## Using `tf.distribute.Strategy` with Estimator `tf.estimator` is a distributed training TensorFlow API that originally supported the async parameter server approach. Like with Keras, we've integrated `tf.distribute.Strategy` into `tf.Estimator` so that a user who is using Estimator for their training can easily distribute their training with very few changes to their code. With this, Estimator users can now do synchronous distributed training on multiple GPUs and multiple workers, as well as use TPUs.
The usage of `tf.distribute.Strategy` with Estimator is slightly different than the Keras case. Instead of using `strategy.scope`, now we pass the strategy object into the [`RunConfig`](https://www.tensorflow.org/api_docs/python/tf/estimator/RunConfig) for the Estimator. Here is a snippet of code that shows this with a premade estimator `LinearRegressor` and `MirroredStrategy`: ``` mirrored_strategy = tf.distribute.MirroredStrategy() config = tf.estimator.RunConfig( train_distribute=mirrored_strategy, eval_distribute=mirrored_strategy) regressor = tf.estimator.LinearRegressor( feature_columns=[tf.feature_column.numeric_column('feats')], optimizer='SGD', config=config) ``` We use a premade Estimator here, but the same code works with a custom Estimator as well. `train_distribute` determines how training will be distributed, and `eval_distribute` determines how evaluation will be distributed. This is another difference from Keras, where we use the same strategy for both training and eval. Now we can train and evaluate this Estimator with an input function: ``` def input_fn(): dataset = tf.data.Dataset.from_tensors(({"feats":[1.]}, [1.])) return dataset.repeat(1000).batch(10) regressor.train(input_fn=input_fn, steps=10) regressor.evaluate(input_fn=input_fn, steps=10) ``` Another difference to highlight here between Estimator and Keras is the input handling. In Keras, we mentioned that each batch of the dataset is split across the multiple replicas. In Estimator, however, the user provides an `input_fn` and has full control over how they want their data to be distributed across workers and devices. We do not split batches automatically, nor automatically shard the data across different workers. The provided `input_fn` is called once per worker, thus giving one dataset per worker. Then one batch from that dataset is fed to one replica on that worker, thereby consuming N batches for N replicas on 1 worker.
In other words, the dataset returned by the `input_fn` should provide batches of size `PER_REPLICA_BATCH_SIZE`. And the global batch size for a step can be obtained as `PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync`. When doing multi worker training, users will also want to either split their data across the workers, or shuffle with a random seed on each. You can see an example of how to do this in the [Multi-worker Training with Estimator](../tutorials/distribute/multi_worker_with_estimator.ipynb) tutorial. We showed an example of using `MirroredStrategy` with Estimator. You can use `TPUStrategy` with Estimator as well, in the exact same way: ``` config = tf.estimator.RunConfig( train_distribute=tpu_strategy, eval_distribute=tpu_strategy) ``` And similarly, you can use multi worker and parameter server strategies as well. The code remains the same, but you need to use `tf.estimator.train_and_evaluate`, and set "TF_CONFIG" environment variables for each binary running in your cluster. ### What's supported now? In the TF nightly release, we support training with Estimator using all strategies. ### Examples and Tutorials Here are some examples that show end to end usage of various strategies with Estimator: 1. [End to end example](https://github.com/tensorflow/ecosystem/tree/master/distribution_strategy) for multi worker training in tensorflow/ecosystem using Kubernetes templates. This example starts with a Keras model and converts it to an Estimator using the `tf.keras.estimator.model_to_estimator` API. 2. Official [ResNet50](https://github.com/tensorflow/models/blob/master/official/r1/resnet/imagenet_main.py) model, which can be trained using either `MirroredStrategy` or `MultiWorkerMirroredStrategy`. 3. [ResNet50](https://github.com/tensorflow/tpu/blob/master/models/experimental/distribution_strategy/resnet_estimator.py) example with `TPUStrategy`.
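The per-replica/global batch-size relationship above is simple arithmetic, sketched here in plain Python (the replica count is an assumption for illustration):

```python
PER_REPLICA_BATCH_SIZE = 5
num_replicas_in_sync = 2   # assumed, e.g. a MirroredStrategy over 2 GPUs

# The global batch for one step is the per-replica batch times the replicas
global_batch_size = PER_REPLICA_BATCH_SIZE * num_replicas_in_sync
print(global_batch_size)   # 10

# A global batch divides evenly back into per-replica batches
assert global_batch_size // num_replicas_in_sync == PER_REPLICA_BATCH_SIZE
```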
## Using `tf.distribute.Strategy` with custom training loops As you've seen, using `tf.distribute.Strategy` with high level APIs is only a couple lines of code change. With a little more effort, `tf.distribute.Strategy` can also be used by other users who are not using these frameworks. TensorFlow is used for a wide variety of use cases and some users (such as researchers) require more flexibility and control over their training loops. This makes it hard for them to use the high level frameworks such as Estimator or Keras. For instance, someone using a GAN may want to take a different number of generator or discriminator steps each round. Similarly, the high level frameworks are not very suitable for Reinforcement Learning training. So these users will usually write their own training loops. For these users, we provide a core set of methods through the `tf.distribute.Strategy` classes. Using these may require minor restructuring of the code initially, but once that is done, the user should be able to switch between GPUs / TPUs / multiple machines by just changing the strategy instance. Here we will show a brief snippet illustrating this use case for a simple training example using the same Keras model as before. Note: These APIs are still experimental and we are improving them to make them more user friendly. First, we create the model and optimizer inside the strategy's scope. This ensures that any variables created with the model and optimizer are mirrored variables. ``` with mirrored_strategy.scope(): model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))]) optimizer = tf.train.GradientDescentOptimizer(0.1) ``` Next, we create the input dataset and call `tf.distribute.Strategy.experimental_distribute_dataset` to distribute the dataset based on the strategy.
``` dataset = tf.data.Dataset.from_tensors(([1.], [1.])).repeat(1000).batch( global_batch_size) dist_dataset = mirrored_strategy.experimental_distribute_dataset(dataset) ``` Then, we define one step of the training. We will use `tf.GradientTape` to compute gradients and the optimizer to apply those gradients to update our model's variables. To distribute this training step, we put it in a function `step_fn` and pass it to `tf.distribute.Strategy.run` along with the inputs from the iterator: ``` def train_step(dist_inputs): def step_fn(inputs): features, labels = inputs logits = model(features) cross_entropy = tf.nn.softmax_cross_entropy_with_logits( logits=logits, labels=labels) loss = tf.reduce_sum(cross_entropy) * (1.0 / global_batch_size) train_op = optimizer.minimize(loss) with tf.control_dependencies([train_op]): return tf.identity(loss) per_replica_losses = mirrored_strategy.run( step_fn, args=(dist_inputs,)) mean_loss = mirrored_strategy.reduce( tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None) return mean_loss ``` A few other things to note in the code above: 1. We used `tf.nn.softmax_cross_entropy_with_logits` to compute the loss, and then scaled the total loss by dividing by the global batch size. This is important because all the replicas are training in sync and the number of examples in each step of training is the global batch. So the loss needs to be divided by the global batch size and not by the replica (local) batch size. 2. We used the `strategy.reduce` API to aggregate the results returned by `tf.distribute.Strategy.run`. `tf.distribute.Strategy.run` returns results from each local replica in the strategy, and there are multiple ways to consume this result. You can `reduce` them to get an aggregated value. You can also do `tf.distribute.Strategy.experimental_local_results(results)` to get the list of values contained in the result, one per local replica.
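Why divide by the global batch size rather than the replica (local) batch size? Scaling each replica's summed loss by `1/global_batch_size` and then SUM-reducing across replicas reproduces exactly the mean loss over the full global batch. A plain-Python sketch with toy per-example losses (two assumed replicas):

```python
# Per-example losses on each replica (toy values: 2 replicas x 5 examples)
replica_losses = [
    [1.0, 2.0, 3.0, 4.0, 5.0],   # replica 0's local batch
    [2.0, 2.0, 2.0, 2.0, 2.0],   # replica 1's local batch
]
global_batch_size = sum(len(batch) for batch in replica_losses)  # 10

# Each replica scales its summed loss by 1/global_batch_size ...
per_replica_loss = [sum(batch) / global_batch_size for batch in replica_losses]
# ... and the strategy SUM-reduces across replicas
reduced = sum(per_replica_loss)

# This equals the mean loss over the whole global batch
mean_over_global_batch = sum(sum(b) for b in replica_losses) / global_batch_size
print(reduced, mean_over_global_batch)  # 2.5 2.5
```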
Finally, once we have defined the training step, we can initialize the iterator and variables and run the training in a loop:

```
with mirrored_strategy.scope():
  input_iterator = dist_dataset.make_initializable_iterator()
  iterator_init = input_iterator.initializer
  var_init = tf.global_variables_initializer()
  loss = train_step(input_iterator.get_next())
  with tf.Session() as sess:
    sess.run([var_init, iterator_init])
    for _ in range(10):
      print(sess.run(loss))
```

In the example above, we used `tf.distribute.Strategy.experimental_distribute_dataset` to provide input to your training. We also provide `tf.distribute.Strategy.make_experimental_numpy_dataset` to support numpy inputs. You can use this API to create a dataset before calling `tf.distribute.Strategy.experimental_distribute_dataset`.

This covers the simplest case of using the `tf.distribute.Strategy` API to distribute custom training loops. We are in the process of improving these APIs. Since this use case requires more work on the part of the user, we will be publishing a separate detailed guide in the future.

### What's supported now?

In the TF nightly release, we support training with custom training loops using `MirroredStrategy` and `TPUStrategy` as shown above. Support for other strategies will be coming soon; `MultiWorkerMirroredStrategy` support will be coming in the future.

### Examples and Tutorials

Here are some examples for using distribution strategy with custom training loops:

1. [Example](https://github.com/tensorflow/tensorflow/blob/5456cc28f3f8d9c17c645d9a409e495969e584ae/tensorflow/contrib/distribute/python/examples/mnist_tf1_tpu.py) to train MNIST using `TPUStrategy`.

## Other topics

In this section, we will cover some topics that are relevant to multiple use cases.

<a id="TF_CONFIG">
### Setting up TF\_CONFIG environment variable
</a>

For multi-worker training, as mentioned before, you need to set the "TF\_CONFIG" environment variable for each binary running in your cluster.
The "TF\_CONFIG" environment variable is a JSON string which specifies what tasks constitute a cluster, their addresses, and each task's role in the cluster. We provide a Kubernetes template in the [tensorflow/ecosystem](https://github.com/tensorflow/ecosystem) repo which sets "TF\_CONFIG" for your training tasks.

One example of "TF\_CONFIG" is:

```
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {
        "worker": ["host1:port", "host2:port", "host3:port"],
        "ps": ["host4:port", "host5:port"]
    },
    "task": {"type": "worker", "index": 1}
})
```

This "TF\_CONFIG" specifies that there are three workers and two ps tasks in the cluster, along with their hosts and ports. The "task" part specifies the role of the current task in the cluster: worker 1 (the second worker). Valid roles in a cluster are "chief", "worker", "ps", and "evaluator". There should be no "ps" job except when using `tf.distribute.experimental.ParameterServerStrategy`.

## What's next?

`tf.distribute.Strategy` is actively under development. We welcome you to try it out and provide your feedback via [issues on GitHub](https://github.com/tensorflow/tensorflow/issues/new).
```
from keras.preprocessing import sequence
from keras.preprocessing import text
import numpy as np
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding, LSTM
from keras.layers import Conv1D, MaxPool1D, Flatten, Input, Lambda, concatenate
from keras.utils import np_utils
from nltk.corpus import stopwords
from nltk.tokenize import RegexpTokenizer
from nltk.stem.porter import PorterStemmer
import nltk
import csv
import pandas as pd
from keras.preprocessing import text as keras_text, sequence as keras_seq

data = pd.read_csv('drive/My Drive/ML Internship IIIT Dharwad/train.csv')
pd.set_option('display.max_colwidth', 80)
data.head()
data.shape
print(data['is_duplicate'].value_counts())

import matplotlib.pyplot as plt
data['is_duplicate'].value_counts().plot(kind='bar', color='green')
print(data.dtypes)
print(data['question1'].dtypes)
print(data['question2'].dtypes)
type(data['question1'])
```

# Setting target or label for each input

```
label_oneDimension = data['is_duplicate']
label_oneDimension.head(2)

import numpy as np
from keras.utils.np_utils import to_categorical
label_twoDimension = to_categorical(data['is_duplicate'], num_classes=2)
label_twoDimension[0:1]

question_one = data['question1'].astype(str)
print(question_one.head())
question_two = data['question2'].astype(str)
print(question_two.head())
```

# Reading test data and preprocessing

```
# Data reading
'''
data_test = pd.read_csv('drive/My Drive/Summer Internship 2020 July/My Test File/Sunil/test.csv')
data_test_sample = data_test.dropna()
#data_test_sample=data_test_sample.head(100)
data_test_sample.head()
'''
'''
question_one_test=data_test_sample['question1'].astype(str)
print(question_one_test.head())
'''
'''
question_two_test=data_test_sample['question2'].astype(str)
print(question_two_test.head()) ''' ``` # Fitting text on a single tokenized object ``` from keras.preprocessing.text import Tokenizer tok_all = Tokenizer(filters='!"#$%&\'()*+,-./:;<=>?@[\\]^_`{|}~', char_level = False) tok_all.fit_on_texts(question_one+question_two) #tok_all.fit_on_texts(question_one+question_two+question_one_test+question_two_test) vocabulary_all=len(tok_all.word_counts) print(vocabulary_all) ``` # Train data Sequencing and Encoding ``` #Encoding question 1 encoded_q1=tok_all.texts_to_sequences(question_one) print(question_one[0]) encoded_q1[0] #Encoding question 2 encoded_q2=tok_all.texts_to_sequences(question_two) print(question_two[0]) encoded_q2[0] ``` # Pre-Padding on Train data ``` #####Padding encoded sequence of words from keras.preprocessing import sequence max_length=100 padded_docs_q1 = sequence.pad_sequences(encoded_q1, maxlen=max_length, padding='pre') #####Padding encoded sequence of words from keras.preprocessing import sequence max_length=100 padded_docs_q2 = sequence.pad_sequences(encoded_q2, maxlen=max_length, padding='pre') ``` # Encoding on Test data ``` ''' #Encoding question 1 encoded_q1_test=tok_all.texts_to_sequences(question_one_test) print(question_one_test[0]) encoded_q1_test[0] ''' '''#Encoding question 1 encoded_q2_test=tok_all.texts_to_sequences(question_two_test) print(question_two_test[0]) encoded_q2_test[0]''' ``` # Pre-Padding on test data ``` '''#####Padding encoded sequence of words padded_docs_q1_test = sequence.pad_sequences(encoded_q1_test, maxlen=max_length, padding='pre') padded_docs_q2_test = sequence.pad_sequences(encoded_q2_test, maxlen=max_length, padding='pre')''' ``` # Reading Embedding Vector from Glove ``` import os import numpy as np embeddings_index = {} f = open('drive/My Drive/ML Internship IIIT Dharwad/Copy of glove.6B.300d.txt') for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs f.close() print('Loaded %s word 
vectors.' % len(embeddings_index))

embedding_matrix = np.zeros((vocabulary_all+1, 300))
for word, i in tok_all.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
```

# Defining Input Shape for Model

```
Question1_shape = Input(shape=[max_length])
Question1_shape.shape
Question2_shape = Input(shape=[max_length])
Question2_shape.shape
```

# Embedding Layer

```
Embedding_Layer = Embedding(vocabulary_all+1, 300, weights=[embedding_matrix], input_length=max_length, trainable=False)
```

# CNN Network

```
CNN2_network = Sequential([Embedding_Layer,
                           Conv1D(32, 3, activation="relu", padding='same'),
                           Dropout(0.2),
                           MaxPool1D(2),
                           Conv1D(64, 5, activation="relu", padding='same'),
                           Dropout(0.2),
                           MaxPool1D(2),
                           Flatten(),
                           Dense(128, activation="linear"),
                           Dropout(0.3)
                          ])
```

# Printing Model summary

```
CNN2_network.summary()
from keras.utils.vis_utils import plot_model
plot_model(CNN2_network, to_file='CNN2_network.png', show_shapes=True, show_layer_names=True)
```

# Create siamese network from CNN model and store output feature vectors

```
Question1_CNN_feature = CNN2_network(Question1_shape)
Question2_CNN_feature = CNN2_network(Question2_shape)
```

# Adding and multiplying features obtained from Siamese CNN network

```
from keras import backend as K
from keras.optimizers import Adam

lambda_function = Lambda(lambda tensor: K.abs(tensor[0]-tensor[1]), name="Absolute_distance")
abs_distance_vector = lambda_function([Question1_CNN_feature, Question2_CNN_feature])

lambda_function2 = Lambda(lambda tensor: K.abs(tensor[0]*tensor[1]), name="Hadamard_multiplication")
hadamard_vector = lambda_function2([Question1_CNN_feature, Question2_CNN_feature])
```

# Adding abs_distance_vector and hadamard_vector

```
from keras.layers import Add
added_vector = Add()([abs_distance_vector, hadamard_vector])
```

# Final Model prediction

```
predict = Dense(2, activation="sigmoid")(added_vector)
```

# Creating sequential model using Model() class and 
compilation ``` from sklearn.metrics import roc_auc_score, roc_curve, accuracy_score Siamese2_Network=Model(inputs=[Question1_shape,Question2_shape],outputs=predict) Siamese2_Network.compile(loss = "binary_crossentropy", optimizer=Adam(lr=0.00003), metrics=["accuracy"]) Siamese2_Network.summary() ``` # Plot model ``` from keras.utils import plot_model plot_model(Siamese2_Network, to_file='Siamese2_Network.png',show_shapes=True, show_layer_names=True) ``` # Setting hyperparameter for training ``` from keras.callbacks import EarlyStopping, ReduceLROnPlateau,ModelCheckpoint earlystopper = EarlyStopping(patience=8, verbose=1) #checkpointer = ModelCheckpoint(filepath = 'cnn_model_one_.{epoch:02d}-{val_loss:.6f}.hdf5', # verbose=1, # save_best_only=True, save_weights_only = True) reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.9, patience=2, min_lr=0.00001, verbose=1) ``` # Data split into train and validation set ``` # Splitting data into train and test from sklearn.model_selection import train_test_split q1_train, q1_val,q2_train, q2_val, label_train, label_val, label_oneD_train, label_oneD_val = train_test_split(padded_docs_q1,padded_docs_q2, label_twoDimension, label_oneDimension, test_size=0.30, random_state=42) ``` # Model fitting or training ``` history = Siamese2_Network.fit([q1_train,q2_train],label_train, batch_size=32,epochs=100,validation_data=([q1_val,q2_val],label_val),callbacks=[earlystopper, reduce_lr]) ``` # Model Prediction ``` Siamese2_Network_predictions = Siamese2_Network.predict([q1_val,q2_val]) #Siamese2_Network_predictions = Siamese2_Network.predict([padded_docs_q1_test,padded_docs_q2_test]) #Siamese2_Network_predictions_testData = Siamese2_Network.predict([padded_docs_q1_test,padded_docs_q1_test]) ``` # Log loss ``` from sklearn.metrics import log_loss log_loss_val= log_loss(label_val,Siamese2_Network_predictions) log_loss_val ``` # Classification report ``` predictions = np.zeros_like(Siamese2_Network_predictions) 
predictions[np.arange(len(Siamese2_Network_predictions)), Siamese2_Network_predictions.argmax(1)] = 1 predictionInteger=(np.argmax(predictions, axis=1)) #print('np.argmax(a, axis=1): {0}'.format(np.argmax(predictions, axis=1))) predictionInteger from sklearn.metrics import classification_report print(classification_report(label_val,predictions)) from sklearn.metrics import precision_recall_fscore_support print ("Precision, Recall, F1_score : macro ",precision_recall_fscore_support(label_oneD_val,predictionInteger, average='macro')) print ("Precision, Recall, F1_score : micro ",precision_recall_fscore_support(label_oneD_val,predictionInteger, average='micro')) print ("Precision, Recall, F1_score : weighted ",precision_recall_fscore_support(label_oneD_val,predictionInteger, average='weighted')) ``` # Final train and val loss ``` min_val_loss = min(history.history["val_loss"]) min_train_loss = min(history.history["loss"]) max_val_acc = max(history.history["val_accuracy"]) max_train_acc = max(history.history["accuracy"]) print("min_train_loss=%g, min_val_loss=%g, max_train_acc=%g, max_val_acc=%g" % (min_train_loss,min_val_loss,max_train_acc,max_val_acc)) ``` # Plot epoch Vs loss ``` from matplotlib import pyplot as plt plt.plot(history.history["loss"],color = 'red', label = 'train_loss') plt.plot(history.history["val_loss"],color = 'blue', label = 'val_loss') plt.title('Loss Visualisation') plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend() plt.savefig('2Layer_CNN_lossPlot_siamese.pdf',dpi=1000) from google.colab import files files.download('2Layer_CNN_lossPlot_siamese.pdf') ``` # Plot Epoch Vs Accuracy ``` plt.plot(history.history["accuracy"],color = 'red', label = 'train_accuracy') plt.plot(history.history["val_accuracy"],color = 'blue', label = 'val_accuracy') plt.title('Accuracy Visualisation') plt.xlabel('Epochs') plt.ylabel('Accuracy') plt.legend() plt.savefig('2Layer_CNN_accuracyPlot_siamese.pdf',dpi=1000) files.download('2Layer_CNN_accuracyPlot_siamese.pdf') 
``` # Area Under Curve- ROC ``` #pred_test = Siamese2_Network.predict([padded_docs_q1_test,padded_docs_q2_test]) pred_train = Siamese2_Network.predict([q1_train,q2_train]) pred_val = Siamese2_Network.predict([q1_val,q2_val]) import numpy as np import matplotlib.pyplot as plt from itertools import cycle from sklearn import svm, datasets from sklearn.metrics import roc_curve, auc from sklearn.model_selection import train_test_split from sklearn.preprocessing import label_binarize from sklearn.multiclass import OneVsRestClassifier from scipy import interp def plot_AUC_ROC(y_true, y_pred): n_classes = 2 #change this value according to class value # Compute ROC curve and ROC area for each class fpr = dict() tpr = dict() roc_auc = dict() for i in range(n_classes): fpr[i], tpr[i], _ = roc_curve(y_true[:, i], y_pred[:, i]) roc_auc[i] = auc(fpr[i], tpr[i]) # Compute micro-average ROC curve and ROC area fpr["micro"], tpr["micro"], _ = roc_curve(y_true.ravel(), y_pred.ravel()) roc_auc["micro"] = auc(fpr["micro"], tpr["micro"]) ############################################################################################ lw = 2 # Compute macro-average ROC curve and ROC area # First aggregate all false positive rates all_fpr = np.unique(np.concatenate([fpr[i] for i in range(n_classes)])) # Then interpolate all ROC curves at this points mean_tpr = np.zeros_like(all_fpr) for i in range(n_classes): mean_tpr += interp(all_fpr, fpr[i], tpr[i]) # Finally average it and compute AUC mean_tpr /= n_classes fpr["macro"] = all_fpr tpr["macro"] = mean_tpr roc_auc["macro"] = auc(fpr["macro"], tpr["macro"]) # Plot all ROC curves plt.figure() plt.plot(fpr["micro"], tpr["micro"], label='micro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["micro"]), color='deeppink', linestyle=':', linewidth=4) plt.plot(fpr["macro"], tpr["macro"], label='macro-average ROC curve (area = {0:0.2f})' ''.format(roc_auc["macro"]), color='navy', linestyle=':', linewidth=4) colors = cycle(['aqua', 'darkorange']) 
#classes_list1 = ["DE","NE","DK"] classes_list1 = ["Non-duplicate","Duplicate"] for i, color,c in zip(range(n_classes), colors,classes_list1): plt.plot(fpr[i], tpr[i], color=color, lw=lw, label='{0} (AUC = {1:0.2f})' ''.format(c, roc_auc[i])) plt.plot([0, 1], [0, 1], 'k--', lw=lw) plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic curve') plt.legend(loc="lower right") #plt.show() plt.savefig('2Layer_CNN_RocPlot_siamese.pdf',dpi=1000) files.download('2Layer_CNN_RocPlot_siamese.pdf') # Plot of a ROC curve for a specific class ''' plt.figure() lw = 2 plt.plot(fpr[0], tpr[0], color='darkorange', lw=lw, label='ROC curve (area = %0.2f)' % roc_auc[0]) plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('Receiver operating characteristic example') plt.legend(loc="lower right") plt.show() ''' plot_AUC_ROC(label_val,pred_val) from sklearn.metrics import roc_auc_score, roc_curve, accuracy_score auc_val = roc_auc_score(label_val,pred_val) accuracy_val = accuracy_score(label_val,pred_val>0.5) auc_train = roc_auc_score(label_train,pred_train) accuracy_train = accuracy_score(label_train,pred_train>0.5) print("auc_train=%g, auc_val=%g, accuracy_train=%g, accuracy_val=%g" % (auc_train, auc_val, accuracy_train, accuracy_val)) ''' fpr_train, tpr_train, thresholds_train = roc_curve(label_train,pred_train) fpr_test, tpr_test, thresholds_test = roc_curve(label_val,pred_val) #fpr_train, tpr_train, thresholds_train = roc_curve(label_oneD_train,pred_train_final) #fpr_test, tpr_test, thresholds_test = roc_curve(label_oneD_val,pred_val_final) plt.plot(fpr_train,tpr_train, color="blue", label="train roc, auc=%g" % (auc_train,)) plt.plot(fpr_test,tpr_test, color="green", label="val roc, auc=%g" % (auc_val,)) plt.plot([0,1], [0,1], color='orange', linestyle='--') 
plt.xticks(np.arange(0.0, 1.1, step=0.1))
plt.xlabel("False Positive Rate", fontsize=15)

plt.yticks(np.arange(0.0, 1.1, step=0.1))
plt.ylabel("True Positive Rate", fontsize=15)

plt.title('ROC Curve Analysis', fontweight='bold', fontsize=15)
plt.legend(prop={'size':13}, loc='lower right')

plt.savefig('AUC_CURVE_cnn4.pdf',dpi=1000)
#files.download('AUC_CURVE_cnn4.pdf')
'''
```
# Hierarchical Clustering **Hierarchical clustering** refers to a class of clustering methods that seek to build a **hierarchy** of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means. **Note to Amazon EC2 users**: To conserve memory, make sure to stop all the other notebooks before running this notebook. ## Import packages The following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read [this page](https://turi.com/download/upgrade-graphlab-create.html). ``` import graphlab import matplotlib.pyplot as plt import numpy as np import sys import os import time from scipy.sparse import csr_matrix from sklearn.cluster import KMeans from sklearn.metrics import pairwise_distances %matplotlib inline '''Check GraphLab Create version''' from distutils.version import StrictVersion assert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.' ``` ## Load the Wikipedia dataset ``` wiki = graphlab.SFrame('people_wiki.gl/') ``` As we did in previous assignments, let's extract the TF-IDF features: ``` wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text']) ``` To run k-means on this dataset, we should convert the data matrix into a sparse matrix. ``` from em_utilities import sframe_to_scipy # converter # This will take about a minute or two. tf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf') ``` To be consistent with the k-means assignment, let's normalize all vectors to have unit norm. ``` from sklearn.preprocessing import normalize tf_idf = normalize(tf_idf) ``` ## Bipartition the Wikipedia dataset using k-means Recall our workflow for clustering text data with k-means: 1. Load the dataframe containing a dataset, such as the Wikipedia text dataset. 2. Extract the data matrix from the dataframe. 3. 
Run k-means on the data matrix with some value of k. 4. Visualize the clustering results using the centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article). Let us modify the workflow to perform bipartitioning: 1. Load the dataframe containing a dataset, such as the Wikipedia text dataset. 2. Extract the data matrix from the dataframe. 3. Run k-means on the data matrix with k=2. 4. Divide the data matrix into two parts using the cluster assignments. 5. Divide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization. 6. Visualize the bipartition of data. We'd like to be able to repeat Steps 3-6 multiple times to produce a **hierarchy** of clusters such as the following: ``` (root) | +------------+-------------+ | | Cluster Cluster +------+-----+ +------+-----+ | | | | Cluster Cluster Cluster Cluster ``` Each **parent cluster** is bipartitioned to produce two **child clusters**. At the very top is the **root cluster**, which consists of the entire dataset. Now we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster: * `dataframe`: a subset of the original dataframe that correspond to member rows of the cluster * `matrix`: same set of rows, stored in sparse matrix format * `centroid`: the centroid of the cluster (not applicable for the root cluster) Rather than passing around the three variables separately, we package them into a Python dictionary. The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters). 
```
def bipartition(cluster, maxiter=400, num_runs=4, seed=None):
    '''cluster: should be a dictionary containing the following keys
        * dataframe: original dataframe
        * matrix: same data, in matrix format
        * centroid: centroid for this particular cluster'''

    data_matrix = cluster['matrix']
    dataframe   = cluster['dataframe']

    # Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.
    kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=1)
    kmeans_model.fit(data_matrix)
    centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_

    # Divide the data matrix into two parts using the cluster assignments.
    data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \
                                                      data_matrix[cluster_assignment==1]

    # Divide the dataframe into two parts, again using the cluster assignments.
    cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion
    dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \
                                                  dataframe[cluster_assignment_sa==1]

    # Package relevant variables for the child clusters
    cluster_left_child  = {'matrix': data_matrix_left_child,
                           'dataframe': dataframe_left_child,
                           'centroid': centroids[0]}
    cluster_right_child = {'matrix': data_matrix_right_child,
                           'dataframe': dataframe_right_child,
                           'centroid': centroids[1]}

    return (cluster_left_child, cluster_right_child)
```

The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.

Note. For the purpose of the assignment, we set an explicit seed (`seed=1`) to produce identical outputs for every run. In practical applications, you might want to use different random seeds for all runs.
```
wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster
left_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=6, seed=1)
```

Let's examine the contents of one of the two clusters, which we call the `left_child`, referring to the tree visualization above.

```
left_child
```

And here is the content of the other cluster we named `right_child`.

```
right_child
```

## Visualize the bipartition

We provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.

```
def display_single_tf_idf_cluster(cluster, map_index_to_word):
    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''

    wiki_subset   = cluster['dataframe']
    tf_idf_subset = cluster['matrix']
    centroid      = cluster['centroid']

    # Print top 5 words with largest TF-IDF weights in the cluster
    idx = centroid.argsort()[::-1]
    for i in xrange(5):
        print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),
    print('')

    # Compute distances from the centroid to all data points in the cluster.
    distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()
    # compute nearest neighbors of the centroid within the cluster.
    nearest_neighbors = distances.argsort()
    # For 8 nearest neighbors, print the title as well as first 180 characters of text.
    # Wrap the text at 80-character mark.
for i in xrange(8): text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25]) print('* {0:50s} {1:.5f}\n {2:s}\n {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'], distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else '')) print('') ``` Let's visualize the two child clusters: ``` display_single_tf_idf_cluster(left_child, map_index_to_word) display_single_tf_idf_cluster(right_child, map_index_to_word) ``` The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows: ``` Wikipedia + | +--------------------------+--------------------+ | | + + Athletes Non-athletes ``` Is this hierarchy good enough? **When building a hierarchy of clusters, we must keep our particular application in mind.** For instance, we might want to build a **directory** for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the `athletes` and `non-athletes` clusters. 
## Perform recursive bipartitioning

### Cluster of athletes

To help identify the clusters we've built so far, let's give them easy-to-read aliases:

```
athletes = left_child
non_athletes = right_child
```

Using the bipartition function, we produce two child clusters of the athlete cluster:

```
# Bipartition the cluster of athletes
left_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=6, seed=1)
```

The left child cluster mainly consists of baseball players:

```
display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)
```

On the other hand, the right child cluster is a mix of players in association football, Australian rules football and ice hockey:

```
display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)
```

Our hierarchy of clusters now looks like this:
```
                          Wikipedia
                              +
                              |
        +---------------------+--------------------+
        |                                          |
        +                                          +
     Athletes                                 Non-athletes
        +
        |
   +----+-----------------+
   |                      |
   |            association football/
   +           Australian rules football/
baseball              ice hockey
```

Should we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, **we would like to achieve a similar level of granularity for all clusters.**

Notice that the right child cluster is coarser than the left child cluster. The right cluster possesses a greater variety of topics than the left (ice hockey/association football/Australian football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters.
Let's give the clusters aliases as well:

```
baseball            = left_child_athletes
ice_hockey_football = right_child_athletes
```

### Cluster of ice hockey players and football players

In answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights.

Let us bipartition the cluster of ice hockey and football players.

```
left_child_ihs, right_child_ihs = bipartition(ice_hockey_football, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_ihs, map_index_to_word)
display_single_tf_idf_cluster(right_child_ihs, map_index_to_word)
```

**Quiz Question**. Which diagram best describes the hierarchy right after splitting the `ice_hockey_football` cluster? Refer to the quiz form for the diagrams.

**Caution**. The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.

* **If a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster.** Thus, we may be misled if we judge the purity of clusters solely by their top documents and words.
* **Many interesting topics are hidden somewhere inside the clusters but do not appear in the visualization.** We may need to subdivide further to discover new topics. For instance, subdividing the `ice_hockey_football` cluster led to the appearance of runners and golfers.

### Cluster of non-athletes

Now let us subdivide the cluster of non-athletes.

```
# Bipartition the cluster of non-athletes
left_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=6, seed=1)
display_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)
display_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)
```

Neither of the clusters shows clear topics, apart from the genders. Let us divide them further.
``` male_non_athletes = left_child_non_athletes female_non_athletes = right_child_non_athletes ``` **Quiz Question**. Let us bipartition the clusters `male_non_athletes` and `female_non_athletes`. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams. **Note**. Use `maxiter=100, num_runs=6, seed=1` for consistency of output. ``` # Bipartition the cluster of males left_child_males, right_child_males = bipartition(male_non_athletes, maxiter=100, num_runs=6, seed=1) display_single_tf_idf_cluster(left_child_males, map_index_to_word) display_single_tf_idf_cluster(right_child_males, map_index_to_word) # Bipartition the cluster of females left_child_female, right_child_female = bipartition(female_non_athletes, maxiter=100, num_runs=6, seed=1) display_single_tf_idf_cluster(left_child_female, map_index_to_word) display_single_tf_idf_cluster(right_child_female, map_index_to_word) ```
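The recursive pattern used throughout this notebook — bipartition a cluster, then bipartition each child — can be sketched generically. The toy below swaps k-means for a midpoint split on sorted numbers (a stand-in, since the notebook's real `bipartition` needs GraphLab and scikit-learn), but the recursion structure is the same:

```python
def bipartition_toy(values):
    """Stand-in for k-means with k=2: split a sorted list at its midpoint."""
    values = sorted(values)
    mid = len(values) // 2
    return values[:mid], values[mid:]

def build_hierarchy(values, depth):
    """Recursively bipartition for `depth` levels, mirroring the notebook's
    manual athletes/non-athletes -> finer-clusters splits."""
    if depth == 0 or len(values) < 2:
        return values
    left, right = bipartition_toy(values)
    return [build_hierarchy(left, depth - 1), build_hierarchy(right, depth - 1)]

# Two levels of splitting produce a 4-leaf hierarchy, like the
# athletes/non-athletes tree after one more round of bipartitioning.
tree = build_hierarchy([5, 1, 4, 2, 8, 7, 3, 6], depth=2)
print(tree)  # [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]
```

In the notebook this recursion is driven by hand (one `bipartition` call per cluster) precisely so that each split can be inspected before deciding whether to go deeper.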
# Choropleth Maps Exercise

Welcome to the Choropleth Maps Exercise! In this exercise we will give you some simple datasets and ask you to create Choropleth Maps from them. Due to the nature of Plotly we can't show you examples embedded inside the notebook.

[Full Documentation Reference](https://plot.ly/python/reference/#choropleth)

## Plotly Imports

```
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode,iplot,plot
init_notebook_mode(connected=True)
```

** Import pandas and read the csv file: 2014_World_Power_Consumption**

```
import pandas as pd
df = pd.read_csv('2014_World_Power_Consumption')
```

** Check the head of the DataFrame. **

```
df.head()
```

** Referencing the lecture notes, create a Choropleth Plot of the Power Consumption for Countries using the data and layout dictionary. **

```
data = dict(
        type = 'choropleth',
        colorscale = 'Viridis',
        reversescale = True,
        locations = df['Country'],
        locationmode = "country names",
        z = df['Power Consumption KWH'],
        text = df['Country'],
        colorbar = {'title' : 'Power Consumption KWH'},
      )

layout = dict(title = '2014 Power Consumption KWH',
              geo = dict(showframe = False, projection = {'type':'Mercator'})
             )

choromap = go.Figure(data = [data], layout = layout)
plot(choromap, validate=False)
```

## USA Choropleth

** Import the 2012_Election_Data csv file using pandas. **

```
usdf = pd.read_csv('2012_Election_Data')
```

** Check the head of the DataFrame. **

```
usdf.head()
```

** Now create a plot that displays the Voting-Age Population (VAP) per state. If you later want to play around with other columns, make sure you consider their data type. VAP has already been transformed to a float for you. 
** ``` data = dict(type='choropleth', colorscale = 'Viridis', reversescale = True, locations = usdf['State Abv'], z = usdf['Voting-Age Population (VAP)'], locationmode = 'USA-states', text = usdf['State'], marker = dict(line = dict(color = 'rgb(255,255,255)',width = 1)), colorbar = {'title':"Voting-Age Population (VAP)"} ) layout = dict(title = '2012 General Election Voting Data', geo = dict(scope='usa', showlakes = True, lakecolor = 'rgb(85,173,240)') ) choromap = go.Figure(data = [data],layout = layout) plot(choromap,validate=False) ``` # Great Job!
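The exercise above warns that other columns may not be numeric yet. A short sketch of one way to coerce such a column before handing it to Plotly's `z` argument — the column name and values here are hypothetical, not taken from the actual CSV:

```python
import pandas as pd

# Hypothetical raw column: vote counts stored as strings with thousands
# separators, the kind of value that breaks a Plotly `z` argument.
raw = pd.DataFrame({'State': ['Alabama', 'Alaska'],
                    'Votes': ['2,074,338', '301,694']})

# Strip the separators, then coerce to a numeric dtype; errors='coerce'
# turns anything unparseable into NaN instead of raising.
raw['Votes'] = pd.to_numeric(raw['Votes'].str.replace(',', '', regex=False),
                             errors='coerce')
print(raw['Votes'].dtype)  # int64
```

After this, `raw['Votes']` can be passed as `z` in the choropleth `data` dict just like the pre-converted VAP column.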
# COMPSCI-589 HW3: Random Forest name: Harry (Haochen) Wang ``` from evaluationmatrix import * from utils import * from decisiontree import * from randomforest import * from run import * housedata, housecategory = importhousedata() winedata, winecategory = importwinedata() cancerdata, cancercategory = importcancerdata() cmcdata,cmccategory = importcmcdata() parameterofn = [1, 5, 10, 20, 30, 40, 50] # n of ntrees def ploter(data, title, xlabel, ylabel, error = None, n = parameterofn): plt.errorbar(n, data, yerr=error , fmt = '-o') plt.xlabel(xlabel) plt.ylabel(ylabel) plt.title(title) plt.plot(n,data) plt.show() ``` ### I. Wine Dataset ``` wineaccuracy, wineprecision, winerecall, winef1 = [], [], [], [] wineprecisionstd, winerecallstd, winef1std = [], [] ,[] for n in parameterofn: lists = kfoldcrossvalid(winedata, winecategory, 10, n, 10, 5, 0.01, 'id3', 0.1)[0] beta = 1 a,p0,r0,f0,all = evaluate(lists, 1, beta) a,p1,r1,f1,all = evaluate(lists, 2, beta) a,p2,r2,f2,all = evaluate(lists, 3, beta) p = p0+p1+p2 r = r0+r1+r2 f = f0+f1+f2 wineprecisionstd.append((np.std(p)/2)) winerecallstd.append((np.std(r)/2)) winef1std.append((np.std(f)/2)) acc0, pre0, rec0, fsc0 = meanevaluation(lists, 1, beta) acc1, pre1, rec1, fsc1 = meanevaluation(lists, 2, beta) acc2, pre2, rec2, fsc2 = meanevaluation(lists, 3, beta) acc, pre, rec, fsc = (acc0+acc1+acc2)/3, (pre0+pre1+pre2)/3, (rec0+rec1+rec2)/3, (fsc0+fsc1+fsc2)/3 wineaccuracy.append(acc) wineprecision.append(pre) winerecall.append(rec) winef1.append(fsc) markdownaprf(acc, pre, rec, fsc, beta, n, 'Wine with information gain') ``` Result/Stat of 1 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.927 | 0.92 | 0.891 | 0.894 | Result/Stat of 5 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.974 | 0.969 | 0.957 | 0.96 | 
Result/Stat of 10 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.974 | 0.967 | 0.963 | 0.962 | Result/Stat of 20 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.977 | 0.973 | 0.969 | 0.967 | Result/Stat of 30 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.977 | 0.97 | 0.968 | 0.967 | Result/Stat of 40 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.981 | 0.976 | 0.975 | 0.973 | Result/Stat of 50 trees random forest of Wine with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.981 | 0.974 | 0.975 | 0.972 | ``` ploter(wineaccuracy, '# of n in ntree V.S. accuracy of Wine with info gain', '# of n in ntree', 'Accuracy') ploter(wineprecision, '# of n in ntree V.S. precision of Wine with info gain', '# of n in ntree', 'Precision', wineprecisionstd) ploter(winerecall, '# of n in ntree V.S. recall of Wine with info gain', '# of n in ntree', 'Recall', winerecallstd) ploter(winef1, '# of n in ntree V.S. F-1 Score of Wine with info gain', '# of n in ntree', 'F-1 score with beta = 1', winef1std) ``` For this algorithm, here are the parameters I used: | **k (Fold)** | **max_depth** | **min_size_for_split** | **min_gain** |**bootstrap_ratio** | | :---: | :---: | :---: | :---: | :---: | | 10 | 10 | 5 | 0.01 | 0.1 | k is the number of folds; I just use the recommended k = 10. max_depth is the maximum depth of the tree (traversal depth); since there are only 178 instances of data, I set max_depth to 10, which effectively doesn't constrain the algorithm.
min_size_for_split, say m, means that when fewer than m values are left in the sub-dataset, I don't split anymore. I set it to 5 because we have a small dataset here. min_gain is the minimal gain required to keep splitting; I set it to 0.01, which is very close to 0 but not 0. bootstrap_ratio is the fraction of training instances that get resampled in the bagging/bootstrap step; in this model it's 0.1, so 10% of the data gets resampled. ##### (4) For each metric being evaluated (and for each dataset), discuss which value of ntree you would select if you were to deploy this classifier in real life. Explain your reasoning. ANSWER: For the number of trees n in the random forest, I would pick n = 20. There are clearly large improvements in accuracy, precision, and recall (and F1; I will not mention it as frequently, since it's the harmonic mean of precision and recall) when n rises from 1 to 5 and 10. Beyond that, however, the rate of improvement in accuracy, precision, and recall slows down, while the cost of training the trees keeps growing, so I would pick n = 20, which seems to be a good middle ground between [accuracy, precision, recall] and cost (in time and space). ##### (5) Discuss (on a high level) which metrics were more directly affected by changing the value of ntree and, more generally, how such changes affected the performance of your algorithm. For instance: was the accuracy of the random forest particularly sensitive to increasing ntree past a given value? Was the F1 score a “harder” metric to optimize, possibly requiring a significant number of trees in the ensemble? Is there a point beyond which adding more trees does not improve performance—or makes the performance worse?
ANSWER: First, notice that accuracy, precision, recall, and F-1 score have roughly the same shape, since we "calculate the average of recall and precision by considering each class as positive once, calculating precision and recall, then taking the average over all classes" (quoted from Piazza). This smooths out the precision and recall curves, because we compute per-class precision for all three classes and take the arithmetic mean. For the algorithm, the biggest improvement is from n = 1 to n = 5. Actually, I think even n = 3 would give a big leap, because instead of one tree there are three trees voting together on a result (it can't be two, because there would be no third voter to break their disagreements). As n grows, the rate of increase in performance decreases (the rate of change decreases, not the performance itself). It looks like a log curve. I think there will be some point at which adding more trees doesn't improve performance; it might never approach one due to the limits of the algorithm (maybe for a NN a performance of 1 is possible). But I don't think the performance would, generally speaking, get worse at some point: there might be fluctuations, but no large drops. ``` ploter(wineprecisionstd, '# of n in ntree V.S. std of precision of Wine with info gain', '# of n in ntree', 'Precision') ploter(winerecallstd, '# of n in ntree V.S. std of recall of Wine with info gain', '# of n in ntree', 'Recall') ploter(winef1std, '# of n in ntree V.S. std of F-1 Score of Wine with info gain', '# of n in ntree', 'F-1 score with beta = 1') ``` For the F score, we can see that it has almost the same curve shape as accuracy, since they have similar calculation processes (the F score is the harmonic mean of recall and precision; accuracy behaves roughly like the arithmetic mean of the two). So the F1 score is not a harder metric to optimize.
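The relationship between these metrics (implemented in `fscore` in the appendix) is easy to sanity-check by hand. A short sketch with made-up confusion counts, following the same definitions:

```python
# Confusion counts for one hypothetical fold (made-up numbers, for illustration only).
tp, tn, fp, fn = 50, 40, 5, 5

precision = tp / (tp + fp)                   # 50/55
recall = tp / (tp + fn)                      # 50/55
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 90/100

beta = 1
# For beta = 1 this F-beta formula is exactly the harmonic mean of precision and recall.
fbeta = (1 + beta**2) * (precision * recall) / (precision * beta**2 + recall)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(fbeta, 3))
# → 0.9 0.909 0.909 0.909
```

With equal precision and recall the harmonic mean coincides with them, which is why the F1 curve tracks the precision and recall curves so closely in the plots above.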
Notice that the standard deviation decreases with more trees, which means that across the k folds the performance becomes more 'stable'. ### II. 1984 US Congressional Voting Dataset ``` houseaccuracy, houseprecision, houserecall, housef1 = [], [], [], [] houseprecisionstd, houserecallstd, housef1std = [], [] ,[] for n in parameterofn: lists = kfoldcrossvalid(housedata, housecategory, 10, n, 10, 5, 0.01, 'id3', 0.1)[0] beta = 1 a,p0,r0,f0,all = evaluate(lists, 0, beta) a,p1,r1,f1,all = evaluate(lists, 1, beta) p = p0+p1 r = r0+r1 f = f0+f1 houseprecisionstd.append(np.std(p)/2) houserecallstd.append(np.std(r)/2) housef1std.append(np.std(f)/2) acc0, pre0, rec0, fsc0 = meanevaluation(lists, 0, beta) acc1, pre1, rec1, fsc1 = meanevaluation(lists, 1, beta) acc, pre, rec, fsc = (acc0+acc1)/2, (pre0+pre1)/2, (rec0+rec1)/2, (fsc0+fsc1)/2 houseaccuracy.append(acc) houseprecision.append(pre) houserecall.append(rec) housef1.append(fsc) markdownaprf(acc, pre, rec, fsc, beta, n, 'House with information gain') ``` Result/Stat of 1 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.904 | 0.905 | 0.896 | 0.898 | Result/Stat of 5 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.947 | 0.948 | 0.942 | 0.943 | Result/Stat of 10 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.961 | 0.959 | 0.962 | 0.959 | Result/Stat of 20 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.961 | 0.96 | 0.96 | 0.958 | Result/Stat of 30 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: |
|0.961 | 0.96 | 0.961 | 0.959 | Result/Stat of 40 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.954 | 0.952 | 0.954 | 0.952 | Result/Stat of 50 trees random forest of House with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.961 | 0.96 | 0.959 | 0.959 | ``` ploter(houseaccuracy, '# of n in ntree V.S. accuracy of HouseVote with info gain', '# of n in ntree', 'Accuracy') ploter(houseprecision, '# of n in ntree V.S. precision of HouseVote with info gain', '# of n in ntree', 'Precision', houseprecisionstd) ploter(houserecall, '# of n in ntree V.S. recall of HouseVote with info gain', '# of n in ntree', 'Recall', houserecallstd) ploter(housef1, '# of n in ntree V.S. F-1 Score of HouseVote with info gain', '# of n in ntree', 'F-1 score with beta = 1', housef1std) ``` For this algorithm, here are the parameters I used: | **k (Fold)** | **max_depth** | **min_size_for_split** | **min_gain** |**bootstrap_ratio** | | :---: | :---: | :---: | :---: | :---: | | 10 | 10 | 5 | 0.01 | 0.1 | I use the same setup as for the first dataset. ##### (4) ANSWER: I would also pick 10 to 20 for this model, and 20 is preferred for its low variance (st_dev). Indeed, I'd pick 19 (or 21), since an odd number is better: when there is a 10-vs-10 disagreement, one tree can stand out and break the tie. ##### (5) The performance is very similar to the previous model. One improvement is that it's faster, which makes sense because it's a categorical tree. In these analysis plots we can see that the rate of change in performance gets very close to zero as n in ntree approaches around 40-50. So in this case it doesn't make sense to keep increasing n to improve (maybe boosting would work!). ``` ploter(houseprecisionstd, '# of n in ntree V.S.
std of precision of HouseVote with info gain', '# of n in ntree', 'Precision') ploter(houserecallstd, '# of n in ntree V.S. std of recall of HouseVote with info gain', '# of n in ntree', 'Recall') ploter(housef1std, '# of n in ntree V.S. std of F-1 Score of HouseVote with info gain', '# of n in ntree', 'F-1 score with beta = 1') ``` We can see the drop in st_dev as n grows, but there is a slight increase when n gets larger (the growth is less than 0.005, so I guess it's just random error/fluctuation). ### III. Extra Points #### (Extra Points #1: 6 Points) Reconstruct the same graphs as above, but now using the Gini criterion. You should present the same analyses and graphs mentioned above. Discuss whether (and how) different performance metrics were affected (positively or negatively) by changing the splitting criterion, and explain why you think that was the case. ``` wineaccuracygini, wineprecisiongini, winerecallgini, winef1gini = [], [], [], [] wineprecisionginistd, winerecallginistd, winef1ginistd = [], [] ,[] for n in parameterofn: lists = kfoldcrossvalid(winedata, winecategory, 10, n, 10, 5, 0.01, 'gini', 0.1)[0] beta = 1 a,p0,r0,f0,all = evaluate(lists, 1, beta) a,p1,r1,f1,all = evaluate(lists, 2, beta) a,p2,r2,f2,all = evaluate(lists, 3, beta) p = p0+p1+p2 r = r0+r1+r2 f = f0+f1+f2 wineprecisionginistd.append(np.std(p)/2) winerecallginistd.append(np.std(r)/2) winef1ginistd.append(np.std(f)/2) acc0, pre0, rec0, fsc0 = meanevaluation(lists, 1, beta) acc1, pre1, rec1, fsc1 = meanevaluation(lists, 2, beta) acc2, pre2, rec2, fsc2 = meanevaluation(lists, 3, beta) acc, pre, rec, fsc = (acc0+acc1+acc2)/3, (pre0+pre1+pre2)/3, (rec0+rec1+rec2)/3, (fsc0+fsc1+fsc2)/3 wineaccuracygini.append(acc) wineprecisiongini.append(pre) winerecallgini.append(rec) winef1gini.append(fsc) markdownaprf(acc, pre, rec, fsc, beta, n, 'Wine with Gini index') ``` Result/Stat of 1 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score,
Beta=1** | | :---: | :---: | :---: | :---: | |0.93 | 0.917 | 0.89 | 0.893 | Result/Stat of 5 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.962 | 0.955 | 0.942 | 0.944 | Result/Stat of 10 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.977 | 0.969 | 0.969 | 0.966 | Result/Stat of 20 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.985 | 0.98 | 0.98 | 0.978 | Result/Stat of 30 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.977 | 0.972 | 0.969 | 0.967 | Result/Stat of 40 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.98 | 0.973 | 0.973 | 0.97 | Result/Stat of 50 trees random forest of Wine with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.989 | 0.984 | 0.985 | 0.983 | ``` ploter(wineaccuracygini, '# of n in ntree V.S. accuracy of Wine with Gini index', '# of n in ntree', 'Accuracy') ploter(wineprecisiongini, '# of n in ntree V.S. precision of Wine with Gini index', '# of n in ntree', 'Precision', wineprecisionginistd) ploter(winerecallgini, '# of n in ntree V.S. recall of Wine with Gini index', '# of n in ntree', 'Recall', winerecallginistd) ploter(winef1gini, '# of n in ntree V.S. F-1 Score of Wine with Gini index', '# of n in ntree', 'F-1 score with beta = 1', winef1ginistd) ploter(wineprecisionginistd, '# of n in ntree V.S. std of precision of Wine with Gini index', '# of n in ntree', 'Precision') ploter(winerecallginistd, '# of n in ntree V.S. 
std of recall of Wine with Gini index', '# of n in ntree', 'Recall') ploter(winef1ginistd, '# of n in ntree V.S. std of F-1 Score of Wine with Gini index', '# of n in ntree', 'F-1 score with beta = 1') houseaccuracygini, houseprecisiongini, houserecallgini, housef1gini = [], [], [], [] houseprecisionginistd, houserecallginistd, housef1ginistd = [], [] ,[] for n in parameterofn: lists = kfoldcrossvalid(housedata, housecategory, 10, n, 10, 10, 0.01, 'gini', 0.1)[0] beta = 1 a,p0,r0,f0,all = evaluate(lists, 0, beta) a,p1,r1,f1,all = evaluate(lists, 1, beta) p = p0+p1 r = r0+r1 f = f0+f1 houseprecisionginistd.append(np.std(p)/2) houserecallginistd.append(np.std(r)/2) housef1ginistd.append(np.std(f)/2) acc0, pre0, rec0, fsc0 = meanevaluation(lists, 0, beta) acc1, pre1, rec1, fsc1 = meanevaluation(lists, 1, beta) acc, pre, rec, fsc = (acc0+acc1)/2, (pre0+pre1)/2, (rec0+rec1)/2, (fsc0+fsc1)/2 houseaccuracygini.append(acc) houseprecisiongini.append(pre) houserecallgini.append(rec) housef1gini.append(fsc) markdownaprf(acc, pre, rec, fsc, beta, n, 'House with Gini index') ``` Result/Stat of 1 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.947 | 0.948 | 0.946 | 0.944 | Result/Stat of 5 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.945 | 0.945 | 0.942 | 0.941 | Result/Stat of 10 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.968 | 0.966 | 0.967 | 0.966 | Result/Stat of 20 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.952 | 0.949 | 0.951 | 0.949 | Result/Stat of 30 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, 
Beta=1** | | :---: | :---: | :---: | :---: | |0.959 | 0.957 | 0.959 | 0.957 | Result/Stat of 40 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.954 | 0.954 | 0.954 | 0.952 | Result/Stat of 50 trees random forest of House with Gini index: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.954 | 0.95 | 0.954 | 0.951 | ``` ploter(houseaccuracygini, '# of n in ntree V.S. accuracy of HouseVote with Gini index', '# of n in ntree', 'Accuracy') ploter(houseprecisiongini, '# of n in ntree V.S. precision of HouseVote with Gini index', '# of n in ntree', 'Precision', houseprecisionginistd) ploter(houserecallgini, '# of n in ntree V.S. recall of HouseVote with Gini index', '# of n in ntree', 'Recall', houserecallginistd) ploter(housef1gini, '# of n in ntree V.S. F-1 Score of HouseVote with Gini index', '# of n in ntree', 'F-1 score with beta = 1', housef1ginistd) ploter(houseprecisionginistd, '# of n in ntree V.S. std of precision of HouseVote with Gini index', '# of n in ntree', 'Precision') ploter(houserecallginistd, '# of n in ntree V.S. std of recall of HouseVote with Gini index', '# of n in ntree', 'Recall') ploter(housef1ginistd, '# of n in ntree V.S. std of F-1 Score of HouseVote with Gini index', '# of n in ntree', 'F-1 score with beta = 1') ``` ANALYSIS for Extra I: for these two datasets, we get similar results for how performance improves as n changes. For the house data, performance is slightly better (around 96%) but the rate of increase is slower. For the wine data, the differences between info gain and Gini are negligible. #### (Extra Points #2: 6 Points) Analyze a third dataset: the Breast Cancer Dataset. The goal, here, is to classify whether tissue removed via a biopsy indicates whether a person may or may not have breast cancer. There are 699 instances in this dataset.
Each instance is described by 9 numerical attributes, and there are 2 classes. You should present the same analyses and graphs as discussed above. This dataset can be found in the same zip file as the two main datasets. ``` # canceraccuracy, cancerprecision, cancerrecall, cancerf1 = [], [], [], [] # cancerprecisionstd, cancerrecallstd, cancerf1std = [], [], [] # for n in parameterofn: # lists = kfoldcrossvalid(cancerdata, cancercategory, 10, n, 7, 10, 0.01, 'id3', 0.1)[0] # beta = 1 # a,p,r,f,all = evaluate(lists, 1, beta) # cancerprecisionstd.append(np.std(p)/2) # cancerrecallstd.append(np.std(r)/2) # cancerf1std.append(np.std(f)/2) # acc, pre, rec, fsc = meanevaluation(lists, 1, beta) # canceraccuracy.append(acc) # cancerprecision.append(pre) # cancerrecall.append(rec) # cancerf1.append(fsc) # markdownaprf(acc, pre, rec, fsc, beta, n, 'Cancer with information gain') ``` Result/Stat of 1 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.937 | 0.896 | 0.929 | 0.911 | Result/Stat of 5 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.951 | 0.928 | 0.934 | 0.93 | Result/Stat of 10 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.964 | 0.945 | 0.954 | 0.948 | Result/Stat of 20 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.967 | 0.942 | 0.967 | 0.953 | Result/Stat of 30 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.969 | 0.937 | 0.975 | 0.955 | Result/Stat of 40 trees random forest of Cancer with information gain: | **Accuracy** | 
**Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.969 | 0.936 | 0.979 | 0.956 | Result/Stat of 50 trees random forest of Cancer with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.962 | 0.932 | 0.963 | 0.946 | ``` # ploter(canceraccuracy, '# of n in ntree V.S. accuracy of CancerData with information gain', '# of n in ntree', 'Accuracy') # ploter(cancerprecision, '# of n in ntree V.S. precision of CancerData with information gain', '# of n in ntree', 'Precision', cancerprecisionstd) # ploter(cancerrecall, '# of n in ntree V.S. recall of CancerData with information gain', '# of n in ntree', 'Recall', cancerrecallstd) # ploter(cancerf1, '# of n in ntree V.S. F-1 Score of CancerData with information gain', '# of n in ntree', 'F-1 score with beta = 1', cancerf1std) ``` For this algorithm, here are the parameters I used: | **k (Fold)** | **max_depth** | **min_size_for_split** | **min_gain** |**bootstrap_ratio** | | :---: | :---: | :---: | :---: | :---: | | 10 | 7 | 10 | 0.01 | 0.1 | I changed max_depth to 7 because there are more instances in this dataset, and I also increased min_size_for_split for the same reason. ``` # ploter(cancerprecisionstd, '# of n in ntree V.S. std of precision of CancerData with information gain', '# of n in ntree', 'Precision') # ploter(cancerrecallstd, '# of n in ntree V.S. std of recall of CancerData with information gain', '# of n in ntree', 'Recall') # ploter(cancerf1std, '# of n in ntree V.S. std of F-1 Score of CancerData with information gain', '# of n in ntree', 'F-1 score with beta = 1') ``` ANALYSIS for Extra II: In this question, we finally have one fixed positive class. Since the task is detecting cancer, we should focus on recall.
For optimizing recall, since this is not a probabilistic classifier, what we can do is resample more positive training data; when analyzing, we could also change the beta value (I didn't do that here since I wanted a more general view). #### (Extra Points #3: 12 Points) Analyze a fourth, more challenging dataset: the Contraceptive Method Choice Dataset. The goal, here, is to predict the type of contraceptive method used by a person based on many attributes describing that person. This dataset is more challenging because it combines both numerical and categorical attributes. There are 1473 instances in this dataset. Each instance is described by 9 attributes, and there are 3 classes. The dataset can be downloaded here. You should present the same analyses and graphs discussed above. ``` cmcaccuracy, cmcprecision, cmcrecall, cmcf1 = [], [], [], [] cmcprecisionstd, cmcrecallstd, cmcf1std = [], [], [] for n in parameterofn: lists,accu = kfoldcrossvalid(cmcdata, cmccategory, 10, n, 10, 10, 0.01, 'id3', 0.1) beta = 1 a,p0,r0,f0,all = evaluate(lists, 1, beta) a,p1,r1,f1,all = evaluate(lists, 2, beta) a,p2,r2,f2,all = evaluate(lists, 3, beta) p = p0+p1+p2 r = r0+r1+r2 f = f0+f1+f2 cmcprecisionstd.append(np.std(p)/3) cmcrecallstd.append(np.std(r)/3) cmcf1std.append(np.std(f)/3) acc0, pre0, rec0, fsc0 = meanevaluation(lists, 1, beta) acc1, pre1, rec1, fsc1 = meanevaluation(lists, 2, beta) acc2, pre2, rec2, fsc2 = meanevaluation(lists, 3, beta) acc, pre, rec, fsc = (acc0+acc1+acc2)/3, (pre0+pre1+pre2)/3, (rec0+rec1+rec2)/3, (fsc0+fsc1+fsc2)/3 cmcaccuracy.append(accu) cmcprecision.append(pre) cmcrecall.append(rec) cmcf1.append(fsc) markdownaprf(accu, pre, rec, fsc, beta, n, 'CMC with information gain') ``` Result/Stat of 1 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.496 | 0.488 | 0.485 | 0.476 | Result/Stat of 5 trees random forest of CMC with information gain: |
**Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.5 | 0.483 | 0.484 | 0.477 | Result/Stat of 10 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.504 | 0.486 | 0.483 | 0.48 | Result/Stat of 20 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.519 | 0.499 | 0.495 | 0.492 | Result/Stat of 30 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.521 | 0.501 | 0.496 | 0.494 | Result/Stat of 40 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.513 | 0.49 | 0.487 | 0.484 | Result/Stat of 50 trees random forest of CMC with information gain: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta=1** | | :---: | :---: | :---: | :---: | |0.52 | 0.5 | 0.497 | 0.494 | ``` ploter(cmcaccuracy, '# of n in ntree V.S. accuracy of CMC Data with information gain', '# of n in ntree', 'Accuracy') ploter(cmcprecision, '# of n in ntree V.S. precision of CMC Data with information gain', '# of n in ntree', 'Precision', cmcprecisionstd) ploter(cmcrecall, '# of n in ntree V.S. recall of CMC Data with information gain', '# of n in ntree', 'Recall', cmcrecallstd) ploter(cmcf1, '# of n in ntree V.S. F-1 Score of CMC Data with information gain', '# of n in ntree', 'F-1 score with beta = 1', cmcf1std) ploter(cmcprecisionstd, '# of n in ntree V.S. std of precision of CMC Data with information gain', '# of n in ntree', 'Precision') ploter(cmcrecallstd, '# of n in ntree V.S. std of recall of CMC Data with information gain', '# of n in ntree', 'Recall') ploter(cmcf1std, '# of n in ntree V.S. 
std of F-1 Score of CMC Data with information gain', '# of n in ntree', 'F-1 score with beta = 1') ``` The result is kind of weird: with max accuracy around 52-53%, I tried modifying all the parameters (e.g. max_depth to 5), but that only improved it from about 50% to 52%. ### IV. Appendix: Code ##### 1. evaluationmatrix.py ``` import math import numpy as np import matplotlib.pyplot as plt from IPython.display import display, Markdown def accuracy(truePosi, trueNega, falsePosi, falseNega): # Count of all four return (truePosi+trueNega)/(truePosi+trueNega+falseNega+falsePosi) def precision(truePosi, trueNega, falsePosi, falseNega): if (truePosi+falsePosi) == 0: return 0 preposi = truePosi/(truePosi+falsePosi) # prenega = trueNega/(trueNega+falseNega) return preposi def recall(truePosi, trueNega, falsePosi, falseNega): if (truePosi+falseNega) == 0: return 0 recposi = truePosi/(truePosi+falseNega) # recnega = trueNega/(trueNega+falsePosi) return recposi def fscore(truePosi, trueNega, falsePosi, falseNega, beta=1): pre = precision(truePosi, trueNega, falsePosi, falseNega) rec = recall(truePosi, trueNega, falsePosi, falseNega) if (pre*(beta**2)+rec) == 0: return 0 f = (1+beta**2)*((pre*rec)/(pre*(beta**2)+rec)) return f def evaluate(listsofoutput, positivelabel, beta=1): # list is list of [predicted, actual] listoftptnfpfn = [] accuarcylists = [] precisionlists = [] recalllists = [] fscorelists = [] for output in listsofoutput: tp, tn, fp, fn = 0, 0, 0, 0 for i in range(len(output)): if output[i][0] == positivelabel and output[i][1] == positivelabel: tp += 1 elif output[i][0] != positivelabel and output[i][0] == output[i][1]: tn += 1 elif output[i][0] == positivelabel and output[i][1] != positivelabel: fp += 1 elif output[i][0] != positivelabel and output[i][1] == positivelabel: fn += 1 tptnfpfn = [tp, tn, fp, fn] listoftptnfpfn.append(tptnfpfn) accuarcylists.append(accuracy(tp, tn, fp, fn)) precisionlists.append(precision(tp, tn, fp, fn))
recalllists.append(recall(tp, tn, fp, fn)) fscorelists.append(fscore(tp, tn, fp, fn, beta)) return accuarcylists, precisionlists, recalllists, fscorelists, listoftptnfpfn def meanevaluation(listsofoutput, positivelabel, beta=1): accuarcylists, precisionlists, recalllists, fscorelists, notused = evaluate(listsofoutput, positivelabel, beta) return sum(accuarcylists)/len(accuarcylists), sum(precisionlists)/len(precisionlists), sum(recalllists)/len(recalllists), sum(fscorelists)/len(fscorelists) def markdownaprf(acc,pre,rec,fsc,beta,nvalue,title): acc, pre, rec, fsc = round(acc,3), round(pre,3), round(rec,3), round(fsc,3) display(Markdown(rf""" Result/Stat of {nvalue} trees random forest of {title}: | **Accuracy** | **Precision** | **Recall** | **F-Score, Beta={beta}** | | :---: | :---: | :---: | :---: | |{acc} | {pre} | {rec} | {fsc} | """)) def markdownmatrix(tptnfpfn,title): tp, tn, fp, fn = tptnfpfn[0], tptnfpfn[1], tptnfpfn[2], tptnfpfn[3] display(Markdown(rf""" Confusion Matrix: {title} | | **Predicted +** | **Predicted-** | | :--- | :--- | :--- | | **Actual +** | {tp} | {fp} | | **Actual -** | {fn} | {tn} | """)) def confusionmatrix(truePosi, trueNega, falsePosi, falseNega, title=""): fig = plt.figure() plt.title(title) col_labels = ['Predict:+', 'Predict:-'] row_labels = ['Real:+', 'Real:-'] table_vals = [[truePosi, falseNega], [falsePosi, trueNega]] the_table = plt.table(cellText=table_vals, colWidths=[0.1] * 3, rowLabels=row_labels, colLabels=col_labels, loc='center') the_table.auto_set_font_size(False) the_table.set_fontsize(24) the_table.scale(4, 4) plt.tick_params(axis='x', which='both', bottom=False, top=False, labelbottom=False) plt.tick_params(axis='y', which='both', right=False, left=False, labelleft=False) for pos in ['right','top','bottom','left']: plt.gca().spines[pos].set_visible(False) plt.show() return ``` ##### 2. 
utils.py ``` from sqlite3 import Row from evaluationmatrix import * from sklearn import datasets import random import numpy as np import csv import math import matplotlib.pyplot as plt from collections import Counter def importfile(name:str,delimit:str): # importfile('hw3_wine.csv', '\t') file = open("datasets/"+name, encoding='utf-8-sig') reader = csv.reader(file, delimiter=delimit) dataset = [] for row in reader: dataset.append(row) file.close() return dataset def same(attributecolumn): return all(item == attributecolumn[0] for item in attributecolumn) def majority(attributecolumn): return np.argmax(np.bincount(attributecolumn.astype(int))) def entropy(attributecol): values = list(Counter(attributecol).values()) ent = 0 for value in values: k = (value/sum(values)) ent += -k*math.log(k,2) return ent def gini(attributecol): values = list(Counter(attributecol).values()) ginivalue = 1 for value in values: prob = (value/sum(values)) ginivalue -= prob**2 return ginivalue def dropbyindex(data, category, listindex): newdata = np.delete(data.T, listindex).T keytoremove = [list(category.keys())[i] for i in listindex] newcategory = category.copy() [newcategory.pop(key) for key in keytoremove] return newdata, newcategory def id3bestseperate(dataset, attributes:dict): # dataset in is the dataset by row. # attributes is the dictionary of attributes:type # types: numerical, categorical, binary. 
datasetbycolumn = dataset.T classindex = list(attributes.values()).index("class") originalentrophy = entropy(datasetbycolumn[classindex]) smallestentrophy = originalentrophy thresholdvalue = -1 i = 0 bestattribute = {list(attributes.keys())[i]:attributes[list(attributes.keys())[i]]} attributesinuse = list(attributes.keys())[1:] if (classindex == 0) else list(attributes.keys())[:classindex] # datasetinuse = datasetbycolumn[1:] if (classindex == 0) else datasetbycolumn[:classindex] for attribute in attributesinuse: idx = i+1 if classindex == 0 else i if attributes[attribute] == "categorical" or attributes[attribute] == "binary": listofkeys = list(Counter(datasetbycolumn[idx]).keys()) listofcategory = [] # this is the list of categorical values. for key in listofkeys: indexlist = [idex for idex, element in enumerate(datasetbycolumn[idx]) if element == key] category = np.array(datasetbycolumn[classindex][indexlist]) listofcategory.append(category) entropynow = 0 for ctgry in listofcategory: a = len(ctgry)/len(datasetbycolumn[idx]) # This is probability entropynow += a * entropy(ctgry) if entropynow < smallestentrophy: smallestentrophy = entropynow bestattribute = {attribute:attributes[attribute]} elif attributes[attribute] == "numerical": datasetsort = datasetbycolumn.T[datasetbycolumn.T[:,idx].argsort(kind='quicksort')].T currentthreshold = (datasetsort[idx][1]+datasetsort[idx][0])/2 k = 1 while k < len(datasetsort.T): currentthreshold = (datasetsort[idx][k]+datasetsort[idx][k-1])/2 listofcategory = [datasetsort[classindex][:k],datasetsort[classindex][k:]] entropynow = 0 for ctgry in listofcategory: a = len(ctgry)/len(datasetbycolumn[idx]) # This is probability entropynow += a * entropy(ctgry) if entropynow < smallestentrophy: smallestentrophy = entropynow thresholdvalue = currentthreshold bestattribute = {attribute:attributes[attribute]} k += 1 i += 1 gain = originalentrophy-smallestentrophy # set first attribution dictionary {key:type} to the best attributes. 
return bestattribute, thresholdvalue, gain def cartbestseperate(dataset, attributes:dict): # dataset in is the dataset by row. # attributes is the dictionary of attributes:type # types: numerical, categorical, binary. datasetbycolumn = dataset.T classindex = list(attributes.values()).index("class") originalgini = gini(datasetbycolumn[classindex]) smallestgini = originalgini thresholdvalue = -1 i = 0 bestattribute = {list(attributes.keys())[i]:attributes[list(attributes.keys())[i]]} attributesinuse = list(attributes.keys())[1:] if (classindex == 0) else list(attributes.keys())[:classindex] # datasetinuse = datasetbycolumn[1:] if (classindex == 0) else datasetbycolumn[:classindex] for attribute in attributesinuse: idx = i+1 if classindex == 0 else i if attributes[attribute] == "categorical" or attributes[attribute] == "binary": listofkeys = list(Counter(datasetbycolumn[idx]).keys()) listofcategory = [] # this is the list of categorical values. for key in listofkeys: indexlist = [idex for idex, element in enumerate(datasetbycolumn[idx]) if element == key] category = np.array(datasetbycolumn[classindex][indexlist]) listofcategory.append(category) currentgini = 0 for ctgry in listofcategory: a = len(ctgry)/len(datasetbycolumn[idx]) # This is probability currentgini += a * gini(ctgry) if currentgini < smallestgini: smallestgini = currentgini bestattribute = {attribute:attributes[attribute]} elif attributes[attribute] == "numerical": datasetsort = datasetbycolumn.T[datasetbycolumn.T[:,idx].argsort(kind='quicksort')].T currentthreshold = (datasetsort[idx][1]+datasetsort[idx][0])/2 k = 1 while k < len(datasetsort.T): currentthreshold = (datasetsort[idx][k]+datasetsort[idx][k-1])/2 listofcategory = [datasetsort[classindex][:k],datasetsort[classindex][k:]] currentgini = 0 for ctgry in listofcategory: a = len(ctgry)/len(datasetbycolumn[idx]) # This is probability currentgini += a * gini(ctgry) if currentgini < smallestgini: smallestgini = currentgini thresholdvalue = 
currentthreshold bestattribute = {attribute:attributes[attribute]} k += 1 i += 1 # set first attribution dictionary {key:type} to the best attributes. gain = originalgini-smallestgini return bestattribute, thresholdvalue, gain ``` ##### 3. decisiontree.py ``` import sklearn.model_selection import numpy as np import csv import math import matplotlib.pyplot as plt import random from collections import Counter from utils import * class Treenode: type = "" datatype = "" label = None testattribute = "" edge = {} majority = -1 threshold = -1 # for numerical value testattributedict = {} depth = 0 _caldepth = 0 parent = None def __init__(self, label, type): self.label = label self.type = type # self.left = left # self.right = right def caldepth(self): a = self while a.parent is not None: self._caldepth += 1 a = a.parent return self._caldepth def isfather(self): if self.parent is None: return True else: return False # Decision Tree that only analyze square root of the data. def decisiontreeforest(dataset: np.array, dictattributes: dict, algortype: str ='id3', maxdepth: int = 10, minimalsize: int = 10, minimalgain: float = 0.01): datasetcopy = np.copy(dataset).T # dataset copy is by colomn. 
dictattricopy = dictattributes.copy() classindex = list(dictattributes.values()).index("class") k = len(dictattributes)-1 randomlist = random.sample(range(0, k), round(math.sqrt(k))) if classindex !=0 else random.sample(range(1, k+1), round(math.sqrt(k))) randomlist.append(classindex) randomkey = [list(dictattricopy.keys())[i] for i in randomlist] trimmeddict = {key:dictattricopy[key] for key in randomkey} trimmeddata = np.array(datasetcopy[randomlist]) def processbest(algor): if algor == "cart" or algor == "gini": return cartbestseperate(trimmeddata.T, trimmeddict) else: # algor == "id3" or algor == "infogain" return id3bestseperate(trimmeddata.T, trimmeddict) node = Treenode(label=-1,type="decision") currentdepth = node.depth node.majority = majority(datasetcopy[classindex]) if same(datasetcopy[classindex]): node.type = "leaf" node.label = datasetcopy[classindex][0] return node if len(dictattricopy) == 0: node.type = "leaf" node.label = majority(datasetcopy[classindex]) return node # A stopping criteria 'minimal_size_for_split_criterion' if len(dataset) <= minimalsize: node.type = "leaf" node.label = majority(datasetcopy[classindex]) return node bestattributedict,thresholdval,gain = processbest(algortype) bestattributename = list(bestattributedict.keys())[0] bestattributetype = bestattributedict[bestattributename] node.testattributedict = bestattributedict node.datatype = bestattributetype node.testattribute = bestattributename node.threshold = thresholdval bindex = list(dictattricopy.keys()).index(list(bestattributedict.keys())[0]) # A Possible Stopping criteria 'minimal_gain' if gain < minimalgain: node.type = "leaf" node.label = majority(datasetcopy[classindex]) return node subdatalists = [] if bestattributetype == "numerical": sortedcopy = datasetcopy.T[datasetcopy.T[:,bindex].argsort(kind='quicksort')].T splitindex = 0 for numericalvalue in sortedcopy[bindex]: if numericalvalue > thresholdval: break else: splitindex += 1 subdatalistraw = 
[sortedcopy.T[:splitindex].T,sortedcopy.T[splitindex:].T] for subdata in subdatalistraw: subdata = np.delete(subdata,bindex,0) subdatalists.append(subdata.T) else: bigv = list(Counter(datasetcopy[bindex]).keys()) # this is the all the categories of the test attribute left. for smallv in bigv: index = [idx for idx, element in enumerate(datasetcopy[bindex]) if element == smallv] subdatav = np.array(datasetcopy.T[index]).T subdatav = np.delete(subdatav,bindex,0) # I delete the column I already used using bindex as reference. # Then, later, pop the same index from list attribute. subdatalists.append(subdatav.T) # list of nparrays of target/label/categories. dictattricopy.pop(bestattributename) edge = {} sdindex = 0 for subvdata in subdatalists: if subvdata.size == 0: node.type = "leaf" node.label = node.majority node.threshold = thresholdval return node # Another Stoping criteria I could ADD: maximal depth if node.caldepth()+1 > maxdepth: node.type = "leaf" node.label = node.majority node.threshold = thresholdval return node subtree = decisiontreeforest(subvdata, dictattricopy, algortype, maxdepth, minimalsize, minimalgain) subtree.depth = currentdepth + 1 subtree.parent = node if bestattributetype == 'numerical': attributevalue = "<=" if sdindex == 0 else ">" else: attributevalue = bigv[sdindex] edge[attributevalue] = subtree sdindex += 1 node.edge = edge return node # Predict the label of the test data, return correct and predict. def prediction(tree: Treenode, instance, dictattricopy): # note that the instance is by row. 
(I formerly used by column) predict = tree.majority classindex = list(dictattricopy.values()).index("class") correct = instance[classindex] if tree.type == 'leaf': predict = tree.label return predict, correct, predict==correct testindex = list(dictattricopy.keys()).index(tree.testattribute) if tree.datatype == "numerical": if instance[testindex] <= tree.threshold: nexttree = tree.edge['<='] else: nexttree = tree.edge['>'] else: if instance[testindex] not in tree.edge: return predict, correct, predict==correct nexttree = tree.edge[instance[testindex]] return prediction(nexttree, instance, dictattricopy) ``` ##### 4. randomforest.py ``` from utils import * from decisiontree import * # Stratified K-Fold method def stratifiedkfold(data, categorydict, k = 10): classindex = list(categorydict.values()).index("class") datacopy = np.copy(data).T classes = list(Counter(datacopy[classindex]).keys()) nclass = len(classes) # number of classes listofclasses = [] for oneclass in classes: index = [idx for idx, element in enumerate(datacopy[classindex]) if element == oneclass] oneclassdata = np.array(datacopy.T[index]) np.random.shuffle(oneclassdata) listofclasses.append(oneclassdata) splitted = [np.array_split(i, k) for i in listofclasses] nclass = len(classes) combined = [] for j in range(k): ithterm = [] for i in range(nclass): if len(ithterm) == 0: ithterm = splitted[i][j] else: ithterm = np.append(ithterm,splitted[i][j],0) combined.append(ithterm) return combined # Bootstrap/Bagging method with resample ratio def bootstrap(data, ratio=0.1): data2 = np.copy(data) k = len(data) randomlist = random.sample(range(0, k), round(k*ratio)) data2 = np.delete(data2, randomlist, 0) p = len(data2) randomfill = random.sample(range(0, p), k-p) data2 = np.concatenate((data2,data2[randomfill]),0) # print(len(data2)) return data2 # Random Forest, plant a forest of n trees def plantforest(data, categorydict, ntree=10, maxdepth=10, minimalsize=10, minimalgain=0.01, algortype='id3', bootstrapratio 
= 0.1): forest = [] for i in range(ntree): datause = bootstrap(data, bootstrapratio) tree = decisiontreeforest(datause,categorydict,algortype,maxdepth,minimalsize,minimalgain) forest.append(tree) return forest # Predict the class of a single instance def forestvote(forest, instance, categorydict): votes = {} for tree in forest: predict, correct, correctbool = prediction(tree,instance,categorydict) if predict not in votes: votes[predict] = 1 else: votes[predict] += 1 return max(votes, key=votes.get), correct # A complete k-fold cross validation def kfoldcrossvalid(data, categorydict, k=10, ntree=10, maxdepth=5, minimalsize=10, minimalgain=0.01, algortype='id3', bootstrapratio = 0.1): folded = stratifiedkfold(data, categorydict, k) listofnd = [] accuracylist = [] for i in range(k): # print("at fold", i) testdataset = folded[i] foldedcopy = folded.copy() foldedcopy.pop(i) traindataset = np.vstack(foldedcopy) correctcount = 0 trainforest = plantforest(traindataset,categorydict,ntree,maxdepth,minimalsize,minimalgain,algortype,bootstrapratio) emptyanalysis = [] # testdataset = traindataset for instance in testdataset: predict, correct = forestvote(trainforest,instance,categorydict) emptyanalysis.append([predict, correct]) if predict == correct: correctcount += 1 listofnd.append(np.array(emptyanalysis)) # print('fold', i+1, ' accuracy: ', correctcount/len(testdataset)) accuracylist.append(correctcount/len(testdataset)) acc = np.mean(accuracylist) return listofnd, acc ``` ##### 5. 
run.py ``` from utils import * from decisiontree import * from randomforest import * def importhousedata(): house = importfile('hw3_house_votes_84.csv', ',') housecategory = {} for i in house[0]: housecategory[i] = 'categorical' housecategory["class"] = 'class' housedata = np.array(house[1:]).astype(float) return housedata, housecategory def importwinedata(): wine = importfile('hw3_wine.csv', '\t') winecategory = {} for i in wine[0]: winecategory[i] = 'numerical' winecategory["# class"] = 'class' winedata = np.array(wine[1:]).astype(float) return winedata, winecategory def importcancerdata(): cancer = importfile('hw3_cancer.csv', '\t') cancercategory = {} for i in cancer[0]: cancercategory[i] = 'numerical' cancercategory["Class"] = 'class' cancerdata = np.array(cancer[1:]).astype(float) return cancerdata, cancercategory def importcmcdata(): cmc = importfile('cmc.data', ',') cmccategory = {"Wife's age":"numerical","Wife's education":"categorical", "Husband's education":"categorical","Number of children ever born":"numerical", "Wife's religion":"binary","Wife's now working?":"binary", "Husband's occupation":"categorical","Standard-of-living index":"categorical", "Media exposure":"binary","Contraceptive method used":"class"} cmcdata = np.array(cmc).astype(int) return cmcdata, cmccategory if __name__=="__main__": housedata, housecategory = importhousedata() winedata, winecategory = importwinedata() cancerdata, cancercategory = importcancerdata() cmcdata,cmccategory = importcmcdata() lists,acc = kfoldcrossvalid(cancerdata, cancercategory, k=10, ntree=20, maxdepth=10, minimalsize=10, minimalgain=0.01, algortype='id3', bootstrapratio = 0.1) print(acc) print(lists) ```
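The impurity measures at the heart of `utils.py` are easy to sanity-check by hand. Below is a minimal, self-contained sketch that re-implements `entropy` and `gini` as defined above and evaluates them on a perfectly balanced binary column (maximal impurity) and a pure column (zero impurity):

```python
import math
from collections import Counter

def entropy(column):
    # Shannon entropy of the label distribution, in bits
    counts = list(Counter(column).values())
    total = sum(counts)
    return -sum((c / total) * math.log(c / total, 2) for c in counts)

def gini(column):
    # Gini impurity: 1 minus the sum of squared class probabilities
    counts = list(Counter(column).values())
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

balanced = [0, 0, 1, 1]   # 50/50 split: worst case for a binary class column
pure = [1, 1, 1, 1]       # single class: nothing left to split on

print(entropy(balanced), gini(balanced))  # 1.0 0.5
print(entropy(pure) == 0, gini(pure) == 0)  # True True
```

Both `id3bestseperate` and `cartbestseperate` pick the split that minimizes the weighted sum of these values over the resulting partitions, so these two extremes bracket every split they consider.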
``` from PIL import Image from numpy import * from pylab import * import scipy.misc from scipy.cluster.vq import * import imtools import pickle imlist = imtools.get_imlist('selected_fontimages/') imnbr = len(imlist) with open('font_pca_modes.pkl', 'rb') as f: immean = pickle.load(f) V = pickle.load(f) immatrix = array([array(Image.open(im)).flatten() for im in imlist], 'f') immean = immean.flatten() projected = array([dot(V[:40], immatrix[i]-immean) for i in range(imnbr)]) cluster_num = 3 projected = whiten(projected) centroids, distortion = kmeans(projected, cluster_num) code, distance = vq(projected, centroids) def divide_branch_with_center(data, branch, k): div = min(k, len(branch)) if div<=1: return list(branch) centroids, distortion = kmeans(data[branch], k) code, distance = vq(data[branch], centroids) new_branch = [] for i in range(k): ind = where(code==i)[0] if len(ind)==0: continue else: new_branch.append((centroids[i], distance[i], divide_branch_with_center(data, branch[ind], k))) return new_branch tree = array([i for i in range(projected.shape[0])]) branches = ([0 for i in range(40)], 0, divide_branch_with_center(projected, tree, 4)) def get_depth(t): if len(t[2])<2: return 1 else: return max([get_depth(tt) for tt in t[2]])+1 def get_height(t): if (len(t[2])<2): return 1 else: return sum([get_height(tt) for tt in t[2]]) from PIL import Image, ImageDraw def draw_average(center, x, y, im): c = center/np.linalg.norm(center) avim = dot((V[:40]).T, c) avim = 255*(avim-min(avim))/(max(avim)-min(avim)+1e-6) avim = avim.reshape(25, 25) avim[avim<0] = 0 avim[avim>255] = 255 avim = Image.fromarray(avim) avim.thumbnail([20, 20]) ns = avim.size im.paste(avim, [int(x), int(y-ns[1]//2), int(x+ns[0]), int(y+ns[1]-ns[1]//2)]) def draw_node(node, draw, x, y, s, iml, im): if len(node[2])<1: return if len(node[2])==1: nodeim = Image.open(iml[node[2][0]]) nodeim.thumbnail([20, 20]) ns = nodeim.size im.paste(nodeim, [int(x), int(y-ns[1]//2), int(x+ns[0]), 
int(y+ns[1]-ns[1]//2)]) else: ht = sum([get_height(n) for n in node[2]])*20/2 h1 = get_height(node[2][0])*20/2 h2 = get_height(node[2][-1])*20/2 top = y-ht bottom = y+ht draw.line((x, top+h1, x, bottom-h2), fill=(0, 0, 0)) y = top for i in range(len(node[2])): ll = node[2][i][1]/8*s y += get_height(node[2][i])*20/2 xx = x + ll + s/4 draw.line((x, y, xx, y), fill=(0, 0, 0)) if len(node[2][i][2])>1: draw_average(node[2][i][0], xx, y, im) xx = xx+20 draw.line((xx, y, xx+s/4, y), fill=(0, 0, 0)) xx = xx+s/4 draw_node(node[2][i], draw, xx, y, s, imlist, im) y += get_height(node[2][i])*20/2 def draw_dendrogram(node, iml, filename='kclusters.jpg'): rows = get_height(node)*20+40 cols = 1200 s = float(cols-150)/get_depth(node) im = Image.new('RGB', (cols, rows), (255, 255, 255)) draw = ImageDraw.Draw(im) x = 0 y = rows/2 avim = Image.fromarray(immean.reshape(25, 25)) avim.thumbnail([20, 20]) ns = avim.size im.paste(avim, [int(x), int(y-ns[1]//2), int(x+ns[0]), int(y+ns[1]-ns[1]//2)]) draw.line((x+20, y, x+40, y), fill=(0, 0, 0)) draw_node(node, draw, x+40, (rows/2), s, iml, im) im.save(filename) im.show() draw_dendrogram(branches, imlist, filename='k_fonts.jpg') ```
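The `get_depth`/`get_height` helpers above rely only on each node being a `(center, distance, children)` triple, so they can be checked on a tiny hand-built tree. A sketch (the centers and distances are dummy values, and the single-element child list stands in for a leaf holding one image index):

```python
def get_depth(t):
    # a node with fewer than two children counts as a leaf of depth 1
    if len(t[2]) < 2:
        return 1
    return max(get_depth(tt) for tt in t[2]) + 1

def get_height(t):
    # "height" here counts the leaves under the node (used for layout spacing)
    if len(t[2]) < 2:
        return 1
    return sum(get_height(tt) for tt in t[2])

leaf = (None, 0.0, [7])             # one image index -> leaf
inner = (None, 0.0, [leaf, leaf])   # branch with two leaves
root = (None, 0.0, [inner, leaf])   # unbalanced root

print(get_depth(root), get_height(root))  # 3 3
```

`draw_dendrogram` uses exactly these two quantities to size the output image: `get_height` fixes the number of pixel rows, `get_depth` fixes the horizontal step per level.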
### - Canonical Correlation Analysis between Cell Painting & L1000

- This notebook focuses on calculating the canonical coefficients between the canonical variables of the Cell Painting and L1000 level-4 profiles after applying PCA to them.

---------------------------------------------

- The aim of CCA is to find the relationship between two sets of variables such that the correlation between them is maximal. There are many possible linear combinations of the variables, but the aim is to pick only those linear functions which best express the correlations between the two variable sets. These linear functions are called the canonical variables, and the correlations between corresponding pairs of canonical variables are called the canonical correlations.

[CCA read](https://medium.com/analytics-vidhya/what-is-canonical-correlation-analysis-58ef4349c0b0) [cca_tutorial](https://github.com/google/svcca/blob/master/tutorials/001_Introduction.ipynb)

``` from google.colab import drive drive.mount('/content/drive') import os, sys from matplotlib import pyplot as plt %matplotlib inline import numpy as np import pickle import pandas as pd import seaborn as sns import gzip sns.set_context("talk") sns.set_style("darkgrid") from sklearn.decomposition import PCA from sklearn.preprocessing import StandardScaler from sklearn.cross_decomposition import CCA ### know the current directory os.getcwd() os.chdir('/content/drive') # !cat 'My Drive/profiles/cell_painting/cca_core.py' sys.path.append('My Drive/profiles/cell_painting/') import cca_core L1000_cp_dir = 'My Drive/profiles/L1000_cellpainting_comparison/L1000_CP_lvl4_datasets' df_train = pd.read_csv(os.path.join(L1000_cp_dir, 'train_lvl4_data.csv.gz'), compression='gzip',low_memory = False) df_test = pd.read_csv(os.path.join(L1000_cp_dir, 'test_lvl4_data.csv.gz'), compression='gzip',low_memory = False) df_targets = pd.read_csv(os.path.join(L1000_cp_dir, 'target_labels.csv')) metadata_cols = ['replicate_name', 'replicate_id',
'Metadata_broad_sample', 'Metadata_pert_id', 'Metadata_Plate', 'Metadata_Well', 'Metadata_broad_id', 'Metadata_moa', 'sig_id', 'pert_id', 'pert_idose', 'det_plate', 'det_well', 'Metadata_broad_sample', 'pert_iname', 'moa', 'dose'] target_cols = df_targets.columns[1:] df_train_y = df_train[target_cols].copy() df_train_x = df_train.drop(target_cols, axis = 1).copy() df_test_y = df_test[target_cols].copy() df_test_x = df_test.drop(target_cols, axis = 1).copy() df_train_x.drop(metadata_cols, axis = 1, inplace = True) df_test_x.drop(metadata_cols, axis = 1, inplace = True) cp_cols = df_train_x.columns.tolist()[:696] L1000_cols = df_train_x.columns.tolist()[696:] df_train_cp_x = df_train_x.iloc[:, :696].copy() df_train_L1000_x = df_train_x.iloc[:, 696:].copy() df_test_cp_x = df_test_x.iloc[:, :696].copy() df_test_L1000_x = df_test_x.iloc[:, 696:].copy() df_cp_x = pd.concat([df_train_cp_x, df_test_cp_x]) df_L1000_x = pd.concat([df_train_L1000_x, df_test_L1000_x]) def normalize(df): '''Normalize using Standardscaler''' norm_model = StandardScaler() df_norm = pd.DataFrame(norm_model.fit_transform(df),index = df.index,columns = df.columns) return df_norm df_L1000_x = normalize(df_L1000_x) df_cp_x = normalize(df_cp_x) # taking the first 300 PCs for CCA and SVCCA def pca_preprocess(df,n_comp1 = 300,feat_new = ['pca'+ str(i) for i in range(300)]): pca = PCA(n_components=n_comp1, random_state=42) df_pca = pd.DataFrame(pca.fit_transform(df),columns=feat_new) return(df_pca) df_L1_pc_x = pca_preprocess(df_L1000_x) df_cp_pc_x = pca_preprocess(df_cp_x) ``` #### - CCA on CP & L1000 train data ``` cca_results = cca_core.get_cca_similarity(df_cp_pc_x.values.T, df_L1_pc_x.values.T, epsilon=1e-10, verbose=False) plt.figure(figsize=(12,8)) sns.set_context('talk', font_scale = 0.85) sns.lineplot(x=range(len(cca_results["cca_coef1"])), y=cca_results["cca_coef1"]) plt.title("CCA correlation coefficients between CP and L1000 canonical variables (300) after PCA") print("Mean Canonical 
Correlation coefficient between CP and L1000 canonical variables (300):", np.mean(cca_results["cca_coef1"])) ``` #### - SVCCA (Singular Vector CCA) as a method to analyze the correlation between Cell Painting & L1000 ``` print("Results using SVCCA keeping 300 dims") # Mean subtract activations cacts1 = df_cp_pc_x.values.T - np.mean(df_cp_pc_x.values.T, axis=1, keepdims=True) cacts2 = df_L1_pc_x.values.T - np.mean(df_L1_pc_x.values.T, axis=1, keepdims=True) # Perform SVD U1, s1, V1 = np.linalg.svd(cacts1, full_matrices=False) U2, s2, V2 = np.linalg.svd(cacts2, full_matrices=False) svacts1 = np.dot(s1[:300]*np.eye(300), V1[:300]) # can also compute as svacts1 = np.dot(U1.T[:300], cacts1) svacts2 = np.dot(s2[:300]*np.eye(300), V2[:300]) # can also compute as svacts2 = np.dot(U2.T[:300], cacts2) svcca_results = cca_core.get_cca_similarity(svacts1, svacts2, epsilon=1e-10, verbose=False) print('mean svcca correlation coefficient:', np.mean(svcca_results["cca_coef1"])) plt.figure(figsize=(12,8)) sns.set_context('talk', font_scale = 0.85) plt.plot(svcca_results["cca_coef1"], lw=2.0) plt.xlabel("Sorted CCA Correlation Coeff Idx") plt.ylabel("CCA Correlation Coefficient Value") plt.title("SVCCA correlation coefficients between CP and L1000 canonical variables (300)") ``` ### - Using the Sklearn CCA package for CCA ``` cca = CCA(n_components=df_cp_pc_x.shape[1]) cp_cca_vars, L1000_cca_vars = cca.fit_transform(df_cp_pc_x, df_L1_pc_x) canonical_coeffs = np.corrcoef(cp_cca_vars.T, L1000_cca_vars.T).diagonal(offset=df_cp_pc_x.shape[1]) print('mean canonical correlation coefficient (sklearn CCA):', np.mean(canonical_coeffs)) plt.figure(figsize=(12,8)) sns.set_context('talk', font_scale = 0.85) plt.plot(canonical_coeffs, lw=2.0) plt.xlabel("Sorted CCA Correlation Coeff Idx") plt.ylabel("CCA Correlation Coefficient Value") plt.title("CCA correlation coefficients between CP and L1000 canonical variables after PCA") ``` #### - Ultimately, further analysis will focus on the first few canonical variables of both CP and L1000 that have the highest canonical coefficients.
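The `np.corrcoef(...).diagonal(offset=...)` trick used above pairs the i-th column of one block with the i-th column of the other: stacking the two transposed matrices gives a full correlation matrix, and the diagonal of its off-diagonal block holds exactly `corr(X[:, i], Y[:, i])`. A toy illustration on hypothetical data (numpy only; `X` and `Y` are made-up stand-ins for the two sets of canonical variables):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_vars = 200, 3
X = rng.normal(size=(n_samples, n_vars))
Y = X + 0.1 * rng.normal(size=(n_samples, n_vars))  # Y closely tracks X

# np.corrcoef treats rows as variables, so pass the transposes;
# offset=n_vars selects entries (0, 3), (1, 4), (2, 5) of the 6x6 matrix,
# i.e. the correlation of each X column with its matching Y column.
paired = np.corrcoef(X.T, Y.T).diagonal(offset=n_vars)

print(paired.shape)          # (3,)
print(bool(np.all(paired > 0.9)))  # True, since the added noise is small
```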
<a href="https://colab.research.google.com/github/dmathe18/global_armed_conflict_final_repo/blob/main/Identifying_factors_contributing_to_armed_conflict_scatterplot_analyses.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ``` import pandas as pd import numpy as np import plotly.express as px import six import plotly.express as px from google.colab import files # Import csvs as dataframes df_fuel = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Fuel_exports_percent_merchandise_exports.csv') df_energy = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Energy_Use.csv') df_male_ag = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Percent_Male_Agriculture.csv') df_electricity_access = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Percent_access_to_electricity.csv') df_low_income = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Income_share_by_lowest_20.csv') df_school_enroll = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Percent_school_enrollment.csv') df_literacy = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Literacy_rate.csv') df_high_tech=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/High_tech_exports.csv') df_deaths = pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/conflict_deaths.csv') df_conflicts = 
pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/conflicts.csv') df_imports=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Arms_imports.csv') df_exports=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Arms_exports.csv') df_expense_gdp=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Military_expenditure_GDP.csv') df_expense_capita=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/Military_expenditure_capita.csv') df_regions=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/countries_regions.csv') df_regions=df_regions.rename(columns={'Unnamed: 1':'Country'}) df_fuel_import_plot=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/fuel_imports_data.csv') df_lowinc_conflict_plot=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/lowinc_conflicts_plot_data.csv') df_various_expenditure_plot=pd.read_csv('https://raw.githubusercontent.com/mehurlock94/identifying-factors-contributing-to-armed-conflict/main/various_expenditure_capita.csv') # This is old code that identifies the years with the most data points for each df # length_tracker={} # counter=0 # for df in df_list: # for column in df: # length_tracker[str(counter)+'_'+column]=df[column].isnull().sum(axis=0) # counter+=1 # best_dict={} # counter=0 # for df in df_list: # lowest=5000 # for key in length_tracker: # if str(counter)+'_' in key and 'Country' not in key and 'ame' not in key and 'ode' not in key: # if length_tracker[key]<lowest: # lowest=length_tracker[key] # best = key # best_dict[best]=lowest 
# counter+=1 # print (df_fuel.isnull().sum(axis=0)) # # Lists to select specific dataframes and eventual column headers # df_list =[df_fuel] # master_labels = ['Fuel'] # Lists which can be varied to select data to be exported in the final csv. Years should reflect those which maximize the number of points for a particular data type. df_list=[df_fuel, df_male_ag, df_electricity_access, df_school_enroll,df_low_income] df_list_columns=['Fuel','Male_ag','Electricity','School_enroll','Low_income'] df_list_years=[2010, 2012, 2012, 2018, 2018, 2005] # Pull unique country names from the lists to identify differences and clean deaths_conflicts=pd.unique(df_deaths['Country']) imports_exports=pd.unique(df_imports['Country']) expense=pd.unique(df_expense_gdp['Country']) merge_list=pd.unique(df_fuel['Country Name']) # Create a dataframe that merges key datapoints for analysis merged=pd.DataFrame(merge_list, columns=['Country']) counter=0 for df, year in zip(df_list, df_list_years): df=df.rename(columns={'Country Name':'Country'}) df.drop(df.columns.difference(['Country',str(year)]),1,inplace=True) merged=merged.merge(df, how='inner', on='Country') df_regions=df_regions.merge(df, on='Country') merged=merged.rename(columns={str(year):df_list_columns[counter]}) df_regions=df_regions.rename(columns={str(year):df_list_columns[counter]}) counter+=1 # Clean the data further by replacing Nan values with the average for a given region. 
This average comes from the regions defined in the SIPRI dataset """This section of code should only be run if you want Nan values to be filled with the mean for that region""" regions=pd.unique(df_regions['Region']) headers=list(merged.columns.values) df_medians=pd.DataFrame(index=regions, columns=headers[1:]) for region in regions: runner=df_regions[df_regions['Region']==region] for header in headers[1:]: median=runner[header].mean() df_medians.at[region,header]=median df_medians.reset_index(inplace=True) df_medians=df_medians.rename(columns={'index':'Region'}) df_regions=df_regions.merge(df_medians, on='Region') # Further clean the data to remove duplicate column names post-merge df_regions.drop(df_regions.columns.difference(['Region','Country','Fuel_y','Male_ag_y', 'Electricity_y', 'School_enroll_y', 'Low_income_y']),1,inplace=True) df_regions=df_regions.rename(columns={'Fuel_y':'Fuel','Male_ag_y':'Male_ag', 'Electricity_y':'Electricity', 'School_enroll_y':'School_enroll', 'Low_income_y':'Low_income'}) merged=merged.set_index('Country') df_regions=df_regions.set_index('Country') merged=merged.fillna(df_regions) # Define a function to produce csv outputs for the various dataframes. 
Mainly to avoid the need to re-write excessively def merger(df_list, columns_list,labels_list, merged_df): counter=0 for df, column in zip(df_list, columns_list): df.drop(df.columns.difference(['Country',str(column)]),1,inplace=True) df=df.rename(columns={str(column):labels_list[counter]}) runner=merged_df.merge(df, how='inner', on='Country') runner=runner.dropna() runner.to_csv(str(labels_list[counter])+'.csv') files.download(str(labels_list[counter])+'.csv') counter+=1 return runner # Produce data output csv files merger([df_deaths],[2019],['Deaths'],merged) merger([df_conflicts],[2008],['Conflicts'],merged) merger([df_imports],[2019],['Imports'],merged) merger([df_exports],[2006],['Exports'],merged) merger([df_expense_capita],[2018],['Expense_Capita'],merged) merger([df_expense_gdp],[2018],['Expense_GPD'],merged) merger([df_imports, df_exports, df_expense_capita, df_expense_gdp],[2019, 2006, 2018, 2018],['Imports','Exports','Expense_Capita','Expense_GDP'],merged) # Generate plots of fuel exports vs arms imports joined=pd.melt(df_fuel_import_plot, id_vars=['Fuel'], value_vars=['Imports']) graph=px.scatter(joined, x='Fuel', y='value',title='Predictability of Fuel on Armed Conflicts', labels={'Fuel':'Fuel Exports','value':'Arms Imports'}) graph.write_html('Fuel_Imports.html') # files.download('Fuel_Imports.html') # Generate plots of low income percentage vs armed conflicts joined=pd.melt(df_lowinc_conflict_plot, id_vars=['Low_income'], value_vars=['Conflicts']) graph=px.scatter(joined, x='Low_income', y='value',title='Effect of Lower Income Population on Armed Conflicts', labels={'Low_income':'Income Shared by Lowest 20%','value':'Armed Conflicts'}) graph.write_html('Lowinc_conflicts.html') # files.download('Lowinc_conflicts.html') # Generate plots of fuel exports on military expenditure joined=pd.melt(df_various_expenditure_plot, id_vars=['Fuel'], value_vars=['Expense_Capita']) graph=px.scatter(joined, x='Fuel', y='value',title='Effect of Fuel Exports on Military 
Spending per Capita', labels={'Fuel':'Fuel Exports','value':'Military Expenses per Capita'}) graph.write_html('Fuel_percapita.html') # files.download('Fuel_percapita.html') # Generate plots of percent males in agriculture vs military expenditure joined=pd.melt(df_various_expenditure_plot, id_vars=['Male_ag'], value_vars=['Expense_Capita']) graph=px.scatter(joined, x='Male_ag', y='value',title='Effect of Farming Population on Military Spending per Capita', labels={'Male_ag':'% Males in Agriculture','value':'Military Expenses per Capita'}) graph.write_html('Maleag_percapita.html') # files.download('Maleag_percapita.html') # Generate plots of percent electricity availability vs military expenditure joined=pd.melt(df_various_expenditure_plot, id_vars=['Electricity'], value_vars=['Expense_Capita']) graph=px.scatter(joined, x='Electricity', y='value',title='Effect of Electricity Availability on Military Spending per Capita', labels={'Electricity':'% Population with Electricity','value':'Military Expenses per Capita'}) graph.write_html('Electricity_percapita.html') # files.download('Electricity_percapita.html') # Generate plots of academic enrollment vs military expenditure joined=pd.melt(df_various_expenditure_plot, id_vars=['School_enroll'], value_vars=['Expense_Capita']) graph=px.scatter(joined, x='School_enroll', y='value',title='Effect of School Enrollment on Military Spending per Capita', labels={'School_enroll':'Ratio of Girls:Boys Enrolled','value':'Military Expenses per Capita'}) graph.write_html('School_percapita.html') files.download('School_percapita.html') ```
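Each plot above follows the same melt-then-scatter pattern. A minimal sketch of what `pd.melt` produces with one `id_vars` column and one `value_vars` column (the numbers are toy values, not the real dataset):

```python
import pandas as pd

df = pd.DataFrame({'Fuel': [12.5, 40.0],
                   'Imports': [110, 530],
                   'Exports': [9, 75]})

# Keep 'Fuel' as the identifier; stack only 'Imports' into long form.
# 'Exports' is dropped because it is in neither id_vars nor value_vars.
joined = pd.melt(df, id_vars=['Fuel'], value_vars=['Imports'])

print(list(joined.columns))      # ['Fuel', 'variable', 'value']
print(joined['value'].tolist())  # [110, 530]
```

With a single `value_vars` column the melt is nearly a rename, which is why every `px.scatter` call above can use the same `y='value'` regardless of which indicator is being plotted.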
``` import pandas as pd import risk_tools as rt import numpy as np %load_ext autoreload %autoreload 2 %matplotlib inline returns = pd.read_csv('data/Portfolios_Formed_on_ME_monthly_EW.csv', header=0, index_col=0, parse_dates=True, na_values=-99.99) returns.index = pd.to_datetime(returns.index, format='%Y%m') returns.index = returns.index.to_period('M') returns.head() ``` What was the Annualized Return of the Lo 20 portfolio over the entire period? ``` rt.annualized_return(returns['Lo 20']) * 100 ``` What was the Annualized Volatility of the Lo 20 portfolio over the entire period? ``` rt.annualized_volatility(returns['Lo 20']) * 100 ``` What was the Annualized Return of the Hi 20 portfolio over the entire period? ``` rt.annualized_return(returns['Hi 20']) * 100 ``` What was the Annualized Volatility of the Hi 20 portfolio over the entire period? ``` rt.annualized_volatility(returns['Hi 20']) * 100 ``` What was the Annualized Return of the Lo 20 portfolio over the period 1999 - 2015 (both inclusive)? ``` lo20_range_1999_2015 = returns['Lo 20']['1999':'2015'] rt.annualized_return(lo20_range_1999_2015) * 100 ``` What was the Annualized Volatility of the Lo 20 portfolio over the period 1999 - 2015 (both inclusive)? ``` rt.annualized_volatility(lo20_range_1999_2015) * 100 ``` What was the Annualized Return of the Hi 20 portfolio over the period 1999 - 2015 (both inclusive)? ``` hi20_range_1999_2015 = returns['Hi 20']['1999':'2015'] rt.annualized_return(hi20_range_1999_2015) * 100 ``` What was the Annualized Volatility of the Hi 20 portfolio over the period 1999 - 2015 (both inclusive)? ``` rt.annualized_volatility(hi20_range_1999_2015) * 100 ``` What was the Max Drawdown (expressed as a positive number) experienced over the 1999-2015 period in the SmallCap (Lo 20) portfolio? ``` (rt.drawdown(lo20_range_1999_2015, 1000)['Drawdowns']).max() ``` At the end of which month over the period 1999-2015 did that maximum drawdown on the SmallCap (Lo 20) portfolio occur? 
``` (rt.drawdown(lo20_range_1999_2015, 1000)['Drawdowns']).idxmax() ``` What was the Max Drawdown (expressed as a positive number) experienced over the 1999-2015 period in the LargeCap (Hi 20) portfolio? ``` (rt.drawdown(hi20_range_1999_2015, 1000)['Drawdowns']).max() ``` Over the period 1999-2015, at the end of which month did that maximum drawdown of the LargeCap (Hi 20) portfolio occur? ``` (rt.drawdown(hi20_range_1999_2015, 1000)['Drawdowns']).idxmax() ``` For the remaining questions, use the EDHEC Hedge Fund Indices data set that we used in the lab assignment and load them into Python. Looking at the data since 2009 (including all of 2009) through 2018 which Hedge Fund Index has exhibited the highest semideviation? ``` hfi = pd.read_csv('data/edhec-hedgefundindices.csv', header=0, index_col=0, parse_dates=True) hfi_after_2009 = hfi['2009':] hfi_after_2009.head() ``` Looking at the data since 2009 (including all of 2009) which Hedge Fund Index has exhibited the lowest semideviation? ``` rt.semideviation(hfi_after_2009) ``` Looking at the data since 2009 (including all of 2009) which Hedge Fund Index has been most negatively skewed? ``` rt.skewness(hfi_after_2009) ``` Looking at the data since 2000 (including all of 2000) through 2018 which Hedge Fund Index has exhibited the highest kurtosis? ``` hfi_after_2000 = hfi['2000':] rt.kurtosis(hfi_after_2000) ```
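The `risk_tools` module used throughout isn't shown here, so its internals are an assumption, but the standard formulas it presumably implements fit in a few lines of pandas. Note the positive sign convention for drawdowns, which matches the `.max()` calls above; the 12s reflect monthly data:

```python
import numpy as np
import pandas as pd

def annualized_return(r, periods_per_year=12):
    # Compound the per-period returns, then annualize over the sample length
    compounded = (1 + r).prod()
    return compounded ** (periods_per_year / r.shape[0]) - 1

def annualized_volatility(r, periods_per_year=12):
    # Scale the per-period standard deviation by sqrt(periods per year)
    return r.std() * np.sqrt(periods_per_year)

def drawdown(r, initial=1000):
    # Wealth index, running peak, and positive-signed drawdown from the peak
    wealth = initial * (1 + r).cumprod()
    peaks = wealth.cummax()
    return pd.DataFrame({'Wealth': wealth,
                         'Peaks': peaks,
                         'Drawdowns': (peaks - wealth) / peaks})

# Three months of toy returns: +10%, -20%, +5%
r = pd.Series([0.10, -0.20, 0.05])
ann = annualized_return(r)
max_dd = drawdown(r)['Drawdowns'].max()
```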
# YOLO on PYNQ-Z1 and Movidius NCS: Webcam example To run this notebook, you need to connect a USB webcam to the PYNQ-Z1 and a monitor to the HDMI output. You'll already need a powered USB hub for the Movidius NCS, so you should have a spare port for the webcam. ### Load required packages ``` from mvnc import mvncapi as mvnc import cv2 import numpy as np import time from pynq.overlays.base import BaseOverlay from pynq.lib.video import * import yolo_ncs,ncs import PIL.Image %matplotlib inline # Load the base overlay base = BaseOverlay("base.bit") ``` ### Configure the webcam To get a decent frame rate, we use a webcam resolution of 640x480 so that resizing to 448x448 for the YOLO network is reasonably fast. Note that OpenCV uses BGR, but the YOLO network needs RGB, so we'll have to swap the colors around before sending images to YOLO. ``` # Webcam resolution frame_in_w = 640 frame_in_h = 480 # Configure webcam - note that output images will be BGR videoIn = cv2.VideoCapture(0) videoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w); videoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h); print("Capture device is open: " + str(videoIn.isOpened())) ``` ### Configure the HDMI output ``` hdmi_out = base.video.hdmi_out # Configure the HDMI output to the same resolution as the webcam input mode = VideoMode(frame_in_w,frame_in_h,24) hdmi_out.configure(mode, PIXEL_BGR) # Start the HDMI output hdmi_out.start() ``` ### Take a photo with the webcam ``` ret, frame = videoIn.read() if (not ret): raise RuntimeError("Failed to read from camera.") # Convert BGR image to RGB (required by YOLO and PIL) frame = frame[:,:,(2,1,0)] # Resize the image to the size required by YOLO network (448x448) small_frame = cv2.resize(frame, dsize=(448, 448), interpolation=cv2.INTER_CUBIC) ncs_frame = small_frame.copy()/255.0 # Show the image in the Jupyter notebook img = PIL.Image.fromarray(frame) img ``` ### Open the Movidius NCS ``` # Open the Movidius NCS device ncsdev = ncs.MovidiusNCS() # Load the graph 
file if ncsdev.load_graph('../graph'): print('Graph file loaded to Movidius NCS') ``` ### Send image to NCS ``` ncsdev.graph.LoadTensor(ncs_frame.astype(np.float16), 'user object') out, userobj = ncsdev.graph.GetResult() # Interpret results and draw boxes on the image results = yolo_ncs.interpret_output(out.astype(np.float32), frame.shape[1], frame.shape[0]) # fc27 instead of fc12 for yolo_small img_res = yolo_ncs.draw_boxes(frame, results, frame.shape[1], frame.shape[0]) # Display labelled image in Jupyter notebook img = PIL.Image.fromarray(img_res) img ``` ### Webcam to HDMI pass-through (without YOLO) ``` n_frames = 2000 start_time = time.time() for _ in range(n_frames): # Get a frame from the webcam ret, frame = videoIn.read() # Copy the input frame to the output frame frame_out = hdmi_out.newframe() frame_out[:,:,:] = frame[:,:,:] hdmi_out.writeframe(frame_out) end_time = time.time() print('Runtime:',end_time-start_time,'FPS:',n_frames/(end_time-start_time)) ``` ### Webcam to HDMI with YOLO ``` n_frames = 200 start_time = time.time() for _ in range(n_frames): # Get a frame from the webcam ret, frame = videoIn.read() # Resize to the frame size required by YOLO network (448x448) and convert to RGB small_frame = cv2.resize(frame[:,:,(2,1,0)], dsize=(448, 448), interpolation=cv2.INTER_CUBIC) ncs_frame = small_frame.copy()/255.0 # Send the frame to the NCS ncsdev.graph.LoadTensor(ncs_frame.astype(np.float16), 'user object') out, userobj = ncsdev.graph.GetResult() # Interpret results and draw the boxes on the image results = yolo_ncs.interpret_output(out.astype(np.float32), frame.shape[1], frame.shape[0]) # fc27 instead of fc12 for yolo_small img_res = yolo_ncs.draw_boxes(frame, results, frame.shape[1], frame.shape[0]) # Copy labelled image into output frame frame_out = hdmi_out.newframe() frame_out[:,:,:] = img_res[:,:,:] hdmi_out.writeframe(frame_out) end_time = time.time() print('Runtime:',end_time-start_time,'FPS:',n_frames/(end_time-start_time)) ``` ### Close 
the NCS device ``` ncsdev.close() ``` ### Release the webcam and HDMI output ``` videoIn.release() hdmi_out.stop() del hdmi_out ```
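One detail worth verifying from the cells above: the `frame[:,:,(2,1,0)]` fancy index is how OpenCV's BGR frames become the RGB that YOLO and PIL expect, and applying it twice gets you back where you started. A quick NumPy check, no camera or NCS required:

```python
import numpy as np

# A tiny fake 2x2 "image" with 3 channels, standing in for a webcam frame
frame = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Reversing the channel axis is equivalent to the (2, 1, 0) fancy index
rgb = frame[:, :, (2, 1, 0)]
assert np.array_equal(rgb, frame[:, :, ::-1])

# The swap is its own inverse: BGR -> RGB -> BGR recovers the original
round_trip = rgb[:, :, (2, 1, 0)]
```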
``` import os import json def findCaptureSessionDirs(path): session_paths = [] devices = os.listdir(path) for device in devices: sessions = os.listdir(os.path.join(path, device)) for session in sessions: session_paths.append(os.path.join(device, session)) return session_paths def findCapturesInSession(path): files = [os.path.splitext(f)[0] for f in os.listdir(path) if f.endswith('.json')] return files def loadJsonData(filename): data = None with open(filename) as f: data = json.load(f) return data data_directory = "EyeCaptures" output_directory = "EyeCaptures-dlib" directories = sorted(findCaptureSessionDirs(data_directory)) total_directories = len(directories) print(f"Found {total_directories} directories") from face_utilities import faceEyeRectsToFaceInfoDict, getEyeRectRelative, newFaceInfoDict, find_face_dlib, landmarksToRects, generate_face_grid_rect from PIL import Image as PILImage # Pillow import numpy as np import dateutil.parser import shutil def getScreenOrientation(capture_data): orientation = 0 # Camera Offset and Screen Orientation compensation if capture_data['NativeOrientation'] == "Landscape": if capture_data['CurrentOrientation'] == "Landscape": # Camera above screen # - Landscape on Surface devices orientation = 1 elif capture_data['CurrentOrientation'] == "LandscapeFlipped": # Camera below screen # - Landscape inverted on Surface devices orientation = 2 elif capture_data['CurrentOrientation'] == "PortraitFlipped": # Camera left of screen # - Portrait with camera on left on Surface devices orientation = 3 elif capture_data['CurrentOrientation'] == "Portrait": # Camera right of screen # - Portrait with camera on right on Surface devices orientation = 4 if capture_data['NativeOrientation'] == "Portrait": if capture_data['CurrentOrientation'] == "Portrait": # Camera above screen # - Portrait on iOS devices orientation = 1 elif capture_data['CurrentOrientation'] == "PortraitFlipped": # Camera below screen # - Portrait Inverted on iOS devices 
orientation = 2 elif capture_data['CurrentOrientation'] == "Landscape": # Camera left of screen # - Landscape home button on right on iOS devices orientation = 3 elif capture_data['CurrentOrientation'] == "LandscapeFlipped": # Camera right of screen # - Landscape home button on left on iOS devices orientation = 4 return orientation def getCaptureTimeString(capture_data): sessiontime = dateutil.parser.parse(capture_data["SessionTimestamp"]) currenttime = dateutil.parser.parse(capture_data["Timestamp"]) timedelta = sessiontime - currenttime return str(timedelta.total_seconds()) for directory_idx, directory in enumerate(directories): print(f"Processing {directory_idx + 1}/{total_directories} - {directory}") captures = findCapturesInSession(os.path.join(data_directory,directory)) total_captures = len(captures) # dotinfo.json - { "DotNum": [ 0, 0, ... ], # "XPts": [ 160, 160, ... ], # "YPts": [ 284, 284, ... ], # "XCam": [ 1.064, 1.064, ... ], # "YCam": [ -6.0055, -6.0055, ... ], # "Time": [ 0.205642, 0.288975, ... ] } # # PositionIndex == DotNum # Timestamp == Time, but no guarantee on order. Unclear if that is an issue or not dotinfo = { "DotNum": [], "XPts": [], "YPts": [], "XCam": [], "YCam": [], "Time": [] } recording_path = os.path.join(data_directory, directory) output_path = os.path.join(output_directory, f"{directory_idx:05d}") output_frame_path = os.path.join(output_path, "frames") faceInfoDict = newFaceInfoDict() # frames.json - ["00000.jpg","00001.jpg"] frames = [] facegrid = { "X": [], "Y": [], "W": [], "H": [], "IsValid": [] } # info.json - {"TotalFrames":99,"NumFaceDetections":97,"NumEyeDetections":56,"Dataset":"train","DeviceName":"iPhone 6"} info = { "TotalFrames": total_captures, "NumFaceDetections": 0, "NumEyeDetections": 0, "Dataset": "train", # For now put all data into training dataset "DeviceName": None } # screen.json - { "H": [ 568, 568, ... ], "W": [ 320, 320, ... ], "Orientation": [ 1, 1, ... 
] } screen = { "H": [], "W": [], "Orientation": [] } if not os.path.exists(output_directory): os.mkdir(output_directory) if not os.path.exists(output_path): os.mkdir(output_path) if not os.path.exists(output_frame_path): os.mkdir(output_frame_path) for capture_idx, capture in enumerate(captures): print(f"Processing {capture_idx + 1}/{total_captures} - {capture}") capture_json_path = os.path.join(data_directory, directory, capture + ".json") capture_png_path = os.path.join(data_directory, directory, capture + ".png") if os.path.isfile(capture_json_path) and os.path.isfile(capture_png_path): capture_data = loadJsonData(capture_json_path) if info["DeviceName"] == None: info["DeviceName"] = capture_data["HostModel"] elif info["DeviceName"] != capture_data["HostModel"]: error(f"Device name changed during session, expected \'{info['DeviceName']}\' but got \'{capture_data['HostModel']}\'") capture_image = PILImage.open(capture_png_path).convert('RGB') # dlib wants images in RGB or 8-bit grayscale format capture_image_np = np.array(capture_image) # dlib wants images in numpy array format shape_np, isValid = find_face_dlib(capture_image_np) info["NumFaceDetections"] = info["NumFaceDetections"] + 1 face_rect, left_eye_rect, right_eye_rect, isValid = landmarksToRects(shape_np, isValid) # facegrid.json - { "X": [ 6, 6, ... ], "Y": [ 10, 10, ... ], "W": [ 13, 13, ... ], "H": [ 13, 13, ... ], "IsValid": [ 1, 1, ... 
] } if isValid: faceGridX, faceGridY, faceGridW, faceGridH = generate_face_grid_rect(face_rect, capture_image.width, capture_image.height) else: faceGridX = 0 faceGridY = 0 faceGridW = 0 faceGridH = 0 facegrid["X"].append(faceGridX) facegrid["Y"].append(faceGridY) facegrid["W"].append(faceGridW) facegrid["H"].append(faceGridH) facegrid["IsValid"].append(isValid) faceInfoDict, faceInfoIdx = faceEyeRectsToFaceInfoDict(faceInfoDict, face_rect, left_eye_rect, right_eye_rect, isValid) info["NumEyeDetections"] = info["NumEyeDetections"] + 1 # screen.json - { "H": [ 568, 568, ... ], "W": [ 320, 320, ... ], "Orientation": [ 1, 1, ... ] } screen["H"].append(capture_data['ScreenHeightInRawPixels']) screen["W"].append(capture_data['ScreenWidthInRawPixels']) screen["Orientation"].append(getScreenOrientation(capture_data)) # dotinfo.json - { "DotNum": [ 0, 0, ... ], # "XPts": [ 160, 160, ... ], # "YPts": [ 284, 284, ... ], # "XCam": [ 1.064, 1.064, ... ], # "YCam": [ -6.0055, -6.0055, ... ], # "Time": [ 0.205642, 0.288975, ... ] } # # PositionIndex == DotNum # Timestamp == Time, but no guarantee on order. 
Unclear if that is an issue or not xcam = 0 ycam = 0 dotinfo["DotNum"].append(capture_data["PositionIndex"]) dotinfo["XPts"].append(capture_data["ScreenX"]) dotinfo["YPts"].append(capture_data["ScreenY"]) dotinfo["XCam"].append(0) dotinfo["YCam"].append(0) dotinfo["Time"].append(getCaptureTimeString(capture_data)) # Convert image from PNG to JPG frame_name = str(f"{capture_idx:05d}.jpg") frames.append(frame_name) capture_img = PILImage.open(capture_png_path).convert('RGB') capture_img.save(os.path.join(output_frame_path, frame_name)) else: print(f"Error processing capture {capture}") with open(os.path.join(output_path, 'frames.json'), "w") as write_file: json.dump(frames, write_file) with open(os.path.join(output_path, 'screen.json'), "w") as write_file: json.dump(screen, write_file) with open(os.path.join(output_path, 'info.json'), "w") as write_file: json.dump(info, write_file) with open(os.path.join(output_path, 'dotInfo.json'), "w") as write_file: json.dump(dotinfo, write_file) with open(os.path.join(output_path, 'faceGrid.json'), "w") as write_file: json.dump(facegrid, write_file) with open(os.path.join(output_path, 'dlibFace.json'), "w") as write_file: json.dump(faceInfoDict["Face"], write_file) with open(os.path.join(output_path, 'dlibLeftEye.json'), "w") as write_file: json.dump(faceInfoDict["LeftEye"], write_file) with open(os.path.join(output_path, 'dlibRightEye.json'), "w") as write_file: json.dump(faceInfoDict["RightEye"], write_file) print("DONE") ```
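The nested `if` ladder in `getScreenOrientation` is really a small lookup table: each (`NativeOrientation`, `CurrentOrientation`) pair maps to a camera-position code (1 = above the screen, 2 = below, 3 = left, 4 = right, 0 = unknown). A table-driven sketch of the same logic, using the field names from the capture JSON above:

```python
# (NativeOrientation, CurrentOrientation) -> camera position code
# 1 = camera above screen, 2 = below, 3 = left, 4 = right
ORIENTATION_TABLE = {
    ("Landscape", "Landscape"): 1,         # e.g. landscape on Surface devices
    ("Landscape", "LandscapeFlipped"): 2,
    ("Landscape", "PortraitFlipped"): 3,
    ("Landscape", "Portrait"): 4,
    ("Portrait", "Portrait"): 1,           # e.g. portrait on iOS devices
    ("Portrait", "PortraitFlipped"): 2,
    ("Portrait", "Landscape"): 3,
    ("Portrait", "LandscapeFlipped"): 4,
}

def get_screen_orientation(capture_data):
    key = (capture_data["NativeOrientation"], capture_data["CurrentOrientation"])
    return ORIENTATION_TABLE.get(key, 0)  # 0 when the pair is unrecognised

sample = {"NativeOrientation": "Portrait", "CurrentOrientation": "LandscapeFlipped"}
```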
#### Data Fetch ``` import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt import seaborn as sns import math #extracting lines for simplied verion open('2-fft-malicious-n-0-4-m-12.txt','w').writelines([ line for line in open("2-fft-malicious-n-0-4-m-12.log") if "Enqueue" in line]) print ("done") #extracting content from lines csv_out = open('2-fft-malicious-n-0-4-m-12-csv.txt','w') with open ('2-fft-malicious-n-0-4-m-12.txt', 'rt') as fft: csv_out.write("time,router,outport,inport,packet_address,packet_type,flit_id,flit_type,vnet,vc,src_ni,src_router,dst_ni,dst_router,enq_time\n") for line in fft: line_split = line.split() time = line_split[line_split.index("time:") + 1] router = line_split[line_split.index("SwitchAllocator") + 3] outport = line_split[line_split.index("outport") + 1] inport = line_split[line_split.index("inport") + 1] packet_address = line_split[line_split.index("addr") + 2][1:-1] packet_type = line_split[line_split.index("addr") + 7] flit_id = line_split[line_split.index("[flit::") + 1][3:] flit_type = line_split[line_split.index("Id="+str(flit_id)) + 1][5:] vnet = line_split[line_split.index("Type="+str(flit_type)) + 1][5:] vc = line_split[line_split.index("Vnet="+str(vnet)) + 1][3:] src_ni = line_split[line_split.index("VC="+str(vc)) + 2][3:] src_router = line_split[line_split.index("NI="+str(src_ni)) + 2][7:] dst_ni = line_split[line_split.index("Router="+str(src_router)) + 2][3:] dst_router = line_split[line_split.index("NI="+str(dst_ni)) + 2][7:] enq_time = str(line_split[line_split.index("Enqueue") + 1][5:]) line_csv = time+","+router+","+outport+","+inport+","+packet_address+","+packet_type+","+flit_id+","+flit_type+","+vnet+","+vc+","+src_ni+","+src_router+","+dst_ni+","+dst_router+","+enq_time+"\n" csv_out.write(line_csv) print ("done") #convert txt to csv df = pd.read_csv("2-fft-malicious-n-0-4-m-12-csv.txt",delimiter=',') df.to_csv('2-fft-malicious-n-0-4-m-12.csv',index=False) #dataset df = 
pd.read_csv('2-fft-malicious-n-0-4-m-12.csv') df.shape df.describe() sns.distplot(df['router'], kde = False, bins=30, color='blue') sns.distplot(df['src_router'], kde = False, bins=30, color='blue') sns.distplot(df['dst_router'], kde = False, bins=30, color='red') sns.distplot(df['inport'], kde = False, bins=30, color='green') sns.distplot(df['outport'], kde = False, bins=30, color='green') sns.distplot(df['packet_type'], kde = False, bins=30, color='red') direction = {'Local': 0,'North': 1, 'East': 2, 'South':3,'West':4} df = df.replace({'inport': direction, 'outport': direction}) data = {'GETS': 1,'GETX': 2,'GUX': 3,'DATA': 4, 'PUTX': 5,'PUTS': 6,'WB_ACK':7} df = df.replace({'packet_type': data}) df['flit_id'] = df['flit_id']+1 df['flit_type'] = df['flit_type']+1 df['vnet'] = df['vnet']+1 df['vc'] = df['vc']+1 hoparr = {"0to0":0,"0to1":1,"0to2":2,"0to3":3,"0to4":1,"0to5":2,"0to6":3,"0to7":4,"0to8":2,"0to9":3,"0to10":4,"0to11":5,"0to12":3,"0to13":4,"0to14":5,"0to15":6, "1to1":0,"1to2":1,"1to3":2,"1to4":2,"1to5":1,"1to6":2,"1to7":3,"1to8":3,"1to9":2,"1to10":3,"1to11":4,"1to12":5,"1to13":3,"1to14":4,"1to15":5, "2to2":0,"2to3":1,"2to4":3,"2to5":2,"2to6":1,"2to7":2,"2to8":4,"2to9":3,"2to10":2,"2to11":3,"2to12":5,"2to13":4,"2to14":3,"2to15":4, "3to3":0,"3to4":4,"3to5":3,"3to6":2,"3to7":1,"3to8":5,"3to9":4,"3to10":3,"3to11":2,"3to12":6,"3to13":5,"3to14":4,"3to15":3, "4to4":0,"4to5":1,"4to6":2,"4to7":3,"4to8":1,"4to9":2,"4to10":3,"4to11":4,"4to12":2,"4to13":3,"4to14":4,"4to15":5, "5to5":0,"5to6":1,"5to7":2,"5to8":2,"5to9":1,"5to10":2,"5to11":3,"5to12":3,"5to13":2,"5to14":3,"5to15":4, "6to6":0,"6to7":1,"6to8":3,"6to9":2,"6to10":1,"6to11":2,"6to12":4,"6to13":3,"6to14":2,"6to15":3, "7to7":0,"7to8":4,"7to9":3,"7to10":2,"7to11":1,"7to12":5,"7to13":4,"7to14":3,"7to15":2, "8to8":0,"8to9":1,"8to10":2,"8to11":3,"8to12":1,"8to13":2,"8to14":3,"8to15":4, "9to9":0,"9to10":1,"9to11":2,"9to12":2,"9to13":1,"9to14":2,"9to15":4, 
"10to10":0,"10to11":1,"10to12":3,"10to13":2,"10to14":1,"10to15":2, "11to11":0,"11to12":4,"11to13":3,"11to14":2,"11to15":1, "12to12":0,"12to13":1,"12to14":2,"12to15":3, "13to13":0,"13to14":1,"13to15":2, "14to14":0,"14to15":1, "15to15":0} packarr = {} packtime = {} packchunk = [] hopcurrentarr = [] hoptotarr = [] hoppercentarr =[] waitingarr = [] interval = 500 count = 0 for index, row in df.iterrows(): current_time = row["time"] enqueue_time = row["enq_time"] waiting_time = current_time - enqueue_time waitingarr.append(waiting_time) current_router = row["router"] src_router = row["src_router"] dst_router = row["dst_router"] src_router_temp = src_router if src_router_temp>dst_router: temph = src_router_temp src_router_temp = dst_router dst_router = temph hop_count_string = str(src_router_temp)+"to"+str(dst_router) src_router_temp = src_router hop_count = hoparr.get(hop_count_string) if src_router_temp>current_router: tempc = src_router_temp src_router_temp = current_router current_router = tempc current_hop_string = str(src_router_temp)+"to"+str(current_router) current_hop = hoparr.get(current_hop_string) if(current_hop == 0 and hop_count ==0): hop_percent = 0 else: hop_percent = current_hop/hop_count hoptotarr.append(hop_count) hopcurrentarr.append(current_hop) hoppercentarr.append(hop_percent) if row["packet_address"] not in packarr: packarr[row["packet_address"]] = count packtime[row["packet_address"]] = row["time"] packchunk.append(packarr.get(row["packet_address"])) count+=1 else: current_time = row["time"] position = packarr.get(row["packet_address"]) pkt_time = packtime.get(row["packet_address"]) current_max = max(packarr.values()) if (current_time-pkt_time)<interval: packchunk.append(packarr.get(row["packet_address"])) else: del packarr[row["packet_address"]] del packtime[row["packet_address"]] packarr[row["packet_address"]] = current_max+1 packtime[row["packet_address"]] = row["time"] packchunk.append(packarr.get(row["packet_address"])) if 
(current_max)==count: count+=2 elif (current_max+1)==count: count+=1 df['packet_address'].nunique() print(len(packarr)) print(len(packchunk)) df = df.assign(traversal_id=packchunk) df = df.assign(hop_count=hoptotarr) df = df.assign(current_hop=hopcurrentarr) df = df.assign(hop_percentage=hoppercentarr) df = df.assign(enqueue_time=waitingarr) df.rename(columns={'packet_type': 'cache_coherence_type', 'time': 'timestamp'}, inplace=True) df = df.drop(columns=['packet_address','enq_time']) df.isnull().sum() df.dtypes df.to_csv('2-fft-malicious-n-0-4-m-12.csv',index=False) ``` #### Router Fetch ``` def roundup(x): return int(math.ceil(x / 1000.0)) * 1000 def fetch(i): df = pd.read_csv('2-fft-malicious-n-0-4-m-12.csv') df = df.loc[df['router'] == i] df = df.drop(columns=['router']) df.to_csv('2-fft-malicious-n-0-4-m-12-r'+str(i)+'.csv',index=False) df = pd.read_csv('2-fft-malicious-n-0-4-m-12-r'+str(i)+'.csv') def timecount(df): timearr = [] interval = 999 count = 0 for index, row in df.iterrows(): if row["timestamp"]<=interval : count+=1 else: timearr.append([interval+1,count]) count=1 if (row["timestamp"] == roundup(row["timestamp"])): interval = row["timestamp"]+999 else: interval = roundup(row["timestamp"])-1 timearr.append([interval+1,count]) return timearr def maxcount(timearr,df): countarr = [] increarr = [] maxarr = [] for i in range(len(timearr)): for cnt in range(timearr[i][1],0,-1): countarr.append(cnt) maxarr.append(timearr[i][1]) increment = timearr[i][1] - cnt + 1 increarr.append(increment) df = df.assign(packet_count_decr=countarr) df = df.assign(packet_count_incr=increarr) df = df.assign(max_packet_count=maxarr) return df df = maxcount(timecount(df),df) def rename(df): df['traversal_id'] = df['traversal_id']+1 df["packet_count_index"] = df["packet_count_decr"]*df["packet_count_incr"] df["port_index"] = df["outport"]*df["inport"] df["traversal_index"] = df["cache_coherence_type"]*df["flit_id"]*df["flit_type"]*df["traversal_id"] 
df["cache_coherence_vnet_index"] = df["cache_coherence_type"]*df["vnet"] df["vnet_vc_cc_index"] = df["vc"]*df["cache_coherence_vnet_index"] rename(df) df['target'] = 0 print(df.shape) df.to_csv('2-fft-malicious-n-0-4-m-12-r'+str(i)+'.csv',index=False) for i in range (0,16): fetch(i) ```
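The hand-typed `hoparr` table earlier enumerates hop counts between router pairs, but on a 4x4 mesh these are just Manhattan distances (router `r` sits at column `r % 4`, row `r // 4`), so the table can be generated instead. Generating it also guards against slips; for instance, the `"9to15"` entry in the table above reads 4, while the Manhattan distance between routers 9 and 15 is 3. A sketch, assuming the standard row-major 4x4 mesh layout:

```python
def mesh_hops(a, b, width=4):
    """Manhattan hop count between routers a and b on a row-major 2D mesh."""
    ax, ay = a % width, a // width
    bx, by = b % width, b // width
    return abs(ax - bx) + abs(ay - by)

# Regenerate the lookup table keyed the same way as hoparr ("AtoB" with A <= B)
hoparr = {f"{a}to{b}": mesh_hops(a, b)
          for a in range(16) for b in range(a, 16)}
```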
# Skip-gram word2vec

In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.

## Readings

Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.

* A really good [conceptual overview](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/) of word2vec from Chris McCormick
* [First word2vec paper](https://arxiv.org/pdf/1301.3781.pdf) from Mikolov et al.
* [NIPS paper](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) with improvements for word2vec, also from Mikolov et al.
* An [implementation of word2vec](http://www.thushv.com/natural_language_processing/word2vec-part-1-nlp-with-deep-learning-with-tensorflow-skip-gram/) from Thushan Ganegedara
* TensorFlow [word2vec tutorial](https://www.tensorflow.org/tutorials/word2vec)

## Word embeddings

When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.

![one-hot encodings](assets/one_hot_encoding.png)

To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix.
We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.

![lookup](assets/lookup_matrix.png)

Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers; for example, "heart" is encoded as 958 and "mind" as 18094. Then, to get the hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an **embedding lookup** and the number of hidden units is the **embedding dimension**.

<img src='assets/tokenize_lookup.png' width=500>

There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.

Embeddings aren't only used for words, of course. You can use them for any model where you have a massive number of classes. A particular type of model called **Word2Vec** uses the embedding layer to find vector representations of words that contain semantic meaning.

## Word2Vec

The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red", will have vectors near each other. There are two architectures for implementing word2vec: CBOW (Continuous Bag-Of-Words) and Skip-gram.

<img src="assets/word2vec_architectures.png" width="500">

In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.

First up, importing packages.
```
import time
import numpy as np
import tensorflow as tf
import utils
```

Load the [text8 dataset](http://mattmahoney.net/dc/textdata.html), a file of cleaned-up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the `data` folder. Then you can extract it and delete the archive file to save storage space.

```
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile

dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'

class DLProgress(tqdm):
    last_block = 0

    def hook(self, block_num=1, block_size=1, total_size=None):
        self.total = total_size
        self.update((block_num - self.last_block) * block_size)
        self.last_block = block_num

if not isfile(dataset_filename):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
        urlretrieve(
            'http://mattmahoney.net/dc/text8.zip',
            dataset_filename,
            pbar.hook)

if not isdir(dataset_folder_path):
    with zipfile.ZipFile(dataset_filename) as zip_ref:
        zip_ref.extractall(dataset_folder_path)

with open('data/text8') as f:
    text = f.read()
```

## Preprocessing

Here I'm fixing up the text to make training easier. This comes from the `utils` module I wrote. The `preprocess` function converts any punctuation into tokens, so a period is changed to ` <PERIOD> `. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.

```
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
```

And here I'm creating dictionaries to convert words to integers and back again, integers to words.
The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list `int_words`.

```
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
```

## Subsampling

Words that show up often, such as "the", "of", and "for", don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by

$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$

where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.

I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.

> **Exercise:** Implement subsampling for the words in `int_words`. That is, go through `int_words` and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to `train_words`.

```
from collections import Counter
import random

threshold = 0.0006849873916398326

word_counts = Counter(int_words)
total_count = len(int_words)
print(total_count)

freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]

print(len(train_words))
print(train_words[:10])
```

## Making batches

Now that our data is in good shape, we need to get it into the proper form to pass into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From [Mikolov et al.](https://arxiv.org/pdf/1301.3781.pdf):

"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."

> **Exercise:** Implement a function `get_target` that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.

```
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''

    R = np.random.randint(1, window_size+1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx+1:stop+1])

    return list(target_words)
```

Here's a function that returns batches for our network. The idea is that it grabs `batch_size` words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
```
def get_batches(words, batch_size, window_size=5):
    ''' Create a generator of word batches as a tuple (inputs, targets) '''
    
    n_batches = len(words)//batch_size
    
    # only full batches
    words = words[:n_batches*batch_size]
    
    for idx in range(0, len(words), batch_size):
        x, y = [], []
        batch = words[idx:idx+batch_size]
        for ii in range(len(batch)):
            batch_x = batch[ii]
            batch_y = get_target(batch, ii, window_size)
            y.extend(batch_y)
            x.extend([batch_x]*len(batch_y))
        yield x, y
```
## Building the graph

From [Chris McCormick's blog](http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/), we can see the general structure of our network.

![embedding_network](./assets/skip_gram_net_arch.png)

The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.

The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.

I'm going to have you build the graph in stages now. First off, creating the `inputs` and `labels` placeholders like normal.

> **Exercise:** Assign `inputs` and `labels` using `tf.placeholder`. We're going to be passing in integers, so set the data types to `tf.int32`. The batches we're passing in will have varying sizes, so set the batch sizes to [`None`]. To make things work later, you'll need to set the second dimension of `labels` to `None` or `1`.

```
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')
```
## Embedding

The embedding matrix has a size of the number of words by the number of units in the hidden layer.
So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.

> **Exercise:** Tensorflow provides a convenient function [`tf.nn.embedding_lookup`](https://www.tensorflow.org/api_docs/python/tf/nn/embedding_lookup) that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use `tf.nn.embedding_lookup` to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using [tf.random_uniform](https://www.tensorflow.org/api_docs/python/tf/random_uniform).

```
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)
```
## Negative sampling

For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called ["negative sampling"](http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf). Tensorflow has a convenient function to do this, [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss).
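To build intuition for what a sampled softmax computes, here is a tiny numpy sketch (not TensorFlow's actual implementation, which additionally corrects for the sampling distribution): for one input word we score the true class plus a handful of randomly sampled negative classes, and take the softmax cross-entropy over just that small subset instead of the full vocabulary. All sizes and values here are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vocab, n_embedding, n_sampled = 1000, 16, 5
softmax_w = rng.normal(scale=0.1, size=(n_vocab, n_embedding))
softmax_b = np.zeros(n_vocab)

embed = rng.normal(size=(1, n_embedding))  # hidden-layer output for one input word
true_class = 42                            # index of its target word (arbitrary)

# Draw negative classes from everything except the true class
candidates = np.delete(np.arange(n_vocab), true_class)
sampled = rng.choice(candidates, size=n_sampled, replace=False)
classes = np.concatenate(([true_class], sampled))

# Logits over just 1 + n_sampled classes instead of all n_vocab
logits = embed @ softmax_w[classes].T + softmax_b[classes]

# Softmax cross-entropy with the true class at position 0
logits = logits - logits.max()
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
loss = -log_probs[0, 0]
print(loss)
```

Only the 6 touched rows of `softmax_w` would receive gradient updates here, rather than all 1000 — which is the efficiency win described above.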
> **Exercise:** Below, create weights and biases for the softmax layer. Then, use [`tf.nn.sampled_softmax_loss`](https://www.tensorflow.org/api_docs/python/tf/nn/sampled_softmax_loss) to calculate the loss. Be sure to read the documentation to figure out how it works.

```
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(n_vocab))
    
    # Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
    
    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)
```
## Validation

This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.

```
with train_graph.as_default():
    ## From Thushan Ganegedara's implementation
    valid_size = 16  # Random set of words to evaluate similarity on.
    valid_window = 100  # pick 8 samples from each of the ranges (0,100) and (1000,1100);
lower id implies more frequent valid_examples = np.array(random.sample(range(valid_window), valid_size//2)) valid_examples = np.append(valid_examples, random.sample(range(1000,1000+valid_window), valid_size//2)) valid_dataset = tf.constant(valid_examples, dtype=tf.int32) # We use the cosine distance: norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True)) normalized_embedding = embedding / norm valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset) similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) # If the checkpoints directory doesn't exist: !mkdir checkpoints epochs = 10 batch_size = 1000 window_size = 10 with train_graph.as_default(): saver = tf.train.Saver() with tf.Session(graph=train_graph) as sess: iteration = 1 loss = 0 sess.run(tf.global_variables_initializer()) for e in range(1, epochs+1): batches = get_batches(train_words, batch_size, window_size) start = time.time() for x, y in batches: feed = {inputs: x, labels: np.array(y)[:, None]} train_loss, _ = sess.run([cost, optimizer], feed_dict=feed) loss += train_loss if iteration % 100 == 0: end = time.time() print("Epoch {}/{}".format(e, epochs), "Iteration: {}".format(iteration), "Avg. 
Training loss: {:.4f}".format(loss/100),
                      "{:.4f} sec/batch".format((end-start)/100))
                loss = 0
                start = time.time()

            if iteration % 1000 == 0:
                # note that this is expensive (~20% slowdown if computed every 500 steps)
                sim = similarity.eval()
                for i in range(valid_size):
                    valid_word = int_to_vocab[valid_examples[i]]
                    top_k = 8  # number of nearest neighbors
                    nearest = (-sim[i, :]).argsort()[1:top_k+1]
                    log = 'Nearest to %s:' % valid_word
                    for k in range(top_k):
                        close_word = int_to_vocab[nearest[k]]
                        log = '%s %s,' % (log, close_word)
                    print(log)

            iteration += 1
    save_path = saver.save(sess, "checkpoints/text8.ckpt")
    embed_mat = sess.run(normalized_embedding)
```
Restore the trained network if you need to:

```
with train_graph.as_default():
    saver = tf.train.Saver()

with tf.Session(graph=train_graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    embed_mat = sess.run(embedding)
```
## Visualizing the word vectors

Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out [this post from Christopher Olah](http://colah.github.io/posts/2014-10-Visualizing-MNIST/) to learn more about T-SNE and other ways to visualize high-dimensional data.

```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])

fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
    plt.scatter(*embed_tsne[idx, :], color='steelblue')
    plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
```
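Once the normalized embedding matrix is in hand, the nearest-neighbor lookup from the validation cell can also be done outside the graph with plain numpy; the cosine-similarity logic is the same. The random matrix below is a stand-in for the trained `embed_mat`, so the actual neighbors are meaningless — only the mechanics are shown.

```python
import numpy as np

rng = np.random.default_rng(1)
embed_mat = rng.normal(size=(100, 8))  # stand-in for the trained embedding matrix

# Normalize rows so that a dot product equals cosine similarity
norm = np.sqrt((embed_mat ** 2).sum(axis=1, keepdims=True))
normalized = embed_mat / norm

def nearest(idx, top_k=8):
    sim = normalized @ normalized[idx]    # cosine similarity to every word
    return (-sim).argsort()[1:top_k + 1]  # skip the word itself at rank 0

print(nearest(0))
```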
github_jupyter
```
from matplotlib import pyplot as plt
import pickle
%matplotlib inline
plt.figure(figsize=(10, 3))
import seaborn as sns; sns.set()

with open("routing_score.pkl", 'rb') as h:
    test_list = pickle.load(h)
data = test_list[2][0].mean(axis=0, keepdims=True).squeeze()
# data=test_list[0]
xLabel = ["movie_id", "user_id", "gender", "age", "occupation", "zip", "genre"]
yLabel = ["caps_{}".format(i+1) for i in range(4)]
ax = sns.heatmap(data, vmin=0, vmax=1, cmap='Blues', xticklabels=xLabel, yticklabels=yLabel)
ax.xaxis.set_ticks_position('top')
plt.xticks(rotation=0)
plt.yticks(rotation=0)
plt.savefig('capsule_movielens.jpg', dpi=400)

import re
pattern = re.compile(r"^\D*")  # raw string avoids an invalid-escape warning
pattern.findall("+-2")

import math
from itertools import combinations

def comb(n, m):
    return math.factorial(n)//(math.factorial(n-m)*math.factorial(m))

def calculate(num_people):
    total_num = 0
    for i in range(1, num_people+1):
        total_num += comb(num_people, i)*i
    # use an integer modulus: 1e9+7 is a float and loses precision for big n
    return total_num % (10**9 + 7)

calculate(4)

[i for i in range(10, -1, -1)]

import math
def comb(n, m):
    return math.factorial(n)//(math.factorial(n-m)*math.factorial(m))
comb(4, 2)

from typing import List

class Solution:
    def solveNQueens(self, n: int) -> List[List[str]]:
        # Initialise the board
        board = [['.'] * n for _ in range(n)]
        result_list = []
        # Sets tracking occupied columns and the two diagonal directions
        cols, diag_up, diag_down = set(), set(), set()

        # Backtracking DFS: place one queen per row
        def dfs(row):
            if row == n:
                result_list.append([''.join(r) for r in board])
                return
            for col in range(n):
                # A square is attacked if its column or either diagonal is taken
                if col in cols or (row - col) in diag_up or (row + col) in diag_down:
                    continue
                # Place the queen and mark its lines of attack
                board[row][col] = 'Q'
                cols.add(col); diag_up.add(row - col); diag_down.add(row + col)
                dfs(row + 1)
                # Backtrack
                board[row][col] = '.'
                cols.remove(col); diag_up.remove(row - col); diag_down.remove(row + col)

        dfs(0)
        return result_list

solution = Solution()
solution.solveNQueens(4)
```
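As an aside, the sum computed by `calculate` above, $\sum_{i=1}^{n} \binom{n}{i}\,i$, has the closed form $n \cdot 2^{n-1}$: each of the $n$ people appears in exactly $2^{n-1}$ of the counted subsets. A quick self-contained check (note `10**9 + 7` is used as an integer modulus, since `1e9+7` is a float):

```python
import math

def comb(n, m):
    return math.factorial(n) // (math.factorial(n - m) * math.factorial(m))

def calculate(num_people):
    # sum_{i=1..n} C(n, i) * i, as in the loop version above
    return sum(comb(num_people, i) * i for i in range(1, num_people + 1)) % (10**9 + 7)

def calculate_closed_form(num_people):
    # sum_{i=1..n} C(n, i) * i == n * 2^(n-1)
    return (num_people * 2**(num_people - 1)) % (10**9 + 7)

for n in range(1, 20):
    assert calculate(n) == calculate_closed_form(n)
print(calculate(4))  # 4 + 12 + 12 + 4 = 32
```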
github_jupyter
```
import pandas as pd
import numpy as np

df = pd.read_csv('phone_data.csv')
df.head()
df.shape
print(df['item'])
df[['item','duration','month']].head()
df.head()
df.iloc[:8,1:4]
df.loc[8:]
```
# BOOLEAN INDEXING

- and: `&`
- or: `|`
- not (negation / "not in"): `~`
- Equals: `==`
- Not equals: `!=`
- Greater than, less than: `>` or `<`
- Greater than or equal to: `>=`
- Less than or equal to: `<=`

```
condition = (df.item == 'call')
condition.head()

condition = (df.item == 'call') & (df.network == 'Vodafone')
df[condition].head()
```
NOW IF YOU WANT TO SEE THE ITEMS WHICH ARE NOT INCLUDED (EXCLUDED ONES)
```
condition = (df.item == 'call')
df[~condition].head()

condition = (df.item == 'call') | (df.network == 'Vodafone')
df[condition].head()
```
NOW IF YOU WANT TO ADD A COLUMN WHICH WILL BE A MULTIPLE OF A PARTICULAR COLUMN
```
df['new_column'] = df['duration']*3
df.head()

df['new_column'] = df['network']*3
df.head()

df.drop(['new_column'], axis = 1, inplace=True)
df.head()

df.sort_values('duration')  # ascending
df.sort_values('duration', ascending = False).head()  # descending
df.sort_values(['duration', 'network'], ascending=[False, True]).head()
```
# WORKING WITH DATES

| Directive | Meaning | Example |
|---|---|---|
| `%a` | Weekday as locale's abbreviated name. | Sun, Mon, …, Sat (en_US); So, Mo, …, Sa (de_DE) |
| `%A` | Weekday as locale's full name. | Sunday, Monday, …, Saturday (en_US); Sonntag, Montag, …, Samstag (de_DE) |
| `%w` | Weekday as a decimal number, where 0 is Sunday and 6 is Saturday. | 0, 1, 2, 3, 4, 5, 6 |
| `%d` | Day of the month as a zero-padded decimal number. | 01, 02, …, 31 |

```
pd.to_datetime(df['date']) >= '2015-01-01'
sum(pd.to_datetime(df['date']) >= '2015-01-01')

from datetime import datetime
from datetime import date

df['new_date'] = [datetime.strptime(x,'%d/%m/%y %H:%M') for x in df['date']]
type(df['new_date'][0])
type(df['date'][0])

print(date.today())
print(datetime.now())

df[df.month=='2014-11']['duration'].sum()
df.groupby('month')['duration'].sum()
df.groupby('month')['date'].count()
df[df['item']=='call'].groupby('network')['duration'].sum()
df.groupby(['network','item'])['duration'].sum()
df.groupby(['network','item'])['duration'].sum().reset_index()

# lambda function - row-wise operation
def new_duration(network, duration):
    if network == 'world':
        new_duration = duration * 2
    else:
        new_duration = duration * 4
    return new_duration

df['new_duration'] = df.apply(lambda x: new_duration(x['network'], x['duration']), axis=1)
```
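Row-wise `apply` with a lambda, as above, is flexible but slow on large frames. The same `new_duration` rule can be vectorized with `np.where`; the tiny DataFrame below is a made-up stand-in for the phone data, reusing its `network` and `duration` column names.

```python
import numpy as np
import pandas as pd

# Small stand-in for the phone data
df = pd.DataFrame({
    'network':  ['world', 'Vodafone', 'world', 'Meteor'],
    'duration': [10.0, 20.0, 30.0, 40.0],
})

# Same rule as new_duration(): 'world' durations x2, everything else x4
df['new_duration'] = np.where(df['network'] == 'world',
                              df['duration'] * 2,
                              df['duration'] * 4)
print(df['new_duration'].tolist())  # [20.0, 80.0, 60.0, 160.0]
```

On frames with hundreds of thousands of rows this is typically orders of magnitude faster than `df.apply(..., axis=1)`, since the comparison and arithmetic run as whole-column numpy operations.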
github_jupyter
# DAPA Tutorial #2: Area - Sentinel-2

## Load environment variables
Please make sure that the environment variable "DAPA_URL" is set in the `custom.env` file. You can check this by executing the following block.

If DAPA_URL is not set, please create a text file named `custom.env` in your home directory with the following input:
>DAPA_URL=YOUR-PERSONAL-DAPA-APP-URL

```
from edc import setup_environment_variables
setup_environment_variables()
```
## Check notebook compatibility
**Please note:** If you conduct this notebook again at a later time, the base image of this Jupyter Hub service can include newer versions of the libraries installed. Thus, the notebook execution can fail. This compatibility check is only necessary when something is broken.

```
from edc import check_compatibility
check_compatibility("user-0.19.6")
```
## Load libraries
Python libraries used in this tutorial will be loaded.

```
import os
import xarray as xr
import pandas as pd
import requests
import matplotlib

from ipyleaflet import Map, Rectangle, DrawControl, basemaps, basemap_to_tiles
%matplotlib inline
```
## Set DAPA endpoint
Execute the following code to check if the DAPA_URL is available in the environment variable and to set the `/dapa` endpoint.

```
service_url = None
dapa_url = None

if 'DAPA_URL' not in os.environ:
    print('!! DAPA_URL does not exist as environment variable. Please make sure this is the case - see first block of this notebook! !!')
else:
    service_url = os.environ['DAPA_URL']
    dapa_url = '{}/{}'.format(service_url, 'oapi')
    print('DAPA path: {}'.format(dapa_url.replace(service_url, '')))
```
## Get collections supported by this endpoint
This request provides a list of collections. The path of each collection is used as starting path of this service.
``` collections_url = '{}/{}'.format(dapa_url, 'collections') collections = requests.get(collections_url, headers={'Accept': 'application/json'}) print('DAPA path: {}'.format(collections.url.replace(service_url, ''))) collections.json() ``` ## Get fields of collection Sentinel-2 L2A The fields (or variables in other DAPA endpoints - these are the bands of the raster data) can be retrieved in all requests to the DAPA endpoint. In addition to the fixed set of fields, "virtual" fields can be used to conduct math operations (e.g., the calculation of indices). ``` collection = 'S2L2A' fields_url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/fields') fields = requests.get(fields_url, headers={'Accept': 'application/json'}) print('DAPA path: {}'.format(fields.url.replace(service_url, ''))) fields.json() ``` ## Retrieve data as raster aggregated by time ### Set DAPA URL and parameters The output of this request is a single raster (`area` endpoint). As the input collection (S2L2A) is a multi-temporal raster and the output format is an area, temporal aggregation is conducted for each pixel in the area. To retrieve a single raster, a bounding box (`bbox`) or polygon geometry (`geom`) needs to be provided. The `time` parameter allows to aggregate data only within a specific time span. Also the band (`field`) to be returned by DAPA needs to be specified as well. 
``` # DAPA URL url = '{}/{}/{}/{}'.format(dapa_url, 'collections', collection, 'dapa/area') # Parameters for this request params = { 'bbox': '11.49,48.05,11.66,48.22', 'time': '2018-05-07T10:00:00Z/2018-05-07T12:00:00Z', 'fields': 'NDVI=(B08-B04)/(B08%2BB04),NDBI=(B11-B08)/(B11%2BB08)', # Please note: + signs need to be URL encoded -> %2B 'aggregate': 'avg' } # show point in the map m = Map( basemap=basemap_to_tiles(basemaps.OpenStreetMap.Mapnik), center=(48.14, 11.56), zoom=10 ) bbox = [float(coord) for coord in params['bbox'].split(',')] rectangle = Rectangle(bounds=((bbox[1], bbox[0]), (bbox[3], bbox[2]))) m.add_layer(rectangle) m ``` ### Build request URL and conduct request ``` params_str = "&".join("%s=%s" % (k, v) for k,v in params.items()) r = requests.get(url, params=params_str) print('DAPA path: {}'.format(r.url.replace(service_url, ''))) print('Status code: {}'.format(r.status_code)) ``` ### Write raster dataset to GeoTIFF file The response of the `area` endpoint is currently a GeoTIFF file, which can either be saved to disk or used directly in further processing. ``` with open('area_avg.tif', 'wb') as filew: filew.write(r.content) ``` ### Open raster dataset with xarray The GeoTIFF file can be opened with xarray. The file consists of bands related to each `field` and each aggregation function (see descriptions attribute in the xarray output). ``` ds = xr.open_rasterio('area_avg.tif') ds ``` ### Plot NDVI image (first band) ``` ds[0].plot(cmap="RdYlGn") ``` ## Output gdalinfo ``` !gdalinfo -stats area_avg.tif ```
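The `NDVI` field requested above is plain band arithmetic, `(B08 - B04) / (B08 + B04)`, evaluated per pixel. As a sanity check, the same index can be computed directly from two reflectance arrays; the values below are synthetic, not real Sentinel-2 data.

```python
import numpy as np

# Synthetic red (B04) and near-infrared (B08) reflectance patches
b04 = np.array([[0.10, 0.20],
                [0.05, 0.30]])
b08 = np.array([[0.50, 0.25],
                [0.40, 0.30]])

# Normalized difference vegetation index, computed per pixel
ndvi = (b08 - b04) / (b08 + b04)
print(ndvi)

# For non-negative reflectances, NDVI is bounded to [-1, 1]
assert (ndvi >= -1).all() and (ndvi <= 1).all()
```

Dense vegetation reflects strongly in the near-infrared, so vegetated pixels push NDVI toward 1, while bare soil and water sit near or below 0 — which is why the `RdYlGn` colormap used in the plot above reads naturally.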
github_jupyter
```
# Collaborative filtering, as used by Netflix and YouTube
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

movies = pd.read_csv("C:/Users/Dell/Desktop/Movie-Recommendation/movies.csv")
ratings = pd.read_csv("C:/Users/Dell/Desktop/Movie-Recommendation/ratings.csv")
movies.head()
ratings.head()

final = ratings.pivot(index='movieId', columns='userId', values='rating')
final.head()
final.fillna(0, inplace=True)
final.head()

no_user_voted = ratings.groupby('movieId')['rating'].agg('count')
no_movies_voted = ratings.groupby('userId')['rating'].agg('count')
no_user_voted

plt.scatter(no_user_voted.index, no_user_voted, color='mediumseagreen')
plt.axhline(y=10, color='r')
plt.xlabel('MovieId')
plt.ylabel('No. of users voted')
plt.show()

final
final = final.loc[no_user_voted[no_user_voted > 10].index, :]
final

plt.scatter(no_movies_voted.index, no_movies_voted, color='mediumseagreen')
plt.axhline(y=50, color='r')
plt.xlabel('UserId')
plt.ylabel('No. of votes by user')
plt.show()

final = final.loc[:, no_movies_voted[no_movies_voted > 50].index]
final

# sparsity = 1.0 - ( np.count_nonzero(sample) / float(sample.size) )
from scipy.sparse import csr_matrix
csr_data = csr_matrix(final.values)
final.reset_index(inplace=True)

from sklearn.neighbors import NearestNeighbors
knn = NearestNeighbors(metric='cosine', n_neighbors=20)
knn.fit(csr_data)

movie_list = movies[movies['title'].str.contains('Iron Man')]
movie_list
movie_idx = movie_list.iloc[0]['movieId']
movie_idx

def get_movie_recommendation(movie_name):
    n_movies_to_reccomend = 10
    movie_list = movies[movies['title'].str.contains(movie_name)]
    if len(movie_list):
        movie_idx = movie_list.iloc[0]['movieId']
        movie_idx = final[final['movieId'] == movie_idx].index[0]
        distances, indices = knn.kneighbors(csr_data[movie_idx], n_neighbors=n_movies_to_reccomend+1)
        rec_movie_indices = sorted(list(zip(indices.squeeze().tolist(), distances.squeeze().tolist())), key=lambda x: x[1])[:0:-1]
        recommend_frame = []
        for val in rec_movie_indices:
            movie_idx = final.iloc[val[0]]['movieId']
            idx = movies[movies['movieId'] == movie_idx].index
            recommend_frame.append({'Title': movies.iloc[idx]['title'].values[0], 'Distance': val[1]})
        df = pd.DataFrame(recommend_frame, index=range(1, n_movies_to_reccomend+1))
        return df
    else:
        return "No movies found"

get_movie_recommendation('Iron Man')
get_movie_recommendation('Memento')
```
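The commented-out sparsity formula earlier in the notebook can be made concrete. Ratings matrices like `final` are mostly zeros, which is exactly why converting to `csr_matrix` before fitting the KNN saves memory. The toy matrix below is made up, not the real ratings table.

```python
import numpy as np

# Toy user-item matrix: 0 means "not rated"
sample = np.array([[5.0, 0.0, 0.0, 3.0],
                   [0.0, 0.0, 4.0, 0.0],
                   [0.0, 2.0, 0.0, 0.0]])

# Fraction of entries that are zero
sparsity = 1.0 - (np.count_nonzero(sample) / float(sample.size))
print(sparsity)  # 8 of 12 entries are zero -> 0.666...
```

A CSR representation stores only the nonzero values plus index arrays, so at high sparsity the memory footprint shrinks roughly in proportion to the number of stored ratings.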
github_jupyter
```
# default_exp core
```
# Core

> Basic helper functions

```
#hide
from nbdev.showdoc import *

#hide
%load_ext autoreload
%autoreload 2

#export
import torch
from torch import nn, einsum
import torch.nn.functional as F
import torch.autograd.profiler as profiler

from fastai.basics import *
from fastai.text.all import *
from fastai.test_utils import *

from functools import partial, reduce, wraps
from inspect import isfunction
from operator import mul
from copy import deepcopy

from torch import Tensor
from typing import Tuple

from einops import rearrange, repeat
```
## Helper functions

### General purpose utils

```
#export
def exists(val):
    return val is not None

def default(val, d):
    if exists(val):
        return val
    return d() if isfunction(d) else d

def expand_dim1(x):
    if len(x.shape) == 1:
        return x[None, :]
    else:
        return x

def max_neg_value(tensor):
    return -torch.finfo(tensor.dtype).max

#export
def setattr_on(model, attr, val, module_class):
    for m in model.modules():
        if isinstance(m, module_class):
            setattr(m, attr, val)
```
### Generative utils

```
#export
# generative helpers
# credit https://github.com/huggingface/transformers/blob/a0c62d249303a68f5336e3f9a96ecf9241d7abbe/src/transformers/generation_logits_process.py
def top_p_filter(logits, top_p=0.9):
    sorted_logits, sorted_indices = torch.sort(logits, descending=True)
    cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
    sorted_indices_to_remove = cum_probs > top_p
    sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
    sorted_indices_to_remove[..., 0] = 0
    # if min_tokens_to_keep > 1:
    #     # Keep at least min_tokens_to_keep (set to min_tokens_to_keep-1 because we add the first one below)
    #     sorted_indices_to_remove[..., : min_tokens_to_keep - 1] = 0
    indices_to_remove = sorted_indices_to_remove.scatter(1, sorted_indices, sorted_indices_to_remove)
    logits[indices_to_remove] = float('-inf')
    return logits

def top_k_filter(logits, top_k=20):
    indices_to_remove = logits < torch.topk(logits,
top_k)[0][..., -1, None] logits[indices_to_remove] = float('-inf') return logits _sampler = { 'top_k':top_k_filter, 'top_p':top_p_filter, 'gready':lambda x: x.argmax(-1) } ``` ### LSH specific helpers From [lucidrains/reformer-pytorch](https://github.com/lucidrains/reformer-pytorch/). ``` #exports def cache_method_decorator(cache_attr, cache_namespace, reexecute = False): def inner_fn(fn): @wraps(fn) def wrapper(self, *args, key_namespace=None, fetch=False, set_cache=True, **kwargs): namespace_str = str(default(key_namespace, '')) _cache = getattr(self, cache_attr) _keyname = f'{cache_namespace}:{namespace_str}' if fetch: val = _cache[_keyname] if reexecute: fn(self, *args, **kwargs) else: val = fn(self, *args, **kwargs) if set_cache: setattr(self, cache_attr, {**_cache, **{_keyname: val}}) return val return wrapper return inner_fn #exports def look_one_back(x): x_extra = torch.cat([x[:, -1:, ...], x[:, :-1, ...]], dim=1) return torch.cat([x, x_extra], dim=2) #exports def chunked_sum(tensor, chunks=1): *orig_size, last_dim = tensor.shape tensor = tensor.reshape(-1, last_dim) summed_tensors = [c.sum(dim=-1) for c in tensor.chunk(chunks, dim=0)] return torch.cat(summed_tensors, dim=0).reshape(orig_size) #exports def sort_key_val(t1, t2, dim=-1): values, indices = t1.sort(dim=dim) t2 = t2.expand_as(t1) return values, t2.gather(dim, indices) #exports def batched_index_select(values, indices): last_dim = values.shape[-1] return values.gather(1, indices[:, :, None].expand(-1, -1, last_dim)) ``` ## Profiling functions Utility functions to assess model performance. Test functions with `mod` and input `x`. ``` mod = get_text_classifier(AWD_LSTM, vocab_sz=10_000, n_class=10) x = torch.randint(0, 100, (3, 72)) #export def do_cuda_timing(f, inp, context=None, n_loops=100): ''' Get timings of cuda modules. 
Note `self_cpu_time_total` is returned, but from experiments this appears to be similar/same to the total CUDA time f : function to profile, typically an nn.Module inp : required input to f context : optional additional input into f, used for Decoder-style modules ''' f.cuda() inp = inp.cuda() if context is not None: context = context.cuda() with profiler.profile(record_shapes=False, use_cuda=True) as prof: with profiler.record_function("model_inference"): with torch.no_grad(): for _ in range(n_loops): if context is None: f(inp) else: f(inp, context) torch.cuda.synchronize() res = round((prof.key_averages().self_cpu_time_total / 1000) / n_loops, 3) print(f'{res}ms') return res #export def model_performance(n_loops=5, model='arto', dls=None, n_epochs=1, lr=5e-4): """ DEMO CODE ONLY! Run training loop to measure timings. Note that the models internally should be changed depending on the model you would like to use. You should also adjust the metrics you are monitoring """ acc_ls, ppl_ls =[], [] for i in range(n_loops): # ADD YOUR MODEL(S) INIT HERE # if model == 'arto': m = artoTransformerLM(vocab_sz, 512) # elif model == 'pt': m = ptTransformerLM(vocab_sz, 512) # else: print('model name not correct') learn = Learner(dls, m, loss_func=CrossEntropyLossFlat(), metrics=[accuracy, Perplexity()]).to_native_fp16() learn.fit_one_cycle(n_epochs, lr, wd=0.05) acc_ls.append(learn.recorder.final_record[2]) ppl_ls.append(learn.recorder.final_record[3]) print(f'Avg Accuracy: {round(sum(acc_ls)/len(acc_ls),3)}, std: {np.std(acc_ls)}') print(f'Avg Perplexity: {round(sum(ppl_ls)/len(ppl_ls),3)}, std: {np.std(ppl_ls)}') print() return learn, acc_ls, ppl_ls #export def total_params(m): """ Give the number of parameters of a module and if it's trainable or not - Taken from Taken from fastai.callback.hook """ params = sum([p.numel() for p in m.parameters()]) trains = [p.requires_grad for p in m.parameters()] return params, (False if len(trains)==0 else trains[0]) ``` Number of params 
for our test model:

```
total_params(mod)
```
## Translation Callbacks

Callbacks used to ensure that training a translation model works. All 3 are needed. See [notebook here](https://github.com/bentrevett/pytorch-seq2seq/blob/master/6%20-%20Attention%20is%20All%20You%20Need.ipynb) for an explanation of EOS shifting.

```
# exports
class CombineInputOutputCallback(Callback):
    """
    Callback to combine the source (self.xb) and target (self.yb) into self.xb
    """
    def __init__(self): pass
    def before_batch(self):
        self.learn.xb = (self.xb[0], self.yb[0])

class AssertAndCancelFit(Callback):
    "Cancels batch after backward to avoid opt.step()"
    def before_batch(self):
        assert len(self.learn.xb) == 2
        assert self.learn.xb[1] is self.learn.yb[0]
        raise CancelEpochException()

learn = synth_learner(cbs=[CombineInputOutputCallback(), AssertAndCancelFit()])
learn.fit(1)

# exports
class RemoveEOSCallback(Callback):
    """
    Shift the target presented to the model during training to remove the "eos" token,
    as we don't want the model to learn to translate EOS when it sees EOS.
In practice we actually mask the EOS token as, due to batching, the last token will often be a <pad> token, not EOS
    """
    def __init__(self, eos_idx): self.eos_idx=eos_idx
    def before_batch(self):
        eos_mask=(self.learn.xb[1]!=self.eos_idx)
        sz=torch.tensor(self.learn.xb[1].size())
        sz[1]=sz[1]-1
        self.learn.xb = (self.learn.xb[0], self.learn.xb[1][eos_mask].view((sz[0],sz[1])))

# exports
class LossTargetShiftCallback(Callback):
    """
    Shift the target shown to the loss to exclude the "bos" token, as the first token we want
    predicted should be an actual word, not the "bos" token (as we have already given the model "bos")
    """
    def __init__(self): pass
    def after_pred(self):
        self.learn.yb = (self.learn.yb[0][:,1:],)

class TestLossShiftAndCancelFit(Callback):
    "Cancels batch after backward to avoid opt.step()"
    def after_pred(self):
        o = self.learn.dls.one_batch()
        assert self.learn.yb[0].size()[1] == o[1].size()[1] - 1
        raise CancelEpochException()

learn = synth_learner(cbs=[LossTargetShiftCallback(), TestLossShiftAndCancelFit()])
learn.fit(1)

#export
class PadBatchCallback(Callback):
    "Pads input and target sequences to multiple of 2*bucket_size"
    def __init__(self, bucket_size:int=64, val:int=0, y_val:int=-100):
        self.mult = 2*bucket_size
        self.val, self.y_val = val, y_val
    def before_batch(self):
        bs, sl = self.x.size()
        if sl % self.mult != 0:
            pad_ = self.mult - sl%self.mult
            self.learn.xb = (F.pad(self.x, (0,pad_), 'constant', self.val), )
            self.learn.yb = (F.pad(self.y, (0,pad_), 'constant', self.y_val), )
```
## Loss functions

```
#export
class LabelSmoothingCrossEntropy(Module):
    """Label smoothing cross entropy similar to fastai implementation
    https://github.com/fastai/fastai/blob/89b1afb59e37e5abf7008888f6e4dd1bf1211e3e/fastai/losses.py#L79
    with added option to provide ignore_index"""
    y_int = True
    def __init__(self, eps:float=0.1, reduction='mean', ignore_index=-100):
        store_attr()
    def forward(self, output, target):
        c = output.size()[-1]
        log_preds = F.log_softmax(output, dim=-1)
        nll_loss
= F.nll_loss(log_preds, target.long(), reduction=self.reduction, ignore_index=self.ignore_index)
        mask = target.eq(self.ignore_index)
        log_preds = log_preds.masked_fill(mask.unsqueeze(-1), 0.)
        if self.reduction=='sum': smooth_loss = -log_preds.sum()
        else:
            smooth_loss = -log_preds.sum(dim=-1)  # We divide by that size at the return line so sum and not mean
            if self.reduction=='mean':
                smooth_loss = smooth_loss.mean()/(1-mask.float().mean())  # divide by the fraction of accounted values to debias the mean
        return smooth_loss*self.eps/c + (1-self.eps)*nll_loss
    def activation(self, out): return F.softmax(out, dim=-1)
    def decodes(self, out): return out.argmax(dim=-1)

#export
@delegates()
class LabelSmoothingCrossEntropyFlat(BaseLoss):
    "Same as `LabelSmoothingCrossEntropy`, but flattens input and target."
    y_int = True
    @use_kwargs_dict(keep=True, eps=0.1, reduction='mean')
    def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)
    def activation(self, out): return F.softmax(out, dim=-1)
    def decodes(self, out): return out.argmax(dim=-1)

bs=4
sl=10
v=32
pred = torch.randn(bs, sl, v, requires_grad=True)
targ = torch.randint(v, (bs,sl))
i, j = torch.triu_indices(bs, sl, offset=(sl-bs+1))
targ[i,j] = -1

loss_func = LabelSmoothingCrossEntropyFlat(ignore_index=-1)
loss = loss_func(pred, targ)
loss.backward()
assert (torch.all(pred.grad == 0, dim=-1) == (targ==-1)).all()
```
## Distributed

```
#export
from fastai.distributed import *

@patch
@contextmanager
def distrib_ctx(self: Learner, cuda_id=None, sync_bn=True):
    "A context manager to adapt a learner to train in distributed data parallel mode."
    # Figure out the GPU to use from rank. Create a dpg if none exists yet.
if cuda_id is None: cuda_id = int(os.environ.get('DEFAULT_GPU', 0)) if not torch.distributed.is_initialized(): setup_distrib(cuda_id) cleanup_dpg = torch.distributed.is_initialized() else: cleanup_dpg = False # Adapt self to DistributedDataParallel, yield, and cleanup afterwards. try: if num_distrib(): self.to_distributed(cuda_id,sync_bn) yield self finally: self.detach_distributed() if cleanup_dpg: teardown_distrib() #hide from nbdev.export import notebook2script; notebook2script() ```
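As a sanity check of the label-smoothing formula used in `LabelSmoothingCrossEntropy` above — `smooth_loss*eps/c + (1-eps)*nll_loss` — here is the same arithmetic for a single prediction in plain Python (no tensors, no `ignore_index` handling). With uniform predicted probabilities over `c` classes, both terms collapse to `log(c)`, which makes for an easy closed-form check.

```python
import math

def label_smoothing_ce(log_probs, target, eps=0.1):
    """log_probs: list of log-softmax scores; target: true class index."""
    c = len(log_probs)
    nll = -log_probs[target]       # standard negative log-likelihood term
    smooth = -sum(log_probs)       # summed over all classes
    return smooth * eps / c + (1 - eps) * nll

# Log-softmax of uniform logits over 4 classes: every entry is -log(4)
log_probs = [-math.log(4.0)] * 4
loss = label_smoothing_ce(log_probs, target=2)
print(loss)  # smooth term = eps*log(4), nll term = (1-eps)*log(4) -> exactly log(4)
```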
github_jupyter
<a href="https://colab.research.google.com/github/JaimieOnigkeit/DS-Unit-2-Linear-Models/blob/master/module3-ridge-regression/Jaimie_Onigkeit_LS_DS_213_assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>

Lambda School Data Science

*Unit 2, Sprint 1, Module 3*

---

# Ridge Regression

## Assignment

We're going back to our other **New York City** real estate dataset. Instead of predicting apartment rents, you'll predict property sales prices. But not just for condos in Tribeca...

- [ ] Use a subset of the data where `BUILDING_CLASS_CATEGORY` == `'01 ONE FAMILY DWELLINGS'` and the sale price was more than 100 thousand and less than 2 million.
- [ ] Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.
- [ ] Do one-hot encoding of categorical features.
- [ ] Do feature selection with `SelectKBest`.
- [ ] Fit a ridge regression model with multiple features. Use the `normalize=True` parameter (or do [feature scaling](https://scikit-learn.org/stable/modules/preprocessing.html) beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)
- [ ] Get mean absolute error for the test set.
- [ ] As always, commit your notebook to your fork of the GitHub repo.

The [NYC Department of Finance](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page) has a glossary of property sales terms and NYC Building Class Code Descriptions. The data comes from the [NYC OpenData](https://data.cityofnewyork.us/browse?q=NYC%20calendar%20sales) portal.

## Stretch Goals

Don't worry, you aren't expected to do all these stretch goals! These are just ideas to consider and choose from.

- [ ] Add your own stretch goal(s)!
- [ ] Instead of `Ridge`, try `LinearRegression`. Depending on how many features you select, your errors will probably blow up! 💥
- [ ] Instead of `Ridge`, try [`RidgeCV`](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html).
- [ ] Learn more about feature selection:
    - ["Permutation importance"](https://www.kaggle.com/dansbecker/permutation-importance)
    - [scikit-learn's User Guide for Feature Selection](https://scikit-learn.org/stable/modules/feature_selection.html)
    - [mlxtend](http://rasbt.github.io/mlxtend/) library
    - scikit-learn-contrib libraries: [boruta_py](https://github.com/scikit-learn-contrib/boruta_py) & [stability-selection](https://github.com/scikit-learn-contrib/stability-selection)
    - [_Feature Engineering and Selection_](http://www.feat.engineering/) by Kuhn & Johnson.
- [ ] Try [statsmodels](https://www.statsmodels.org/stable/index.html) if you're interested in a more inferential statistical approach to linear regression and feature selection, looking at p values and 95% confidence intervals for the coefficients.
- [ ] Read [_An Introduction to Statistical Learning_](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf), Chapters 1-3, for more math & theory, but in an accessible, readable way.
- [ ] Try [scikit-learn pipelines](https://scikit-learn.org/stable/modules/compose.html).

```
%%capture
import sys

# If you're on Colab:
if 'google.colab' in sys.modules:
    DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
    !pip install category_encoders==2.*

# If you're working locally:
else:
    DATA_PATH = '../data/'

# Ignore this Numpy warning when using Plotly Express:
# FutureWarning: Method .ptp is deprecated and will be removed in a future version. Use numpy.ptp instead.
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')

import pandas as pd
import pandas_profiling

# Read New York City property sales data
df = pd.read_csv(DATA_PATH+'condos/NYC_Citywide_Rolling_Calendar_Sales.csv')

# Change column names: replace spaces with underscores
df.columns = [col.replace(' ', '_') for col in df]

# SALE_PRICE was read as strings.
# Remove symbols, convert to integer.
# regex=False so that '$' is stripped literally instead of being
# interpreted as a regex end-of-string anchor.
df['SALE_PRICE'] = (
    df['SALE_PRICE']
    .str.replace('$', '', regex=False)
    .str.replace('-', '', regex=False)
    .str.replace(',', '', regex=False)
    .astype(int)
)

# BOROUGH is a numeric column, but arguably should be a categorical feature,
# so convert it from a number to a string
df['BOROUGH'] = df['BOROUGH'].astype(str)

# Reduce cardinality for NEIGHBORHOOD feature

# Get a list of the top 10 neighborhoods
top10 = df['NEIGHBORHOOD'].value_counts()[:10].index

# At locations where the neighborhood is NOT in the top 10,
# replace the neighborhood with 'OTHER'
df.loc[~df['NEIGHBORHOOD'].isin(top10), 'NEIGHBORHOOD'] = 'OTHER'

df.describe()
df = df.drop(['EASE-MENT'], axis='columns')
df.dtypes
df['SALE_DATE'] = pd.to_datetime(df['SALE_DATE'], infer_datetime_format=True)
df.dtypes
```

## Use a subset of the data where BUILDING_CLASS_CATEGORY == '01 ONE FAMILY DWELLINGS' and the sale price was more than 100 thousand and less than 2 million.

```
df_new = df[df.BUILDING_CLASS_CATEGORY == '01 ONE FAMILY DWELLINGS']
df_new.head()
df_new = df_new[(df_new.SALE_PRICE >= 100000) & (df_new.SALE_PRICE <= 2000000)]
```

## Do train/test split. Use data from January — March 2019 to train. Use data from April 2019 to test.

```
df_new.head()

# Remove unnecessary columns
df_new = df_new.drop('APARTMENT_NUMBER', axis='columns')
df_new = df_new.drop('TAX_CLASS_AT_TIME_OF_SALE', axis='columns')

cutoff = pd.to_datetime('2019-04-01')
train = df_new[df_new.SALE_DATE < cutoff]
test = df_new[df_new.SALE_DATE >= cutoff]
print(train.shape)
print(test.shape)
```

## Do one-hot encoding of categorical features.
```
test.describe(include='object')

target = 'SALE_PRICE'
high_cardinality = ['ADDRESS', 'LAND_SQUARE_FEET', 'SALE_DATE']
features = train.columns.drop([target] + high_cardinality)

X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]

import category_encoders as ce
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train = encoder.fit_transform(X_train)
X_train.head()
X_test = encoder.transform(X_test)
X_test
```

## Do feature selection with SelectKBest.

```
X_train.dtypes

from sklearn.feature_selection import SelectKBest, f_regression

selector = SelectKBest(score_func=f_regression, k=29)

# .fit_transform on the train set
# .transform on the test set
X_train_selected = selector.fit_transform(X_train, y_train)
X_train_selected.shape

selected_mask = selector.get_support()
all_names = X_train.columns
selected_names = all_names[selected_mask]
unselected_names = all_names[~selected_mask]

print('Features selected:')
for name in selected_names:
    print(name)

print('\n')
print('Features not selected:')
for name in unselected_names:
    print(name)

from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

for k in range(1, len(X_train.columns)+1):
    print(f'{k} features')

    selector = SelectKBest(score_func=f_regression, k=k)
    X_train_selected = selector.fit_transform(X_train, y_train)
    X_test_selected = selector.transform(X_test)

    model = LinearRegression()
    model.fit(X_train_selected, y_train)
    y_pred = model.predict(X_test_selected)
    mae = mean_absolute_error(y_test, y_pred)
    print(f'Test Mean Absolute Error: ${mae:,.0f} \n')
```

Looks like 29 features minimizes the error

## Fit a ridge regression model with multiple features.

Use the `normalize=True` parameter (or do feature scaling beforehand — use the scaler's `fit_transform` method with the train set, and the scaler's `transform` method with the test set)

```
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge
from IPython.display import display, HTML

for alpha in [0.001, 0.01, 0.1, 1.0, 10, 100.0, 1000.0]:
    # Fit Ridge Regression model
    display(HTML(f'Ridge Regression, with alpha={alpha}'))
    model = Ridge(alpha=alpha)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    # Get Test MAE
    mae = mean_absolute_error(y_test, y_pred)
    display(HTML(f'Test Mean Absolute Error: ${mae:,.0f}'))
```

I tried to plot the coefficients, but I got an error saying that over 20 graphs had been created! Alpha 1 has the lowest mean absolute error.
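The stretch goal of swapping `Ridge` for `RidgeCV` could look like the sketch below. It uses synthetic stand-in data from `make_regression` instead of the NYC `X_train`/`X_test` matrices built above, so the numbers are illustrative only; `RidgeCV` searches the alpha grid with built-in cross-validation, replacing the manual loop.

```python
from sklearn.linear_model import RidgeCV
from sklearn.metrics import mean_absolute_error
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Stand-in data; in the notebook, X_train/X_test come from the encoder above.
X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# RidgeCV tries every alpha with cross-validation and keeps the best one.
model = RidgeCV(alphas=[0.001, 0.01, 0.1, 1.0, 10, 100, 1000])
model.fit(X_train, y_train)

mae = mean_absolute_error(y_test, model.predict(X_test))
print(f'Best alpha: {model.alpha_}, Test MAE: {mae:,.0f}')
```

The chosen regularization strength is exposed afterwards as `model.alpha_`, so there is no need to track the best MAE by hand.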
```
# Convert Fahrenheit -> Celsius
# Convert columns to categorical
# Convert dates to datetime format
# Fill NA values with 0
import pandas as pd
import numpy as np
from datetime import datetime

df = pd.read_csv('train.csv', encoding='euc-kr')
df.head()
df.info()

# Convert to datetime
df['Date'] = pd.to_datetime(df['Date'])
df['Year'] = df['Date'].dt.year
df['Month'] = df['Date'].dt.month
df['Day'] = df['Date'].dt.day
df['Day_name'] = df['Date'].dt.day_name()
df.info()
df.head()

df['Type'] = df['Type'].astype('category')
df['IsHoliday'] = df['IsHoliday'].astype('category')
df['Store'] = df['Store'].astype('category')
df['Dept'] = df['Dept'].astype('category')

# Parentheses are required: the original `df['Temperature'] - 32 / 1.8`
# divided 32 by 1.8 first, which is not the Fahrenheit-to-Celsius formula.
df['Temperature'] = (df['Temperature'] - 32) / 1.8
df.head()
df.corr()

import matplotlib.pyplot as plt
import seaborn as sns

def encode_sin_cos(df, col_n, max_val):
    df[col_n+'_sin'] = np.sin(2*np.pi*df[col_n]/max_val)
    df[col_n+'_cos'] = np.cos(2*np.pi*df[col_n]/max_val)
    return df

df = encode_sin_cos(df, 'Month', 12)
df = encode_sin_cos(df, 'Day', 31)
df[['Year','Month','Day','Month_sin','Month_cos','Day_sin','Day_cos']]

df_2010 = df[df['Year'] == 2010]
df_2011 = df[df['Year'] == 2011]
df_2012 = df[df['Year'] == 2012]

c_m = sns.scatterplot(x="Month_sin", y="Month_cos", data=df_2010)
c_m.set_title("Cyclic Encoding of Month (2010)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")

c_m = sns.scatterplot(x="Month_sin", y="Month_cos", data=df_2011)
c_m.set_title("Cyclic Encoding of Month (2011)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")

c_m = sns.scatterplot(x="Month_sin", y="Month_cos", data=df_2012)
c_m.set_title("Cyclic Encoding of Month (2012)")
c_m.set_ylabel("Cosine Encoded Months")
c_m.set_xlabel("Sine Encoded Months")

corr = df[['Store','Dept','Date','Weekly_Sales','IsHoliday','Temperature','Fuel_Price','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI','Unemployment','Type','Size','Year','Month','Day','Day_name']].corr()
# corr['Weekly_Sales'].dtypes
corr['Weekly_Sales'].abs().sort_values(ascending=False)

sns.set(style="white")
corr = df[['Store','Dept','Date','Weekly_Sales','IsHoliday','Temperature','Fuel_Price','MarkDown1','MarkDown2','MarkDown3','MarkDown4','MarkDown5','CPI','Unemployment','Type','Size','Year','Month','Day','Day_name']].corr()

# np.bool was removed from NumPy; the plain built-in bool works here.
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True

f, ax = plt.subplots(figsize=(11, 9))
cmap = sns.diverging_palette(220, 10, as_cmap=True)
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
            square=True, linewidths=.5, cbar_kws={"shrink": .5})

plt.scatter(df['Fuel_Price'], df['Weekly_Sales'])
plt.show()
plt.scatter(df['Size'], df['Weekly_Sales'])
plt.show()

df.loc[df['Weekly_Sales'] > 300000]
df.loc[df['Weekly_Sales'] > 240000, "Date"].value_counts()
```
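A quick sanity check of the sine/cosine month encoding used above: on the circle, December and January end up close together, which is exactly what raw month numbers (12 vs 1) fail to capture. A minimal sketch:

```python
import numpy as np

def encode(month, max_val=12):
    # Map a month number onto a point on the unit circle,
    # mirroring the encode_sin_cos helper above.
    return np.array([np.sin(2*np.pi*month/max_val),
                     np.cos(2*np.pi*month/max_val)])

dec, jan, jun = encode(12), encode(1), encode(6)

# December and January are neighbours on the circle; June is far away.
print(np.linalg.norm(dec - jan) < np.linalg.norm(dec - jun))  # True
```

This is why the scatterplots of `Month_sin` vs `Month_cos` trace out a circle: distances between encoded months now reflect calendar adjacency.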
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from rdkit.Chem import MolFromSmiles
from rdkit.Chem.Descriptors import ExactMolWt

df = pd.read_csv("39_Formose reaction_MeOH.csv")  # glucose_dry_impcols.csv
print(df.columns)

# first get rid of empty lines in the mass list by replacing with ''
df.replace('', np.nan, inplace=True)
# also, some 'Mass' values are not numbers
df.dropna(subset=['Mass'], inplace=True)
# now replace NaNs with '' to avoid weird errors
df.fillna('', inplace=True)
df.shape
df.head()

# make a list of exact mass and relative abundance.
mass_list = []
rel_abundance = []
for i in range(len(df)):
    # allow entire spectrum for this one
    if float(df['Mass'].iloc[i]) < 250 and "No Hit" not in df['Molecular Formula'].iloc[i]:
        mass_list.append(float(df['Mass'].iloc[i]))
        rel_abundance.append(float(df['Rel. Abundance'].iloc[i]))

# now, "renormalize" the relative abundance.
highest = max(rel_abundance)
norm_factor = 100.0/highest
normalized_abun = []
for ab in rel_abundance:
    normalized_abun.append(norm_factor*ab)

print(f'{len(mass_list)} items in {mass_list}')

# formose MOD output
# ../main/glucose/glucose_degradation_output_10mar.txt
# Note: the original cell read the file into `data_mod` but then looped over
# `formose_mod`; a single consistent name fixes the NameError.
formose_mod = pd.read_csv('../main/formose/formose_output.txt', sep='\t',
                          names=['Generation', 'SMILES'])

sim_masses = []
for i in range(len(formose_mod)):
    row = formose_mod.iloc[i]
    mol = MolFromSmiles(row['SMILES'])
    mol_wt = ExactMolWt(mol)
    sim_masses.append(mol_wt)

formose_mod['Mol Wt'] = sim_masses

unique_sim_masses = list(set(sim_masses))
unique_mass_freq = [sim_masses.count(mass) for mass in unique_sim_masses]
highest_freq = max(unique_mass_freq)
norm_freq = [100*(freq/highest_freq) for freq in unique_mass_freq]
print('Unique masses:', len(unique_sim_masses))
print('Frequency of each mass', unique_mass_freq)
print(unique_sim_masses)

from matplotlib import rc

# Use LaTeX and CMU Serif font.
rc('text', usetex=True)
rc('font', **{'family': 'serif', 'serif': ['Computer Modern']})

# for some flexibility, create a container for the figure
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(6, 12), sharex=True)
#ax = fig.add_subplot(111)  # create an axis object

# first, draw the experimental spectrum
axes[0].vlines(x=mass_list, ymin=0, ymax=normalized_abun, color='cornflowerblue')
# now the CNRN
axes[1].vlines(x=unique_sim_masses, ymin=0, ymax=norm_freq, color='deeppink')
#plt.bar(mass_list, rel_abundance, width=0.5)

axes[0].set_yscale('log')
axes[1].set_yscale('log')
axes[0].set_ylim([0.875, 125])
axes[1].set_ylim([0.875, 125])
plt.gca().invert_yaxis()
plt.xlim(155, 205)
plt.xlabel('Exact Mass')
#plt.ylabel('Normalized Abundance')
plt.tight_layout()
plt.subplots_adjust(wspace=0, hspace=0)
plt.savefig('formose_mirror_plot.jpg', dpi=300)
plt.show()
```
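The renormalization loop in the cell above (scale so the most abundant peak is 100) can also be written as a single vectorized NumPy expression; a small sketch with made-up abundance values:

```python
import numpy as np

rel_abundance = np.array([2.0, 5.0, 1.0])  # made-up example values

# Scale so the most abundant peak becomes 100.
normalized_abun = 100.0 * rel_abundance / rel_abundance.max()
print(normalized_abun)  # [ 40. 100.  20.]
```

The vectorized form avoids the explicit `norm_factor` loop and works unchanged on arrays of any length.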
### Example Class

```
import datetime  # we will use this for date objects

class Person:

    def __init__(self, name, surname, birthdate, address, telephone, email):
        self.name = name
        self.surname = surname
        self.birthdate = birthdate
        self.address = address
        self.telephone = telephone
        self.email = email

    def age(self):
        today = datetime.date.today()
        age = today.year - self.birthdate.year
        if today < datetime.date(today.year, self.birthdate.month, self.birthdate.day):
            age -= 1
        return age

person = Person(
    "Jane",
    "Doe",
    datetime.date(1992, 3, 12),  # year, month, day
    "No. 12 Short Street, Greenville",
    "555 456 0987",
    "jane.doe@example.com"
)

print(person.name)
print(person.email)
print(person.age())
```

The `__init__()` method is used to initialize an instance (object) of a class.<br>
`self.name`, `self.surname`, `self.birthdate`, `self.address`, `self.telephone`, and `self.email` are **instance** attributes.

You may have noticed that both of these method definitions have `self` as the first parameter, and we use this variable inside the method bodies – but we don't appear to pass this parameter in. This is because whenever we call a method on an object, the object itself is automatically passed in as the first parameter. This gives us a way to access the object's properties from inside the object's methods.

### Class attributes

We define class attributes in the body of a class, at the same indentation level as method definitions (one level up from the insides of methods):

```
class Person:
    TITLES = ('Dr', 'Mr', 'Mrs', 'Ms')  # This is a class attribute

    def __init__(self, title, name, surname):
        if title not in self.TITLES:
            raise ValueError("%s is not a valid title." % title)
        self.title = title
        self.name = name
        self.surname = surname

if __name__ == "__main__":
    me = Person(title='Mr', name='John', surname='Doe')
    print(me.title)
    print(me.name)
    print(me.surname)
    print(Person.TITLES)
```

Class attributes exist for all instances of a class. These attributes are shared by all instances of that class.

### Class Decorators

**@classmethod** - Just like we can define class attributes, which are shared between all instances of a class, we can define class methods. We do this by using the `@classmethod` decorator to decorate an ordinary method.

**@staticmethod** - A static method doesn't have the calling object passed into it as the first parameter. This means that it doesn't have access to the rest of the class or instance at all. We can call them from an instance or a class object, but they are most commonly called from class objects, like class methods.

If we are using a class to group together related methods which don't need to access each other or any other data on the class, we may want to use this technique. The advantage of using static methods is that we eliminate unnecessary `cls` or `self` parameters from our method definitions. The disadvantage is that if we do occasionally want to refer to another class method or attribute inside a static method, we have to write the class name out in full, which can be much more verbose than using the `cls` variable which is available to us inside a class method.
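Since the decorator paragraphs above describe `@classmethod` and `@staticmethod` without code, here is a small illustrative sketch. The `valid_titles` and `make_label` methods are made up for this example; only the `TITLES` class attribute comes from the class shown earlier.

```python
class Person:
    TITLES = ('Dr', 'Mr', 'Mrs', 'Ms')  # class attribute

    def __init__(self, title, name):
        self.title = title
        self.name = name

    @classmethod
    def valid_titles(cls):
        # cls is the class itself, so class attributes are
        # reachable without any instance.
        return cls.TITLES

    @staticmethod
    def make_label(title, name):
        # No self or cls: a plain function grouped under the class namespace.
        return "%s %s" % (title, name)

print(Person.valid_titles())            # ('Dr', 'Mr', 'Mrs', 'Ms')
print(Person.make_label('Dr', 'Jane'))  # Dr Jane
```

Both methods can be called on the class itself, with no instance required; the class method receives `cls` automatically, while the static method receives nothing extra.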
```
# Load libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import time
# Note: sklearn.cross_validation and sklearn.grid_search were removed in
# scikit-learn 0.20; model_selection is the current home of these utilities.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Load dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'class']
dataset = pd.read_csv(url, names=names)
print(dataset.shape)
print(dataset.head(5))

# split features and labels
array = dataset.values
X = array[:,0:4]
y = array[:,4]

# Test options and evaluation metric
seed = 7
scoring = 'accuracy'

# test different number of cores: max 8
num_cpu_list = list(range(1,9))
training_times_all = []

param_grid = {"n_neighbors": list(range(1,20))}
training_times = []

for num_cpu in num_cpu_list:
    clf = GridSearchCV(KNeighborsClassifier(), param_grid, scoring=scoring)
    clf.set_params(n_jobs=num_cpu)
    start_time = time.time()
    clf.fit(X, y)
    training_times.append(time.time() - start_time)
    # print logging message
    print("Computing KNN grid with {} cores DONE.".format(num_cpu))

print("All computations DONE.")

# best parameters found
print("Best parameters:")
print(clf.best_params_)
print("With accuracy:")
print(clf.best_score_)

# grid_scores_ was removed in favour of the cv_results_ dictionary
scores_all_percent = [100 * score for score in clf.cv_results_['mean_test_score']]
params_all = [params['n_neighbors'] for params in clf.cv_results_['params']]

N = 19
ind = np.arange(N)  # the x locations for the bars
width = 0.5         # the width of the bars

fig, ax = plt.subplots()
ax.bar(ind + width/2, scores_all_percent, width)
ax.set_xticks(ind + width)
ax.set_xticklabels([str(i) for i in params_all])
ax.set_ylim([90,100])

plt.title("Accuracy of KNN vs n_neighbors param")
plt.xlabel("n_neighbors")
plt.ylabel("accuracy [%]")
plt.show()
```

The above plot shows that the best accuracy for the KNN algorithm is obtained for **n_neighbors = 5**

```
fig, ax = plt.subplots()
ax.plot(num_cpu_list, training_times, 'ro')
ax.set_xlim([0, len(num_cpu_list)+1])
#plt.axis([0, len(num_cpu_list)+1, 0, max(training_times)+1])
plt.title("Search time vs #CPU Cores")
plt.xlabel("#CPU Cores")
plt.ylabel("search time [s]")
plt.show()
```

We can see that the search time for **n_jobs > 1** is higher than for **n_jobs = 1**. The reason is that multiprocessing comes at a cost: distributing work across multiple processes can take more time than the actual execution for small datasets like **Iris** (150 rows).
# pipegraph User Guide

## Rationale

[scikit-learn](http://scikit-learn.org/stable/) provides a useful set of data preprocessors and machine learning models. The `Pipeline` object can effectively encapsulate a chain of transformers followed by a final model. Other functions, like `GridSearchCV`, can effectively use `Pipeline` objects to find the set of parameters that provide the best estimator.

### Pipeline + GridSearchCV: an awesome combination

Let's consider a simple example to illustrate the advantages of using `Pipeline` and `GridSearchCV`. First let's import the libraries we will use and then let's build an artificial data set following a simple polynomial rule

```
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
import matplotlib.pyplot as plt

X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)
```

Once we have some data ready, we instantiate the transformers and a regressor we want to fit:

```
scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
```

We define the steps that form the Pipeline object and then we instantiate such a Pipeline

```
steps = [('scaler', scaler),
         ('polynomial_features', polynomial_features),
         ('linear_model', linear_model)]

pipe = Pipeline(steps=steps)
```

Now we can pass this pipeline to `GridSearchCV`. When the `GridSearchCV` object is fitted, the search for the best combination of hyperparameters is performed according to the values provided in the `param_grid` parameter:

```
param_grid = {'polynomial_features__degree': range(1, 11),
              'linear_model__fit_intercept': [True, False]}

grid_search_regressor = GridSearchCV(estimator=pipe, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y);
```

And now we can check the results of fitting the Pipeline and the values of the hyperparameters:

```
y_pred = grid_search_regressor.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()

coef = grid_search_regressor.best_estimator_.get_params()['linear_model'].coef_
degree = grid_search_regressor.best_estimator_.get_params()['polynomial_features'].degree
print('Information about the parameters of the best estimator: \n degree: {} \n coefficients: {} '.format(degree, coef))
```

### Pipeline weaknesses:

From this example we can learn that `Pipeline` and `GridSearchCV` are very useful tools to consider when attempting to fit models. As far as the needs of the user can be satisfied by a set of transformers followed by a final model, this approach seems to be highly convenient. Additional advantages of such an approach are the **parallel computation** and **memoization** capabilities of GridSearchCV.
Unfortunately though, the current implementation of scikit-learn's `Pipeline`:

- Does not allow postprocessors after the final model
- Does not allow extracting information about intermediate results
- Transforms `X` at every transformer, but a step cannot access the values `X` took before the preceding step
- Only allows single path workflows

### pipegraph goals:

[pipegraph](https://github.com/mcasl/PipeGraph) was programmed in order to allow researchers and practitioners to:

- Use multiple path workflows
- Have access to every variable value produced by any step of the workflow
- Use an arbitrary number of models and transformers in the way the user prefers
- Express the model as a graph consisting of transformers, regressors, classifiers or custom blocks
- Build new custom blocks in an easy way
- Provide the community with some adapters to scikit-learn's objects that may help further developments

## pipegraph main interface: The PipeGraphRegressor and PipeGraphClassifier classes

`pipegraph` provides the user two main classes: `PipeGraphRegressor` and `PipeGraphClassifier`. They both provide a familiar interface to the raw `PipeGraph` class that most users will not need to use. The `PipeGraph` class provides greater versatility, allowing an arbitrary number of inputs and outputs, and may be the base class for those users facing applications with such special needs. Most users, though, will be happy using just the former two classes provided as the main interface to operate the library.

As the names imply, `PipeGraphRegressor` is the class to use for regression models and `PipeGraphClassifier` is intended for classification problems. Indeed, the only difference between these two classes is the default scoring function, which has been chosen according to scikit-learn's defaults for each case. Apart from that, both classes share the same code.

It must be noticed, though, that either of these classes can comprise a plethora of different regressors or classifiers. It is the final step that determines whether we are defining a classification or regression problem.

## From a single path workflow to a graph with multiple paths: Understanding connections

These two classes provide an interface as similar to scikit-learn's `Pipeline` as possible in order to ease their use to those already familiar with scikit-learn. There is a slight but important difference that empowers these two classes: the `PipeGraph` related classes accept extra information about which input variables are needed by each step, thus allowing multiple path workflows.

To clarify the usage of these connections, let's start using `pipegraph` with a simple example that could otherwise be perfectly expressed using a scikit-learn `Pipeline` as well. In this simple case, the data is transformed using a `MinMaxScaler` transformer and the preprocessed data is fed to a `LinearRegression` model. Figure 1 shows the steps of this PipeGraphRegressor and the connections between them: which input variables each one accepts and their origin, that is, whether they are provided by a previous step, like the output of `scaler`, named `predict`, which is used by `linear_model`'s `X` variable; or `y`, which is not calculated by any previous block but is passed by the user in the `fit` or `predict` method calls.

<img src="./images/figure_1-a.png" width="400" />

Figure 1. PipeGraph diagram showing the steps and their connections

In this first simple example of `pipegraph` the last step is a regressor, and thus the `PipeGraphRegressor` class is the most adequate class to choose. But other than that, we define the steps as usual for a standard `Pipeline`: as a list of tuples (label, sklearn object).
We are not yet introducing any information about the connections, in which case the `PipeGraphRegressor` object is built considering that the steps follow a linear workflow, in the same way as a standard `Pipeline`.

```
from pipegraph import PipeGraphRegressor

X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)

scaler = MinMaxScaler()
linear_model = LinearRegression()
steps = [('scaler', scaler),
         ('linear_model', linear_model)]

pgraph = PipeGraphRegressor(steps=steps)
pgraph.fit(X, y)
```

As the printed output shows, the internal links displayed by the `fit_connections` and `predict_connections` parameters are in line with those we saw in Figure 1 and those expected by a single path pipeline. As we did not specify these values, they were created by the `PipeGraphRegressor.__init__()` method as a convenience. We can have a look at these values by directly inspecting the attributes. As `PipeGraphRegressor` and `PipeGraphClassifier` are wrappers of a `PipeGraph` object stored in the `_pipegraph` attribute, we have to dig a bit deeper to find the `fit_connections`:

```
pgraph._pipegraph.fit_connections
```

Figure 2 will surely help in understanding the syntax used by the connections dictionary. It goes like this:

- The keys of the top level entries of the dictionary must be the same as those of the previously defined steps.
- The values associated to these keys define the variables from other steps that are going to be considered as inputs for the current step. They are dictionaries themselves, where:
    - The keys of the nested dictionary represent the input variables as named at the current step.
    - The values associated to these keys define the steps that hold the desired information and the variables as named at that step. This information can be written as:
        - A tuple with the label of the step in position 0 followed by the name of the output variable in position 1.
        - A string:
            - If the string value is one of the labels from the steps, then it is interpreted as a tuple, as previously, with the label of the step in position 0 and 'predict' as the name of the output variable in position 1.
            - Otherwise, it is considered to be a variable from an external source, such as those provided by the user while invoking the `fit`, `predict` or `fit_predict` methods.

<img src="./images/figure_1-b.png" width="700" />

Figure 2. Illustration of the connections of the PipeGraph

The choice of the name 'predict' for default output variables was made for convenience reasons, as will be illustrated later on. The developers preferred always using the same word for every block even though it might not be a regressor nor a classifier. Finally, let's get the predicted values from this `PipeGraphRegressor` for illustrative purposes:

```
y_pred = pgraph.predict(X)
plt.scatter(X, y, label='Original Data')
plt.scatter(X, y_pred, label='Predicted Data')
plt.title('Plots of original and predicted data')
plt.legend(loc='best')
plt.grid(True)
plt.xlabel('Index')
plt.ylabel('Value of Data')
plt.show()
```
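As a concrete illustration of that syntax, the two-step scaler/linear_model graph could be described by a plain dictionary like the one below. This is a hand-written sketch of the structure the rules describe; in practice the `fit_connections` attribute is generated by the library itself.

```python
# Hypothetical connections dictionary for the scaler -> linear_model graph.
connections = {
    'scaler': {
        # 'X' is not a step label, so it is an external variable
        # supplied by the user in fit/predict.
        'X': 'X',
    },
    'linear_model': {
        # Tuple form: output 'predict' of the step labelled 'scaler'.
        'X': ('scaler', 'predict'),
        # External variable passed by the user.
        'y': 'y',
    },
}

print(connections['linear_model']['X'])  # ('scaler', 'predict')
```

Note that the string `'scaler'` alone would have been equivalent to the tuple `('scaler', 'predict')`, since `'scaler'` is a step label and `'predict'` is the default output name.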
## `GridSearchCV` compatibility requirements

Both `PipeGraphRegressor` and `PipeGraphClassifier` are compatible with `GridSearchCV` provided the last step can be scored, either:

- by using `PipeGraphRegressor` or `PipeGraphClassifier` default scoring functions,
- by implementing a custom scoring function capable of handling that last step's inputs and outputs,
- by using a `NeutralRegressor` or `NeutralClassifier` block as the final step.

Those pipegraphs whose last step comes from scikit-learn's estimators will work perfectly well using `PipeGraphRegressor` or `PipeGraphClassifier` default scoring functions. The other two alternatives cover those cases in which a custom block with non-standard inputs is provided. In that case, choosing a neutral regressor or classifier is usually a much simpler approach than writing custom scoring functions. `NeutralRegressor` and `NeutralClassifier` are two classes provided for the user's convenience so that no special scoring function is needed. They just allow the user to pick some variables from previous steps as `X` and `y` and provide compatibility with a default scoring function.

### Example using default scoring functions

We will show more complex examples in what follows, but let's first illustrate with a simple example how to use `GridSearchCV` with the default scoring functions. Figure 3 shows the steps of the model:

- **scaler**: a preprocessing step using a `MinMaxScaler` object,
- **polynomial_features**: a transformer step that generates a new feature matrix consisting of all polynomial combinations of the features with degree less than or equal to the specified one,
- **linear_model**: the `LinearRegression` object we want to fit.

<img src="./images/figure_2.png" width="700" />

Figure 3. Using a PipeGraphRegressor object as estimator by GridSearchCV

Firstly, we import the necessary libraries and create some artificial data.

```
from sklearn.preprocessing import PolynomialFeatures

X = 2*np.random.rand(100,1)-1
y = 40 * X**5 + 3*X*2 + 3*X + 3*np.random.randn(100,1)

scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
```

Secondly, we define the steps and a `param_grid` dictionary as specified by `GridSearchCV`. In this case we just want to explore a few possibilities, varying the degree of the polynomials and whether to use an intercept at the linear model or not.

```
steps = [('scaler', scaler),
         ('polynomial_features', polynomial_features),
         ('linear_model', linear_model)]

param_grid = {'polynomial_features__degree': range(1, 11),
              'linear_model__fit_intercept': [True, False]}
```

Now, we use `PipeGraphRegressor` as estimator for `GridSearchCV` and perform the `fit` and `predict` operations. As the last step, a linear regressor from scikit-learn, already works with the default scoring functions, no extra effort is needed to make it compatible with `GridSearchCV`.

```
pgraph = PipeGraphRegressor(steps=steps)
grid_search_regressor = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y)
y_pred = grid_search_regressor.predict(X)

plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()

coef = grid_search_regressor.best_estimator_.get_params()['linear_model'].coef_
degree = grid_search_regressor.best_estimator_.get_params()['polynomial_features'].degree
print('Information about the parameters of the best estimator: \n degree: {} \n coefficients: {} '.format(degree, coef))
```

This example showed how to use `GridSearchCV` with `PipeGraphRegressor` in a simple single path workflow with default scoring functions. Let's explore a more complex example in the next section.

## Multiple path workflow examples

Until now, all the examples displayed a single path sequence of steps and thus could have been equally easily implemented using scikit-learn's standard `Pipeline`. The following examples show multiple path cases, illustrating some compatibility constraints that occur and how to deal with them successfully.

### Example: Injecting a varying vector in the sample_weight parameter of LinearRegression

This example illustrates the case in which a varying vector is injected into a linear regression model as `sample_weight` in order to evaluate the candidates and obtain the `sample_weight` that generates the best results. The steps of this model are shown in Figure 4. To perform such an experiment, the following issues appear:

- The shape of the graph is not a single path workflow like those that can be implemented using Pipeline. Thus, we need to use `pipegraph`.
- The model has 3 input variables, `X`, `y`, and `sample_weight`.
The `Pipegraph` class can accept an arbitrary number of input variables, but, in order to use scikit-learn's current implementation of GridSearchCV, only `X` and `y` are accepted. We can do the trick but previously concatenating `X` and `sample_weight` into a single pandas DataFrame, for example, in order to comply with GridSearchCV requisites. That implies that the graph must be capable of separating afterwards the augmented `X` into the two components again. The **selector** step is in charge of this splitting. This step features a `ColumnSelector` custom step. This is not a scikit-learn original object but a custom class that allows to split an array into columns. In this case, ``X`` augmented data is column-wise divided as specified in a mapping dictionary. We will talk later on about custom blocks. - The information provided to the ``sample_weight`` parameter of the LinearRegression step varies on the different scenarios explored by GridSearchCV. In a GridSearchCV with Pipeline, ``sample_weight`` can't vary because it is treated as a ``fit_param`` instead of a variable. Using pipegraph's connections this is no longer a problem. - As we need a custom transformer to apply the power function to the sample_weight vector, we implement the **custom_power** step featuring a `CustomPower` custom class. Again, we will talk later on about custom blocks. The three other steps from the model are already known: - **scaler**: implements `MinMaxScaler` class - **polynomial_features**: Contains a `PolynomialFeatures` object - **linear_model**: Contains a `LinearRegression` model <img src="./images/figure_3.png" width="600" /> Figure 4. A multipath model Let's import the new components: ``` import pandas as pd from pipegraph.base import ColumnSelector from pipegraph.demo_blocks import CustomPower ``` We create an augmented ``X`` in which all data but ``y`` is concatenated. In this case, we concatenate ``X`` and ``sample_weight`` vector. 
```
X = pd.DataFrame(dict(X=np.array([  1,    2,    3,    4,    5,    6,    7,    8,    9,   10,   11]),
                      sample_weight=np.array([0.01, 0.95, 0.10, 0.95, 0.95, 0.10, 0.10, 0.95, 0.95, 0.95, 0.01])))
y = np.array(           [  10,    4,   20,   16,   25 , -60,   85,   64,   81,  100,  150])
```

Next we define the steps and use `PipeGraphRegressor` as the estimator for `GridSearchCV`.

```
scaler = MinMaxScaler()
polynomial_features = PolynomialFeatures()
linear_model = LinearRegression()
custom_power = CustomPower()
selector = ColumnSelector(mapping={'X': slice(0, 1),
                                   'sample_weight': slice(1, 2)})

steps = [('selector', selector),
         ('custom_power', custom_power),
         ('scaler', scaler),
         ('polynomial_features', polynomial_features),
         ('linear_model', linear_model)]

pgraph = PipeGraphRegressor(steps=steps)
```

Now, we have to define the connections of the model. We could have specified a dictionary containing the connections, but [as suggested by Joel Nothman](https://github.com/scikit-learn-contrib/scikit-learn-contrib/issues/28), scikit-learn users might find it more convenient to use a method like `inject`, as in this example. Let's see `inject`'s docstring:

```
import inspect
print(inspect.getdoc(pgraph.inject))
```

`inject` allows chaining different calls to progressively describe all the connections needed, in an easy-to-read manner:

```
(pgraph.inject(sink='selector', sink_var='X', source='_External', source_var='X')
       .inject('custom_power', 'X', 'selector', 'sample_weight')
       .inject('scaler', 'X', 'selector', 'X')
       .inject('polynomial_features', 'X', 'scaler')
       .inject('linear_model', 'X', 'polynomial_features')
       .inject('linear_model', 'y', source_var='y')
       .inject('linear_model', 'sample_weight', 'custom_power'))
```

Then we define ``param_grid`` as expected by `GridSearchCV` to explore several possibilities of varying parameters.
```
param_grid = {'polynomial_features__degree': range(1, 3),
              'linear_model__fit_intercept': [True, False],
              'custom_power__power': [1, 5, 10, 20, 30]}

grid_search_regressor = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_regressor.fit(X, y)
y_pred = grid_search_regressor.predict(X)

plt.scatter(X.loc[:,'X'], y)
plt.scatter(X.loc[:,'X'], y_pred)
plt.show()

power = grid_search_regressor.best_estimator_.get_params()['custom_power']
print('Power that obtains the best results in the linear model: \n {}'.format(power))
```

This example showed how to overcome current limitations of scikit-learn's `Pipeline`:

- Displayed a multipath workflow successfully implemented by **pipegraph**
- Showed how to circumvent current limitations of the standard `GridSearchCV`, in particular the restriction on the number of input parameters
- Showed the flexibility of **pipegraph** for specifying the connections in an easy-to-read manner using the `inject` method
- Demonstrated the capability of injecting a previous step's output into another model's parameters, as is the case with the sample_weight parameter of the linear regressor.

### Example: Combination of classifiers

A set of classifiers is combined as input to a neural network. Additionally, the scaled inputs are injected into the neural network as well. The data is first transformed by scaling its features.

Steps of the **PipeGraph**:

- **scaler**: A `MinMaxScaler` data preprocessor
- **gaussian_nb**: A `GaussianNB` classifier
- **svc**: An `SVC` classifier
- **concat**: A `Concatenator` custom class that appends the outputs of the `GaussianNB` and `SVC` classifiers, and the scaled inputs.
- **mlp**: An `MLPClassifier` object

<img src="./images/figure_4.png" width="700" />

Figure 5.
PipeGraph diagram showing the steps and their connections

```
from pipegraph.base import PipeGraphClassifier, Concatenator
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

iris = load_iris()
X = iris.data
y = iris.target

scaler = MinMaxScaler()
gaussian_nb = GaussianNB()
svc = SVC()
mlp = MLPClassifier()
concatenator = Concatenator()

steps = [('scaler', scaler),
         ('gaussian_nb', gaussian_nb),
         ('svc', svc),
         ('concat', concatenator),
         ('mlp', mlp)]
```

In this example we use a `PipeGraphClassifier` because the result is a classification and we want to take advantage of scikit-learn's default scoring method for classifiers. Once more, we use the `inject` chain of calls to define the connections.

```
pgraph = PipeGraphClassifier(steps=steps)
(pgraph.inject(sink='scaler', sink_var='X', source='_External', source_var='X')
       .inject('gaussian_nb', 'X', 'scaler')
       .inject('gaussian_nb', 'y', source_var='y')
       .inject('svc', 'X', 'scaler')
       .inject('svc', 'y', source_var='y')
       .inject('concat', 'X1', 'scaler')
       .inject('concat', 'X2', 'gaussian_nb')
       .inject('concat', 'X3', 'svc')
       .inject('mlp', 'X', 'concat')
       .inject('mlp', 'y', source_var='y')
)

param_grid = {'svc__C': [0.1, 0.5, 1.0],
              'mlp__hidden_layer_sizes': [(3,), (6,), (9,),],
              'mlp__max_iter': [5000, 10000]}

grid_search_classifier = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
grid_search_classifier.fit(X, y)
y_pred = grid_search_classifier.predict(X)

grid_search_classifier.best_estimator_.get_params()

# Code for plotting the confusion matrix taken from 'Python Data Science Handbook' by Jake VanderPlas
from sklearn.metrics import confusion_matrix
import seaborn as sns; sns.set()  # for plot styling

mat = confusion_matrix(y_pred, y)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False)
plt.xlabel('true label')
plt.ylabel('predicted label');
plt.show()
```

This example displayed complex data
injections that are successfully managed by **pipegraph**.

### Example: Demultiplexer - multiplexer

An imaginative layout using a classifier to predict the cluster labels and fitting a separate model for each cluster. We will elaborate on this example in the examples that follow, introducing variations. As the figure shows, the steps of the **PipeGraph** are:

- **scaler**: A `MinMaxScaler` data preprocessor
- **classifier**: A `GaussianMixture` classifier
- **demux**: A custom `Demultiplexer` class in charge of splitting the input arrays according to the selection input vector
- **lm_0**: A `LinearRegression` model
- **lm_1**: A `LinearRegression` model
- **lm_2**: A `LinearRegression` model
- **mux**: A custom `Multiplexer` class in charge of combining different input arrays into a single one according to the selection input vector

<img src="./images/figure_5.png" width="700" />

Figure 6. PipeGraph diagram showing the steps and their connections

```
from pipegraph.base import PipeGraphRegressor, Demultiplexer, Multiplexer
from sklearn.mixture import GaussianMixture

X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))

X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()

scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
demux = Demultiplexer()
lm_0 = LinearRegression()
lm_1 = LinearRegression()
lm_2 = LinearRegression()
mux = Multiplexer()

steps = [('scaler', scaler),
         ('classifier', gaussian_mixture),
         ('demux', demux),
         ('lm_0', lm_0),
         ('lm_1', lm_1),
         ('lm_2', lm_2),
         ('mux', mux), ]
```

Instead of using ``inject`` as in the previous example, in this one we are
going to pass a dictionary describing the connections to the PipeGraph constructor.

```
connections = { 'scaler': {'X': 'X'},
                'classifier': {'X': 'scaler'},
                'demux': {'X': 'scaler',
                          'y': 'y',
                          'selection': 'classifier'},
                'lm_0': {'X': ('demux', 'X_0'),
                         'y': ('demux', 'y_0')},
                'lm_1': {'X': ('demux', 'X_1'),
                         'y': ('demux', 'y_1')},
                'lm_2': {'X': ('demux', 'X_2'),
                         'y': ('demux', 'y_2')},
                'mux': {'0': 'lm_0',
                        '1': 'lm_1',
                        '2': 'lm_2',
                        'selection': 'classifier'}}

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```

### Example: Encapsulating several blocks into a PipeGraph and reusing it

We consider the previous example, in which we had the following pipegraph model:

<img src="./images/figure_6.png" width="700" />

We may be interested in using a fragment of the pipegraph, for example, those blocks marked with the circle (the Demultiplexer, the linear model collection, and the Multiplexer), as a single block in another pipegraph:

<img src="./images/figure_7.png" width="500" />

We prepare the data and build a PipeGraph with these steps alone:

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.preprocessing import MinMaxScaler
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression
from pipegraph.base import PipeGraph, PipeGraphRegressor, Demultiplexer, Multiplexer

# Prepare some artificial data
X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))

X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()

# Create a single complex block
demux = Demultiplexer()
lm_0 = LinearRegression()
lm_1 = LinearRegression()
lm_2 = LinearRegression()
mux = Multiplexer()

three_multiplexed_models_steps = [
    ('demux', demux),
    ('lm_0', lm_0),
    ('lm_1', lm_1),
    ('lm_2', lm_2),
    ('mux', mux), ]

three_multiplexed_models_connections = {
    'demux': {'X': 'X',
              'y': 'y',
              'selection': 'selection'},
    'lm_0': {'X': ('demux', 'X_0'),
             'y': ('demux', 'y_0')},
    'lm_1': {'X': ('demux', 'X_1'),
             'y': ('demux', 'y_1')},
    'lm_2': {'X': ('demux', 'X_2'),
             'y': ('demux', 'y_2')},
    'mux': {'0': 'lm_0',
            '1': 'lm_1',
            '2': 'lm_2',
            'selection': 'selection'}}

three_multiplexed_models = PipeGraph(steps=three_multiplexed_models_steps,
                                     fit_connections=three_multiplexed_models_connections)
```

Now we can treat this PipeGraph as a reusable component and use it as a unitary step in another PipeGraph:

```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = three_multiplexed_models

steps = [('scaler', scaler),
         ('classifier', gaussian_mixture),
         ('models', three_multiplexed_models), ]

connections = {'scaler': {'X': 'X'},
               'classifier': {'X': 'scaler'},
               'models': {'X': 'scaler',
                          'y': 'y',
                          'selection': 'classifier'}, }

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```

### Example: Dynamically built component using initialization parameters

The last section showed how the user can choose to encapsulate several blocks into a PipeGraph and use it as a single unit in another PipeGraph. Now we will see how these components can be dynamically built at runtime depending on initialization parameters.

<img src="./images/figure_8.png" width="700" />

We can think of programmatically changing the number of regression models inside this component we isolated in the previous example.
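The step-and-connection bookkeeping such a component needs can be sketched in plain Python. The helper below is purely illustrative (it is not pipegraph's API), and plain strings stand in for the real Demultiplexer/Multiplexer objects so the sketch stays dependency-free:

```python
# Hypothetical sketch: generate the steps list and connections dictionary for
# any number of replicas instead of writing lm_0, lm_1, lm_2 by hand.

def build_multiplexed_layout(number_of_replicas, make_model):
    steps = [('demux', 'Demultiplexer()')]           # placeholder for the real object
    connections = {'demux': {'X': 'X', 'y': 'y', 'selection': 'selection'}}
    mux_inputs = {'selection': 'selection'}
    for i in range(number_of_replicas):
        name = 'model_{}'.format(i)
        steps.append((name, make_model()))
        connections[name] = {'X': ('demux', 'X_{}'.format(i)),
                             'y': ('demux', 'y_{}'.format(i))}
        mux_inputs[str(i)] = name
    steps.append(('mux', 'Multiplexer()'))           # placeholder for the real object
    connections['mux'] = mux_inputs
    return steps, connections

demo_steps, demo_connections = build_multiplexed_layout(3, lambda: 'LinearRegression()')
print([name for name, _ in demo_steps])
# ['demux', 'model_0', 'model_1', 'model_2', 'mux']
```

With real pipegraph objects substituted for the placeholder strings, the same loop produces layouts equivalent to the hand-written ones above for any replica count.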
First we do it by using initialization parameters in a ``PipeGraph`` subclass we called ``pipegraph.base.RegressorsWithParametrizedNumberOfReplicas``:

```
import inspect
from pipegraph.base import RegressorsWithParametrizedNumberOfReplicas

print(inspect.getsource(RegressorsWithParametrizedNumberOfReplicas))
```

As can be seen from the source code, in this example we are basically interested in using a PipeGraph object whose `__init__` has different parameters than the usual ones. Thus, we subclass PipeGraph and reimplement the `__init__` method. In doing so, we are capable of working out the structure of the steps and connections before calling the `super().__init__` method that provides the regular `PipeGraph` object.

Using this new component we can build a PipeGraph with as many multiplexed models as given by the `number_of_replicas` parameter:

```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithParametrizedNumberOfReplicas(number_of_replicas=3,
                                                    model_prototype=LinearRegression(),
                                                    model_parameters={})

steps = [('scaler', scaler),
         ('classifier', gaussian_mixture),
         ('models', models), ]

connections = {'scaler': {'X': 'X'},
               'classifier': {'X': 'scaler'},
               'models': {'X': 'scaler',
                          'y': 'y',
                          'selection': 'classifier'}, }

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```

### Example: Dynamically built component using input signal values during the fit stage

The last example showed how to grow a PipeGraph object programmatically at runtime using the `__init__` method. In this example, we are going to show how we can change the internal structure of a PipeGraph object, not during initialization but during fit. Specifically, we will show how the multiplexed models can be dynamically added at runtime depending on input signal values during `fit`.
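The core idea, discovering how many sub-models are needed from the selection signal at fit time rather than from an `__init__` parameter, can be sketched with a toy estimator (illustrative only, not pipegraph's implementation; group means stand in for real regressors):

```python
import numpy as np

class DataDependentReplicas:
    """Toy estimator: builds one sub-model per label found in `selection` during fit."""
    def fit(self, X, y, selection):
        self.labels_ = np.unique(selection)  # inferred from data, not configured upfront
        self.models_ = {label: y[selection == label].mean()  # stand-in "model": group mean
                        for label in self.labels_}
        return self

    def predict(self, X, selection):
        return np.array([self.models_[label] for label in selection])

est = DataDependentReplicas().fit(X=np.zeros(6),
                                  y=np.array([1., 1., 5., 5., 9., 9.]),
                                  selection=np.array([0, 0, 1, 1, 2, 2]))
print(len(est.models_))  # 3 sub-models, discovered from the data
```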
Now we consider the possibility of using the classifier's output to automatically adjust the number of replicas. This can be seen as PipeGraph changing its inner topology to adapt its connections and steps to other components' context. This morphing capability opens up interesting possibilities to explore.

```
import inspect
from pipegraph.base import RegressorsWithDataDependentNumberOfReplicas

print(inspect.getsource(RegressorsWithDataDependentNumberOfReplicas))
```

Again we subclass from the parent `PipeGraph` class and implement a different `__init__`. In this example we won't make use of a `number_of_replicas` parameter, as it will be inferred from data during `fit`, and thus we are satisfied by passing only those parameters allowing us to change the regressor models. As can be seen from the code, the `__init__` method just stores the values provided by the user, and it is the `fit` method that is in charge of growing the inner structure of the pipegraph.

Using this new component we can build a simplified PipeGraph:

```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithDataDependentNumberOfReplicas(model_prototype=LinearRegression(),
                                                     model_parameters={})

steps = [('scaler', scaler),
         ('classifier', gaussian_mixture),
         ('models', models), ]

connections = {'scaler': {'X': 'X'},
               'classifier': {'X': 'scaler'},
               'models': {'X': 'scaler',
                          'y': 'y',
                          'selection': 'classifier'}, }

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
pgraph.fit(X, y)
y_pred = pgraph.predict(X)
plt.scatter(X, y)
plt.scatter(X, y_pred)
plt.show()
```

### Example: GridSearch on dynamically built component using input signal values

The previous example showed how a PipeGraph object can be dynamically built at runtime depending on input signal values during fit. Now, in this example, we will show how to use `GridSearchCV` to explore the best combination of hyperparameters.
```
from sklearn.model_selection import train_test_split
from pipegraph.base import NeutralRegressor

# We prepare some data
X_first = pd.Series(np.random.rand(100,))
y_first = pd.Series(4 * X_first + 0.5*np.random.randn(100,))
X_second = pd.Series(np.random.rand(100,) + 3)
y_second = pd.Series(-4 * X_second + 0.5*np.random.randn(100,))
X_third = pd.Series(np.random.rand(100,) + 6)
y_third = pd.Series(2 * X_third + 0.5*np.random.randn(100,))

X = pd.concat([X_first, X_second, X_third], axis=0).to_frame()
y = pd.concat([y_first, y_second, y_third], axis=0).to_frame()
X_train, X_test, y_train, y_test = train_test_split(X, y)
```

To ease the calculation of the score for the GridSearchCV we add a neutral regressor as a last step, capable of calculating the score using a default scoring function. This is much more convenient than worrying about programming a custom scoring function for a block with an arbitrary number of inputs.

```
scaler = MinMaxScaler()
gaussian_mixture = GaussianMixture(n_components=3)
models = RegressorsWithDataDependentNumberOfReplicas(model_prototype=LinearRegression(),
                                                     model_parameters={})
neutral_regressor = NeutralRegressor()

steps = [('scaler', scaler),
         ('classifier', gaussian_mixture),
         ('models', models),
         ('neutral', neutral_regressor)]

connections = {'scaler': {'X': 'X'},
               'classifier': {'X': 'scaler'},
               'models': {'X': 'scaler',
                          'y': 'y',
                          'selection': 'classifier'},
               'neutral': {'X': 'models'}
               }

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
```

Using GridSearchCV to find the best number of clusters and the best regressors:

```
from sklearn.model_selection import GridSearchCV

param_grid = {'classifier__n_components': range(2,10)}
gs = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
gs.fit(X_train, y_train)
y_pred = gs.predict(X_train)
plt.scatter(X_train, y_train)
plt.scatter(X_train, y_pred)
print("Score:", gs.score(X_test, y_test))
print("classifier__n_components:",
      gs.best_estimator_.get_params()['classifier__n_components'])
```

### Example: Alternative solution

Now we consider an alternative solution to the previous example. The solution already shown displayed the potential of being able to morph the graph during fitting. A simpler approach is considered in this example by reusing components and combining the classifier with the demultiplexed models.

```
from pipegraph.base import ClassifierAndRegressorsBundle

print(inspect.getsource(ClassifierAndRegressorsBundle))
```

As before, we built a custom block by subclassing PipeGraph and modifying the `__init__` method to provide the parameters specifically needed for our purposes. Then we chain, in the same PipeGraph, the classifier and the already known block for creating multiplexed models, providing the parameters during `__init__`. Note that the classifier and the models share the same number of clusters and models: the number_of_replicas value provided by the user.
Using this new component we can build a simplified PipeGraph:

```
scaler = MinMaxScaler()
classifier_and_models = ClassifierAndRegressorsBundle(number_of_replicas=6)
neutral_regressor = NeutralRegressor()

steps = [('scaler', scaler),
         ('bundle', classifier_and_models),
         ('neutral', neutral_regressor)]

connections = {'scaler': {'X': 'X'},
               'bundle': {'X': 'scaler', 'y': 'y'},
               'neutral': {'X': 'bundle'}}

pgraph = PipeGraphRegressor(steps=steps, fit_connections=connections)
```

Using GridSearchCV to find the best number of clusters and the best regressors:

```
from sklearn.model_selection import GridSearchCV

param_grid = {'bundle__number_of_replicas': range(3,10)}
gs = GridSearchCV(estimator=pgraph, param_grid=param_grid, refit=True)
gs.fit(X_train, y_train)
y_pred = gs.predict(X_train)
plt.scatter(X_train, y_train)
plt.scatter(X_train, y_pred)
print("Score:", gs.score(X_test, y_test))
print("bundle__number_of_replicas:",
      gs.best_estimator_.get_params()['bundle__number_of_replicas'])
```
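To close, the demultiplex/fit/multiplex pattern that runs through these examples can be unrolled into plain numpy. This sketch is for intuition only: a hand-made thresholding rule stands in for the GaussianMixture classifier, and `np.polyfit` stands in for each LinearRegression:

```python
import numpy as np

# Three noise-free linear clusters, mirroring the artificial data above
rng = np.random.RandomState(0)
X = np.concatenate([rng.rand(100), rng.rand(100) + 3, rng.rand(100) + 6])
y = np.concatenate([4 * X[:100], -4 * X[100:200], 2 * X[200:]])

selection = np.digitize(X, [2.0, 5.0])   # "classifier": a cluster label per sample

fits = {}
for label in np.unique(selection):       # "demultiplex" and fit one model per label
    mask = selection == label
    fits[label] = np.polyfit(X[mask], y[mask], deg=1)

y_pred = np.empty_like(y)
for label, coeffs in fits.items():       # predict, then "multiplex" back together
    mask = selection == label
    y_pred[mask] = np.polyval(coeffs, X[mask])

print(np.abs(y - y_pred).max())          # ~0: each cluster recovers its own line
```

What pipegraph adds over this hand-rolled loop is that the whole structure becomes a single estimator, so it can be fitted, cloned, and grid-searched like any other scikit-learn object.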
```
import tensorflow as tf
from tensorflow.keras import backend as K
import matplotlib as mpl
import pickle
import matplotlib.pyplot as plt
import matplotlib
import numpy as np
import os

# Init NuScenes. Requires the dataset to be stored on disk.
from nuscenes.nuscenes import NuScenes
from nuscenes.map_expansion.map_api import NuScenesMap

matplotlib.rcParams['figure.figsize'] = (24, 18)
matplotlib.rcParams['figure.facecolor'] = 'white'
matplotlib.rcParams.update({'font.size': 20})

TRAIN_SIZE = 9800
TRAIN_TIME = 6
BATCH_SIZE = 32
BUFFER_SIZE = 500

total_ped_matrix = np.load("../details/new_ped_matrix.npy")
with open("../details/ped_dataset.pkl", "rb") as f:
    ped_dataset = pickle.load(f)
with open('../details/scene_info.pkl', 'rb') as handle:
    scene_info = pickle.load(handle)

nusc = NuScenes(version='v1.0-trainval', \
                dataroot='../../../../data/', \
                verbose=False)
so_map = NuScenesMap(dataroot='../../../../data/', \
                     map_name='singapore-onenorth')
bs_map = NuScenesMap(dataroot='../../../../data/', \
                     map_name='boston-seaport')
sh_map = NuScenesMap(dataroot='../../../../data/', \
                     map_name='singapore-hollandvillage')
sq_map = NuScenesMap(dataroot='../../../../data/', \
                     map_name='singapore-queenstown')

# dict mapping map name to map file
map_files = {'singapore-onenorth': so_map,
             'boston-seaport': bs_map,
             'singapore-hollandvillage': sh_map,
             'singapore-queenstown': sq_map}

# defining the custom rmse loss function
def rmse_loss(gt_path, pred_path):
    ''' calculates custom rmse loss between every time point '''
    gt_path = tf.reshape(gt_path, [-1, 10, 2])
    pred_path = tf.reshape(pred_path, [-1, 10, 2])
    return K.sqrt(K.mean(K.square(gt_path-pred_path)))

# loading the model
fc_model = tf.keras.models.load_model("../checkpoints/lstm_best.hdf5", compile=False)
fc_model.compile(optimizer=tf.keras.optimizers.RMSprop(clipvalue=1.0),
                 loss=rmse_loss,
                 metrics=["accuracy"])

# undo normalization for plotting
def move_from_origin(l, origin):
    x0, y0 = origin
    return [[x + x0, y + y0] for x, y in l]

def rotate_from_y(l, angle):
    theta = -angle
    return [(x*np.cos(theta) - y*np.sin(theta),
             x*np.sin(theta) + y*np.cos(theta)) for x, y in l]

# loss calculation for test prediction
def rmse_error(l1, l2):
    loss = []
    if len(np.array(l1).shape) < 2:
        return ((l1[0] - l2[0])**2 + (l1[1] - l2[1])**2)**0.5
    for p1, p2 in zip(l1, l2):
        loss.append(((p1[0] - p2[0])**2 + (p1[1] - p2[1])**2)**0.5)
    loss = np.array(loss)
    return np.mean(loss)

rmse_values = []
fde_values = []
for test_idx in range(9800, 11056):
    test_data = total_ped_matrix[test_idx:test_idx+1,:6,:]
    predictions = fc_model.predict(test_data).reshape(-1, 2)
    predictions = move_from_origin(rotate_from_y(predictions, ped_dataset[test_idx]["angle"]),
                                   ped_dataset[test_idx]["origin"])
    # n_scene = ped_dataset[test_idx]["scene_no"]
    # ego_poses = map_files[scene_info[str(n_scene)]["map_name"]].render_pedposes_on_fancy_map(
    #     nusc, scene_tokens=[nusc.scene[n_scene]['token']],
    #     ped_path = np.array(ped_dataset[test_idx]["translation"])[:,:2],
    #     verbose = False,
    #     render_egoposes=True, render_egoposes_range=False,
    #     render_legend=False)
    # plt.scatter(*zip(*np.array(ped_dataset[test_idx]["translation"])[:6,:2]), c='k', s=5, zorder=2)
    # plt.scatter(*zip(*np.array(ped_dataset[test_idx]["translation"])[6:,:2]), c='b', s=5, zorder=3)
    # plt.scatter(*zip(*predictions), c='r', s=5, zorder=4)
    # plt.show()
    loss = rmse_error(predictions, np.array(ped_dataset[test_idx]["translation"])[6:,:2])
    final_loss = rmse_error(predictions[-1], np.array(ped_dataset[test_idx]["translation"])[-1,:2])
    rmse_values.append(loss)
    fde_values.append(final_loss)

print(f"RMSE Loss in m is {np.mean(np.array(rmse_values))}")
print(f"Loss of final position in m is {np.mean(np.array(fde_values))}")

feature_errors = []
for j in range(total_ped_matrix.shape[2]):
    trial_matrix = np.copy(total_ped_matrix)
    trial_matrix[9800:11056,:,j] = trial_matrix[11054:12310,:,j]
    rmse_values = []
    for test_idx in range(9800,11056):
        test_data = trial_matrix[test_idx:test_idx+1,:6,:]
        predictions = fc_model.predict(test_data).reshape(-1, 2)
        predictions = move_from_origin(rotate_from_y(predictions, ped_dataset[test_idx]["angle"]),
                                       ped_dataset[test_idx]["origin"])
        loss = rmse_error(predictions, np.array(ped_dataset[test_idx]["translation"])[6:,:2])
        rmse_values.append(loss)
    feature_errors.append(np.mean(np.array(rmse_values)))

feature_importance = [l-0.2265729 for l in feature_errors]
plt.bar(['x','y','vel_x','vel_y','acc_x','acc_y','d_curb'], feature_importance)
plt.title("Effect on performance with respect to features used")
plt.xlabel("Features used")
plt.ylabel("Performance Difference (RMSE)")
plt.savefig("../images/feature_analysis.png", bbox_inches='tight', pad_inches=1)
plt.show()
```
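The two metrics computed above, the mean point-wise displacement (reported here as RMSE) and the final displacement error (FDE) at the last time step, can be written as small standalone functions and checked on toy paths:

```python
import numpy as np

def path_rmse(pred, gt):
    # mean euclidean distance over all time steps of a 2-D path
    pred, gt = np.asarray(pred), np.asarray(gt)
    return np.mean(np.linalg.norm(pred - gt, axis=1))

def fde(pred, gt):
    # euclidean distance between the final predicted and ground-truth points
    pred, gt = np.asarray(pred), np.asarray(gt)
    return np.linalg.norm(pred[-1] - gt[-1])

gt   = np.array([[0., 0.], [1., 0.], [2., 0.]])
pred = np.array([[0., 1.], [1., 1.], [2., 3.]])
print(path_rmse(pred, gt))  # (1 + 1 + 3) / 3 = 1.666...
print(fde(pred, gt))        # 3.0
```

These are equivalent to the `rmse_error` calls in the evaluation loop above, just vectorized with `np.linalg.norm`.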
# 2. Acquire the Data

## Finding Data Sources

There are three places to get onion price and quantity information by market.

1. **[Agmarket](http://agmarknet.nic.in/)** - This is the website run by the Directorate of Marketing & Inspection (DMI), Ministry of Agriculture, Government of India, and provides daily price and arrival data for all agricultural commodities at national and state level. Unfortunately, the link to get the Market-wise Daily Report for a Specific Commodity (Onion for us) leads to a multipage aspx entry form to get data for each date. So it is likely to require an involved scraper to get the data. Too much effort - move on. Here is the best link to go to for what is available - http://agmarknet.nic.in/agnew/NationalBEnglish/SpecificCommodityWeeklyReport.aspx?ss=1

2. **[Data.gov.in](https://data.gov.in/)** - This is normally a good place to get government data in a machine-readable form like csv or xml. The Variety-wise Daily Market Prices Data of Onion is available for each year as an XML, but unfortunately it does not include the quantity information that is needed. It would be good to have both price and quantity - so even though this is easy, let's see if we can get both from a different source. Here is the best link to go to for what is available - https://data.gov.in/catalog/variety-wise-daily-market-prices-data-onion#web_catalog_tabs_block_10

3. **[NHRDF](http://nhrdf.org/en-us/)** - This is the website of the National Horticultural Research & Development Foundation, which maintains a database on Market Arrivals and Price, Area and Production, and Export Data for three commodities - Garlic, Onion and Potatoes. We are in luck! It also has data from 1996 onwards, and there is only one form to fill to get the data in a tabular form. Furthermore, it also has production and export data. Excellent. Let's use this.
Here is the best link to go to for all that is available - http://nhrdf.org/en-us/DatabaseReports

## Scraping the Data

### Ways to Scrape Data

Now we can do this at two different levels of sophistication.

1. **Automate the form filling process**: The form on this page looks simple. But viewing the source in the browser shows a form with hidden fields, and we will need to access it as a browser to get the session fields and then submit the form. This is a little more complicated than simply scraping a table on a webpage.

2. **Manually fill the form**: What if we manually fill the form with the desired fields and then save the page as an html file? Then we can read this file and just scrape the table from it.

Let's go with the simple way for now.

### Scraping - Manual Form Filling

So let us fill the form to get a small subset of data and test our scraping process. We will start by getting the [Monthwise Market Arrivals](http://nhrdf.org/en-us/MonthWiseMarketArrivals).

- Crop Name: Onion
- Month: January
- Market: All
- Year: 2016

The saved webpage is available at [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)

### Understand the HTML Structure

We need to scrape data from this html page... So let us try to understand the structure of the page.

1. You can view the source of the page - typically Right Click and View Source in any browser - and that would give you the source HTML for the page.
2. You can open the developer tools in your browser and investigate the structure as you mouse over the page.
3. We can use a tool like [Selector Gadget](http://selectorgadget.com/) to understand the ids and classes used in the web page.

Our data is under the **&lt;table&gt;** tag

### Exercise #1

Find the number of tables in the HTML structure of [MonthWiseMarketArrivalsJan2016.html](MonthWiseMarketArrivalsJan2016.html)?
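One way to check your answer with only the standard library is to count `<table>` start tags while parsing the saved page (a quick sketch on a toy string; pandas does the real work in the next section):

```python
from html.parser import HTMLParser

class TableCounter(HTMLParser):
    """Counts <table> start tags encountered while parsing an HTML document."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == 'table':
            self.count += 1

counter = TableCounter()
counter.feed('<html><body><table></table><div><table></table></div></body></html>')
print(counter.count)  # 2
```

To run it on the actual file, feed it `open('MonthWiseMarketArrivalsJan2016.html').read()` instead of the toy string.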
### Find all the Tables

```
# Import the library we need, which is Pandas
import pandas as pd

# Read all the tables from the html document
AllTables = pd.read_html('MonthWiseMarketArrivalsJan2016.html')

# Let us find out how many tables it has found
len(AllTables)

type(AllTables)
```

### Exercise #2

Find the exact table of data we want in the list of AllTables?

```
AllTables[4]
```

### Get the exact table

To read the exact table we need to pass in an identifier value which would identify the table. We can use the `attrs` parameter in read_html to do so. The parameter we will pass is the `id` variable.

```
# So can we read our exact table
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html',
                        attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})

# So how many tables have we got now
len(OneTable)

# Show the table of data identified by pandas with just the first five rows
OneTable[0].head()
```

However, we have not got the header correctly in our dataframe. Let us see if we can fix this. To get help on any function, just use `??` before the function. Run this and see what additional parameter you need to define to get the header correctly.

```
??pd.read_html
```

### Exercise #3

Read the html file again and ensure that the correct header is identified by pandas?

```
OneTable = pd.read_html('MonthWiseMarketArrivalsJan2016.html', header = 0,
                        attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})
```

Show the top five rows of the dataframe you have read to ensure the headers are now correct.

```
OneTable[0].head()
```

### Dataframe Viewing

```
# Let us store the dataframe in a df variable.
# You will see df used as a very common convention in pandas data science code
df = OneTable[0]

# Shape of the dataset - number of rows & number of columns in the dataframe
df.shape

# Get the names of all the columns
df.columns

# Can we see sample rows - the top 5 rows
df.head()

# Can we see sample rows - the bottom 5 rows
df.tail()

# Can we access a specific column
df["Market"]

# Using the dot notation
df.Market

# Selecting specific column and rows
df[0:5]["Market"]

# Works both ways
df["Market"][0:5]

# Getting unique values of Market
pd.unique(df['Market'])
```

## Downloading the Entire Month Wise Arrival Data

```
AllTable = pd.read_html('MonthWiseMarketArrivals.html', header = 0,
                        attrs = {'id' : 'dnn_ctr974_MonthWiseMarketArrivals_GridView1'})

AllTable[0].head()

??pd.DataFrame.to_csv

AllTable[0].columns

# Change the column names to simpler ones
AllTable[0].columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod']
AllTable[0].head()

# Save the dataframe to a csv file
AllTable[0].to_csv('MonthWiseMarketArrivals.csv', index = False)
```
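The rename-and-save pattern above can be checked on a tiny in-memory CSV; the two rows below are made-up illustrative values, not real NHRDF data:

```python
import io
import pandas as pd

# Toy CSV standing in for the scraped table (values are invented for illustration)
csv_text = """Market,Month,Year,Quantity,PriceMin,PriceMax,PriceMod
LASALGAON(MS),January,2016,225063,700,1232,1088
MUMBAI(MS),January,2016,1015773,525,1500,1100
"""
df = pd.read_csv(io.StringIO(csv_text))

# Same renaming step as above: simpler, camelCase-free column names
df.columns = ['market', 'month', 'year', 'quantity', 'priceMin', 'priceMax', 'priceMod']

# Same to_csv call as above, written to an in-memory buffer instead of a file
buffer = io.StringIO()
df.to_csv(buffer, index=False)

print(pd.unique(df['market']))  # ['LASALGAON(MS)' 'MUMBAI(MS)']
```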
```
# default_exp tutorial
```

# nbdev tutorial
> A step by step guide

- image: images/nbdev_source.gif

nbdev is a system for *exploratory programming*. See the [nbdev launch post](https://www.fast.ai/2019/12/02/nbdev/) for information about what that means. In practice, programming in this way can feel very different to the kind of programming many of you will be familiar with, since we've mainly been taught coding techniques that are (at least implicitly) tied to the underlying tools we have access to. I've found that programming in a "notebook first" way can make me 2-3x more productive than I was before (when we used vscode, Visual Studio, vim, PyCharm, and similar tools).

In this tutorial, I'll try to get you up and running with the basics of the nbdev system as quickly and easily as possible. You can also watch this video in which I take you through the tutorial, step by step (to view full screen, click the little square in the bottom right of the video; to view in a separate Youtube window, click the Youtube logo):

<iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/Hrs7iEYmRmg" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Set up Your Jupyter Server

### Jupyter Environment

To complete this tutorial, you'll need a Jupyter Notebook server configured on your machine. If you have not installed Jupyter before, you may find the [Anaconda Individual Edition](https://www.anaconda.com/products/individual) the simplest to install. If you already have experience with Jupyter, please note that everything in this tutorial must be run using the same kernel.

### Install `nbdev`

No matter how you installed Jupyter, you'll need to manually install `nbdev`. There is not a `conda` package for `nbdev`, so you'll need to use `pip` to install it. And you'll need to do that from a terminal window. Jupyter notebook has a terminal window available, so we'll use that:

1.
Start `jupyter notebook` 2. From the "New" dropdown on the right side, choose `Terminal`. 3. Enter "`python -m pip install nbdev`" When the command completes, you're ready to go. ## Set up Repo ### Template To create your new project repo, click here: [nbdev template](https://github.com/fastai/nbdev_template/generate) (you need to be logged in to GitHub for this link to work). Fill in the requested info and click *Create repository from template*. **NB:** The name of your project will become the name of the Python package generated by nbdev. For that reason, it is a good idea to pick a short, all-lowercase name with _no dashes_ between words (underscores are allowed). Now, open your terminal, and clone the repo you just created. ### Github pages The nbdev system uses [jekyll](https://jekyllrb.com/) for documentation. Because [GitHub Pages supports Jekyll](https://help.github.com/en/github/working-with-github-pages/setting-up-a-github-pages-site-with-jekyll), you can host your site for free on [Github Pages](https://pages.github.com/) without any additional setup, so this is the approach we recommend (but it's not required; any jekyll hosting will work fine). To enable Github Pages in your project repo, click *Settings* in Github, then scroll down to *Github Pages*, and set "Source" to *Master branch /docs folder*. Once you've saved, if you scroll back down to that section, Github will have a link to your new website. Copy that URL, and then go back to your main repo page, click "edit" next to the description and paste the URL into the "website" section. While you're there, go ahead and put in your project description too. ## Edit settings.ini Next, edit the `settings.ini` file in your cloned repo. This file contains all the necessary information for when you'll be ready to package your library. 
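Since `settings.ini` is a plain INI file, you can sanity-check your edits programmatically before building anything. A minimal sketch using Python's built-in `configparser` (the values here are made up for illustration, not taken from a real project):

```python
import configparser

# Hypothetical settings.ini contents; nbdev keeps its keys in the [DEFAULT] section
SETTINGS = """
[DEFAULT]
lib_name = my_project
user = my_github_username
description = A description of my project
"""

config = configparser.ConfigParser()
config.read_string(SETTINGS)

# Pull out a value the same way any INI-aware tool would
lib_name = config["DEFAULT"]["lib_name"]
print(lib_name)
```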
The basic structure (that can be personalized provided you change the relevant information in `settings.ini`) is that the root of the repo will contain your notebooks, the `docs` folder will contain your auto-generated docs, and a folder with a name you select will contain your auto-generated modules. You'll see these commented out lines in `settings.ini`. Uncomment them, and set each value as needed. ``` # lib_name = your_project_name # user = your_github_username # description = A description of your project # keywords = some keywords # author = Your Name # author_email = email@example.com # copyright = Your Name or Company Name ``` We'll see some other settings we can change later. ## Install git hooks Jupyter Notebooks can cause challenges with git conflicts, but life becomes much easier when you use `nbdev`. As a first step, run `nbdev_install_git_hooks` in the terminal from your project folder. This will set up git hooks which will remove metadata from your notebooks when you commit, greatly reducing the chance you have a conflict. But if you do get a conflict later, simply run `nbdev_fix_merge filename.ipynb`. This will replace any conflicts in cell outputs with your version, and if there are conflicts in input cells, then both cells will be included in the merged file, along with standard conflict markers (e.g. `=====`). Then you can open the notebook in Jupyter and choose which version to keep. ## Edit 00_core.ipynb Now, run `jupyter notebook`, and click `00_core.ipynb` (you don't *have* to start your notebook names with a number like we do here; but we find it helpful to show the order you've created your project in). You'll see something that looks a bit like this: ```python # default_exp core ``` **module name here** > API details. ```python #hide from nbdev.showdoc import * ``` Let's explain what these special cells mean. 
### Add cell comments There are certain special comments that, when placed as the first line of the cell, provide important information to nbdev. The cell `#default_exp core` defines the name of the generated module (lib_name/core.py). For any cells that you want to be included in your python module, type `#export` as the first line of the cell. Each of those cells will be added to your module. ### Add a function Let's add a function to this notebook, e.g.: ```python #export def say_hello(to): "Say hello to somebody" return f'Hello {to}!' ``` Notice how it includes `#export` at the top - this means it will be included in our module and in the documentation. The documentation will look like this: ``` #export def say_hello(to): "Say hello to somebody" return f'Hello {to}!' ``` ### Add examples and tests It's a good idea to give an example of your function in action. Just include regular code cells, and they'll appear (with output) in the docs, e.g.: ``` say_hello("Sylvain") ``` Examples can output plots, images, etc, and they'll all appear in your docs, e.g.: ``` from IPython.display import display,SVG display(SVG('<svg height="100"><circle cx="50" cy="50" r="40"/></svg>')) ``` You can also include tests: ``` assert say_hello("Jeremy")=="Hello Jeremy!" ``` You should also add markdown headings as you create your notebook; one benefit of this is that a table of contents will be created in the documentation automatically. ## Build lib Now you can create your python module. To do so, just run `nbdev_build_lib` from the terminal, anywhere in your project folder. ``` $ nbdev_build_lib Converted 00_core.ipynb. Converted index.ipynb. ``` ## Edit index.ipynb Now you're ready to create your documentation home page and readme file; these are both generated automatically from *index.ipynb*. So click on that to open it now. You'll see that there's already a line there to import your library - change it to use the name you selected in `settings.ini`.
Then, add information about how to use your module, including some examples. Remember, these examples should be actual notebook code cells with real outputs. ## Build docs Now you can create your documentation. To do so, just run `nbdev_build_docs` from the terminal, anywhere in your project folder. ``` $ nbdev_build_docs converting: /home/jhoward/git/nbdev/nbs/00_core.ipynb converting: /home/jhoward/git/nbdev/nbs/index.ipynb ``` ## Commit to Github You can now `git commit` and `git push`. Wait a minute or two for Github to process your commit, and then head over to the Github website to look at your results. ### CI Back in your project's Github main page, click where it says *1 commit* (or *2 commits* or whatever). Hopefully, you'll see a green checkmark next to your latest commit. That means that your documentation site built correctly, and your module's tests all passed! This is checked for you using *continuous integration (CI)* with [GitHub actions](https://github.com/features/actions). This does the following: - Checks the notebooks are readable - Checks the notebooks have been cleaned of needless metadata to avoid merge conflicts - Checks there is no diff between the notebooks and the exported library - Runs the tests in your notebooks Edit the file `.github/workflows/main.yml` if you need to modify any of the CI steps. If you have a red cross, that means something failed. Click on the cross, then click *Details*, and you'll be able to see what failed. ### View docs and readme Once everything is passing, have a look at your readme in Github. You'll see that your `index.ipynb` file has been converted to a readme automatically. Next, go to your documentation site (e.g. by clicking on the link next to the description that you created earlier). You should see that your index notebook has also been used here. Congratulations, the basics are now all in place! Let's continue and use some more advanced functionality.
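To make the second CI check above concrete ("cleaned of needless metadata"), here is a rough sketch of the idea, not nbdev's actual cleaning code: volatile fields such as execution counts and cell outputs change on every run and cause most notebook merge conflicts, so they get stripped before committing.

```python
import json

def clean_cell(cell):
    # Drop the volatile fields that change on every run
    # (rough sketch of the idea, not nbdev's real implementation)
    cell = dict(cell)
    cell.pop("execution_count", None)
    cell.pop("outputs", None)
    cell["metadata"] = {}
    return cell

nb = {"cells": [{"cell_type": "code",
                 "source": "say_hello('Sylvain')",
                 "execution_count": 7,
                 "outputs": ["Hello Sylvain!"],
                 "metadata": {"scrolled": True}}]}
cleaned = {"cells": [clean_cell(c) for c in nb["cells"]]}
print(json.dumps(cleaned, indent=1))
```

After cleaning, two people re-running the same notebook produce byte-identical JSON, so git diffs only show real source changes.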
## Add a class Create a class in `00_core.ipynb` as follows: ```python #export class HelloSayer: "Say hello to `to` using `say_hello`" def __init__(self, to): self.to = to def say(self): say_hello(self.to) ``` This will automatically appear in the docs like this: ``` #export class HelloSayer: "Say hello to `to` using `say_hello`" def __init__(self, to): self.to = to def say(self): "Do the saying" say_hello(self.to) ``` ### Document with show_doc However, methods aren't automatically documented. To add method docs, use `show_doc`: ```python from nbdev.showdoc import * show_doc(HelloSayer.say) ``` ``` from nbdev.showdoc import * show_doc(HelloSayer.say) ``` And add some examples and/or tests: ``` o = HelloSayer("Alexis") o.say() ``` ## Add links with backticks Notice above there is a link from our new class documentation to our function. That's because we used backticks in the docstring: "Say hello to `to` using `say_hello`" These are automatically converted to hyperlinks wherever possible. For instance, here are hyperlinks to `HelloSayer` and `say_hello` created using backticks. ## Set up autoreload Since you'll often be updating your modules from one notebook, and using them in another, it's helpful if your notebook automatically reads in the new modules as soon as the python file changes. To make this happen, just add these lines to the top of your notebook: ``` %load_ext autoreload %autoreload 2 ``` ## Add in-notebook export cell It's helpful to be able to export all your modules directly from a notebook, rather than going to the terminal to do it. All nbdev commands are available directly from a notebook in Python. Add these lines to any cell and run it to export your modules (I normally make this the last cell of my notebooks). ```python from nbdev.export import notebook2script notebook2script() ``` ## Run tests in parallel Before you push to github or make a release, you might want to run all your tests. nbdev can run all your notebooks in parallel to check for errors.
Just run `nbdev_test_nbs` in a terminal. ``` (base) jhoward@usf3:~/git/nbdev$ nbdev_test_nbs testing: /home/jhoward/git/nbdev/nbs/00_core.ipynb testing: /home/jhoward/git/nbdev/nbs/index.ipynb All tests are passing! ``` ## View docs locally If you want to look at your docs locally before you push to Github, you can do so by running a jekyll server. First, install Jekyll by [following these steps](https://jekyllrb.com/docs/installation/ubuntu/). Then, install the modules needed for serving nbdev docs by `cd`ing to the `docs` directory, and typing `bundle install`. Finally, `cd` back to your repo root and type `make docs_serve`. This will launch a server on port 4000 (by default) which you can connect to with your browser to view your docs. If Github Pages fails to build your docs, running locally with Jekyll is the easiest way to find out what the problem is. ## Set up prerequisites If your module requires other modules as dependencies, you can add those prerequisites to your `settings.ini` in the `requirements` section. This should be in the same format as [install_requires in setuptools](https://packaging.python.org/discussions/install-requires-vs-requirements/#install-requires), with each requirement separated by a space. ## Set up console scripts Behind the scenes, nbdev uses the standard package `setuptools` for handling installation of modules. One very useful feature of `setuptools` is that it can automatically create [cross-platform console scripts](https://python-packaging.readthedocs.io/en/latest/command-line-scripts.html#the-console-scripts-entry-point). nbdev surfaces this functionality; to use it, use the same format as `setuptools`, with whitespace between each script definition (if you have more than one).
``` console_scripts = nbdev_build_lib=nbdev.cli:nbdev_build_lib ``` ## Test with editable install To test and use your modules in other projects, and use your console scripts (if you have any), the easiest approach is to use an [editable install](http://codumentary.blogspot.com/2014/11/python-tip-of-year-pip-install-editable.html). To do this, `cd` to the root of your repo in the terminal, and type: pip install -e . (Note that the trailing period is important.) Your module changes will be automatically picked up without reinstalling. If you add any additional console scripts, you will need to run this command again. ## Upload to pypi If you want people to be able to install your project by just typing `pip install your-project` then you need to upload it to [pypi](https://pypi.org/). The good news is, we've already created a fully pypi compliant installer for your project! So all you need to do is register at pypi (click "Register" on pypi) if you haven't previously done so, and then create a file called `~/.pypirc` with your login details. It should have these contents: ``` [pypi] username = your_pypi_username password = your_pypi_password ``` To upload your project to pypi, just type `make release` in your project root directory. Once it's complete, a link to your project on pypi will be printed. **NB**: `make release` will automatically increment the version number in `settings.py` before pushing a new release to pypi. If you don't want to do this, run `make pypi` instead. ## Install collapsible headings and toc2 There are two jupyter notebook extensions that I highly recommend when working with projects like this. They are: - [Collapsible headings](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/collapsible_headings/readme.html): This lets you fold and unfold each section in your notebook, based on its markdown headings. 
You can also hit <kbd>left</kbd> to go to the start of a section, and <kbd>right</kbd> to go to the end - [TOC2](https://jupyter-contrib-nbextensions.readthedocs.io/en/latest/nbextensions/toc2/README.html): This adds a table of contents to your notebooks, which you can navigate either with the `Navigate` menu item it adds to your notebooks, or the TOC sidebar it adds. These can be modified and/or hidden using its settings. ## Look at nbdev "source" for more ideas Don't forget that nbdev itself is written in nbdev! It's a good place to look to see how fast.ai uses it in practice, and get a few tips. You'll find the nbdev notebooks here in the [nbs folder](https://github.com/fastai/nbdev/tree/master/nbdev) on Github.
I've approached this as a regression problem and trained the model with the help of LSTMs. I've evaluated the model using 3 iterations of hold-out validation; details of the model architecture and of the evaluation are furnished below. ``` from google.colab import drive import pandas as pd import numpy as np drive.mount('/content/gdrive') #Please mount it to appropriate directory where all the files are put cd gdrive/My\ Drive/73Strings #Constructing dataframe using dictionary.txt and sentiment_labels.txt phrase_and_ids = pd.read_table('dictionary.txt',delimiter='|',names=['Phrase', 'PhraseID']) ids_and_labels = pd.read_table('sentiment_labels.txt',delimiter='|',names=['PhraseID', 'Labels'],header=0) df_all = phrase_and_ids.merge(ids_and_labels, how='inner', on='PhraseID') #Glimpse of the dataset df_all #Estimating the maximum length of phrases. length_phrases = [] for index, row in df_all.iterrows(): sentence = (row['Phrase']) sentence_words = sentence.split(' ') len_sentence_words = len(sentence_words) length_phrases.append(len_sentence_words) max_length_of_a_phrase = max(length_phrases) # Creating a dictionary that maps a word to its embedding(100 dimensional) #Used GloVe embeddings embeddings_index = dict() f = open('glove_6B_100d.txt') for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word.lower()] = coefs f.close() #Transforming list of phrases, such that from each phrase special characters(mentioned below in filter param) are removed, # and words are encoded, and finally the phrase is padded to max_length_of_a_phrase from keras.preprocessing.sequence import pad_sequences from keras.preprocessing.text import Tokenizer t = Tokenizer(filters = '!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n\'') t.fit_on_texts(df_all.iloc[:,0]) encoded_docs = t.texts_to_sequences(df_all.iloc[:,0]) max_length = max_length_of_a_phrase padded_docs = pad_sequences(encoded_docs, maxlen=max_length,
padding='post') vocab_size = len(t.word_index) + 1 #Constructing the embedding_matrix which maps the encoded words to its respective word embeddings embedding_matrix = np.zeros((vocab_size, 100)) for word, i in t.word_index.items(): embedding_vector = embeddings_index.get(word.lower()) if embedding_vector is not None: embedding_matrix[i] = embedding_vector #Performing 3 iterations of hold-out validation to assess the model from sklearn.model_selection import train_test_split X_train1, X_test1, y_train1, y_test1 = train_test_split( padded_docs, df_all.loc[:,'Labels'], test_size=0.20, random_state=42) X_train2, X_test2, y_train2, y_test2 = train_test_split( padded_docs, df_all.loc[:,'Labels'], test_size=0.20, random_state=43) X_train3, X_test3, y_train3, y_test3 = train_test_split(padded_docs, df_all.loc[:,'Labels'], test_size=0.20, random_state=44) #I've experimented with various architectures, the model architecture which is given below is the best one among them. from keras.models import Sequential from keras.layers import Dense from keras.layers import Flatten from keras.layers import Embedding from keras.layers import Bidirectional from keras.layers import LSTM from keras.layers import Dropout model = Sequential() e = Embedding(vocab_size, 100, weights=[embedding_matrix], input_length=max_length_of_a_phrase, trainable=False) model.add(e) #model.add(Flatten()) model.add(LSTM(512)) model.add(Dropout(0.30)) model.add(Dense(200,activation='relu')) model.add(Dropout(0.30)) model.add(Dense(100, activation='relu')) model.add(Dense(1,activation='relu')) model.compile(optimizer='adam', loss='mse', metrics=['mse','mae']) # summarize the model print(model.summary()) history = model.fit(X_train3, y_train3, epochs=20,validation_data=(X_test3,y_test3),batch_size=128) #I've already trained the model for all 3 iterations of hold-out validation, and have stored the model and the corresponding history files. 
# So that it can be loaded later import keras import pickle as pkl model1 = keras.models.load_model('model_new_1.0') model2 = keras.models.load_model('model_new_2.0') model3 = keras.models.load_model('model_new_3.0') with open('history_new_1.0.pkl','rb') as file1: hist_1 = pkl.load(file1) with open('history_new_2.0.pkl','rb') as file1: hist_2 = pkl.load(file1) with open('history_new_3.0.pkl','rb') as file1: hist_3 = pkl.load(file1) #We see the average performance of the model across 3 runs of hold-out validation mean_mae = (hist_1['val_mae'][-1]+hist_2 ['val_mae'][-1]+hist_3['val_mae'][-1])/3 mean_mse = (hist_1['val_mse'][-1]+hist_2['val_mse'][-1]+hist_3['val_mse'][-1])/3 print("The performance of the model architecture across three runs of hold out validation is : \n Validation_MSE: %f \t Validation_MAE: %f " %(mean_mse,mean_mae)) #We see the average performance of the model across 3 runs of hold-out validation mean_mae = (hist_1['mae'][-1]+hist_2 ['mae'][-1]+hist_3['mae'][-1])/3 mean_mse = (hist_1['mse'][-1]+hist_2['mse'][-1]+hist_3['mse'][-1])/3 print("The performance of the model architecture across three runs of hold out validation is : \n Train_MSE: %f \t Train_MAE: %f " %(mean_mse,mean_mae)) #Plotting the history file(MSE) of model1, for visualization purpose import matplotlib.pyplot as plt plt.plot(hist_1['mse']) plt.plot(hist_1['val_mse']) plt.title('model mse') plt.ylabel('mse') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right') plt.show() #Plotting the history file(MAE) of model1, for visualization purpose import matplotlib.pyplot as plt plt.plot(hist_1['mae']) plt.plot(hist_1['val_mae']) plt.title('model mae') plt.ylabel('mae') plt.xlabel('epoch') plt.legend(['train', 'test'], loc='upper right') plt.show() ```
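For reference, the encode-and-pad step that `Tokenizer` and `pad_sequences` perform above can be restated in plain Python. This is a simplified sketch: note that `pad_sequences` truncates over-long sequences from the front by default, whereas this version truncates from the back.

```python
def pad_post(seqs, maxlen, value=0):
    # Right-pad each integer sequence with `value` up to maxlen,
    # mimicking pad_sequences(..., padding='post'); over-long sequences
    # are cut at the end here (Keras cuts at the front by default)
    return [seq[:maxlen] + [value] * max(0, maxlen - len(seq)) for seq in seqs]

encoded = [[1, 2], [1, 2, 3, 4]]
padded = pad_post(encoded, maxlen=3)
print(padded)  # [[1, 2, 0], [1, 2, 3]]
```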
``` %%javascript IPython.OutputArea.prototype._should_scroll = function(lines) { return false; } import numpy as np import matplotlib.pyplot as plt import matplotlib.image as mpimg from matplotlib import rcParams %matplotlib inline TEXT_COLOUR = { 'PURPLE':'\033[95m', 'CYAN':'\033[96m', 'DARKCYAN':'\033[36m', 'BLUE':'\033[94m', 'GREEN':'\033[92m', 'YELLOW':'\033[93m', 'RED':'\033[91m', 'BOLD':'\033[1m', 'UNDERLINE':'\033[4m', 'END':'\033[0m' } def print_bold(*msgs): print(TEXT_COLOUR['BOLD']) print(*msgs) print(TEXT_COLOUR['END']) def print_green(*msgs): print(TEXT_COLOUR['GREEN']) print(*msgs) print(TEXT_COLOUR['END']) def print_error(*msgs): print(TEXT_COLOUR['RED']) print(*msgs) print(TEXT_COLOUR['END']) def wrap_green(msg): return TEXT_COLOUR['GREEN'] + msg + TEXT_COLOUR['END'] def wrap_red(msg): return TEXT_COLOUR['RED'] + msg + TEXT_COLOUR['END'] def up_down_str(val): msg = str(val) if val > 0: msg = wrap_green(msg) elif val < 0: msg = wrap_red(msg) return msg exp='bert-base' num_layers = 12 tasks = ["CoLA","SST-2","MRPC","STS-B","QQP","MNLI", "MNLI-MM", "QNLI", "RTE"] metrics = { "CoLA":["mcc"], "MNLI":["acc"], "MNLI-MM":["acc"], "MRPC":["f1"], "QNLI":["acc"], "QQP":["f1"], "RTE":["acc"], "SST-2":["acc"], "STS-B":["spearmanr"], "WNLI":["acc"] #temp } reported_in_paper = { "CoLA":0.00, "MNLI":0.00, "MNLI-MM":0.0, "MRPC":0.00, "QNLI":0.00, "QQP":0.00, "RTE":0.00, "SST-2":0.00, "STS-B":0.00, "WNLI":0.00 } def get_average_val(lines): reported = [] for line in lines: val = float(line.split()[1]) if val != 0: reported.append(val) out = 0 if len(reported) != 0: reported.sort(reverse = True) candidates = [reported[0]] for j in range(1, len(reported)): if reported[j] > 0.9 * reported[0]: candidates.append(reported[j]) out = np.mean(candidates) return out results = {} for task in tasks: task_results = {} task_metrics = metrics[task] for metric in task_metrics: # base metrics print(f"../../mt_dnn_exp_results/{exp}/{task}/base-{metric}.txt") 
f=open(f"../../mt_dnn_exp_results/{exp}/{task}/base-{metric}.txt", "r") lines = f.read().splitlines() task_results[f'base-{metric}'] = get_average_val(lines) # no layer metrics fine_tuning_metrics = [] f=open(f"../../mt_dnn_exp_results/{exp}/{task}/no_layer-{metric}.txt", "r") lines = f.read().splitlines() fine_tuning_metrics.append(get_average_val(lines)) # fine-tuned metrics log_file_prefix='' for i in reversed(range(int(num_layers/2), num_layers)): log_file_prefix += str(i) f=open(f"../../mt_dnn_exp_results/{exp}/{task}/{log_file_prefix}-{metric}.txt", "r") lines = f.read().splitlines() fine_tuning_metrics.append(get_average_val(lines)) log_file_prefix +='_' task_results[f'{metric}'] = list(reversed(fine_tuning_metrics)) results[task] = task_results x_axis = [] for i in range(int(num_layers/2), num_layers): x_axis.append(str(i)) x_axis.append("none") def draw_graph(task, y_label, paper, base, reported): plt.figure(figsize=(10,6)) plt.plot(x_axis, reported) plt.xlabel("layers") plt.ylabel(y_label) if paper == 0.0: gap = max(reported) - min(reported) top = max(max(reported), base) + (gap*0.2) bottom = min(min(reported), base) - (gap*0.2) plt.ylim(bottom, top) plt.axhline(y=base, linestyle='--', c='green') else: gap = max(reported) - min(reported) top = max(max(reported), base, paper) + (gap*0.2) bottom = min(min(reported), base, paper) - (gap*0.2) plt.ylim(bottom, top) plt.axhline(y=base, linestyle='--', c='green') plt.axhline(y=paper, linestyle='--', c='red') plt.title(f'{exp}-{task} ({round(base,4)})') plt.savefig(f'images/{exp}/{task}', format='png', bbox_inches='tight') plt.show() for task in tasks: task_results = results[task] task_metrics = metrics[task] for metric in task_metrics: reported = task_results[metric] base = task_results[f'base-{metric}'] print_bold(task, metric, ': b -', round(base * 100, 2), 'h -',round(task_results[metric][0] * 100, 2), 'n -', round(task_results[metric][-1] * 100, 2)) import copy layer_90 = [] layer_95 = [] threshold_90 = 0.9 
threshold_95 = 0.95 x_axis.reverse() for task in tasks: # print_bold(task) task_results = results[task] task_metrics = metrics[task] for metric in task_metrics: base = task_results[f'base-{metric}'] reported = copy.deepcopy(task_results[metric]) reported.reverse() flag_90 = True flag_95 = True for ind, val in enumerate(reported): if val/base > threshold_90 and flag_90: flag_90 = False layer_90.append(ind) results[task]['90%'] = ind if val/base > threshold_95 and flag_95: flag_95 = False layer_95.append(ind) results[task]['95%'] = ind if flag_90: print(task, "Fails to achieve 90% threshold", reported[-1]/base) layer_90.append(len(reported)-1) results[task]['90%'] = "-" if flag_95: print(task, "Fails to achieve 95% threshold", reported[-1]/base) layer_95.append(len(reported)-1) results[task]['95%'] = "-" print(x_axis) print(layer_90) min_layer_ind_90 = max(layer_90) print("layer_90 ", min_layer_ind_90, 'layer:', x_axis[min_layer_ind_90], round((1-(min_layer_ind_90/num_layers)) * 100, 2), '%') print(layer_95) min_layer_ind_95 = max(layer_95) print("layer_95 ", min_layer_ind_95, 'layer:', x_axis[min_layer_ind_95], round((1-(min_layer_ind_95/num_layers)) * 100, 2), '%') firsts = [] seconds = [] for task in tasks: task_results = results[task] task_metrics = metrics[task] for metric in task_metrics: base = task_results[f'base-{metric}'] reported = copy.deepcopy(task_results[metric]) reported.reverse() if task != "CoLA": first = round(100*reported[0]/base, 2) second = round(100*reported[1]/base, 2) firsts.append(first) seconds.append(second) print_bold(task, base) print('\t90', reported[min_layer_ind_90], round(reported[min_layer_ind_90]/base * 100, 2)) print('\t95', reported[min_layer_ind_95], round(reported[min_layer_ind_95]/base * 100, 2)) print_bold(len(firsts), np.mean(firsts), np.mean(seconds), round(np.mean(seconds) - np.mean(firsts),2)) for task in ["STS-B"]: task_results = results[task] task_metrics = metrics[task] for metric in task_metrics: 
print(task_results[metric][-1]) print(task_results[metric][-2]) latex_metrics = { "CoLA":"MCC", "MNLI":"Acc.", "MNLI-MM":"Acc.", "MRPC":"F$_1$", "QNLI":"Acc.", "QQP":"F$_1$", "RTE":"Acc.", "SST-2":"Acc.", "STS-B":"$\\rho$" } print("\\begin{center}\n\t\\scalebox{0.88}{\n\t\t\\begin{tabular}{rc|ccccccc} \n\t\t\\toprule[1pt] \n\t\t\\multirow{2}{*}{Task (metric)} & \\multirow{2}{*}{Baseline} & \\multicolumn{7}{c}{Fine-tuned layers} \\\\ \n\t\t\\cline{3-9} \n\t\t& & 6-11 & 7-11 & 8-11 & 9-11 & 10-11 & 11-11 & None \\\\ \n\t\t\t\\midrule") avg_performance = [] for task in tasks: m = metrics[task][0] base_key = f"base-{m}" if task == "MNLI-MM": row = f"\t\t\tMNLI-mm ({latex_metrics[task]}) & " else: row = f"\t\t\t{task} ({latex_metrics[task]}) & " row += "{:0.2f}".format(round(results[task][base_key] * 100, 2)) for ind, val in enumerate(results[task][m]): row += " & {:0.2f}".format(round(val * 100,2)) if len(avg_performance) == ind: avg_performance.append([]) percent = (val / results[task][base_key]) * 100 avg_performance[ind].append(percent) # row += "& {}".format(results[task]["90%"]) # row += "& {}".format(results[task]["95%"]) row += " \\\\" print(row) print("\t\t\t\\midrule\\midrule") row = "\t\t\tRel. perf. (\%) & 100.00" for perf in avg_performance: row += " & {:0.2f}".format(round(np.mean(perf) ,2)) row += " \\\\" print(row) print("\t\t\\end{tabular}\n\t}\n\t\\caption{MTDNN-BERT-base on GLUE}\n\t\\label{table:finetune-all}\n\\end{center}") ```
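The selection rule buried in `get_average_val` above (keep the best run, plus any run scoring above 90% of it, then average) can be restated in pure Python for clarity, same logic minus the numpy dependency:

```python
def average_top_runs(lines, rel_threshold=0.9):
    # Each line looks like "run_name score"; zero scores are treated as missing
    scores = sorted((float(line.split()[1]) for line in lines
                     if float(line.split()[1]) != 0), reverse=True)
    if not scores:
        return 0
    # Keep the best score plus anything within rel_threshold of it
    candidates = [scores[0]] + [s for s in scores[1:] if s > rel_threshold * scores[0]]
    return sum(candidates) / len(candidates)

# 0.20 falls below 90% of the best run (0.84) and is excluded; 0 is ignored
print(average_top_runs(["run1 0.84", "run2 0.80", "run3 0.20", "run4 0"]))
```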
#Author : Devesh Kumar ## Task 4 : Prediction using Decision Tree Algorithm ___ ## GRIP @ The Sparks Foundation ____ # Role : Data Science and Business Analytics [Batch May-2021] ## Table of Contents<br> > - 1. Introduction. - 2. Importing Libraries. - 3. Fetching and loading data. - 4. Checking for null values. - 5. Plotting Pairplot. - 6. Building Decision Tree Model. - 7. Training and fitting the model. - 8. Model Evaluation - 9. Graphical Visualisation. - 10. Conclusion. #**Introduction** * We are given the iris flower dataset, with features sepal length, sepal width, petal length and petal width. * Our aim is to create a decision tree classifier to classify the flowers into the categories Iris setosa, Iris versicolor, and Iris virginica. * Here, the Python language is used to build the classifier. * Dataset link: https://bit.ly/3kXTdox #**Importing Libraries** ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns import warnings warnings.filterwarnings('ignore') ``` #**Fetching and loading data** ``` iris = pd.read_csv("/content/sample_data/Iris - Iris.csv") #loading the dataset in iris variable iris.head() iris.tail() iris = iris.drop(['Id'], axis = 1) #dropping column 'Id' iris iris.shape ``` The iris dataset has 5 features and 150 datapoints. #**Checking for Null Values** ``` iris.info() ``` Here, we can see that no null values are present. ``` iris['Species'].value_counts() ``` From the above counts we can say that the iris dataset is balanced, as the number of datapoints for every class is the same.
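The balance check done with `value_counts()` above boils down to counting labels; a plain-Python sketch, with a hypothetical label list standing in for `iris['Species']`:

```python
from collections import Counter

# Hypothetical labels mirroring the balanced iris class counts
species = ["Iris-setosa"] * 50 + ["Iris-versicolor"] * 50 + ["Iris-virginica"] * 50
counts = Counter(species)
balanced = len(set(counts.values())) == 1  # every class has the same count
print(counts, balanced)
```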
#**Plotting Pairplot** ``` sns.set_style("whitegrid") sns.pairplot(iris, hue="Species", height=3) #height replaces seaborn's deprecated 'size' argument plt.show() ``` #**Splitting The Data** ``` X = iris.iloc[ : , : -1] y = iris.iloc[ : , -1 ] from sklearn.model_selection import train_test_split x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 0) ``` #**Decision Tree** #**Training and fitting the model** ``` from sklearn.tree import DecisionTreeClassifier tree_clf = DecisionTreeClassifier() tree_clf.fit(x_train, y_train) y_pred = tree_clf.predict(x_test) y_pred pd.DataFrame(y_pred, y_test) ``` #**Model Evaluation** ``` from sklearn.metrics import accuracy_score print(accuracy_score(y_test, y_pred)) from sklearn.metrics import classification_report print(classification_report(y_test, y_pred)) ``` #**Graphical Visualization** ``` # Import necessary libraries for graph viz from io import StringIO #sklearn.externals.six was removed in newer scikit-learn versions from IPython.display import Image from sklearn.tree import export_graphviz import pydotplus # Visualize the graph dot_data = StringIO() export_graphviz(tree_clf, out_file=dot_data, feature_names=iris.columns[:-1], class_names = ['Setosa', 'Versicolor', 'Virginica'], filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png()) ``` #**Conclusion** The Decision Tree Classifier is now built; you can feed new measurements to it and it will predict the corresponding class.
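As background on what the fitted tree is doing: by default, `DecisionTreeClassifier` picks the split that most reduces Gini impurity. A minimal sketch of that quantity (illustrative only, not scikit-learn's implementation):

```python
from collections import Counter

def gini(labels):
    # Gini impurity: 1 minus the sum of squared class proportions.
    # 0.0 for a pure node; larger for more evenly mixed nodes.
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

pure = ["setosa"] * 10
mixed = ["setosa"] * 5 + ["versicolor"] * 5
print(gini(pure), gini(mixed))  # 0.0 0.5
```

Each internal node of the plotted graph shows this value, which is why leaves that contain a single species report an impurity of 0.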
``` # !pip install pandas_datareader keras seaborn # !conda install -y -c conda-forge fbprophet # !pip install pydot graphviz import boto3 import base64 from botocore.exceptions import ClientError from IPython.display import display import pandas_datareader import pandas as pd import numpy as np from keras import Sequential from keras.layers import Dense, LSTM, InputLayer, Attention import seaborn as sns import matplotlib.pyplot as plt from keras.utils import plot_model from keras.callbacks import EarlyStopping tickers = ['AAPL'] metric = 'low' pc_metric = f'{metric}_percent_change' norm_metric = f'{pc_metric}_norm' lookback=100 def get_secret(): secret_name = "alpha_vantage" region_name = "us-east-2" # Create a Secrets Manager client session = boto3.session.Session() client = session.client( service_name='secretsmanager', region_name=region_name ) try: get_secret_value_response = client.get_secret_value(SecretId=secret_name) except ClientError as e: display(e) else: # Decrypts secret using the associated KMS CMK. # Depending on whether the secret is a string or binary, one of these fields will be populated. 
if 'SecretString' in get_secret_value_response: secret = get_secret_value_response['SecretString'] else: secret = base64.b64decode(get_secret_value_response['SecretBinary']) return secret def format_dates(daily_stocks_data): df = daily_stocks_data.copy() df['date']=df.index df.reset_index(inplace=True, drop=True) return df def add_percent_change(daily_stocks_data, metric): percents = list() for index, row in daily_stocks_data.iterrows(): old = row[metric] try: new = daily_stocks_data.iloc[index + 1][metric] except Exception as e: percents.append(np.nan) ## no next value, so this is undefined continue percents.append((new-old)/new) cp_df = daily_stocks_data.copy() cp_df[f'{metric}_percent_change']=percents return cp_df def add_norm(df, label): arr = np.array([x*1000 for x in df[label].to_numpy()]).reshape(-1, 1) # norm = normalize(arr, norm='l1') norm = arr new_df = df.copy() new_df[f'{label}_norm'] = norm return new_df def to_ts_df(daily_stocks_data, lookback, metric): ## column names columns = list() for i in range(lookback): columns.append(f'{metric}_{i}') columns.append(f'{metric}_target') df = pd.DataFrame(columns=columns) ## columns data = daily_stocks_data[metric].to_numpy() for index, col in enumerate(df.columns): df[col] = data[index:len(data)-lookback+index] ## dates index dates = daily_stocks_data.date.to_numpy()[:-lookback] df.insert(0, 'date', dates) return df def to_ts(ts_df): data = list() targets = list() for index, row in ts_df.iloc[:,1:].iterrows(): rnp = row.to_numpy() data.append([[x] for x in rnp[:-1]]) targets.append(rnp[-1]) data = np.array(data) targets = np.array(targets) return data, targets ALPHA_API_KEY = get_secret() daily_stocks_data_raw = pandas_datareader.av.time_series.AVTimeSeriesReader(symbols=tickers, api_key=ALPHA_API_KEY, function='TIME_SERIES_DAILY').read() daily_stocks_data = format_dates(daily_stocks_data_raw) daily_stocks_data = add_percent_change(daily_stocks_data, metric) 
daily_stocks_data.loc[daily_stocks_data[pc_metric].isnull(), pc_metric] = 0  # fill only the missing percent-change values; indexing the frame directly would zero out entire rows
daily_stocks_data = add_norm(daily_stocks_data, pc_metric)

ts_df = to_ts_df(daily_stocks_data, lookback, pc_metric)
data, targets = to_ts(ts_df)

display(daily_stocks_data)
display(ts_df)

## currently testing to set up mlflow and training jobs.
def deep_lstm():
    model = Sequential()
    model.add(InputLayer(input_shape=(None, 1)))
    # model.add(LSTM(12, return_sequences=True))
    # model.add(LSTM(12, return_sequences=True))
    # model.add(LSTM(6, return_sequences=True))
    # model.add(LSTM(6, return_sequences=True))
    # model.add(LSTM(2, return_sequences=True))
    # model.add(LSTM(1))
    model.add(Dense(1))
    model.compile(loss='mae', metrics=['mse', 'mape'])
    return model

model = deep_lstm()
model.summary()
# plot_model(model)

early = EarlyStopping(patience=2, restore_best_weights=True)
model.fit(x=data, y=targets, batch_size=36, validation_split=0.2, epochs=1, callbacks=[early])
```
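For reference, the sliding-window framing that `to_ts_df` and `to_ts` perform above can be sketched with plain numpy. `make_windows` below is a hypothetical stand-in, not part of this notebook; it turns a 1-D series into `(samples, lookback, 1)` inputs with next-step targets, which is the shape the LSTM expects:

```python
import numpy as np

def make_windows(series, lookback):
    """Frame a 1-D series as (samples, lookback, 1) inputs and next-step targets."""
    data, targets = [], []
    for start in range(len(series) - lookback):
        window = series[start:start + lookback]
        data.append(window.reshape(-1, 1))     # one feature per timestep
        targets.append(series[start + lookback])
    return np.array(data), np.array(targets)

series = np.arange(10, dtype=float)
X, y = make_windows(series, lookback=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
print(y[0])              # 3.0 -- the value right after the first window [0, 1, 2]
```

Each window overlaps the next by `lookback - 1` steps, so a series of length `n` yields `n - lookback` training samples.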
# 04 - Full waveform inversion with Devito and scipy.optimize.minimize

## Introduction

In this tutorial we show how [Devito](http://www.opesci.org/devito-public) can be used with [scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) to solve the FWI gradient-based minimization problem described in the previous tutorial.

```python
scipy.optimize.minimize(fun, x0, args=(), method=None, jac=None, hess=None, hessp=None, bounds=None, constraints=(), tol=None, callback=None, options=None)
```

> Minimization of scalar function of one or more variables.
>
> In general, the optimization problems are of the form:
>
> minimize f(x) subject to
>
> g_i(x) >= 0, i = 1,...,m
> h_j(x) = 0, j = 1,...,p
>
> where x is a vector of one or more variables. g_i(x) are the inequality constraints. h_j(x) are the equality constraints.

[scipy.optimize.minimize](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html) provides a wide variety of methods for solving minimization problems depending on the context. Here we are going to focus on using L-BFGS via [scipy.optimize.minimize(method='L-BFGS-B')](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html#optimize-minimize-lbfgsb)

```python
scipy.optimize.minimize(fun, x0, args=(), method='L-BFGS-B', jac=None, bounds=None, tol=None, callback=None, options={'disp': None, 'maxls': 20, 'iprint': -1, 'gtol': 1e-05, 'eps': 1e-08, 'maxiter': 15000, 'ftol': 2.220446049250313e-09, 'maxcor': 10, 'maxfun': 15000})
```

The argument `fun` is a callable function that returns the misfit between the simulated and the observed data. If `jac` is a Boolean and is `True`, `fun` is assumed to return the gradient along with the objective function - as is our case when applying the adjoint-state method.

## Setting up (synthetic) data

We are going to set up the same synthetic test case as for the previous tutorial (refer back for details).
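Before the setup code, a quick aside: the `jac=True` contract just described can be demonstrated on a toy quadratic (this snippet is an illustration, not part of the tutorial). The callable returns the objective and its gradient as a pair, which is exactly the shape of the FWI gradient function built below:

```python
import numpy as np
from scipy import optimize

def fun_and_grad(x):
    """Quadratic misfit 0.5*||x - a||^2 and its gradient, returned together."""
    a = np.array([1.0, -2.0])
    r = x - a
    return 0.5 * r.dot(r), r  # (objective, gradient)

result = optimize.minimize(fun_and_grad, x0=np.zeros(2),
                           method='L-BFGS-B', jac=True)
print(result.x)  # approximately [ 1. -2.]
```

Because `jac=True`, SciPy never calls a separate gradient function and never falls back to finite differences; every objective evaluation also yields the gradient, which is what makes the adjoint-state method efficient here.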
The code below is slightly re-engineered to make it suitable for using with scipy.optimize.minimize. ``` #NBVAL_IGNORE_OUTPUT from examples.seismic import Model, demo_model import numpy as np # Define the grid parameters def get_grid(): shape = (101, 101) # Number of grid point (nx, nz) spacing = (10., 10.) # Grid spacing in m. The domain size is now 1km by 1km origin = (0., 0.) # Need origin to define relative source and receiver locations return shape, spacing, origin # Define the test phantom; in this case we are using a simple circle # so we can easily see what is going on. def get_true_model(): shape, spacing, origin = get_grid() return demo_model('circle-isotropic', vp=3.0, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) # The initial guess for the subsurface model. def get_initial_model(): shape, spacing, origin = get_grid() return demo_model('circle-isotropic', vp=2.5, vp_background=2.5, origin=origin, shape=shape, spacing=spacing, nbpml=40) from examples.seismic.acoustic import AcousticWaveSolver from examples.seismic import RickerSource, Receiver # Inversion crime alert! Here the worker is creating the 'observed' data # using the real model. For a real case the worker would be reading # seismic data from disk. def get_data(param): """ Returns source and receiver data for a single shot labeled 'shot_id'. """ true_model = get_true_model() dt = true_model.critical_dt # Time step from model grid spacing # Set up source data and geometry. nt = int(1 + (param['tn']-param['t0']) / dt) # Discrete time axis length src = RickerSource(name='src', grid=true_model.grid, f0=param['f0'], time=np.linspace(param['t0'], param['tn'], nt)) src.coordinates.data[0, :] = [30, param['shot_id']*1000./(param['nshots']-1)] # Set up receiver data and geometry. 
nreceivers = 101 # Number of receiver locations per shot rec = Receiver(name='rec', grid=true_model.grid, npoint=nreceivers, ntime=nt) rec.coordinates.data[:, 1] = np.linspace(0, true_model.domain_size[0], num=nreceivers) rec.coordinates.data[:, 0] = 980. # 20m from the right end # Set up solver - using model_in so that we have the same dt, # otherwise we should use pandas to resample the time series data. solver = AcousticWaveSolver(true_model, src, rec, space_order=4) # Generate synthetic receiver data from true model true_d, _, _ = solver.forward(src=src, m=true_model.m) return src, true_d, nt, solver ``` ## Create operators for gradient based inversion To perform the inversion we are going to use [scipy.optimize.minimize(method=’L-BFGS-B’)](https://docs.scipy.org/doc/scipy/reference/optimize.minimize-lbfgsb.html#optimize-minimize-lbfgsb). First we define the functional, ```f```, and gradient, ```g```, operator (i.e. the function ```fun```) for a single shot of data. ``` from devito import Function, clear_cache # Create FWI gradient kernel for a single shot def fwi_gradient_i(x, param): # Need to clear the workers cache. clear_cache() # Get the current model and the shot data for this worker. model0 = get_initial_model() model0.m.data[:] = x.astype(np.float32).reshape(model0.m.data.shape) src, rec, nt, solver = get_data(param) # Create symbols to hold the gradient and the misfit between # the 'measured' and simulated data. grad = Function(name="grad", grid=model0.grid) residual = Receiver(name='rec', grid=model0.grid, ntime=nt, coordinates=rec.coordinates.data) # Compute simulated data and full forward wavefield u0 d, u0, _ = solver.forward(src=src, m=model0.m, save=True) # Compute the data misfit (residual) and objective function residual.data[:] = d.data[:] - rec.data[:] f = .5*np.linalg.norm(residual.data.flatten())**2 # Compute gradient using the adjoint-state method. Note, this # backpropagates the data misfit through the model. 
solver.gradient(rec=residual, u=u0, m=model0.m, grad=grad) # return the objective functional and gradient. return f, np.array(grad.data) ``` Next we define the global functional and gradient function that sums the contributions to f and g for each shot of data. ``` def fwi_gradient(x, param): # Initialize f and g. param['shot_id'] = 0 f, g = fwi_gradient_i(x, param) # Loop through all shots summing f, g. for i in range(1, param['nshots']): param['shot_id'] = i f_i, g_i = fwi_gradient_i(x, param) f += f_i g[:] += g_i # Note the explicit cast; while the forward/adjoint solver only requires float32, # L-BFGS-B in SciPy expects a flat array in 64-bit floats. return f, g.flatten().astype(np.float64) ``` ## FWI with L-BFGS-B Equipped with a function to calculate the functional and gradient, we are finally ready to call ```scipy.optimize.minimize```. ``` #NBVAL_SKIP # Change to the WARNING log level to reduce log output # as compared to the default DEBUG from devito import configuration configuration['log_level'] = 'WARNING' # Set up a dictionary of inversion parameters. param = {'t0': 0., 'tn': 1000., # Simulation lasts 1 second (1000 ms) 'f0': 0.010, # Source peak frequency is 10Hz (0.010 kHz) 'nshots': 9} # Number of shots to create gradient from # Define bounding box constraints on the solution. def apply_box_constraint(m): # Maximum possible 'realistic' velocity is 3.5 km/sec # Minimum possible 'realistic' velocity is 2 km/sec return np.clip(m, 1/3.5**2, 1/2**2) # Many optimization methods in scipy.optimize.minimize accept a callback # function that can operate on the solution after every iteration. Here # we use this to apply box constraints and to monitor the true relative # solution error. 
relative_error = [] def fwi_callbacks(x): # Apply boundary constraint x.data[:] = apply_box_constraint(x) # Calculate true relative error true_x = get_true_model().m.data.flatten() relative_error.append(np.linalg.norm((x-true_x)/true_x)) # Initialize solution model0 = get_initial_model() # Finally, calling the minimizing function. We are limiting the maximum number # of iterations here to 10 so that it runs quickly for the purpose of the # tutorial. from scipy import optimize result = optimize.minimize(fwi_gradient, model0.m.data.flatten().astype(np.float64), args=(param, ), method='L-BFGS-B', jac=True, callback=fwi_callbacks, options={'maxiter':10, 'disp':True}) # Print out results of optimizer. print(result) #NBVAL_SKIP # Show what the update does to the model from examples.seismic import plot_image, plot_velocity model0.m.data[:] = result.x.astype(np.float32).reshape(model0.m.data.shape) model0.vp = np.sqrt(1. / model0.m.data[40:-40, 40:-40]) plot_velocity(model0) #NBVAL_SKIP # Plot percentage error plot_image(100*np.abs(model0.vp-get_true_model().vp.data)/get_true_model().vp.data, cmap="hot") ``` While we are resolving the circle at the centre of the domain there are also lots of artifacts throughout the domain. ``` #NBVAL_SKIP import matplotlib.pyplot as plt # Plot objective function decrease plt.figure() plt.loglog(relative_error) plt.xlabel('Iteration number') plt.ylabel('True relative error') plt.title('Convergence') plt.show() ``` <sup>This notebook is part of the tutorial "Optimised Symbolic Finite Difference Computation with Devito" presented at the Intel® HPC Developer Conference 2017.</sup>
``` import numpy as np import collections import random import tensorflow as tf def build_dataset(words, n_words): count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]] count.extend(collections.Counter(words).most_common(n_words - 1)) dictionary = dict() for word, _ in count: dictionary[word] = len(dictionary) data = list() unk_count = 0 for word in words: index = dictionary.get(word, 0) if index == 0: unk_count += 1 data.append(index) count[0][1] = unk_count reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys())) return data, count, dictionary, reversed_dictionary def str_idx(corpus, dic, maxlen, UNK=3): X = np.zeros((len(corpus),maxlen)) for i in range(len(corpus)): for no, k in enumerate(corpus[i][:maxlen][::-1]): val = dic[k] if k in dic else UNK X[i,-1 - no]= val return X def load_data(filepath): x1=[] x2=[] y=[] for line in open(filepath): l=line.strip().split("\t") if len(l)<2: continue if random.random() > 0.5: x1.append(l[0].lower()) x2.append(l[1].lower()) else: x1.append(l[1].lower()) x2.append(l[0].lower()) y.append(int(l[2])) return np.array(x1),np.array(x2),np.array(y) X1_text, X2_text, Y = load_data('train_snli.txt') concat = (' '.join(X1_text.tolist() + X2_text.tolist())).split() vocabulary_size = len(list(set(concat))) data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size) print('vocab from size: %d'%(vocabulary_size)) print('Most common words', count[4:10]) print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]]) def _pairwise_distances(embeddings_left, embeddings_right, squared=False): dot_product = tf.matmul(embeddings_left, tf.transpose(embeddings_right)) square_norm = tf.diag_part(dot_product) distances = tf.expand_dims(square_norm, 1) - 2.0 * dot_product + tf.expand_dims(square_norm, 0) distances = tf.maximum(distances, 0.0) if not squared: mask = tf.to_float(tf.equal(distances, 0.0)) distances = distances + mask * 1e-16 distances = tf.sqrt(distances) distances = distances * (1.0 - 
mask) return distances def _get_anchor_positive_triplet_mask(labels): indices_equal = tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool) indices_not_equal = tf.logical_not(indices_equal) labels_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1)) mask = tf.logical_and(indices_not_equal, labels_equal) return mask def _get_anchor_negative_triplet_mask(labels): labels_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1)) mask = tf.logical_not(labels_equal) return mask def _get_triplet_mask(labels): indices_equal = tf.cast(tf.eye(tf.shape(labels)[0]), tf.bool) indices_not_equal = tf.logical_not(indices_equal) i_not_equal_j = tf.expand_dims(indices_not_equal, 2) i_not_equal_k = tf.expand_dims(indices_not_equal, 1) j_not_equal_k = tf.expand_dims(indices_not_equal, 0) distinct_indices = tf.logical_and(tf.logical_and(i_not_equal_j, i_not_equal_k), j_not_equal_k) label_equal = tf.equal(tf.expand_dims(labels, 0), tf.expand_dims(labels, 1)) i_equal_j = tf.expand_dims(label_equal, 2) i_equal_k = tf.expand_dims(label_equal, 1) valid_labels = tf.logical_and(i_equal_j, tf.logical_not(i_equal_k)) mask = tf.logical_and(distinct_indices, valid_labels) return mask def batch_all_triplet_loss(labels, embeddings_left, embeddings_right, margin, squared=False): pairwise_dist = _pairwise_distances(embeddings_left, embeddings_right, squared=squared) anchor_positive_dist = tf.expand_dims(pairwise_dist, 2) assert anchor_positive_dist.shape[2] == 1, "{}".format(anchor_positive_dist.shape) anchor_negative_dist = tf.expand_dims(pairwise_dist, 1) assert anchor_negative_dist.shape[1] == 1, "{}".format(anchor_negative_dist.shape) triplet_loss = anchor_positive_dist - anchor_negative_dist + margin mask = _get_triplet_mask(labels) mask = tf.to_float(mask) triplet_loss = tf.multiply(mask, triplet_loss) triplet_loss = tf.maximum(triplet_loss, 0.0) valid_triplets = tf.to_float(tf.greater(triplet_loss, 1e-16)) num_positive_triplets = tf.reduce_sum(valid_triplets) 
num_valid_triplets = tf.reduce_sum(mask) fraction_positive_triplets = num_positive_triplets / (num_valid_triplets + 1e-16) triplet_loss = tf.reduce_sum(triplet_loss) / (num_positive_triplets + 1e-16) return triplet_loss, fraction_positive_triplets class Model: def __init__(self, size_layer, num_layers, embedded_size, dict_size, learning_rate, dimension_output): def cells(reuse=False): return tf.nn.rnn_cell.LSTMCell(size_layer, initializer=tf.orthogonal_initializer(),reuse=reuse) def rnn(inputs, reuse=False): with tf.variable_scope('model', reuse = reuse): rnn_cells = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]) outputs, _ = tf.nn.dynamic_rnn(rnn_cells, inputs, dtype = tf.float32) return tf.layers.dense(outputs[:,-1], dimension_output) self.X_left = tf.placeholder(tf.int32, [None, None]) self.X_right = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.float32, [None]) self.batch_size = tf.shape(self.X_left)[0] encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1)) embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X_left) embedded_right = tf.nn.embedding_lookup(encoder_embeddings, self.X_right) self.output_left = rnn(embedded_left, False) self.output_right = rnn(embedded_right, True) self.cost, fraction = batch_all_triplet_loss(self.Y, self.output_left, self.output_right, margin=0.5, squared=False) self.distance = tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(self.output_left,self.output_right)),1,keep_dims=True)) self.distance = tf.div(self.distance, tf.add(tf.sqrt(tf.reduce_sum(tf.square(self.output_left),1,keep_dims=True)), tf.sqrt(tf.reduce_sum(tf.square(self.output_right),1,keep_dims=True)))) self.distance = tf.reshape(self.distance, [-1]) self.temp_sim = tf.subtract(tf.ones_like(self.distance), tf.rint(self.distance)) correct_predictions = tf.equal(self.temp_sim, self.Y) self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, "float")) self.optimizer = 
tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost) size_layer = 256 num_layers = 2 embedded_size = 128 learning_rate = 1e-3 dimension_output = 300 maxlen = 50 batch_size = 128 tf.reset_default_graph() sess = tf.InteractiveSession() model = Model(size_layer,num_layers,embedded_size,len(dictionary), learning_rate,dimension_output) sess.run(tf.global_variables_initializer()) from sklearn.model_selection import train_test_split vectors_left = str_idx(X1_text, dictionary, maxlen) vectors_right = str_idx(X2_text, dictionary, maxlen) train_X_left, test_X_left, train_X_right, test_X_right, train_Y, test_Y = train_test_split(vectors_left, vectors_right, Y, test_size = 0.2) from tqdm import tqdm import time for EPOCH in range(5): lasttime = time.time() train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0 pbar = tqdm(range(0, len(train_X_left), batch_size), desc='train minibatch loop') for i in pbar: batch_x_left = train_X_left[i:min(i+batch_size,train_X_left.shape[0])] batch_x_right = train_X_right[i:min(i+batch_size,train_X_left.shape[0])] batch_y = train_Y[i:min(i+batch_size,train_X_left.shape[0])] acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer], feed_dict = {model.X_left : batch_x_left, model.X_right: batch_x_right, model.Y : batch_y}) assert not np.isnan(loss) train_loss += loss train_acc += acc pbar.set_postfix(cost = loss, accuracy = acc) pbar = tqdm(range(0, len(test_X_left), batch_size), desc='test minibatch loop') for i in pbar: batch_x_left = test_X_left[i:min(i+batch_size,test_X_left.shape[0])] batch_x_right = test_X_right[i:min(i+batch_size,test_X_left.shape[0])] batch_y = test_Y[i:min(i+batch_size,test_X_left.shape[0])] acc, loss = sess.run([model.accuracy, model.cost], feed_dict = {model.X_left : batch_x_left, model.X_right: batch_x_right, model.Y : batch_y}) test_loss += loss test_acc += acc pbar.set_postfix(cost = loss, accuracy = acc) train_loss /= (len(train_X_left) / batch_size) train_acc /=
(len(train_X_left) / batch_size) test_loss /= (len(test_X_left) / batch_size) test_acc /= (len(test_X_left) / batch_size) print('time taken:', time.time()-lasttime) print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss, train_acc,test_loss, test_acc)) left = str_idx(['a person is outdoors, on a horse.'], dictionary, maxlen) right = str_idx(['a person on a horse jumps over a broken down airplane.'], dictionary, maxlen) sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left, model.X_right: right}) left = str_idx(['i love you'], dictionary, maxlen) right = str_idx(['you love i'], dictionary, maxlen) sess.run([model.temp_sim,1-model.distance], feed_dict = {model.X_left : left, model.X_right: right}) ```
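The distance expansion used by `_pairwise_distances` above is the standard identity ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2. A plain-numpy sketch of the same computation (for a single embedding matrix, the case where the diagonal of the Gram matrix really does hold the squared norms):

```python
import numpy as np

emb = np.array([[0.0, 0.0], [3.0, 4.0], [1.0, 1.0]])

gram = emb @ emb.T                       # dot products a_i . a_j
sq_norms = np.diag(gram)                 # ||a_i||^2 on the diagonal
d2 = sq_norms[:, None] - 2.0 * gram + sq_norms[None, :]
d2 = np.maximum(d2, 0.0)                 # clamp tiny negatives from round-off

# Brute-force check against the definition
brute = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
print(np.allclose(d2, brute))  # True
print(d2[0, 1])                # 25.0 -- distance 5 between (0,0) and (3,4), squared
```

The round-off clamp matters for the TF version too: without it, taking `sqrt` of a slightly negative distance would produce NaN gradients, which is why the original code also adds a small epsilon before the square root.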
# End-to-End Machine Learning Project

In this chapter you will work through an example project end to end, pretending to be a recently hired data scientist at a real estate company. Here are the main steps you will go through:

1. Look at the big picture.
2. Get the data.
3. Discover and visualize the data to gain insights.
4. Prepare the data for Machine Learning algorithms.
5. Select a model and train it.
6. Fine-tune your model.
7. Present your solution.
8. Launch, monitor, and maintain your system.

## Working with Real Data

When you are learning about Machine Learning, it is best to experiment with real-world data, not artificial datasets. Fortunately, there are thousands of open datasets to choose from, ranging across all sorts of domains. Here are a few places you can look to get data:

* Popular open data repositories:
    - [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/)
    - [Kaggle](https://www.kaggle.com/datasets) datasets
    - Amazon's [AWS](https://registry.opendata.aws/) datasets
* Meta portals:
    - [Data Portals](http://dataportals.org/)
    - [OpenDataMonitor](http://opendatamonitor.eu/)
    - [Quandl](http://quandl.com)

## Frame the Problem

Your model's output (a prediction of a district's median housing price) will be fed to another ML system along with many other signals. This downstream system will determine whether it is worth investing in a given area or not. Getting this right is critical, as it directly affects revenue.

```
Other Signals |
Upstream Components --> (District Data) --> [District Pricing Prediction Model] (your component) --> (District Prices) --> [Investment Analysis] --> Investments
```

### Pipelines

A sequence of data processing components is called a **data pipeline**. Pipelines are very common in Machine Learning systems, since a lot of data needs to be manipulated to make sure the model or algorithm can work with it, as algorithms understand only numbers.
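The pipeline idea can be sketched as simple function composition: each component transforms the data and hands the result to the next. This toy pipeline is an illustration of the concept, not code from this chapter:

```python
def drop_missing(rows):
    """Component 1: remove rows with missing values."""
    return [r for r in rows if None not in r]

def scale(rows):
    """Component 2: rescale every value (here, just divide by 100)."""
    return [[x / 100 for x in r] for r in rows]

def run_pipeline(rows, steps):
    for step in steps:  # each component's output feeds the next component
        rows = step(rows)
    return rows

raw = [[100, 50], [None, 30], [200, 150]]
print(run_pipeline(raw, [drop_missing, scale]))  # [[1.0, 0.5], [2.0, 1.5]]
```

Later in the chapter, Scikit-Learn's `Pipeline` class plays exactly this role, with `fit`/`transform` replacing plain function calls.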
## Download the Data

You could use your web browser to download the data, but it is preferable to write a function to do the same.

```
import os
import tarfile
import urllib.request

DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
HOUSING_PATH = os.path.join("datasets", "housing")
HOUSING_URL = DOWNLOAD_ROOT + "datasets/housing/housing.tgz"

def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
    """Function to download the housing data."""
    os.makedirs(housing_path, exist_ok=True)
    tgz_path = os.path.join(housing_path, "housing.tgz")
    urllib.request.urlretrieve(housing_url, tgz_path)
    housing_tgz = tarfile.open(tgz_path)
    housing_tgz.extractall(path=housing_path)
    housing_tgz.close()

import pandas as pd
import numpy as np

def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)

fetch_housing_data()
housing = load_housing_data()
```

## Take a quick look at the Data Structure

Each row represents one district. There are 10 attributes:

```
longitude, latitude, housing_median_age, total_rooms, total_bedrooms, population, households, median_income, median_house_value, ocean_proximity
```

The `info()` method is useful to get a quick description of the data.

```
housing.head()

housing.info()

housing["ocean_proximity"].value_counts()

housing.describe()

%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20, 15))
plt.show();
```

> 🔑 **Note:** The `hist()` method relies on Matplotlib, which in turn relies on a user-specified graphical backend to draw on your screen. The simplest option is to use Jupyter's magic command `%matplotlib inline`. This tells Jupyter to set up Matplotlib so that it uses Jupyter's own backend. Note that calling `plot()` is optional as Jupyter does this automatically.

#### There are a few things you might notice in these histograms:

1.
First, the median income attribute does not look like it is expressed in US dollars (USD). The data has been scaled and capped at 15 for higher median incomes and at 0.5 for lower median incomes. The numbers represent roughly tens of thousands of dollars (e.g., 3 actually means about $30,000). Working with preprocessed attributes is common in Machine Learning, and it is not necessarily a problem, but you should try to understand how the data was computed.
2. The housing median age and the median house value were also capped.
3. These attributes have very different scales.
4. Many histograms of this dataset are *tail-heavy*, i.e., they extend much farther to the right of the median than to the left. This may make it a bit harder for Machine Learning algorithms to detect patterns. We will try transforming these attributes later on to have more bell-shaped distributions.

> ‼️ **Note:** Wait! Before you look at the data any further, you need to create a test set, put it aside, and never look at it.

## Create a Test Set

Scikit-Learn provides a few functions to split datasets into multiple subsets in various ways:

1. The `train_test_split()` function is the simplest and most widely used function for this purpose.
2. For stratified sampling, `StratifiedShuffleSplit()` is useful.
3. And probably many more functions...
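Conceptually, stratified sampling just draws the same fraction from each stratum, so the split preserves the category proportions. A plain-numpy sketch of the idea (an illustration, not Scikit-Learn's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
labels = np.array([0] * 80 + [1] * 20)   # imbalanced categories: 80% / 20%

# Sample 20% from each stratum so the test set keeps the 80/20 ratio
test_idx = []
for cat in np.unique(labels):
    idx = np.where(labels == cat)[0]
    rng.shuffle(idx)
    test_idx.extend(idx[: int(0.2 * len(idx))])

test_labels = labels[test_idx]
print(len(test_idx))                 # 20
print((test_labels == 1).mean())     # 0.2 -- same class ratio as the full set
```

A purely random 20% split could easily end up with, say, 15% or 25% of the minority class; stratifying removes that sampling bias, which matters when a category (like the income bins below) strongly influences the target.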
```
from sklearn.model_selection import train_test_split

train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
train_set.shape, test_set.shape

from sklearn.model_selection import StratifiedShuffleSplit

housing["income_cat"] = pd.cut(housing["median_income"],
                               bins=[0., 1.5, 3.0, 4.5, 6., np.inf],
                               labels=[1, 2, 3, 4, 5])

split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_i, test_i in split.split(housing, housing["income_cat"]):
    strat_train_set = housing.loc[train_i]
    strat_test_set = housing.loc[test_i]

strat_train_set.shape

# Now remove the income_cat attribute so the data is back to its original state
for _ in (strat_train_set, strat_test_set):
    _.drop("income_cat", axis=1, inplace=True)
```

## Discover and Visualize the Data to Gain More Insights

So far you have only taken a quick glance at the data to get a general understanding of the kind of data you are manipulating. Now the goal is to go into a little more depth.

First, make sure you have put the test set aside and you are only exploring the training set. In our case the set is quite small, so you can work directly on the full set. Let's create a copy so that you can play with it without harming the training set:

```
housing = strat_train_set.copy()
```

### Visualizing Geographical Data

Since there is geographical information (latitude and longitude), it is a good idea to create a scatterplot of all districts to visualize the data.

```
housing.plot(kind="scatter", x="longitude", y="latitude");

# Setting the alpha option to 0.1 makes it easier to visualize the places where there is a high density of data points.
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.1);
```

Now from the above graph, we can clearly see the high-density areas. Our brains are very good at spotting patterns in pictures, but you may need to play around with visualization parameters to make the patterns stand out.

Now let's look at the housing prices.
The radius of each circle represents the district's population (option `s`), and the color represents the price (option `c`). We will use a predefined color map (option `cmap`) called `jet`, which ranges from blue (low prices) to red (high prices):

```
housing.plot(kind="scatter", x="longitude", y="latitude", alpha=0.4,
             s=housing["population"]/100, label="population", figsize=(10, 7),
             c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True)
plt.legend();
```

### Looking for Correlations

Since the dataset is not too large, you can easily compute the *standard correlation coefficient* (also known as *Pearson's r*) between every pair of attributes using the `corr()` method:

```
corr_matrix = housing.corr()

# Now let's look at how much each attribute correlates with the median house value
corr_matrix["median_house_value"].sort_values(ascending=False)
```

#### The Standard Correlation Coefficient

The correlation coefficient ranges from -1 to 1. When it is close to 1, it means that there is a strong positive correlation. When it is close to -1, it means there is a strong negative correlation. Finally, coefficients close to 0 mean that there is no linear correlation.

<img src="Fig..png" alt="Standard correlation coefficients of various datasets"/>

> 🔑 **Note:** The correlation coefficient only measures linear correlations ("if x goes up, then y generally goes up/down"). It may completely miss out on nonlinear relationships (e.g., "if x is close to 0, then y generally goes up"). Note how all the plots of the bottom row have a correlation coefficient equal to 0, despite the fact that their axes are clearly not independent: these examples are nonlinearly correlated.
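To see exactly what `corr()` measures, Pearson's r can be spelled out by hand; the helper `pearson_r` below is for illustration only, and the last line checks it against `np.corrcoef`:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + 1                      # perfectly linear, so r should be exactly 1

def pearson_r(a, b):
    """Covariance of a and b divided by the product of their standard deviations."""
    a, b = a - a.mean(), b - b.mean()
    return (a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum())

print(pearson_r(x, y))             # 1.0
print(pearson_r(x, (x - 3) ** 2))  # 0.0 -- strong nonlinear dependence, zero linear correlation
print(np.allclose(pearson_r(x, y), np.corrcoef(x, y)[0, 1]))  # True
```

The second print is the note above in miniature: `(x - 3) ** 2` is completely determined by `x`, yet its linear correlation with `x` is zero.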
Another way to check for correlation between attributes is to use the pandas `scatter_matrix()` function, which plots every numerical attribute against every other numerical attribute. Since there are 11 numerical attributes, you would get 11^2 = 121 plots, which is too many to fit on a page. So let's just focus on a few promising attributes that seem most correlated with the median housing value:

```
from pandas.plotting import scatter_matrix

attributes = ["median_house_value", "median_income", "total_rooms", "housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 12));

# The most promising attribute to predict the median house value is the median income
housing.plot(kind="scatter", x="median_income", y="median_house_value", alpha=.1);
```

This plot reveals a few things:

1. The correlation is indeed very strong, as you can clearly see the upward trend, and the points are not too dispersed.
2. The price cap that we noticed earlier is clearly visible as a horizontal line at $500,000. There are a few more, less obvious lines that you may want to remove to prevent your algorithms from learning to reproduce these data quirks.

## Experimenting with Attribute Combinations

So far, you identified a few data quirks that you may want to clean up before feeding the data to the Machine Learning algorithms, and you found interesting correlations between attributes. One last thing you may want to do before preparing the data for Machine Learning algorithms is to try out various attribute combinations. For example, the total number of rooms in a district is not very useful if you don't know how many households there are. What you really want is the number of rooms per household... and so on.
Let's create these new attributes:

```
housing["rooms_per_household"] = housing["total_rooms"]/housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"] = housing["population"]/housing["households"]

corr_matrix = housing.corr()
corr_matrix["median_house_value"].sort_values(ascending=False)
```

Hey, not bad! The new attributes show somewhat stronger correlations.

## Prepare the Data for Machine Learning Algorithms

It's time to prepare the data for your Machine Learning algorithm. Instead of doing this manually, you should write functions for this purpose, for several good reasons:

- This will allow you to reproduce these transformations easily on any dataset (e.g., the next time you get a fresh dataset).
- You will gradually build a library of transformation functions that you can reuse in your future projects.
- You can use these functions in your live system to transform the new data before feeding it to your algorithms.
- This will make it possible for you to easily try various transformations and see what works best.

```
# Let's revert to a clean training set
housing = strat_train_set.drop("median_house_value", axis=1)
housing_labels = strat_train_set["median_house_value"].copy()
```

### Data Cleaning

Most Machine Learning algorithms cannot work with data that have missing features, so let's create a few functions to take care of them. We saw earlier that the `total_bedrooms` attribute has some missing values, so let's fix this. You have three options to do so:

1. Get rid of the corresponding districts.
2. Get rid of the whole attribute.
3. Set the missing values to some value (zero, the mean, the median, the mode, etc.).
You can accomplish these easily using the DataFrame's `dropna()`, `drop()`, and `fillna()` methods:

```
# housing.dropna(subset=["total_bedrooms"])
# housing.drop("total_bedrooms", axis=1)
# median = housing["total_bedrooms"].median()
# housing["total_bedrooms"].fillna(median, inplace=True)
```

But we'll be using Scikit-Learn instead. Scikit-Learn provides a handy class to take care of missing values: `SimpleImputer`.

```
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(strategy="median")

# Since the median can be computed only on numerical attributes, drop the ocean_proximity attribute, which is a string
housing_num = housing.drop("ocean_proximity", axis=1)

imputer.fit(housing_num)
X = imputer.transform(housing_num)

# The result is a plain NumPy array; convert it back into a DataFrame
housing_tr = pd.DataFrame(X, columns=housing_num.columns, index=housing_num.index)

imputer.statistics_

housing_tr.info()
```

### Handling Text and Categorical Attributes

So far we have only dealt with numerical attributes, but now let's look at text attributes. In this dataset, there is just one: the `ocean_proximity` attribute. Let's look at its value for the first 10 instances:

```
# First 10 instances
housing_cat = housing[["ocean_proximity"]]
housing_cat.head(10)

housing["ocean_proximity"].value_counts()
# It's not arbitrary text: each value represents a category. Therefore, it is a categorical attribute.

# One-hot encoding the data
from sklearn.preprocessing import OneHotEncoder

cat_enc = OneHotEncoder()
housing_cat_one_hot = cat_enc.fit_transform(housing_cat)
housing_cat_one_hot

housing_cat_one_hot.toarray()

cat_enc.categories_
```

### Custom Transformers

Although Scikit-Learn provides many useful transformers, you will need to write your own for tasks such as custom cleanup operations or combining specific attributes.
You will want your transformer to work seamlessly with Scikit-Learn functionality (such as pipelines); all you need to do is create a class and implement two methods, `fit()` and `transform()` (you get `fit_transform()` for free by adding `TransformerMixin` as a base class). ``` from sklearn.base import BaseEstimator, TransformerMixin rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6 class CombinedAttributeAdder(BaseEstimator, TransformerMixin): def __init__(self, add_bedrooms_per_room=True): self.add_bedrooms_per_room = add_bedrooms_per_room def fit(self, X, y=None): return self def transform(self, X, y=None): rooms_per_household = X[:, rooms_ix] / X[:, households_ix] population_per_household = X[:, population_ix] / X[:, households_ix] if self.add_bedrooms_per_room: bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix] return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room] else: return np.c_[X, rooms_per_household, population_per_household] attr_adder = CombinedAttributeAdder(add_bedrooms_per_room=False) housing_extra_attribs = attr_adder.transform(housing.values) ``` ### Feature Scaling One of the most important transformations you need to apply to your data is *feature scaling*. With a few exceptions, Machine Learning algorithms don't perform well when the input numerical attributes have very different scales. There are two common ways to get all the attributes to have the same scale, namely, *min-max scaling* and *standardization*. Min-max scaling (also known as *normalization*) is the simplest: the values are shifted and rescaled so that they end up ranging from 0 to 1. Standardization subtracts the mean and then divides by the standard deviation, so the standardized values have zero mean and unit variance. ### Transformation Pipelines As you can see, there are many data transformation steps that need to be executed in the right order. Fortunately, Scikit-Learn provides the `Pipeline` class to help with such sequences of transformations.
Here is a small pipeline for the numerical attributes: ``` from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler num_pipeline = Pipeline([ ('imputer', SimpleImputer(strategy="median")), ('attribs_adder', CombinedAttributeAdder()), ('std_scaler', StandardScaler()) ]) housing_num_tr = num_pipeline.fit_transform(housing_num) housing from sklearn.compose import ColumnTransformer num_attribs = list(housing_num) cat_attribs = ["ocean_proximity"] full_pipeline = ColumnTransformer([ ("num", num_pipeline, num_attribs), ("cat", OneHotEncoder(), cat_attribs), ]) housing_prepared = full_pipeline.fit_transform(housing) ``` ## Select and Train a Model At last!😃 You framed the problem, you got your data and explored it, you sampled a training set and a test set, and you wrote transformation pipelines to clean up and prepare your data for Machine Learning algorithms automatically. You are now ready to select and train a Machine Learning model.💗 ### Training Machine Learning models on the training set and evaluating on the same set The following experiments will be implemented: 1. Linear Regression Model 2. Decision Tree Regression Model 3. Random Forest Regression Model ``` # 1. Linear Regression model from sklearn.linear_model import LinearRegression lin_reg = LinearRegression() lin_reg.fit(housing_prepared, housing_labels) from sklearn.metrics import mean_squared_error lin_reg_predictions = lin_reg.predict(housing_prepared) lin_reg_predictions[:10] lin_reg_results = np.sqrt(mean_squared_error(housing_labels, lin_reg_predictions)) lin_reg_results # 2. Decision Tree Regression Model from sklearn.tree import DecisionTreeRegressor tree_reg = DecisionTreeRegressor() tree_reg.fit(housing_prepared, housing_labels) tree_reg_predictions = tree_reg.predict(housing_prepared) tree_reg_predictions[:10] tree_reg_results = np.sqrt(mean_squared_error(housing_labels, tree_reg_predictions)) tree_reg_results # 3.
Random Forest Regressor from sklearn.ensemble import RandomForestRegressor forest_reg = RandomForestRegressor() forest_reg.fit(housing_prepared, housing_labels) forest_reg_predictions = forest_reg.predict(housing_prepared) forest_reg_predictions[:10] forest_reg_results = np.sqrt(mean_squared_error(housing_labels, forest_reg_predictions)) forest_reg_results ``` ### Better Evaluation using Cross-Validation A great feature of Scikit-Learn is *K-fold cross-validation*. The following code randomly splits the training set into 10 distinct subsets called folds, then it trains and evaluates the Decision Tree model 10 times, picking a different fold for evaluation every time and training on the other 9 folds. The result is an array containing the 10 evaluation scores. ``` from sklearn.model_selection import cross_val_score scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10) tree_rmse_scores = np.sqrt(-scores) tree_rmse_scores.mean() ``` > 🔑 **Note:** Scikit-Learn's cross-validation features expect a utility function (greater is better) rather than a cost function (lower is better), so the scoring function is actually the opposite of the MSE (i.e., a negative value), which is why the preceding code computes -scores before calculating the square root. ``` # Function to display the scores of any model from sklearn.model_selection import cross_val_score def display_scores(model): scores = cross_val_score(model, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10) rmse_scores = np.sqrt(-scores) print(f"Scores: {rmse_scores}") print(f"Mean: {rmse_scores.mean()}") print(f"Standard deviation: {rmse_scores.std()}") display_scores(lin_reg) display_scores(tree_reg) display_scores(forest_reg) ```
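The sign convention described in the note is easy to check with a toy example. The fold scores below are made up, purely to show how the negated MSE scores returned by `cross_val_score` turn back into RMSE values:

```python
import numpy as np

# Pretend cross_val_score returned these negated MSEs for 3 folds
neg_mse_scores = np.array([-4.0, -9.0, -16.0])

# Negate to recover plain MSE, then take the square root for RMSE
rmse_scores = np.sqrt(-neg_mse_scores)

print(rmse_scores)         # [2. 3. 4.]
print(rmse_scores.mean())  # 3.0
```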
github_jupyter
``` from utils import * import numpy as np import sklearn.datasets import tensorflow as tf from sklearn.model_selection import train_test_split from sklearn import metrics import time trainset = sklearn.datasets.load_files(container_path = 'data', encoding = 'UTF-8') trainset.data, trainset.target = separate_dataset(trainset,1.0) print (trainset.target_names) print (len(trainset.data)) print (len(trainset.target)) ONEHOT = np.zeros((len(trainset.data),len(trainset.target_names))) ONEHOT[np.arange(len(trainset.data)),trainset.target] = 1.0 train_X, test_X, train_Y, test_Y, train_onehot, test_onehot = train_test_split(trainset.data, trainset.target, ONEHOT, test_size = 0.2) concat = ' '.join(trainset.data).split() vocabulary_size = len(list(set(concat))) data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size) print('vocab size: %d'%(vocabulary_size)) print('Most common words', count[4:10]) print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]]) GO = dictionary['GO'] PAD = dictionary['PAD'] EOS = dictionary['EOS'] UNK = dictionary['UNK'] class Model: def __init__(self, size_layer, num_layers, embedded_size, dict_size, dimension_output, learning_rate): def cells(reuse=False): return tf.nn.rnn_cell.LSTMCell(size_layer,initializer=tf.orthogonal_initializer(),reuse=reuse) self.X = tf.placeholder(tf.int32, [None, None]) self.Y = tf.placeholder(tf.float32, [None, dimension_output]) encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1)) encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X) attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units = size_layer, memory = encoder_embedded) bahdanau_cells = tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]), attention_mechanism = attention_mechanism, attention_layer_size = size_layer) attention_mechanism = tf.contrib.seq2seq.LuongAttention(num_units = size_layer, memory = encoder_embedded) luong_cells =
tf.contrib.seq2seq.AttentionWrapper(cell = tf.nn.rnn_cell.MultiRNNCell([cells() for _ in range(num_layers)]), attention_mechanism = attention_mechanism, attention_layer_size = size_layer) rnn_cells = tf.nn.rnn_cell.MultiRNNCell([bahdanau_cells,luong_cells]) outputs, last_state = tf.nn.dynamic_rnn(rnn_cells, encoder_embedded, dtype = tf.float32) W = tf.get_variable('w',shape=(size_layer, dimension_output),initializer=tf.orthogonal_initializer()) b = tf.get_variable('b',shape=(dimension_output),initializer=tf.zeros_initializer()) self.logits = tf.matmul(outputs[:, -1], W) + b self.cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = self.logits, labels = self.Y)) self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost) correct_pred = tf.equal(tf.argmax(self.logits, 1), tf.argmax(self.Y, 1)) self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) size_layer = 128 num_layers = 2 embedded_size = 128 dimension_output = len(trainset.target_names) learning_rate = 1e-3 maxlen = 50 batch_size = 128 tf.reset_default_graph() sess = tf.InteractiveSession() model = Model(size_layer,num_layers,embedded_size,vocabulary_size+4,dimension_output,learning_rate) sess.run(tf.global_variables_initializer()) EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 5, 0, 0, 0 while True: lasttime = time.time() if CURRENT_CHECKPOINT == EARLY_STOPPING: print('break epoch:%d\n'%(EPOCH)) break train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0 for i in range(0, (len(train_X) // batch_size) * batch_size, batch_size): batch_x = str_idx(train_X[i:i+batch_size],dictionary,maxlen) acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer], feed_dict = {model.X : batch_x, model.Y : train_onehot[i:i+batch_size]}) train_loss += loss train_acc += acc for i in range(0, (len(test_X) // batch_size) * batch_size, batch_size): batch_x = str_idx(test_X[i:i+batch_size],dictionary,maxlen) acc, loss = sess.run([model.accuracy, 
model.cost], feed_dict = {model.X : batch_x, model.Y : test_onehot[i:i+batch_size]}) test_loss += loss test_acc += acc train_loss /= (len(train_X) // batch_size) train_acc /= (len(train_X) // batch_size) test_loss /= (len(test_X) // batch_size) test_acc /= (len(test_X) // batch_size) if test_acc > CURRENT_ACC: print('epoch: %d, pass acc: %f, current acc: %f'%(EPOCH,CURRENT_ACC, test_acc)) CURRENT_ACC = test_acc CURRENT_CHECKPOINT = 0 else: CURRENT_CHECKPOINT += 1 print('time taken:', time.time()-lasttime) print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss, train_acc,test_loss, test_acc)) EPOCH += 1 logits = sess.run(model.logits, feed_dict={model.X:str_idx(test_X,dictionary,maxlen)}) print(metrics.classification_report(test_Y, np.argmax(logits,1), target_names = trainset.target_names)) ```
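The training loop above implements patience-based early stopping: the checkpoint counter resets whenever validation accuracy improves, increments otherwise, and training stops once it reaches EARLY_STOPPING. Stripped of the TensorFlow details, the logic can be sketched as below (the accuracy sequence is made up for illustration):

```python
def run_with_early_stopping(val_accuracies, patience=5):
    """Return (epochs run, best accuracy) once patience is exhausted."""
    best_acc, checkpoint, epoch = 0.0, 0, 0
    for acc in val_accuracies:
        if checkpoint == patience:
            break                            # patience exhausted: stop training
        if acc > best_acc:
            best_acc, checkpoint = acc, 0    # improvement: reset the counter
        else:
            checkpoint += 1                  # no improvement: lose one unit of patience
        epoch += 1
    return epoch, best_acc

# Improves for 3 epochs, then plateaus: runs 3 + 5 fruitless epochs, then stops
epochs, best = run_with_early_stopping([0.5, 0.6, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7, 0.7])
print(epochs, best)  # 8 0.7
```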
# $P_\ell(k)$ measurements for SIMBIG CMASS In this notebook, I demonstrate how we measure the power spectrum multipoles, $P_\ell(k)$, from the forward-modeled SIMBIG CMASS mocks ``` import os, time import numpy as np from simbig import halos as Halos from simbig import galaxies as Galaxies from simbig import forwardmodel as FM from simbig import obs as CosmoObs # --- plotting --- import matplotlib as mpl import matplotlib.pyplot as plt #mpl.rcParams['text.usetex'] = True mpl.rcParams['font.family'] = 'serif' mpl.rcParams['axes.linewidth'] = 1.5 mpl.rcParams['axes.xmargin'] = 1 mpl.rcParams['xtick.labelsize'] = 'x-large' mpl.rcParams['xtick.major.size'] = 5 mpl.rcParams['xtick.major.width'] = 1.5 mpl.rcParams['ytick.labelsize'] = 'x-large' mpl.rcParams['ytick.major.size'] = 5 mpl.rcParams['ytick.major.width'] = 1.5 mpl.rcParams['legend.frameon'] = False ``` # Forward model SIMBIG CMASS ## 1. Read in `Quijote` Halo Catalog I'm using the `i=1118`th cosmology in the LHC because it's close to the fiducial cosmology ``` # read in halo catalog halos = Halos.Quijote_LHC_HR(1118, z=0.5) print('Om, Ob, h, ns, s8:') print(Halos.Quijote_LHC_cosmo(1118)) ``` ## 2. Populate halos with HOD We'll use best-fit HOD parameters for CMASS from Reid et al. (2014) ``` theta_hod = Galaxies.thetahod_literature('reid2014_cmass') print(theta_hod) # apply HOD hod = Galaxies.hodGalaxies(halos, theta_hod, seed=0) ``` ## 3. Apply forward model ``` # apply forward model without veto mask, without fiber collisions gals = FM.BOSS(hod, sample='cmass-south', seed=0, veto=False, fiber_collision=False, silent=False) ``` # Measure $P_\ell(k)$ Now we'll measure $P_\ell(k)$ for SIMBIG CMASS. First we'll measure $P_\ell$ for a periodic box as a sanity check ``` # apply RSD to hod catalog in a box pos_rsd = FM.Box_RSD(hod, LOS=[0,0,1], Lbox=1000) hod_rsd = hod.copy() hod_rsd['Position'] = pos_rsd plk_box = CosmoObs.Plk_box(hod_rsd) ``` Now let's measure $P_\ell$ for SIMBIG CMASS.
This requires first constructing a random catalog. ``` # get randoms rand = FM.BOSS_randoms(gals, sample='cmass-south', veto=False) # measure plk plk = CosmoObs.Plk_survey(gals, rand, Ngrid=360, dk=0.005, P0=1e4, silent=False) ``` $P_\ell$ is calculated assuming a fiducial cosmology (same as in observations). However, the fiducial cosmology is different from the true cosmology of the simulation, so there will be discrepancies between the SIMBIG CMASS $P_\ell$ and the $P_\ell$ calculated for the periodic box. As a sanity check, let's calculate the SIMBIG CMASS $P_\ell$ with the true cosmology ``` _plk_realcosmo = CosmoObs.Plk_survey(gals, rand, cosmo=gals.cosmo, Ngrid=360, dk=0.005, P0=1e4, silent=False) fig = plt.figure(figsize=(5,5)) sub = fig.add_subplot(111) sub.plot(plk_box[0], plk_box[1], c='k', ls=':', label='Periodic Box') sub.plot(plk[0], plk[1], c='k', label='SIMBIG CMASS') sub.plot(_plk_realcosmo[0], _plk_realcosmo[1], c='C0', ls='-.', label='real cosmo.') sub.legend(loc='lower left', fontsize=15) sub.set_ylabel(r'$P_\ell(k)$', fontsize=25) sub.set_yscale('log') sub.set_ylim(1e3, 2e5) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([3e-3, 1.]) sub.set_xscale('log') fig = plt.figure(figsize=(5,5)) sub = fig.add_subplot(111) sub.plot(plk_box[0], plk_box[0] * plk_box[1], c='k', ls=':', label='Periodic Box') sub.plot(_plk_realcosmo[0], _plk_realcosmo[0] * _plk_realcosmo[1], c='C0', ls='-.', label='SIMBIG CMASS (real cosmo.)') sub.legend(loc='lower left', fontsize=15) sub.set_ylabel(r'$k\,P_\ell(k)$', fontsize=25) # sub.set_yscale('log') sub.set_xlim([0.01, 0.5]) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([1e-2, 0.15]) sub.set_xscale('log') ``` SIMBIG CMASS $P_\ell$ measured using the true cosmology is in good agreement with the periodic box. # $P_\ell$ with different levels of survey realism The comparison above only imposes survey geometry.
Let's check that everything looks sensible when we impose the veto mask and fiber collisions ``` # apply forward model with veto mask, without fiber collisions gals_veto = FM.BOSS(hod, sample='cmass-south', seed=0, veto=True, fiber_collision=False, silent=False) rand_veto = FM.BOSS_randoms(gals_veto, veto=True, sample='cmass-south') plk_veto = CosmoObs.Plk_survey(gals_veto, rand_veto, Ngrid=360, dk=0.005, P0=1e4, silent=False) fig = plt.figure(figsize=(5,5)) sub = fig.add_subplot(111) sub.plot(plk[0], plk[1], c='k', label='Survey Geometry') sub.plot(plk_veto[0], plk_veto[1], c='C0', label='+ veto mask') sub.legend(loc='lower left', fontsize=15) sub.set_ylabel(r'$P_0(k)$', fontsize=25) sub.set_yscale('log') sub.set_ylim(1e3, 2e5) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([3e-3, 1.]) sub.set_xscale('log') # apply forward model with veto mask and fiber collisions gals_veto_fc = FM.BOSS(hod, sample='cmass-south', seed=0, veto=True, fiber_collision=True, silent=False) rand_veto_fc = FM.BOSS_randoms(gals_veto_fc, veto=True, sample='cmass-south') plk_veto_fc = CosmoObs.Plk_survey(gals_veto_fc, rand_veto_fc, Ngrid=360, dk=0.005, P0=1e4, silent=False) fig = plt.figure(figsize=(5,5)) sub = fig.add_subplot(111) sub.plot(plk[0], plk[1], c='k', lw=1, label='Survey Geometry') sub.plot(plk_veto[0], plk_veto[1], c='C0', lw=1, label='+ veto mask') sub.plot(plk_veto_fc[0], plk_veto_fc[1], c='C1', lw=1, label='+ fiber coll.') sub.legend(loc='lower left', fontsize=15) sub.set_ylabel(r'$P_0(k)$', fontsize=25) sub.set_yscale('log') sub.set_ylim(1e3, 2e5) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([3e-3, 1.]) sub.set_xscale('log') fig = plt.figure(figsize=(5,5)) sub = fig.add_subplot(111) sub.plot(plk[0], plk[0] * plk[1], c='k', lw=1, label='Survey Geometry') sub.plot(plk_veto[0], plk_veto[0] * plk_veto[1], c='C0', lw=1, label='+ veto mask') sub.plot(plk_veto_fc[0], plk_veto_fc[0]* plk_veto_fc[1], c='C1', lw=1, label='+ fiber coll.') sub.legend(loc='lower left', fontsize=10)
sub.set_ylabel(r'$k P_0(k)$', fontsize=25) sub.set_ylim(500, 2200) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([0.01, 0.15]) ``` The normalization difference between the $P_\ell$ with only survey geometry and the $P_\ell$ with survey geometry + veto mask is an expected systematic of a veto mask with many small-scale features ([de Mattia+2019](https://ui.adsabs.harvard.edu/abs/2019JCAP...08..036D/abstract), [de Mattia+2021](https://ui.adsabs.harvard.edu/abs/2021MNRAS.501.5616D/abstract)). # Let's compare the SIMBIG CMASS $P_\ell$ to the observed BOSS CMASS $P_\ell$ ``` dat_dir = '/tigress/chhahn/simbig/' k_cmass = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.k.dat'), skiprows=1) p0k_cmass_wall = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p0k.w_all.dat'), skiprows=1) p2k_cmass_wall = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p2k.w_all.dat'), skiprows=1) p4k_cmass_wall = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p4k.w_all.dat'), skiprows=1) p0k_cmass_wnofc = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p0k.w_nofc.dat'), skiprows=1) p2k_cmass_wnofc = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p2k.w_nofc.dat'), skiprows=1) p4k_cmass_wnofc = np.loadtxt(os.path.join(dat_dir, 'obs.cmass_sgc.p4k.w_nofc.dat'), skiprows=1) fig = plt.figure(figsize=(24,6)) for i in range(3): sub = fig.add_subplot(1,3,i+1) sub.plot(plk_veto_fc[0], plk_veto_fc[i+1], c='k', ls='--', label=r'SIMBIG CMASS') sub.plot(k_cmass, [p0k_cmass_wnofc, p2k_cmass_wnofc, p4k_cmass_wnofc][i], c='C0', label=r'CMASS (no $w_{\rm fc}$)') sub.plot(k_cmass, [p0k_cmass_wall, p2k_cmass_wall, p4k_cmass_wall][i], c='C1', label=r'CMASS (all $w$)') if i == 0: sub.legend(loc='lower left', fontsize=20) sub.set_ylabel(r'$P_%i(k)$' % (i * 2), fontsize=25) sub.set_yscale('log') sub.set_ylim(1e2, 1e6) sub.set_xlabel('$k$', fontsize=25) sub.set_xlim([3e-3, 1.]) sub.set_xscale('log') fig.subplots_adjust(wspace=0.3) ```
# MPI P2P Communication Modes > "Intro to MPI modes for P2P communication" - toc:true - branch: master - badges: false - comments: false - author: Alexandros Giavaras - categories: [programming, MPI, parallel-computing, C++, distributed-computing] ## <a name="overview"></a> Overview In the previous <a href="https://pockerman.github.io/qubit_opus/programming/mpi/parallel-computing/c++/2021/07/07/mpi-basic-point-to-point-communication.html">post</a>, we saw the standard communication mode that is used under the hood with ```MPI_Send```. Here, we describe a few more communication modes supported by the MPI standard. ## <a name="ekf"></a> MPI P2P communication modes MPI has three additional modes for P2P communication [1]: - Buffered - Synchronous - Ready In the buffered mode, the sending operation is always locally blocking and, just like with the standard communication mode, it will return as soon as the message is copied to a buffer. The difference here is that the buffer is user-provided [1]. The synchronous mode is a globally blocking operation [1]. In this mode, the sending operation will return only when the retrieval of the message has been initiated by the receiving process. However, the message receiving may not be complete [1]. --- **Remark** The buffered and synchronous modes constitute two opposite extremes: in the buffered mode we trade waiting for memory, whilst in the synchronous mode we don't mind waiting for the message to reach its destination. --- In the ready mode, the send operation will succeed only if a matching receive operation has been initiated already [1]. Otherwise, the function returns with an error code. The purpose of this mode is to reduce the overhead of handshaking operations [1]. So how can we distinguish between these different communication modes? This is done by prefixing the initial letter of each mode before the ```Send``` [1].
Thus, we have - ```MPI_Bsend``` - ```MPI_Ssend``` - ```MPI_Rsend``` The rest of the function signatures are the same as that of ```MPI_Send``` [1] ``` int [MPI_Bsend | MPI_Ssend | MPI_Rsend](void* buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm); ``` --- **Remark** Bear in mind that blocking sends can be matched with non-blocking receives, and vice versa [1]. However, the tuple (communicator, rank, message tag) should match in order to do so. --- ## <a name="summary"></a> Summary In this post, we introduced three more communication modes supported by MPI for P2P message exchange. The fact that we have at our disposal different means for P2P communication means that we can adjust the application to better suit the hardware it is running on. The interfaces of the supplied functions are the same as that of ```MPI_Send```. This greatly facilitates development. We can, for example, create an array of function pointers so that we group these functions in one place and call the specified function based on some given configuration parameter. ## <a name="refs"></a> References 1. Gerassimos Barlas, ```Multicore and GPU Programming: An Integrated Approach```, Morgan Kaufmann
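The dispatch-table idea from the summary can be sketched as follows. This is a Python sketch with stub sender functions standing in for the real MPI calls, purely to illustrate grouping the modes in one table and selecting one from a configuration parameter; in a C application the table would hold function pointers to the actual MPI routines.

```python
# Stub send functions; in a real application each would wrap the
# corresponding MPI routine (MPI_Bsend, MPI_Ssend, MPI_Rsend).
def buffered_send(buf, dest):
    return f"Bsend {buf!r} -> rank {dest}"

def synchronous_send(buf, dest):
    return f"Ssend {buf!r} -> rank {dest}"

def ready_send(buf, dest):
    return f"Rsend {buf!r} -> rank {dest}"

# Group the modes in one place and pick one via a configuration parameter
SEND_MODES = {
    "buffered": buffered_send,
    "synchronous": synchronous_send,
    "ready": ready_send,
}

mode = "synchronous"  # e.g. read from a config file or the command line
print(SEND_MODES[mode]("hello", 1))  # Ssend 'hello' -> rank 1
```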
### How To Break Into the Field Now you have had a closer look at the data, and you saw how I approached looking at how the survey respondents think you should break into the field. Let's recreate those results, as well as take a look at another question. ``` import numpy as np import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import HowToBreakIntoTheField as t %matplotlib inline df = pd.read_csv('./survey_results_public.csv') schema = pd.read_csv('./survey_results_schema.csv') df.head() ``` #### Question 1 **1.** In order to understand how to break into the field, we will look at the **CousinEducation** field. Use the **schema** dataset to answer this question. Write a function called **get_description** that takes the **schema dataframe** and the **column** as a string, and returns a string of the description for that column. ``` def get_description(column_name, schema=schema): ''' INPUT - schema - pandas dataframe with the schema of the developers survey column_name - string - the name of the column you would like to know about OUTPUT - desc - string - the description of the column ''' desc = list(schema[schema['Column'] == column_name]['Question'])[0] return desc #test your code #Check your function against solution - you shouldn't need to change any of the below code get_description(df.columns[0]) # This should return a string of the first column description #Check your function against solution - you shouldn't need to change any of the below code descrips = set(get_description(col) for col in df.columns) t.check_description(descrips) ``` The question we have been focused on has been around how to break into the field. Use your **get_description** function below to take a closer look at the **CousinEducation** column. ``` get_description('CousinEducation') ``` #### Question 2 **2.** Provide a pandas series of the different **CousinEducation** status values in the dataset. Store this pandas series in **cous_ed_vals**. 
If you are correct, you should see a bar chart of the proportion of individuals in each status. If it looks terrible, and you get no information from it, then you followed directions. However, we should clean this up! ``` cous_ed_vals = df.CousinEducation.value_counts()#Provide a pandas series of the counts for each CousinEducation status cous_ed_vals # assure this looks right # The below should be a bar chart of the proportion of individuals in your ed_vals # if it is set up correctly. (cous_ed_vals/df.shape[0]).plot(kind="bar"); plt.title("Formal Education"); ``` We definitely need to clean this. Above is an example of what happens when you do not clean your data. Below I am using the same code you saw in the earlier video to take a look at the data after it has been cleaned. ``` possible_vals = ["Take online courses", "Buy books and work through the exercises", "None of these", "Part-time/evening courses", "Return to college", "Contribute to open source", "Conferences/meet-ups", "Bootcamp", "Get a job as a QA tester", "Participate in online coding competitions", "Master's degree", "Participate in hackathons", "Other"] def clean_and_plot(df, title='Method of Educating Suggested', plot=True): ''' INPUT df - a dataframe holding the CousinEducation column title - string the title of your plot axis - axis object plot - bool providing whether or not you want a plot back OUTPUT study_df - a dataframe with the count of how many individuals Displays a plot of pretty things related to the CousinEducation column. 
''' study = df['CousinEducation'].value_counts().reset_index() study.rename(columns={'index': 'method', 'CousinEducation': 'count'}, inplace=True) study_df = t.total_count(study, 'method', 'count', possible_vals) study_df.set_index('method', inplace=True) if plot: (study_df/study_df.sum()).plot(kind='bar', legend=None); plt.title(title); plt.show() props_study_df = study_df/study_df.sum() return props_study_df props_df = clean_and_plot(df) ``` #### Question 4 **4.** I wonder if some of the individuals might have bias towards their own degrees. Complete the function below that will apply to the elements of the **FormalEducation** column in **df**. ``` def higher_ed(formal_ed_str): ''' INPUT formal_ed_str - a string of one of the values from the Formal Education column OUTPUT return 1 if the string is in ("Master's degree", "Doctoral", "Professional degree") return 0 otherwise ''' if formal_ed_str in ("Master's degree", "Doctoral", "Professional degree"): return 1 else: return 0 df["FormalEducation"].apply(higher_ed)[:5] #Test your function to assure it provides 1 and 0 values for the df # Check your code here df['HigherEd'] = df["FormalEducation"].apply(higher_ed) higher_ed_perc = df['HigherEd'].mean() t.higher_ed_test(higher_ed_perc) ``` #### Question 5 **5.** Now we would like to find out if the proportion of individuals who completed one of these three programs feel differently than those that did not. Store a dataframe of only the individual's who had **HigherEd** equal to 1 in **ed_1**. Similarly, store a dataframe of only the **HigherEd** equal to 0 values in **ed_0**. Notice, you have already created the **HigherEd** column using the check code portion above, so here you only need to subset the dataframe using this newly created column. 
``` ed_1 = df[df['HigherEd'] == 1] # Subset df to only those with HigherEd of 1 ed_0 = df[df['HigherEd'] == 0] # Subset df to only those with HigherEd of 0 print(ed_1['HigherEd'][:5]) #Assure it looks like what you would expect print(ed_0['HigherEd'][:5]) #Assure it looks like what you would expect #Check your subset is correct - you should get a plot that was created using pandas styling #which you can learn more about here: https://pandas.pydata.org/pandas-docs/stable/style.html ed_1_perc = clean_and_plot(ed_1, 'Higher Formal Education', plot=False) ed_0_perc = clean_and_plot(ed_0, 'Max of Bachelors Higher Ed', plot=False) comp_df = pd.merge(ed_1_perc, ed_0_perc, left_index=True, right_index=True) comp_df.columns = ['ed_1_perc', 'ed_0_perc'] comp_df['Diff_HigherEd_Vals'] = comp_df['ed_1_perc'] - comp_df['ed_0_perc'] comp_df.style.bar(subset=['Diff_HigherEd_Vals'], align='mid', color=['#d65f5f', '#5fba7d']) ``` #### Question 6 **6.** What can you conclude from the above plot? Change the dictionary to mark **True** for the keys of any statements you can conclude, and **False** for any of the statements you cannot conclude. ``` sol = {'Everyone should get a higher level of formal education': False, 'Regardless of formal education, online courses are the top suggested form of education': True, 'There is less than a 1% difference between suggestions of the two groups for all forms of education': False, 'Those with higher formal education suggest it more than those who do not have it': True} t.conclusions(sol) ``` This concludes another look at the way we could compare education methods by those currently writing code in industry.
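A closing note on the helper that did the heavy lifting here: `t.total_count` tallies how often each method appears across the multi-select answer strings. A minimal reimplementation of that idea might look like the sketch below (plain Python on toy data; this is not the course's actual helper):

```python
def total_count(answers, possible_vals):
    """Count how many answer strings mention each possible value.

    answers: list of raw multi-select strings, e.g. "Take online courses; Bootcamp"
    possible_vals: list of the canonical method names to tally
    """
    counts = {val: 0 for val in possible_vals}
    for answer in answers:
        for val in possible_vals:
            if val in answer:
                counts[val] += 1
    return counts

answers = [
    "Take online courses; Bootcamp",
    "Take online courses",
    "Buy books and work through the exercises; Bootcamp",
]
print(total_count(answers, ["Take online courses", "Bootcamp", "Return to college"]))
# {'Take online courses': 2, 'Bootcamp': 2, 'Return to college': 0}
```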
# Reading and writing LAS files This notebook goes with [the Agile blog post](https://agilescientific.com/blog/2017/10/23/x-lines-of-python-load-curves-from-las) of 23 October. Set up a `conda` environment with: conda create -n welly python=3.6 matplotlib=2.0 scipy pandas You'll need `welly` in your environment: conda install tqdm # Should happen automatically but doesn't pip install welly This will also install the latest versions of `striplog` and `lasio`. ``` import welly ls ../data/*.LAS ``` ### 1. Load the LAS file with `lasio` ``` import lasio l = lasio.read('../data/P-129.LAS') # Line 1. ``` That's it! But the object itself doesn't tell us much — it's really just a container: ``` l ``` ### 2. Look at the WELL section of the header ``` l.header['Well'] # Line 2. ``` ### 3. Look at the curve data The curves are all present in one big NumPy array: ``` l.data ``` Or we can go after a single curve object: ``` l.curves.GR # Line 3. ``` And there's a shortcut to its data: ``` l['GR'] # Line 4. ``` ...so it's easy to make a plot against depth: ``` import matplotlib.pyplot as plt %matplotlib inline plt.figure(figsize=(15,3)) plt.plot(l['DEPT'], l['GR']) plt.show() ``` ### 4. Inspect the curves as a `pandas` dataframe ``` l.df().head() # Line 5. ``` ### 5. Load the LAS file with `welly` ``` from welly import Well w = Well.from_las('../data/P-129.LAS') # Line 6. ``` `welly` Wells know how to display some basics: ``` w ``` And the `Well` object also has `lasio`'s access to a pandas DataFrame: ``` w.df().head() ``` ### 6. Look at `welly`'s Curve object Like the `Well`, a `Curve` object can report a bit about itself: ``` gr = w.data['GR'] # Line 7. gr ``` One important thing about Curves is that each one knows its own depths — they are stored as a property called `basis`. (It's not actually stored, but computed on demand from the start depth, the sample interval (which must be constant for the whole curve) and the number of samples in the object.) ``` gr.basis ``` ### 7.
Plot part of a curve We'll grab the interval from 300 m to 1000 m and plot it. ``` gr.to_basis(start=300, stop=1000).plot() # Line 8. ``` ### 8. Smooth a curve Curve objects are, fundamentally, NumPy arrays. But they have some extra tricks. We've already seen `Curve.plot()`. Using the `Curve.smooth()` method, we can easily smooth a curve, eg by 15 m (passing `samples=True` would smooth by 15 samples): ``` sm = gr.smooth(window_length=15, samples=False) # Line 9. sm.plot() ``` ### 9. Export a set of curves as a matrix You can get at all the data through the lasio `l.data` object: ``` print("Data shape: {}".format(w.las.data.shape)) w.las.data ``` But we might want to do some other things, such as specify which curves you want (optionally using aliases like GR1, GRC, NGC, etc for GR), resample the data, or specify a start and stop depth — `welly` can do all this stuff. This method is also wrapped by `Project.data_as_matrix()` which is nice because it ensures that all the wells are exported at the same sample interval. Here are the curves in this well: ``` w.data.keys() keys=['CALI', 'DT', 'DTS', 'RHOB', 'SP'] w.plot(tracks=['TVD']+keys) X, basis = w.data_as_matrix(keys=keys, start=275, stop=1850, step=0.5, return_basis=True) w.data['CALI'].shape ``` So CALI had 12,718 points in it... since we downsampled to 0.5 m and removed the top and tail, we should have substantially fewer points: ``` X.shape plt.figure(figsize=(15,3)) plt.plot(X.T[0]) plt.show() ``` ### 10+. BONUS: fix the lat, lon OK, we're definitely going to go over our budget on this one. Did you notice that the location of the well did not get loaded properly? ``` w.location ``` Let's look at some of the header: # LAS format log file from PETREL # Project units are specified as depth units #================================================================== ~Version information VERS. 2.0: WRAP. 
    YES:
    #==================================================================
    ~WELL INFORMATION
    #MNEM.UNIT DATA DESCRIPTION
    #---- ------ -------------- -----------------------------
    STRT .M 1.0668 :START DEPTH
    STOP .M 1939.13760 :STOP DEPTH
    STEP .M 0.15240 :STEP
    NULL . -999.25 :NULL VALUE
    COMP . Elmworth Energy Corporation :COMPANY
    WELL . Kennetcook #2 :WELL
    FLD . Windsor Block :FIELD
    LOC . Lat = 45* 12' 34.237" N :LOCATION
    PROV . Nova Scotia :PROVINCE
    UWI. Long = 63* 45'24.460 W :UNIQUE WELL ID
    LIC . P-129 :LICENSE NUMBER
    CTRY . CA :COUNTRY (WWW code)
    DATE. 10-Oct-2007 :LOG DATE {DD-MMM-YYYY}
    SRVC . Schlumberger :SERVICE COMPANY
    LATI .DEG :LATITUDE
    LONG .DEG :LONGITUDE
    GDAT . :GeoDetic Datum
    SECT . 45.20 Deg N :Section
    RANG . PD 176 :Range
    TOWN . 63.75 Deg W :Township

Look at **LOC** and **UWI**. There are two problems:

1. These items are in the wrong place. (Notice **LATI** and **LONG** are empty.)
2. The items are malformed, with lots of extraneous characters.

We can fix this in two steps:

1. Remap the header items to fix the first problem.
2. Parse the items to fix the second one.

We'll define these in reverse because the remapping uses the transforming function.

```
import re

def transform_ll(text):
    """
    Parses malformed lat and lon so they load properly.
    """
    def callback(match):
        d = match.group(1).strip()
        m = match.group(2).strip()
        s = match.group(3).strip()
        c = match.group(4).strip()
        if c.lower() in ('w', 's') and d[0] != '-':
            d = '-' + d
        return ' '.join([d, m, s])
    pattern = re.compile(r""".+?([-0-9]+?).? ?([0-9]+?).? ?([\.0-9]+?).? +?([NESW])""", re.I)
    text = pattern.sub(callback, text)
    return welly.utils.dms2dd([float(i) for i in text.split()])
```

Make sure that works!

```
print(transform_ll("""Lat = 45* 12' 34.237" N"""))

remap = {
    'LATI': 'LOC',  # Use LOC for the parameter LATI.
    'LONG': 'UWI',  # Use UWI for the parameter LONG.
    'LOC':  None,   # Use nothing for the parameter LOC.
    'SECT': None,   # Use nothing for the parameter SECT.
'RANG': None, # Use nothing for the parameter RANG. 'TOWN': None, # Use nothing for the parameter TOWN. } funcs = { 'LATI': transform_ll, # Pass LATI through this function before loading. 'LONG': transform_ll, # Pass LONG through it too. 'UWI': lambda x: "No UWI, fix this!" } w = Well.from_las('../data/P-129.LAS', remap=remap, funcs=funcs) w.location.latitude, w.location.longitude w.uwi ``` Let's just hope the mess is the same mess in every well. (LOL, no-one's that lucky.) <hr> **&copy; 2017 [agilescientific.com](https://www.agilescientific.com/) and licensed [CC-BY 4.0](https://creativecommons.org/licenses/by/4.0/)**
# DaKanjiRecognizer - Single Kanji CNN : Create dataset ## Setup Import the needed libraries. ``` #std lib import sys import os import random import math import multiprocessing as mp import gc import time import datetime from typing import Tuple, List from shutil import copy from tqdm import tqdm import tensorflow as tf #reading the dataset from etldr.etl_data_reader import ETLDataReader from etldr.etl_character_groups import ETLCharacterGroups from etldr.etl_data_names import ETLDataNames from DataGenerator import generate_images, check_font_char_support #data handling import PIL from PIL import Image as PImage from PIL import ImageFilter, ImageFont, ImageDraw import numpy as np import cv2 #plotting/showing graphics %matplotlib inline import matplotlib.pyplot as plt from IPython.display import Image #define a font to show japanese characters in matplotlib figures import matplotlib.font_manager as fm show_sample_font = fm.FontProperties(fname=os.path.join("..", "fonts", "NotoSerifCJKjp-Regular.otf"), size=20) ``` ## Loading the data The [ETL Character data set](http://etlcdb.db.aist.go.jp/) which I am using is a data set with multiple sub sets (ETL1 - ETL7, ETL8B, ETL8G, ETL9B and ETL9G). <br/> After unpacking the data set I renamed all folders and files to have a uniform naming scheme: "ETLX/ETLX_Y". "X" is the number of the subset and Y the part of the subset. 
Also ETL7S was removed (it is just a smaller version of ETL7L), and the following renaming was also done: <br/>
ETL8B $\rightarrow$ ETL8, ETL8G $\rightarrow$ ETL9, ETL9B $\rightarrow$ ETL10 and ETL9G $\rightarrow$ ETL11.<br/>
This leads to the following data set structure: <br/>

| name | type | content | res | Bit depth | code | samples per label | total samples |
|:-----:|:-------:|:-----------------------------------------------------------------------:|:-------:|:---------:|:----------:|:----------------:|:-------------:|
| ETL1 | M-Type | Numbers <br/> Roman <br/> Symbols <br/> Katakana | 64x63 | 4 | JIS X 0201 | ~1400 | 141319 |
| ETL2 | K-Type | Hiragana <br/> Katakana <br/> Kanji <br/> Roman <br/> Symbols | 60x60 | 6 | CO59 | ~24 | 52796 |
| ETL3 | C-Type | Numeric <br/> Capital Roman <br/> Symbols | 72x76 | 4 | JIS X 0201 | 200 | 9600 |
| ETL4 | C-Type | Hiragana | 72x76 | 4 | JIS X 0201 | 120 | 6120 |
| ETL5 | C-Type | Katakana | 72x76 | 4 | JIS X 0201 | ~200 | 10608 |
| ETL6 | M-Type | Katakana <br/> Symbols | 64x63 | 4 | JIS X 0201 | 1383 | 157662 |
| ETL7 | M-Type | Hiragana <br/> Symbols | 64x63 | 4 | JIS X 0201 | 160 | 16800 |
| ETL8 | 8B-Type | Hiragana <br/> Kanji | 64x63 | 1 | JIS X 0208 | 160 | 157662 |
| ETL9 | 8G-Type | Hiragana <br/> Kanji | 128x127 | 4 | JIS X 0208 | 200 | 607200 |
| ETL10 | 9B-Type | Hiragana <br/> Kanji | 64x63 | 1 | JIS X 0208 | 160 | 152960 |
| ETL11 | 9G-Type | Hiragana <br/> Kanji | 128x127 | 4 | JIS X 0208 | 200 | 607200 |

Because the provided data set is distributed in a proprietary binary data format and is therefore hard to handle, I created an ```ETL_data_reader``` package. This package can be found [here](https://github.com/CaptainDario/ETLCDB_data_reader).

The specific data format is C-struct-like for types: M, 8B, 8G, 9B, 9G. But the types C and K are 6-bit encoded.
All codes can be found on the [official website](http://etlcdb.db.aist.go.jp/file-formats-and-sample-unpacking-code). I used the [struct module](https://docs.python.org/3/library/struct.html) and the [bitstring module](https://pypi.org/project/bitstring/) to unpack the binary data. <br/>

First an instance of the ```ETLDataReader``` class is needed. The path parameter should lead to the folder in which all parts of the ETL data set can be found.

```
path = r"Z:\data_sets\etlcdb_binary"
reader = ETLDataReader(path)
```

Define a convenience function for showing characters and their label.

```
def show_image(img : np.array, label : str):

    plt.figure(figsize=(2.2, 2.2))
    plt.title(label=label, font=show_sample_font)
    plt.axis("off")
    plt.imshow(img.astype(np.float64), cmap="gray")
```

Now load all samples which contain Kanji, Hiragana and Katakana.

```
types = [ETLCharacterGroups.kanji, ETLCharacterGroups.katakana, ETLCharacterGroups.hiragana]
x, y = reader.read_dataset_whole(types, 16)
print(x.shape, y.shape)
```

With the loaded data we can take a look at the class distributions.

```
unique, counts = np.unique(y, return_counts=True)
balance = dict(zip(unique, counts))

plt.bar(range(0, len(counts)), counts, width=1.0)
plt.show()
```

Because the data is quite imbalanced, we need more data. First remove samples so that a class has at most 1000 samples.

```
del_inds, cnt = [], 0
for _x, _y in zip(x, y):
    ind = np.where(unique == _y)
    if(counts[ind] > 1000):
        del_inds.append(cnt)
        counts[ind] -= 1
    cnt += 1

x = np.delete(x, del_inds, axis=0)
y = np.delete(y, del_inds)

unique, counts = np.unique(y, return_counts=True)
balance = dict(zip(unique, counts))

plt.bar(range(0, len(counts)), counts, width=1.0)
plt.show()
```

### Save etlcdb images to disk

To use the data later with keras we save them to disk in an appropriate folder structure. <br/>
The ETL_data_reader package provides a handy function for this.
``` reader.save_to_file(x, y, r"Z:\data_sets\dakanji_single_kanji_cnn", name=0) ``` ## Create samples for missing JIS-2 Kanji Because not all JIS 2 characters are in the etlcdb we need to get samples for them. <br/> First find the characters which are in JIS2 but not in the data set. ``` chars_to_gen = {} # add samples for the already existing classes for u, c in zip(unique, counts): if(c < 2000): chars_to_gen[u] = 2000 - c with open("jis2_characters.txt", encoding="utf8", mode="r") as f: all_jis2_chars = f.read().replace(" ", "").replace("\n", "") all_jis2_chars = list(all_jis2_chars) missing_jis2_chars = [c for c in all_jis2_chars if c not in unique] # add samples for missing jis2 characters for c in missing_jis2_chars: chars_to_gen[c] = 2000 ``` Copy samples from DaJapanaeseDataGenerator dataset ``` da_data_dir = r"Z:\data_sets\da_japanese_data_generator" with open(os.path.join(da_data_dir, "encoding.txt"), encoding="utf8", mode="r") as f: d = eval(f.read()) da_data_encoding = {v : k for k, v in d.items()} single_kanji_data_dir = r"Z:\data_sets\dakanji_single_kanji_cnn" with open(os.path.join(single_kanji_data_dir, "encoding.txt"), encoding="utf8", mode="r") as f: single_kanji_data_encoding = eval(f.read()) single_kanji_data_encoding["キ"] chars_to_gen["あ"] for char, cnt in chars_to_gen.items(): # if(char not in single_kanji_data_encoding): #print(char) os.mkdir(os.path.join(single_kanji_data_dir, str(len(single_kanji_data_encoding)))) single_kanji_data_encoding[char] = [str(len(single_kanji_data_encoding)), 0] # for i in range(cnt): _from = os.path.join(da_data_dir, str(da_data_encoding[char]), str(i) + ".png") _to = os.path.join(single_kanji_data_dir, single_kanji_data_encoding[char][0], str(single_kanji_data_encoding[char][1]) + ".png") #print(_from, _to) copy(_from, _to) single_kanji_data_encoding[char][1] += 1 with open(os.path.join(single_kanji_data_dir, "encoding.txt"), encoding="utf8", mode="w+") as f: f.write(str(single_kanji_data_encoding)) ```
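As an aside, the `struct`-based unpacking mentioned earlier works along these lines. This is a toy record layout purely for illustration; the real ETL record layouts are documented on the ETLCDB website:

```python
import struct

# Toy fixed-layout binary record: a 2-byte big-endian code point followed
# by four raw "pixel" bytes (NOT the real ETL record layout).
record = struct.pack(">H4s", 0x2422, bytes([1, 2, 3, 4]))
code, pixels = struct.unpack(">H4s", record)
print(hex(code), list(pixels))  # 0x2422 [1, 2, 3, 4]
```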
#### Jupyter notebooks This is a [Jupyter](http://jupyter.org/) notebook using Python. You can install Jupyter locally to edit and interact with this notebook. # Finite difference methods for transient PDE ## Method of Lines Our method for solving time-dependent problems will be to discretize in space first, resulting in a system of ordinary differential equations $$ M \dot u = f(u) $$ where the "mass matrix" $M$ might be diagonal and $f(u)$ represents a spatial discretization that has the form $f(u) = A u$ for linear problems. ``` %matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn') def ode_euler(f, u0, tfinal=1, h=0.1): u = np.array(u0) t = 0 thist = [t] uhist = [u0] while t < tfinal: h = min(h, tfinal - t) u += h * f(t, u) t += h thist.append(t) uhist.append(u.copy()) return np.array(thist), np.array(uhist) tests = [] class fcos: def __init__(self, k=5): self.k = k def __repr__(self): return 'fcos(k={:d})'.format(self.k) def f(self, t, u): return -self.k * (u - np.cos(t)) def u(self, t, u0): k2p1 = self.k**2+1 return (u0 - self.k**2/k2p1) * np.exp(-self.k*t) + self.k*(np.sin(t) + self.k*np.cos(t))/k2p1 tests.append(fcos(k=2)) tests.append(fcos(k=10)) u0 = np.array([.2]) plt.figure() for test in tests: thist, uhist = ode_euler(test.f, u0, h=.1, tfinal=6) plt.plot(thist, uhist, '.', label=repr(test)+' Forward Euler') plt.plot(thist, test.u(thist, u0), label=repr(test)+' exact') plt.plot(thist, np.cos(thist), label='cos') plt.legend(loc='upper right'); ``` ### Midpoint Method What if instead of evaluating the function at the end of the time step, we evaluated in the middle of the time step using the average of the endpoint values. 
$$ \tilde u(h) = u(0) + h f\left(\frac h 2, \frac{\tilde u(h) + u(0)}{2} \right) $$

For the linear problem, this reduces to

$$ \Big(I - \frac h 2 A \Big) u(h) = \Big(I + \frac h 2 A\Big) u(0) .$$

```
def ode_midpoint_linear(A, u0, tfinal=1, h=0.1):
    u = u0.copy()
    t = 0
    thist = [t]
    uhist = [u0]
    I = np.eye(len(u))
    while t < tfinal:
        h = min(h, tfinal - t)
        u = np.linalg.solve(I - .5*h*A, (I + .5*h*A) @ u)
        t += h
        thist.append(t)
        uhist.append(u.copy())
    return np.array(thist), np.array(uhist)

# Assumes `test` is a linear test problem with system matrix `test.A` and
# exact solution `test.u`, defined in an earlier cell (`fcos` has no `A`).
thist, uhist = ode_midpoint_linear(test.A, u0, h=.2, tfinal=15)
plt.figure()
plt.plot(thist, uhist, '*')
plt.plot(thist, test.u(thist, u0))
plt.title('Midpoint');
```

## $\theta$ method

The above methods are all special cases of the $\theta$ method

$$ \tilde u(h) = u(0) + h f\left(\theta h, \theta\tilde u(h) + (1-\theta)u(0) \right) $$

which, for linear problems, is solved as

$$ (I - h \theta A) u(h) = \Big(I + h (1-\theta) A \Big) u(0) . $$

$\theta=0$ is explicit Euler, $\theta=1$ is implicit Euler, and $\theta=1/2$ is the midpoint rule. The stability function is

$$ R(z) = \frac{1 + (1-\theta)z}{1 - \theta z}. $$

```
# Assumes plot_stability and the complex grid xx, yy, zz were defined in an
# earlier cell (compare zmeshgrid and plot_rkstability further below).
for theta in [.2, .5, .8]:
    plot_stability(xx, yy, (1 + (1-theta)*zz)/(1 - theta*zz), '$\\theta={:3.1f}$'.format(theta))
```

We will generalize slightly to allow solution of a linear differential algebraic equation

$$ M \dot u = A u + f(t,x) $$

where $M$ is (for now) a diagonal matrix that has zero rows at boundary conditions. With this generalization, the $\theta$ method becomes

$$ (M - h \theta A) u(h) = \Big(M + h (1-\theta) A \Big) u(0) + h f(h\theta, x) . $$

We will assume that $M$ is nonsingular if $\theta=0$.
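Before moving to the DAE form, the claimed orders of accuracy can be checked on a scalar linear problem, where the $\theta$ update reduces to a one-line recurrence (a self-contained sketch):

```python
import numpy as np

def theta_solve(lam, u0, tfinal, n, theta):
    """Theta method for u' = lam*u with n uniform steps."""
    h = tfinal / n
    u = u0
    for _ in range(n):
        u = (1 + (1 - theta)*h*lam) / (1 - theta*h*lam) * u
    return u

exact = np.exp(-1.0)
for theta, name in [(0.0, 'explicit Euler'), (0.5, 'midpoint')]:
    e1 = abs(theta_solve(-1.0, 1.0, 1.0, 50, theta) - exact)
    e2 = abs(theta_solve(-1.0, 1.0, 1.0, 100, theta) - exact)
    # doubling the number of steps halves the Euler error (order 1)
    # and quarters the midpoint error (order 2)
    print(name, e1 / e2)
```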
```
def dae_theta_linear(M, A, u0, rhsfunc, bcs=[], tfinal=1, h=0.1, theta=.5):
    u = u0.copy()
    t = 0
    hist = [(t,u0)]
    while t < tfinal:
        if tfinal - t < 1.01*h:
            h = tfinal - t
            tnext = tfinal
        else:
            tnext = t + h
        h = min(h, tfinal - t)
        rhs = (M + (1-theta)*h*A) @ u + h*rhsfunc(t+theta*h)
        for i, f in bcs:
            rhs[i] = theta*h*f(t+theta*h, x[i])  # note: x is the global grid from the spatial discretization
        u = np.linalg.solve(M - theta*h*A, rhs)
        t = tnext
        hist.append((t, u.copy()))
    return hist
```

### Stiff decay to cosine

```
test = fcos(k=5000)
u0 = np.array([.2])
hist = dae_theta_linear(np.eye(1), -test.k, u0,
                        lambda t: test.k*np.cos(t),
                        h=.1, tfinal=6, theta=.5)
hist = np.array(hist)
plt.plot(hist[:,0], hist[:,1], 'o')
tt = np.linspace(0, 6, 200)
plt.plot(tt, test.u(tt,u0));
```

#### Observations

* $\theta=1$ is robust
* $\theta=1/2$ gets correct long-term behavior, but has oscillations at early times
* $\theta < 1/2$ allows oscillations to grow

### Definition: $A$-stability

A method is $A$-stable if the stability region

$$ \{ z : |R(z)| \le 1 \} $$

contains the entire left half plane

$$ \Re[z] \le 0 .$$

This means that the method can take arbitrarily large time steps without becoming unstable (diverging) for any problem that is indeed physically stable.

### Definition: $L$-stability

A time integrator with stability function $R(z)$ is $L$-stable if

$$ \lim_{z\to\infty} R(z) = 0 .$$

For the $\theta$ method, we have

$$ \lim_{z\to \infty} \frac{1 + (1-\theta)z}{1 - \theta z} = -\frac{1-\theta}{\theta} . $$

Evidently only $\theta=1$ is $L$-stable.

## Transient PDE

### Diffusion (heat equation)

Let's first consider diffusion of a quantity $u(t,x)$

$$ \dot u(t,x) - u''(t,x) = f(t,x) \qquad t > 0, -1 < x < 1 \\ u(0,x) = g(x) \qquad u(t,-1) = h_L(t) \qquad u'(t,1) = h_R(t) .$$

Let's use a Chebyshev discretization in space.
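First, a quick numerical check of the stability-function limits discussed above (a sketch):

```python
def R_theta(z, theta):
    """Stability function of the theta method."""
    return (1 + (1 - theta)*z) / (1 - theta*z)

z = -1e6  # a very stiff mode, far into the left half plane
print(abs(R_theta(z, 1.0)))  # tiny: backward Euler is L-stable
print(abs(R_theta(z, 0.5)))  # just under 1: midpoint is A-stable but not L-stable
```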
The left and right boundary conditions are specified as a pair (deriv, func) where * deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint) * deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)""" x = cosspace(-1, 1, n+1) # n+1 points is n "elements" T = chebeval(x) L = -T[2] bcs = [] for i,deriv,func in [(0, *left), (-1, *right)]: L[i] = T[deriv][i] bcs.append((i, func)) M = np.eye(n+1) M[[0,-1]] = 0 return x, M, -L @ np.linalg.inv(T[0]), bcs x, M, A, bcs = diffusion_cheb(80, (0, lambda t,x: 0*x), (0, lambda t,x: 0*x+.5)) hist = dae_theta_linear(M, A, np.exp(-(x*8)**2), lambda t: 0*x, bcs, h=.005, theta=.5, tfinal=0.3) for t, u in hist[::10]: plt.plot(x, u, label='$t={:4.2f}$'.format(t)) plt.legend(loc='lower left'); ``` #### Observations * Sharp central spike is diffused very quickly. * Artifacts with $\theta < 1$. #### Manufactured solution ``` class exact_tanh: def __init__(self, k=1, x0=0): self.k = k self.x0 = x0 def u(self, t, x): return np.tanh(self.k*(x - t - self.x0)) def u_x(self, t, x): return self.k * np.cosh(self.k*(x - t - self.x0))**(-2) def u_t(self, t, x): return -self.u_x(t, x) def u_xx(self, t, x): return -2 * self.k**2 * np.tanh(self.k*(x - t - self.x0)) * np.cosh(self.k*(x - t - self.x0))**(-2) def heatrhs(self, t, x): return self.u_t(t,x) - self.u_xx(t,x) ex = exact_tanh(2, -.3) x, M, A, bcs = diffusion_cheb(20, (0, ex.u), (1, ex.u_x)) hist = dae_theta_linear(M, A, ex.u(0,x), lambda t: ex.heatrhs(t,x), bcs) for t, u in hist: plt.plot(x, u, label='$t={:3.1f}$'.format(t)) plt.legend(loc='lower right'); def mms_error(n): x, M, A, bcs = diffusion_cheb(n, (0, ex.u), (1, ex.u_x)) hist = dae_theta_linear(M, A, ex.u(0,x), lambda t: ex.heatrhs(t,x), bcs, h=1/n**2, theta=1) return np.linalg.norm(hist[-1][1] - ex.u(hist[-1][0], x), np.inf) ns = np.logspace(.8, 1.6, 10).astype(int) errors = [mms_error(n) for n in ns] plt.loglog(ns, errors, 'o', label='numerical') for p in range(1,4): plt.loglog(ns, 1/ns**(p), label='$n^{-%d}$'%p) plt.xlabel('n') 
plt.ylabel('error') plt.legend(loc='lower left'); ``` #### Observations * Errors are limited by time (not spatial) discretization error. This is a result of using the (spectrally accurate) Chebyshev method in space. * $\theta=1$ is more accurate than $\theta = 1/2$, despite the latter being second order accurate in time. This is analogous to the stiff relaxation to cosine test. #### Largest eigenvalues ``` def maxeig(n): x, M, A, bcs = diffusion_cheb(n, (0, ex.u), (1, ex.u_x)) lam = np.linalg.eigvals(-A) return max(lam) plt.loglog(ns, [maxeig(n) for n in ns], 'o', label='cheb') for p in range(1,5): plt.loglog(ns, ns**(p), label='$n^{%d}$'%p) plt.xlabel('n') plt.ylabel('$\max \sigma(A)$') plt.legend(loc='lower left'); ``` ### Finite difference method ``` def maxeig_fd(n): dx = 2/n A = 1/dx**2 * (2 * np.eye(n+1) - np.eye(n+1, k=1) - np.eye(n+1, k=-1)) return max(np.linalg.eigvals(A)) plt.loglog(2/ns, [maxeig_fd(n) for n in ns], 'o', label='fd') for p in range(1,4): plt.loglog(2/ns, 4*(2/ns)**(-p), label='$4 h^{-%d}$'%p) plt.xlabel('h') plt.ylabel('$\max \sigma(A)$') plt.legend(loc='upper right'); ``` #### Question: max explicit Euler time step Express the maximum stable time step $\Delta t$ using explicit Euler in terms of the grid spacing $\Delta x$. ## Hyperbolic (wave) equations The simplest hyperbolic equation is linear advection $$ \dot u(t,x) + c u'(t,x) = f(t,x) $$ where $c$ is the wave speed and $f$ is a source term. In the homogenous ($f = 0$) case, the solution is given by characteristics $$ u(t,x) = u(0, x - ct) . $$ This PDE also requires boundary conditions, but as a first-order equation, we can only enforce boundary conditions at one boundary. It turns out that this needs to be the _inflow_ boundary, so if $c > 0$, that is the left boundary condition $u(t, -1) = g(t)$. We can solve this system using Chebyshev methods. 
``` def advection_cheb(n, c, left=(None,None), right=(None,None)): """Discretize the advection PDE on (-1,1) using n elements with rhsfunc(x) forcing. The left boundary conditions are specified as a pair (deriv, func) where * deriv=0 for Dirichlet u(x_endpoint) = func(x_endpoint) * deriv=1 for Neumann u'(x_endpoint) = func(x_endpoint)""" x = cosspace(-1, 1, n+1) # n+1 points is n "elements" T = chebeval(x) A = -c*T[1] M = np.eye(n+1) bcs = [] for i,deriv,func in [(0, *left), (-1, *right)]: if deriv is None: continue A[i] = T[deriv][i] M[i] = 0 bcs.append((i, func)) return x, M, A @ np.linalg.inv(T[0]), bcs x, M, A, bcs = advection_cheb(40, 1, left=(0, lambda t,x: 0*x)) hist = dae_theta_linear(M, A, np.exp(-(x*4)**2), lambda t: 0*x, bcs, h=.001, theta=1) for t, u in hist[::len(hist)//10]: plt.plot(x, u, label='$t={:3.1f}$'.format(t)) plt.legend(loc='lower left') np.linalg.cond(A) lam = np.linalg.eigvals(A[:,:]) print(A[0,:5]) plt.plot(lam.real, lam.imag, '.'); ``` #### Observations * $\theta > 1/2$ causes decay in amplitude * $\theta < 1/2$ causes growth -- unstable * An undershoot develops behind the traveling wave and increasing resolution doesn't make it go away * We need an *upwind* boundary condition, otherwise the system is unstable * Only Dirichlet inflow conditions are appropriate -- Neumann conditions produce a singular matrix ### Finite difference ``` def advection_fd(n, c, stencil=2, bias=0, left=None, right=None): x = np.linspace(-1, 1, n+1) A = np.zeros((n+1,n+1)) for i in range(n+1): sleft = max(0, i - stencil//2 + bias) sleft = min(sleft, n+1 - stencil) A[i,sleft:sleft+stencil] = -c*fdstencil(x[i], x[sleft:sleft+stencil])[1] M = np.eye(n+1) bcs = [] for i, func in [(0, left), (-1, right)]: if func is None: continue A[i] = 0 A[i,i] = 1 M[i] = 0 bcs.append((i, func)) return x, M, A, bcs x, M, A, bcs = advection_fd(40, c=1, stencil=3, bias=0, left=lambda t,x: 0*x) hist = dae_theta_linear(M, A, np.exp(-(x*4)**2), lambda t: 0*x, bcs, h=2/(len(x)-1), 
theta=.5) for t, u in hist[::len(hist)//10]: plt.plot(x, u, label='$t={:3.1f}$'.format(t)) plt.legend(loc='lower left') print('stencil', A[3,:7]) print('cond', np.linalg.cond(A)) lam = np.linalg.eigvals(A[1:,1:]) plt.plot(lam.real, lam.imag, '.') #plt.spy(A[:6,:6]); ``` #### Observations * Centered methods have an undershoot behind the traveling wave * Upwind biasing of the stencil tends to reduce artifacts, but only `stencil=2` removes undershoots * Downwind biasing is usually unstable * With upwinded `stencil=2`, we can use an explicit integrator, but the time step must satisfy $$ c \Delta t < \Delta x $$ * The upwind methods are in general dissipative -- amplitude is lost even with very accurate time integration * The higher order upwind methods always produce artifacts for sharp transitions ### Phase analysis We can apply the advection differencing stencils to the test functions $$ \phi(x, \theta) = e^{i \theta x}$$ and compare to the exact derivative $$ \frac{d \phi}{d x} = i \theta \phi(x, \theta) . $$ ``` x = np.arange(-1, 1+1) s1 = fdstencil(0, x)[1] print(s1) theta = np.linspace(0, np.pi) phi = np.exp(1j*np.outer(x, theta)) plt.plot(theta, np.sin(theta)) plt.plot(theta, np.abs(s1 @ phi), '.') plt.plot(theta, theta); ``` # Runge-Kutta methods The methods we have considered thus far can all be expressed as Runge-Kutta methods, which are expressed in terms of $s$ "stage" equations (possibly coupled) and a completion formula. For the ODE $$ \dot u = f(t, u) $$ the Runge-Kutta method is $$\begin{split} Y_i = u(t) + h \sum_j a_{ij} f(t+c_j h, Y_j) \\ u(t+h) = u(t) + h \sum_j b_j f(t+c_j h, Y_j) \end{split}$$ where $c$ is a vector of *abscissa*, $A$ is a table of coefficients, and $b$ is a vector of completion weights. 
These coefficients are typically expressed in a Butcher Table $$ \left[ \begin{array}{c|c} c & A \\ \hline & b^T \end{array} \right] = \left[ \begin{array}{c|cc} c_0 & a_{00} & a_{01} \\ c_1 & a_{10} & a_{11} \\ \hline & b_0 & b_1 \end{array} \right] . $$ We will see that, for consistency, the abscissa $c$ are always the row sums of $A$ and that $\sum_i b_i = 1$. If the matrix $A$ is strictly lower triangular, then the method is **explicit** (does not require solving equations). We have seen forward Euler $$ \left[ \begin{array}{c|cc} 0 & 0 \\ \hline & 1 \end{array} \right] ,$$ backward Euler $$ \left[ \begin{array}{c|c} 1 & 1 \\ \hline & 1 \end{array} \right] ,$$ and Midpoint $$ \left[ \begin{array}{c|c} \frac 1 2 & \frac 1 2 \\ \hline & 1 \end{array} \right]. $$ Indeed, the $\theta$ method is $$ \left[ \begin{array}{c|c} \theta & \theta \\ \hline & 1 \end{array} \right] $$ and an alternative "endpoint" variant of $\theta$ (a generalization of the trapezoid rule) is $$ \left[ \begin{array}{c|cc} 0 & 0 & 0 \\ 1 & 1-\theta & \theta \\ \hline & 1-\theta & \theta \end{array} \right]. $$ ## Stability To develop an algebraic expression for stability in terms of the Butcher Table, we consider the test equation $$ \dot u = \lambda u $$ and apply the RK method to yield $$ \begin{split} Y_i = u(0) + h \sum_j a_{ij} \lambda Y_j \\ u(h) = u(0) + h \sum_j b_j \lambda Y_j \end{split} $$ or, in matrix form, $$ \begin{split} Y = \mathbb 1 u(0) + h \lambda A Y \\ u(h) = u(0) + h \lambda b^T Y \end{split} $$ where $\mathbb 1$ is a column vector of length $s$ consisting of all ones. This reduces to $$ u(h) = \underbrace{\Big( 1 + h\lambda b^T (I - h \lambda A)^{-1} \mathbb 1 \Big)}_{R(h\lambda)} u(0) . 
$$

```
def Rstability(A, b, z):
    s = len(b)
    def R(z):
        return 1 + z*b.dot(np.linalg.solve(np.eye(s) - z*A, np.ones(s)))
    f = np.vectorize(R)
    return f(z)

def rk_butcher_theta(theta):
    A = np.array([[theta]])
    b = np.array([1])
    return A, b

def zmeshgrid(xlen=5, ylen=5):
    xx = np.linspace(-xlen, xlen, 100)
    yy = np.linspace(-ylen, ylen, 100)
    x, y = np.meshgrid(xx, yy)
    z = x + 1j*y
    return x, y, z

def plot_rkstability(A, b, label=''):
    from matplotlib import cm

    x, y, z = zmeshgrid()
    data = np.abs(Rstability(A, b, z))
    cs = plt.contourf(x, y, data, np.arange(0, 2, 0.1), cmap=cm.coolwarm)
    cbar = plt.colorbar(cs, ticks=np.linspace(0, 2, 5))
    plt.axhline(y=0, xmin=-20.0, xmax=20.0, linewidth=1, linestyle='--', color='grey')
    plt.axvline(x=0, ymin=-20.0, ymax=20.0, linewidth=1, linestyle='--', color='grey')
    cs = plt.contour(x, y, data, np.arange(0, 2, 0.5), colors='k')
    plt.clabel(cs, fontsize=6)
    for c in cs.collections:
        plt.setp(c, linewidth=1)
    plt.title('Stability region' + (': ' + label if label else ''))

A, b = rk_butcher_theta(.5)
plot_rkstability(A, b, label='$\\theta$')

def rk_butcher_theta_endpoint(theta):
    A = np.array([[0, 0], [1-theta, theta]])
    b = np.array([1-theta, theta])
    return A, b

A, b = rk_butcher_theta_endpoint(.5)
plot_rkstability(A, b, label='$\\theta$ endpoint')
```

Evidently the endpoint variant of $\theta$ has the same stability function as the original (midpoint) variant that we've been using. These methods are equivalent for linear problems, but different for nonlinear problems.

## Higher order explicit methods: Heun's and RK4

Explicit Euler steps can be combined to create more accurate methods. One such example is Heun's method,

$$ \left[ \begin{array}{c|cc} 0 & 0 & 0 \\ 1 & 1 & 0 \\ \hline & \frac 1 2 & \frac 1 2 \end{array} \right].
$$

Another explicit method is the famous four-stage RK4,

$$ \left[ \begin{array}{c|cccc} 0 & 0 & 0 & 0 & 0 \\ \frac 1 2 & \frac 1 2 & 0 & 0 & 0 \\ \frac 1 2 & 0 & \frac 1 2 & 0 & 0 \\ 1 & 0 & 0 & 1 & 0 \\ \hline & \frac 1 6 & \frac 1 3 & \frac 1 3 & \frac 1 6 \end{array} \right] . $$

```
def rk_butcher_heun():
    A = np.array([[0, 0],[1,0]])
    b = np.array([.5, .5])
    return A, b

A, b = rk_butcher_heun()
plot_rkstability(A, b, label='Heun')

def rk_butcher_4():
    A = np.array([[0,0,0,0],[.5,0,0,0],[0,.5,0,0],[0,0,1,0]])
    b = np.array([1/6, 1/3, 1/3, 1/6])
    return A, b

A, b = rk_butcher_4()
plot_rkstability(A, b, label='RK4')
```

Finally a method with lots of stability along the imaginary axis. Let's try it on some test problems.

```
def ode_rkexplicit(f, u0, butcher=None, tfinal=1, h=.1):
    if butcher is None:
        A, b = rk_butcher_4()
    else:
        A, b = butcher
    c = np.sum(A, axis=1)
    s = len(c)
    u = u0.copy()
    t = 0
    hist = [(t,u0)]
    while t < tfinal:
        if tfinal - t < 1.01*h:
            h = tfinal - t
            tnext = tfinal
        else:
            tnext = t + h
        h = min(h, tfinal - t)
        fY = np.zeros((len(u0), s))
        for i in range(s):
            Yi = u.copy()
            for j in range(i):
                Yi += h * A[i,j] * fY[:,j]
            fY[:,i] = f(t + h*c[i], Yi)
        u += h * fY @ b
        t = tnext
        hist.append((t, u.copy()))
    return hist

# `linear` is assumed to be a test class from an earlier cell, wrapping
# u' = A u together with its exact solution u(t) (analogous to `fcos` above).
test = linear(np.array([[0, 1],[-1, 0]]))
u0 = np.array([.5, 0])
hist = ode_rkexplicit(test.f, u0, rk_butcher_4(), tfinal=50, h=.8)
times = [t for t,u in hist]
plt.plot(times, [u for t,u in hist], '.')
plt.plot(times, test.u(times, u0));
```

#### Observations

* Solutions look pretty good and we didn't need a solve.
* We needed to evaluate the right hand side $s$ times per step ``` def mms_error(h, rk_butcher): hist = ode_rkexplicit(test.f, u0, rk_butcher(), tfinal=20, h=h) times = [t for t,u in hist] u = np.array([u for t,u in hist]) return np.linalg.norm(u - test.u(times, u0), np.inf) hs = np.logspace(-1.5, .5, 20) error_heun = [mms_error(h, rk_butcher_heun) for h in hs] error_rk4 = [mms_error(h, rk_butcher_4) for h in hs] plt.loglog(hs, error_heun, 'o', label='Heun') plt.loglog(hs, error_rk4, 's', label='RK4') for p in [2,3,4]: plt.loglog(hs, hs**p, label='$h^%d$'%p) plt.title('Accuracy') plt.legend(loc='lower right') plt.ylabel('Error') plt.xlabel('$h$'); ``` ## Work-precision diagrams for comparing methods Since these methods do not cost the same per step, it is more enlightening to compare them using some measure of cost. For large systems of ODE, such as arise by discretizing a PDE, the cost of time integration is dominated by evaluating the right hand side (discrete spatial operator) on each stage. Measuring CPU time is a more holistic measure of cost, but the results depend on the implementation, computer, and possible operating system interference/variability. Counting right hand side function evaluations is a convenient, reproducible measure of cost. 
```
plt.loglog(20*2/hs, error_heun, 'o', label='Heun')
plt.loglog(20*4/hs, error_rk4, 's', label='RK4')
plt.title('Error vs cost')
plt.ylabel('Error')
plt.xlabel('# function evaluations')
plt.legend(loc='upper right');

test = linear(np.array([[0, 1, 0],[-1, 0, 0],[10, 0, -10]]))
print(np.linalg.eigvals(test.A))
u0 = np.array([.5, 0, 0])
hist = ode_rkexplicit(test.f, u0, rk_butcher_4(), tfinal=5, h=.1)
times = [t for t,u in hist]
plt.plot(times, [u for t,u in hist], '.')
plt.plot(times, test.u(times, u0));

hs = np.logspace(-2, -.7, 20)
error_heun = [mms_error(h, rk_butcher_heun) for h in hs]
error_rk4 = [mms_error(h, rk_butcher_4) for h in hs]
plt.loglog(20*2/hs, error_heun, 'o', label='Heun')
plt.loglog(20*4/hs, error_rk4, 's', label='RK4')
plt.title('Error vs cost')
plt.ylabel('Error')
plt.xlabel('# function evaluations')
plt.legend(loc='upper right');
```

Evidently Heun becomes resolved at lower cost than RK4.

## Refinement in space and time

When solving a transient PDE, we should attempt to balance spatial discretization error with temporal discretization error. If we wish to use the same type of method across a range of accuracies, we need to

1. choose spatial and temporal discretizations with the same order of accuracy,
2. choose grid/step sizes so the leading error terms are of comparable size, and
3. ensure that both spatial and temporal discretizations are stable throughout the refinement range.

Since temporal discretization errors are proportional to the duration, simulations that run for a long time will need to use more accurate time discretizations.

# Runge-Kutta order conditions

We consider the autonomous differential equation

$$ \dot u = f(u) . $$

Higher derivatives of the exact solution can be computed using the chain rule, e.g.,

\begin{align*} \ddot u(t) &= f'(u) \dot u = f'(u) f(u) \\ \dddot u(t) &= f''(u) f(u) f(u) + f'(u) f'(u) f(u) . \\ \end{align*}

Note that if $f(u)$ is linear, $f''(u) = 0$.
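These chain-rule identities can be spot-checked numerically. For $f(u) = u^2$ with $u(0) = 1$, the exact solution is $u(t) = 1/(1-t)$, and the formula above predicts $\ddot u = f'(u) f(u) = 2u^3$ (a sketch):

```python
u = lambda t: 1.0 / (1.0 - t)  # exact solution of u' = u**2, u(0) = 1

t0, h = 0.3, 1e-4
# second derivative of the exact solution by a central difference
udd_fd = (u(t0 + h) - 2*u(t0) + u(t0 - h)) / h**2
# chain-rule prediction: u'' = f'(u) f(u) = 2u * u**2
udd_chain = 2 * u(t0)**3
print(udd_fd, udd_chain)  # these agree to finite-difference accuracy
```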
Meanwhile, the numerical solution is a function of the time step $h$,

$$\begin{split} Y_i(h) &= u(0) + h \sum_j a_{ij} f(Y_j) \\ U(h) &= u(0) + h \sum_j b_j f(Y_j). \end{split}$$

We will take the limit $h\to 0$ and equate derivatives of the numerical solution. First we differentiate the stage equations,

$$\begin{split} Y_i(0) &= u(0) \\ \dot Y_i(0) &= \sum_j a_{ij} f(Y_j) \\ \ddot Y_i(0) &= 2 \sum_j a_{ij} \dot f(Y_j) \\ &= 2 \sum_j a_{ij} f'(Y_j) \dot Y_j \\ &= 2\sum_{j,k} a_{ij} a_{jk} f'(Y_j) f(Y_k) \\ \dddot Y_i(0) &= 3 \sum_j a_{ij} \ddot f (Y_j) \\ &= 3 \sum_j a_{ij} \Big( f''(Y_j) \dot Y_j \dot Y_j + f'(Y_j) \ddot Y_j \Big) \\ &= 3 \sum_{j,k,\ell} a_{ij} a_{jk} \Big( a_{j\ell} f''(Y_j) f(Y_k) f(Y_\ell) + 2 a_{k\ell} f'(Y_j) f'(Y_k) f(Y_\ell) \Big) \end{split}$$

where we have used Leibniz's formula for the $m$th derivative,

$$ (h \phi(h))^{(m)}|_{h=0} = m \phi^{(m-1)}(0) .$$

Similar formulas apply for $\dot U(0)$, $\ddot U(0)$, and $\dddot U(0)$, with $b_j$ in place of $a_{ij}$. Equating terms $\dot u(0) = \dot U(0)$ yields

$$ \sum_j b_j = 1, $$

equating $\ddot u(0) = \ddot U(0)$ yields

$$ 2 \sum_{j,k} b_j a_{jk} = 1 , $$

and equating $\dddot u(0) = \dddot U(0)$ yields the two equations

$$\begin{split} 3\sum_{j,k,\ell} b_j a_{jk} a_{j\ell} &= 1 \\ 6 \sum_{j,k,\ell} b_j a_{jk} a_{k\ell} &= 1 . \end{split}$$

#### Observations

* These are systems of nonlinear equations for the coefficients $a_{ij}$ and $b_j$. There is no guarantee that they have solutions.
* The number of equations grows rapidly as the order increases.
| | $u^{(1)}$ | $u^{(2)}$ | $u^{(3)}$ | $u^{(4)}$ | $u^{(5)}$ | $u^{(6)}$ | $u^{(7)}$ | $u^{(8)}$ | $u^{(9)}$ | $u^{(10)}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| # terms | 1 | 1 | 2 | 4 | 9 | 20 | 48 | 115 | 286 | 719 |
| cumulative | 1 | 2 | 4 | 8 | 17 | 37 | 85 | 200 | 486 | 1205 |

* Usually the number of order conditions does not exactly match the number of free parameters, meaning that the remaining parameters can be optimized (usually numerically) for different purposes, such as to minimize the leading error terms or to maximize stability in certain regions of the complex plane. Finding globally optimal solutions can be extremely demanding.
* The arithmetic managing the derivatives gets messy, but can be managed using rooted trees.

![Trees and elementary differentials up to order 5. From Hairer, Nørsett, and Wanner.](figures/rktrees.png)

#### Theorem (from Hairer, Nørsett, and Wanner)

A Runge-Kutta method is of order $p$ if and only if
$$ \gamma(t) \sum_{j} b_j \Phi_j(t) = 1 $$
for all trees $t$ of order $\le p$.

For a linear autonomous equation
$$ \dot u = A u $$
we only need one additional order condition per order of accuracy because $f'' = 0$. These conditions can also be derived by equating derivatives of the stability function $R(z)$ with the exponential $e^z$.

For a linear non-autonomous equation
$$ \dot u = A(t) u + g(t) $$
or more generally, an autonomous system with quadratic right hand side,
$$ \dot u = B (u \otimes u) + A u + C $$
where $B$ is a rank 3 tensor, we have $f''' = 0$, thus limiting the number of order conditions.

# Embedded error estimation and adaptive control

It is often possible to design Runge-Kutta methods with multiple completion orders, say of order $p$ and $p-1$:
$$\left[ \begin{array}{c|c} c & A \\ \hline & b^T \\ & \tilde b^T \end{array} \right] . $$
The classical RK4 does not come with an embedded method, but most subsequent RK methods do.
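Returning to the linear-case remark above: the match between $R(z)$ and $e^z$ can be checked directly. A sketch for classical RK4, using the standard rational form $R(z) = 1 + z b^T (I - zA)^{-1} \mathbf 1$ (the coefficients and tolerances are my choices):

```python
import numpy as np

A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 1/2, 0, 0],
              [0, 0, 1, 0]])
b = np.array([1/6, 1/3, 1/3, 1/6])

def R(z):
    """Stability function R(z) = 1 + z b^T (I - z A)^{-1} 1."""
    n = len(b)
    return 1 + z * b @ np.linalg.solve(np.eye(n) - z * A, np.ones(n))

# For a 4th-order method, |R(z) - exp(z)| shrinks like z**5:
# halving z should reduce the mismatch by roughly 2**5 = 32.
e1 = abs(R(0.1) - np.exp(0.1))
e2 = abs(R(0.05) - np.exp(0.05))
print(e1, e1 / e2)  # ratio near 32
```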
The [Bogacki-Shampine method](https://en.wikipedia.org/wiki/Bogacki%E2%80%93Shampine_method) is given by

```
def rk_butcher_bs3():
    A = np.array([[0, 0, 0, 0],
                  [1/2, 0, 0, 0],
                  [0, 3/4, 0, 0],
                  [2/9, 1/3, 4/9, 0]])
    b = np.array([[2/9, 1/3, 4/9, 0],
                  [7/24, 1/4, 1/3, 1/8]])
    return A, b

A, b = rk_butcher_bs3()
plot_rkstability(A, b[0], label='Bogacki-Shampine 3')
plt.figure()
plot_rkstability(A, b[1], label='Bogacki-Shampine 2')
```

While this method has four stages, it has the "first same as last" (FSAL) property: the fourth stage exactly matches the third-order completion formula, and thus coincides with the first stage of the next time step. This means it can be implemented using only three new function evaluations per time step.

Higher order methods with embedded error estimation include

* [Fehlberg](https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method), a 6-stage, 5th order method for which the 4th order embedded formula has been optimized for accuracy.
* [Dormand-Prince](https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method), a 7-stage, 5th order method with the FSAL property, with the 5th order completion formula optimized for accuracy.
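The FSAL property claimed above is easy to verify from the table itself: the last row of $A$ coincides with the third-order weights, and the final abscissa is 1, so the last stage is $f$ evaluated at the completed step.

```python
import numpy as np

# Bogacki-Shampine coefficients, as in rk_butcher_bs3() above
A = np.array([[0, 0, 0, 0],
              [1/2, 0, 0, 0],
              [0, 3/4, 0, 0],
              [2/9, 1/3, 4/9, 0]])
b = np.array([[2/9, 1/3, 4/9, 0],      # 3rd-order completion formula
              [7/24, 1/4, 1/3, 1/8]])  # 2nd-order embedded formula

# FSAL: the last stage uses exactly the higher-order completion weights,
# evaluated at c = 1 (the end of the step)
assert np.allclose(A[-1], b[0])
assert np.isclose(A[-1].sum(), 1)
# Both completion formulas are consistent (weights sum to 1)
assert np.allclose(b.sum(axis=1), [1, 1])
print("Bogacki-Shampine table is FSAL and consistent")
```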
```
# We can import and clean these coefficient tables directly from Wikipedia
import pandas
from fractions import Fraction

dframe = pandas.read_html('https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method')[0]
dframe

# Clean up unicode minus sign, NaN, and convert to float
dfloat = dframe.applymap(lambda s: s.replace('−', '-') if isinstance(s, str) else s) \
    .fillna(0).applymap(Fraction).astype(float)
dfloat

# Extract the Butcher table
darray = np.array(dfloat)
A = darray[:6, 2:]
b = darray[6:, 2:]
pandas.DataFrame(A)  # Labeled tabular display

plot_rkstability(A, b[0], label='Fehlberg 5')
plt.figure()
plot_rkstability(A, b[1], label='Fehlberg 4')

dframe = pandas.read_html('https://en.wikipedia.org/wiki/Dormand%E2%80%93Prince_method')[0]
dfloat = dframe.applymap(lambda s: s.replace('−', '-') if isinstance(s, str) else s) \
    .fillna(0).applymap(Fraction).astype(float)
darray = np.array(dfloat)
A = darray[:7, 2:]
b = darray[7:, 2:]
pandas.DataFrame(A)

plot_rkstability(A, b[0], label='DP 5')
plt.figure()
plot_rkstability(A, b[1], label='DP 4')
```

## Adaptive control

Given a completion formula $b^T$ of order $p$ and $\tilde b^T$ of order $p-1$, an estimate of the local truncation error (on this step) is given by
$$ e_{\text{loc}}(h) = \lVert h (b - \tilde b)^T f(Y) \rVert \in O(h^p) . $$
Given a tolerance $\epsilon$, we would like to find $h_*$ such that
$$ e_{\text{loc}}(h_*) < \epsilon . $$
If $e_{\text{loc}}(h) = c h^p$ for some constant $c$, then
$$ c h_*^p < \epsilon $$
implies
$$ h_* < \left( \frac{\epsilon}{c} \right)^{1/p} . $$
Given the estimate with the current $h$,
$$ c = e_{\text{loc}}(h) / h^p , $$
we conclude
$$ \frac{h_*}{h} < \left( \frac{\epsilon}{e_{\text{loc}}(h)} \right)^{1/p} . $$

#### Notes

* Usually a "safety factor" less than 1 is included so the predicted error is less than the threshold to reject a time step.
* We have used an absolute tolerance above.
If the values of solution variables vary greatly in time, a relative tolerance $e_{\text{loc}}(h) / \lVert u(t) \rVert$, or a combination of absolute and relative tolerances, is desirable.
* There is a debate about whether one should optimize the rate at which error is accumulated with respect to work (the estimate above) or with respect to simulated time (as above, but with the error behaving as $O(h^{p-1})$). For problems with a range of time scales at different periods, this is usually done with respect to work.
* Global error control is an active research area.

# Homework 4: Due 2018-12-03 (Monday)

* Implement an explicit Runge-Kutta integrator that takes an initial time step $h_0$ and an error tolerance $\epsilon$.
* You can use the Bogacki-Shampine method or any other method with an embedded error estimate.
* A step should be rejected if the local truncation error exceeds the tolerance.
* Test your method on the nonlinear equation
$$ \begin{bmatrix} \dot u_0 \\ \dot u_1 \end{bmatrix} = \begin{bmatrix} u_1 \\ k (1-u_0^2) u_1 - u_0 \end{bmatrix} $$
for $k=2$, $k=5$, and $k=20$.
* Make a work-precision diagram for your adaptive method and for constant step sizes.
* State your conclusions or ideas (in a README, or Jupyter notebook) about appropriate (efficient, accurate, reliable) methods for this type of problem.

# Implicit Runge-Kutta methods

We have been considering examples of high-order explicit Runge-Kutta methods. For processes like diffusion, the time step becomes limited (under grid refinement, but usually also at practical resolution) by stability rather than accuracy. Implicit methods, especially $A$-stable and $L$-stable methods, allow much larger time steps.

### Diagonally implicit

A Runge-Kutta method is called **diagonally implicit** if the Butcher matrix $A$ is lower triangular, in which case the stages can be solved sequentially. Each stage equation has the form
$$ Y_i - h a_{ii} f(Y_i) = u(0) + h \sum_{j<i} a_{ij} f(Y_j) $$
where all terms on the right hand side are known.
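A minimal sketch of solving one such stage with Newton's method, for an assumed scalar test problem $f(u) = -10u$ (the function names and the backward-Euler special case are mine, not from the notes):

```python
import numpy as np

def f(u):  return -10.0 * u  # assumed scalar test problem
def fp(u): return -10.0      # its derivative f'(u)

def solve_stage(u0, h, aii, rhs, tol=1e-12, maxit=50):
    """Newton iteration for the stage equation Y - h*aii*f(Y) = rhs."""
    Y = u0  # initial guess
    for _ in range(maxit):
        resid = Y - h * aii * f(Y) - rhs
        if abs(resid) < tol:
            break
        Y -= resid / (1.0 - h * aii * fp(Y))  # Newton update
    return Y

# Backward Euler is the one-stage diagonally implicit method with a11 = 1:
# Y = u0 + h f(Y), i.e. Y = u0 / (1 + 10 h) for this linear problem.
u0, h = 1.0, 0.1
Y = solve_stage(u0, h, 1.0, u0)
print(Y)  # 0.5
```

For this linear problem Newton converges in one iteration; for nonlinear $f$ the same loop runs until the residual tolerance is met.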
For stiff problems, it is common to multiply through by $\alpha = (h a_{ii})^{-1}$, yielding
$$ \alpha Y_i - f(Y_i) = \alpha u(0) + \sum_{j<i} \frac{a_{ij}}{a_{ii}} f(Y_j) . $$
* It is common for solvers to reuse a linearization associated with $f(Y_i)$.
* It is common to have setup costs associated with the solution of the "shifted" problem. Methods with constant diagonals, $a_{ii} = a_{jj}$, are often desired to amortize setup costs. These methods are called **singly diagonally implicit**. There are also related methods called Rosenbrock or Rosenbrock-W that more aggressively amortize setup costs.
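The step-size update rule derived in the adaptive control section above is typically wrapped in a small controller; a sketch, where the safety factor and the clamping bounds are my choices rather than part of the notes:

```python
def next_step_size(h, err, tol, p, safety=0.9, fac_min=0.2, fac_max=5.0):
    """h* = safety * h * (tol/err)^(1/p), clamped to avoid wild step jumps."""
    if err == 0:
        return h * fac_max
    fac = safety * (tol / err) ** (1.0 / p)
    return h * min(fac_max, max(fac_min, fac))

# Error 10x above tolerance with a 3rd-order estimate: shrink the step
h_new = next_step_size(0.1, err=1e-5, tol=1e-6, p=3)
print(h_new)

# Error far below tolerance: grow the step, but by no more than fac_max
print(next_step_size(0.1, err=1e-12, tol=1e-6, p=3))  # 0.5
```

A step is accepted when `err <= tol`; either way, the next attempted step size comes from the same formula.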
# Valuation of Asian options

- The options we discussed last class depend only on the value of the underlying price $S_t$ at the instant the option is exercised.
- Sudden price moves can flip the option from being *in the money* to being *out of the money*.
- **One way to avoid this** $\longrightarrow$ write a contract on the average value of the underlying price.
- **Exotic options**: options whose payoff structure differs from that of traditional options, introduced either to **reduce the cost of the premiums** of those traditional options, or to fit particular situations more closely.

> ![image.png](attachment:image.png)
> Reference: see additional information on the various exotic options at the following link: [link](http://rabida.uhu.es/dspace/bitstream/handle/10272/5546/Opciones_exoticas.pdf?sequence=2)

- <font color ='red'> They can provide protection against extreme price fluctuations in volatile markets. </font>
- **Name**: the Trust Bank of Tokyo offered this type of option.

### Rationale

- Because contracts that depend only on the final price of the underlying are more vulnerable to sudden large moves or price manipulation, Asian options are less sensitive to such phenomena (less risky).
- Some agents prefer Asian options as hedging instruments, since they may be exposed to the evolution of the underlying over a time interval.
- They are cheaper than their **plain vanilla** counterparts $\longrightarrow$ the volatility of the average is generally lower than that of the underlying. **The option is less sensitive to changes in the underlying than a vanilla option with the same maturity.**

> Additional information: **[link](https://reader.elsevier.com/reader/sd/pii/S0186104216300304?token=FE78324CCB90A9B00930E308E5369EB593916F99C8F78EA9190DF9F8FFF55547D8EB557F77801D84C6E01FE63B92F9A3)**

### Where are they traded?

- In OTC (Over the Counter) markets.
- The conditions for the mathematical computation of the average, and other terms, are specified in the contract, which makes them somewhat more "customizable".

There are several types of Asian options, classified as follows.

1. The average used may be **arithmetic** or **geometric**.
    - Arithmetic mean: $$ \bar x = \frac{1}{n}\sum_{i=1}^{n} x_i$$
    - Geometric mean: $$ {\bar {x}}={\sqrt[{n}]{\prod _{i=1}^{n}{x_{i}}}}={\sqrt[{n}]{x_{1}\cdot x_{2}\cdots x_{n}}}$$

    * **Advantages** (of the geometric mean):
        - It takes every value of the distribution into account.
        - It is less sensitive to extreme values than the arithmetic mean.
    * **Disadvantages**:
        - Its statistical meaning is less intuitive than that of the arithmetic mean.
        - It is harder to compute.
        - If any value $x_i = 0$, the geometric mean vanishes or is undefined.

    The arithmetic mean of a set of positive numbers is always greater than or equal to the geometric mean:
    $$ \sqrt[n]{x_1 \cdot x_2 \cdots x_n} \le \frac{x_1+ \dots + x_n}{n} $$

2. If the average is taken over $S_t$ $\longrightarrow$ "fixed strike". If the average replaces the strike price $\longrightarrow$ "floating strike".

3. If the option can only be exercised at the end of the contract period it is called an Asian option of European type, or **Euro-Asian**; if it can be exercised at any instant during the life of the contract it is an **Asian option of American type**.

The types of Euro-Asian options are:

- Fixed-strike call, payoff: $\max\{A-K,0\}$.
- Fixed-strike put, payoff: $\max\{K-A,0\}$.
- Floating-strike call, payoff: $\max\{S_T-A,0\}$.
- Floating-strike put, payoff: $\max\{A-S_T,0\}$.

where $A$ is the average of the underlying price:
$$\text{Arithmetic average} \quad A={1\over T} \int_0^T S_t\,dt$$
$$\text{Geometric average} \quad A=\exp\Big({1\over T} \int_0^T \ln(S_t)\, dt\Big)$$

From here on, **Asian** will mean Euro-Asian, and we analyze the Asian call with **fixed $K$**. We assume a single risky asset whose price process $\{S_t \mid t\in [0,T]\}$ follows a geometric Brownian motion, in a market satisfying the assumptions of the Black-Scholes model.

__Model assumptions__:

- The asset price follows a geometric Brownian motion:
$$\frac{dS_t}{S_t}=\mu\, dt + \sigma\, dW_t,\quad 0\leq t \leq T,\; S_0 >0$$
- Trading can take place continuously with no transaction costs or taxes.
- Short selling is allowed and assets are perfectly divisible; one can therefore sell assets one does not own, and buy or sell any (not necessarily integer) quantity of the underlying.
- The continuously compounded risk-free interest rate is constant.
- Investors can borrow or lend at the same risk-free rate.
- There are no riskless arbitrage opportunities; it follows that all riskless portfolios must earn the same return.
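The arithmetic and geometric averages defined above are easy to compare numerically. A small sketch (the sample path and seed are mine) that also illustrates the AM-GM inequality from the previous section:

```python
import numpy as np

# Discrete versions of the two averages A, on a toy GBM-like price path
rng = np.random.default_rng(0)
S = 100 * np.exp(np.cumsum(0.01 * rng.standard_normal(252)))  # positive prices

A_arith = S.mean()
A_geom = np.exp(np.log(S).mean())  # geometric mean via logs, numerically robust

print(A_arith, A_geom)
assert A_geom <= A_arith  # AM-GM: geometric mean never exceeds arithmetic mean
```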
Recall that under this probability measure $P^*$, called the risk-neutral measure, the asset price $S_t$ satisfies:
$$dS_t = rS_t\,dt+\sigma S_t\,dW_t,\quad 0\leq t \leq T,\; S_0 >0$$

For an arithmetic-average Asian call with fixed strike, the payoff is
$$\max \{A(T)-K,0\} = (A(T)-K)_+$$
with $A(x)={1\over x} \int_0^x S_u\, du$.

One can show that the value at time $t$ of the Asian call option is:
$$ V_t(K) = e^{-r(T-t)}E^*[(A(T)-K)_+]$$

For the case of interest, *valuing the option*, with $t_0=0$ and $t=0$, we have:
$$\textbf{Asian call value}\longrightarrow V_0(K)=e^{-rT}E\Bigg[ \Big({1\over T} \int_0^T S_u\, du -K\Big)_+\Bigg]$$

## Using Monte Carlo

To use this method we must compute the average of $S_u$ over the interval $[0,T]$, which requires approximating the integral by one of the following two schemes. For both schemes we divide $[0,T]$ into $N$ subintervals of equal length $h={T\over N}$; this determines the times $t_0,t_1,\cdots,t_{N-1},t_N$, where $t_i=ih$ for $i=0,1,\cdots,N$.

### 1. Riemann sums

$$\int_0^T S_u\, du \approx h \sum_{i=0}^{N-1} S_{t_i}$$

Thus, if the Monte Carlo method generates $M$ paths, the approximation of the Asian call value is given by:
$$\hat V_0^{(1)}= {e^{-rT} \over M} \sum_{j=1}^{M} \Bigg({1\over N} \sum_{i=0}^{N-1} S_{t_i}-K \Bigg)_+$$

### 2.
Improving the Riemann-sum approximation (trapezoid scheme)

![imagen.png](attachment:imagen.png)

Expanding the exponential in a Taylor series and, assuming $h$ is small, keeping only the first-order terms, we obtain the following approximation:
$$\int_0^T S_u\, du \approx {h \over 2}\sum_{i=0}^{N-1}S_{t_i}(2+rh+(W_{t_{i+1}}-W_{t_i})\sigma)$$
Substituting this approximation into the call price yields the estimate:
$$\hat V_0^{(2)}= {e^{-rT} \over M} \sum_{j=1}^{M} \Bigg({h\over 2T} \sum_{i=0}^{N-1} S_{t_i}(2+rh+(W_{t_{i+1}}-W_{t_i})\sigma)-K \Bigg)_+$$
**recall that $h = \frac{T}{N}$**

> **Reference**: http://mat.izt.uam.mx/mat/documentos/notas%20de%20clase/cfenaoe3.pdf

## Example

As a test case we take an Asian call with initial price $S_0 = 100$, strike $K = 100$, risk-free rate $r = 0.10$, volatility $\sigma = 0.20$, and $T = 1$ year, whose price is $\approx 7.04$.

```
# import the packages we will use
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
import matplotlib.pyplot as plt
import scipy.stats as st
import seaborn as sns
%matplotlib inline

# some options for Pandas
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 9)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)

# Program the solution of the Black-Scholes equation
# St = S0*exp((r-sigma^2/2)*t + sigma*DeltaW)
np.random.seed(5555)
NbTraj = 2
NbStep = 100
S0 = 100
r = 0.10
sigma = 0.2
K = 100

DeltaT = 1 / NbStep
SqDeltaT = np.sqrt(DeltaT)
DeltaW = np.random.randn(NbStep - 1, NbTraj) * SqDeltaT
nu = r - sigma**2 / 2
increments = nu * DeltaT + sigma * DeltaW
# Ln St = Ln S0 + (r-sigma^2/2)*t + sigma*DeltaW
# (stack one row of Ln S0 on top of the increments, then accumulate down time)
concat = np.concatenate([np.log(S0) * np.ones([1, NbTraj]), increments], axis=0)
LogSt = np.cumsum(concat, axis=0)
St = np.exp(LogSt)

def BSprices(mu, sigma, S0, NbTraj, NbStep):
    """Expression for the solution of the Black-Scholes equation
    St = S0*exp((r-sigma^2/2)*t + sigma*DeltaW)

    Parameters
    ----------
    mu    : risk-free rate
    sigma : standard deviation of the returns
    S0    : initial price of the underlying asset
    NbTraj: number of paths to simulate
    NbStep: number of days to simulate
    """
    # Data for the St formula
    nu = mu - (sigma**2) / 2
    DeltaT = 1 / NbStep
    SqDeltaT = np.sqrt(DeltaT)
    DeltaW = SqDeltaT * np.random.randn(NbTraj, NbStep - 1)
    # We obtain --> Ln St = Ln S0 + nu*DeltaT + sigma*DeltaW
    increments = nu * DeltaT + sigma * DeltaW
    concat = np.concatenate((np.log(S0) * np.ones([NbTraj, 1]), increments), axis=1)
    # cumsum is used because we want the simulated prices to start from S0
    LogSt = np.cumsum(concat, axis=1)
    # Simulated prices for the chosen NbStep
    St = np.exp(LogSt)
    # Vector with the number of simulated days
    t = np.arange(0, NbStep)
    return St.T, t

def calc_daily_ret(closes):
    return np.log(closes / closes.shift(1)).iloc[1:]

np.random.seed(5555)
NbTraj = 2
NbStep = 100
S0 = 100
r = 0.10
sigma = 0.2
K = 100

# Solve the Black-Scholes equation to obtain the prices
St, t = BSprices(r, sigma, S0, NbTraj, NbStep)
# t = t*NbStep
prices = pd.DataFrame(St, index=t)
prices

# Plot the simulated prices
ax = prices.plot(label='original prices')  # plt.plot(t, St, label='prices')

# Explore the expanding and rolling functions to see how to compute the
# average, and plot their differences
Average_t = prices.expanding(1, axis=0).mean()
Average_t_roll = prices.rolling(window=20).mean()
Average_t.plot(ax=ax)
Average_t_roll.plot(ax=ax)
plt.legend()

(1.5+3)/2, 2.333*2-1.5

# Illustration of the rolling and expanding functions
data = pd.DataFrame([
    ['a', 1],
    ['a', 2],
    ['a', 4],
    ['b', 5],
], columns=['category', 'value'])
print('expanding\n', data.value.expanding(2).sum())
print('rolling\n', data.value.rolling(window=2).sum())

# Illustration of the result of the expanding function
pan = pd.DataFrame(np.matrix([[1, 2, 3], [4, 5, 6], [7, 8, 9], [1, 1, 1]]))
pan.expanding(1, axis=0).mean()
```

## Valuing Asian options

### 1. Riemann sums method

```
#### Riemann sums
# Strike
strike = K
# Contract maturity
T = 1
# Option valuation
call = pd.DataFrame({'Prima_asiatica': np.exp(-r*T)
                     * np.fmax(Average_t - strike, 0).mean(axis=1)}, index=t)
call.plot()
print('The premium estimated using %i paths is: %2.2f' % (NbTraj, call.iloc[-1].Prima_asiatica))

# confidence intervals
confianza = 0.95
sigma_est = call.sem().Prima_asiatica
mean_est = call.iloc[-1].Prima_asiatica
i1 = st.t.interval(confianza, NbTraj - 1, loc=mean_est, scale=sigma_est)
i2 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)
print('The confidence interval using the t-dist is:', i1)
print('The confidence interval using the norm-dist is:', i2)

call.iloc[-1].Prima_asiatica
```

Now let us run tests varying the number of paths `NbTraj` and the number of points `NbStep` to see how the precision of the method improves. First, let us create a function that performs the Riemann approximation.

```
# Function where all the results are collected
def Riemann_approach(K: 'Strike price', r: 'Risk-free rate', S0: 'Initial price',
                     NbTraj: 'Number of paths', NbStep: 'Number of steps to simulate',
                     sigma: 'Volatility', T: 'Contract maturity in years',
                     flag=None):
    # Solve the Black-Scholes equation to obtain the prices
    St, t = BSprices(r, sigma, S0, NbTraj, NbStep)
    # Store the prices in a dataframe
    prices = pd.DataFrame(St, index=t)
    # Obtain the running-average prices
    Average_t = prices.expanding().mean()
    # Define the strike
    strike = K
    # Compute the option's call value according to the Riemann-sum formula
    call = pd.DataFrame({'Prima': np.exp(-r*T)
                         * np.fmax(Average_t - strike, 0).mean(axis=1)}, index=t)
    # confidence intervals
    confianza = 0.95
    sigma_est = call.sem().Prima
    mean_est = call.iloc[-1].Prima
    i1 = st.norm.interval(confianza, loc=mean_est, scale=sigma_est)
    # return np.array([call.iloc[-1].Prima, i1[0], i1[1]])
    # if flag == True:
    #     # compute the interval
    #     return call, interval
    # else:
    return call.iloc[-1].Prima
```

## Example

Value the following Asian option with data $S_0= 100$, $r=0.10$, $\sigma=0.2$, $K=100$, $T=1$ (in years), using the following combinations of numbers of paths and numbers of steps:

```
NbTraj = [1000, 10000, 20000]
NbStep = [10, 50, 100]

# Display the results
filas = ['Nbtray = %i' % i for i in NbTraj]
col = ['NbStep = %i' % i for i in NbStep]
df = pd.DataFrame(index=filas, columns=col)
df

# Solve it here
S0 = 100
r = 0.10
sigma = 0.2
K = 100
T = 1

df.loc[:, :] = list(map(lambda N_tra: list(map(lambda N_ste:
                        Riemann_approach(K, r, S0, N_tra, N_ste, sigma, T),
                        NbStep)), NbTraj))
df
```

# Homework

Implement the trapezoid-scheme method to value the Asian call and put options with initial price $S_0 = 100$, strike $K = 100$, risk-free rate $r = 0.10$, volatility $\sigma = 0.20$, and $T = 1$ year, whose price is $\approx 7.04$. Run the simulation according to the following table:

![imagen.png](attachment:imagen.png)

Note that this table contains the confidence intervals of the approximation obtained, as well as the simulation time each method takes to find the answer.

- You must therefore run a simulation for the same numbers of paths and steps, and build a pandas DataFrame reporting all the results obtained. **(70 points)**
- Compare the results obtained against those produced by the `Riemann_approach` function. State your conclusions. **(30 points)**

A link will be enabled on Canvas for submitting the results of this homework.

> **Note:** To generate indices as specified in the table, refer to:
> - https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html
> - https://jakevdp.github.io/PythonDataScienceHandbook/03.05-hierarchical-indexing.html
> - https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.MultiIndex.html

<script>
  $(document).ready(function(){
    $('div.prompt').hide();
    $('div.back-to-top').hide();
    $('nav#menubar').hide();
    $('.breadcrumb').hide();
    $('.hidden-print').hide();
  });
</script>

<footer id="attribution" style="float:right; color:#808080; background:#fff;">
Created with Jupyter by Oscar David Jaramillo Z.
</footer>
``` import numpy as np import matplotlib.pyplot as plt import tensorflow as tf tf.enable_eager_execution() a = '' if a is None: print("not None") if not a : print("not a") a = np.arange(10) print(a[9:]) print(a[:9]) print(a.reshape([1,10,1])) a = np.arange(6).reshape(3,2) idxs = np.random.randint(a.shape[0], size=[4, a.shape[0]]) print(idxs.shape, a[idxs].shape) print(a) print(idxs) print(a[idxs]) ``` # TensorStandardScaler ``` class TensorStandardScaler: """Helper class for automatically normalizing inputs into the network. """ def __init__(self, x_dim): """Initializes a scaler. Arguments: x_dim (int): The dimensionality of the inputs into the scaler. Returns: None. """ self.fitted = False with tf.variable_scope("Scaler"): self.mu = tf.get_variable( name="scaler_mu", shape=[1, x_dim], initializer=tf.constant_initializer(0.0), trainable=False ) self.sigma = tf.get_variable( name="scaler_std", shape=[1, x_dim], initializer=tf.constant_initializer(1.0), trainable=False ) self.cached_mu, self.cached_sigma = np.zeros([0, x_dim]), np.ones([1, x_dim]) def fit(self, data): """Runs two ops, one for assigning the mean of the data to the internal mean, and another for assigning the standard deviation of the data to the internal standard deviation. This function must be called within a 'with <session>.as_default()' block. Arguments: data (np.ndarray): A numpy array containing the input Returns: None. """ mu = np.mean(data, axis=0, keepdims=True) sigma = np.std(data, axis=0, keepdims=True) sigma[sigma < 1e-12] = 1.0 # equal to tf.assign in tf2 self.mu.load(mu) self.sigma.load(sigma) self.fitted = True self.cache() def transform(self, data): """Transforms the input matrix data using the parameters of this scaler. Arguments: data (np.array): A numpy array containing the points to be transformed. Returns: (np.array) The transformed dataset. """ return (data - self.mu) / self.sigma def inverse_transform(self, data): """Undoes the transformation performed by this scaler. 
Arguments: data (np.array): A numpy array containing the points to be transformed. Returns: (np.array) The transformed dataset. """ return self.sigma * data + self.mu def get_vars(self): """Returns a list of variables managed by this object. Returns: (list<tf.Variable>) The list of variables. """ return [self.mu, self.sigma] def cache(self): """Caches current values of this scaler. Returns: None. """ # use a default session, return the value of this variable self.cached_mu = self.mu.eval() self.cached_sigma = self.sigma.eval() def load_cache(self): """Loads values from the cache Returns: None. """ self.mu.load(self.cached_mu) self.sigma.load(self.cached_sigma) # scaler = TensorStandardScaler(4) data = np.arange(12).reshape(3,4) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) # print(scaler.fitted) # print(scaler.cached_mu, scaler.cached_sigma) mu, var = sess.run(scaler.get_vars()) print(mu, var) scaler.fit(data) tran_data = sess.run(scaler.transform(data)) print(tran_data) mu, var = sess.run(scaler.get_vars()) print(mu, var) ``` # shuffle rows ``` def shuffle_rows(arr): idxs = np.argsort(np.random.uniform(size=arr.shape), axis=-1) return arr[np.arange(arr.shape[0])[:, None], idxs] a = np.random.uniform(size=(3,3,3)) b = a.shape print(b) a = shuffle_rows(a) c = a.shape print(c) a = np.arange(12).reshape(3,2,2) print(a) idxs = a.argsort() b = a[np.array([0,1,2])[:,None], idxs] print(b.shape) np.random.randint() np.random.permutation() ``` # MPC ._compile_cost ``` # action seq a = np.arange(12).reshape(2,2,3) b = a[:, :, None] c = np.tile(b, [1, 1, 5, 1]) print(a.shape, b.shape, c.shape) # obs seq a = np.arange(6).reshape(2,3)[None] b = np.tile(a, [2, 1]) print(a.shape, b.shape) print(a, "\n") print(b, "\n") random_array = tf.random_uniform([8, 5]) sort_idxs = tf.nn.top_k( random_array, k=5 ).indices with tf.Session() as sess: sess.run(tf.global_variables_initializer()) random_array, idx = sess.run([random_array, sort_idxs]) print(random_array, 
"\n",idx, "\n", idx.shape) tmp = tf.tile(tf.range(8)[:, None], [1, 5])[:, :, None] with tf.Session() as sess: result = sess.run(tmp) print(result.shape) idxs = np.concatenate([result,idx[:,:,None]],axis=-1) print(idxs.shape) def make_bool(arg): if arg == "False" or arg == "false" or not bool(arg): return False else: return True print(make_bool('False')) def create_read_only(message): def read_only(arg): raise RuntimeError(message) return read_only a = create_read_only("nsembled models must have more than one net.") a(3) print(a) # tf.enable_eager_execution() pop=2 plan_hor=3 dU = 4 npart=2 ac_seqs = tf.range(0, 24) ac_seqs = tf.reshape(ac_seqs,[2,3,4]) print(ac_seqs) ac_seqs1 = tf.reshape(tf.tile( tf.transpose(ac_seqs, [1, 0, 2])[:, :, None], [1, 1, npart, 1] ), [plan_hor, -1, dU]) print(ac_seqs1) ac_seqs2 = tf.reshape(tf.tile( tf.transpose(ac_seqs, [1, 0, 2])[:, :, None], [1, 1, npart, 1] ), [plan_hor, -1, dU]) print(ac_seqs1) ```
```
import random

suits = ('Hearts', 'Diamonds', 'Spades', 'Clubs')
ranks = ('Two', 'Three', 'Four', 'Five', 'Six', 'Seven', 'Eight', 'Nine', 'Ten',
         'Jack', 'Queen', 'King', 'Ace')
values = {'Two': 2, 'Three': 3, 'Four': 4, 'Five': 5, 'Six': 6, 'Seven': 7,
          'Eight': 8, 'Nine': 9, 'Ten': 10, 'Jack': 10, 'Queen': 10, 'King': 10,
          'Ace': 11}

playing = True

class Card():

    def __init__(self, suit, rank):
        self.suit = suit
        self.rank = rank
        self.value = values[rank]

    def __str__(self):
        return self.rank + " of " + self.suit

class Deck():

    def __init__(self):
        self.all_cards = []
        for suit in suits:
            for rank in ranks:
                created_card = Card(suit, rank)
                self.all_cards.append(created_card)

    def shuffle_deck(self):
        random.shuffle(self.all_cards)

    def __str__(self):
        # you can only return a string here, so if you have more than
        # one string use string concatenation
        l = ' '
        for i in self.all_cards:
            l += '\n' + i.__str__()
        return l

    def deal_one(self):
        return self.all_cards.pop(0)

class Hand:

    def __init__(self):
        self.cards = []  # start with an empty list as we did in the Deck class
        self.value = 0   # start with zero value
        self.aces = 0    # add an attribute to keep track of aces

    def add_card(self, card):
        self.cards.append(card)
        self.value += values[card.rank]
        if card.rank == 'Ace':
            self.aces += 1

    def adjust_for_ace(self):
        # In Python 0 is falsy and any other integer is truthy,
        # so plain `self.aces` suffices as the loop condition
        while self.value > 21 and self.aces:
            self.value -= 10
            self.aces -= 1

player = Deck()
player.shuffle_deck()

test_hand = Hand()
test_hand.add_card(player.deal_one())

# negative integers are also truthy
if -1:
    print('h')

class Chips():

    def __init__(self, total=100):
        self.total = total
        self.bet = 0

    def win_bet(self):
        self.total += self.bet

    def lose_bet(self):
        self.total -= self.bet

def take_bet(chips):
    while True:
        try:
            chips.bet = int(input("Enter No of Chips? "))
        except ValueError:
            print("Enter Integer Only !")
        else:
            if chips.bet > chips.total:
                print('Bet {} is more than Total {}'.format(chips.bet, chips.total))
            else:
                break

def hit(deck, hand):
    hand.add_card(deck.deal_one())
    hand.adjust_for_ace()

def hit_or_stand(deck, hand):
    global playing
    while True:
        x = input(" Hit Or Stand ! Enter h or s ")
        if x[0].lower() == 'h':
            print("Done")
            hit(deck, hand)
        elif x[0].lower() == 's':
            print(" Player Stands !, Dealer's Turn ")
            print("Done")
            playing = False
        else:
            print(" Enter H or S only ")
            continue
        break

def show_some(player, dealer):
    print("\nDealer's Hand:")
    print(" <card hidden>")
    print('', dealer.cards[1])
    # to show all cards we can use list unpacking with *; sep puts each item on a new line
    print("\nPlayer's Hand:", *player.cards, sep='\n ')

def show_all(player, dealer):
    print("\nDealer's Hand:", *dealer.cards, sep='\n ')
    print("Dealer's Hand =", dealer.value)
    print("\nPlayer's Hand:", *player.cards, sep='\n ')
    print("Player's Hand =", player.value)

def player_busts(chips):
    print("Player busts!")
    chips.lose_bet()

def player_wins(chips):
    print("Player wins!")
    chips.win_bet()

def dealer_busts(chips):
    print("Dealer busts!")
    chips.win_bet()

def dealer_wins(chips):
    print("Dealer wins!")
    chips.lose_bet()

def push(player, dealer):
    print("Dealer and Player tie! It's a push.")

while True:
    print(" Welcome to the Game ")

    # deck created and shuffled
    deck = Deck()
    deck.shuffle_deck()

    # player's hand
    player_hand = Hand()
    player_hand.add_card(deck.deal_one())
    player_hand.add_card(deck.deal_one())

    # dealer's hand
    dealer_hand = Hand()
    dealer_hand.add_card(deck.deal_one())
    dealer_hand.add_card(deck.deal_one())

    # set player chips
    player_chips = Chips()

    # take the bet and show the opening hands
    take_bet(player_chips)
    show_some(player_hand, dealer_hand)

    while playing:
        # hit or stand to continue
        hit_or_stand(deck, player_hand)
        show_some(player_hand, dealer_hand)
        if player_hand.value > 21:
            player_busts(player_chips)
            break

    if player_hand.value < 21:
        while dealer_hand.value < player_hand.value:
            hit(deck, dealer_hand)
        show_all(player_hand, dealer_hand)

        if dealer_hand.value > 21:
            dealer_busts(player_chips)
        elif dealer_hand.value > player_hand.value:
            dealer_wins(player_chips)
        elif player_hand.value > dealer_hand.value:
            player_wins(player_chips)
        else:
            push(player_hand, dealer_hand)

    print('\n')
    print('Player\'s Remaining Chips : {}'.format(player_chips.total))

    new_game = input("Would you like to play another hand? Enter 'y' or 'n' ")
    if new_game[0].lower() == 'y':
        playing = True
        continue
    else:
        print("Thanks for playing!")
        break
```
# TensorFlow Tutorial Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow: - Initialize variables - Start your own session - Train algorithms - Implement a Neural Network Programing frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code. ## 1 - Exploring the Tensorflow Library To start, you will import the library: ``` import math import numpy as np import h5py import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.python.framework import ops from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict %matplotlib inline np.random.seed(1) ``` Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example. $$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$ ``` y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36. y = tf.constant(39, name='y') # Define y. Set to 39 loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss init = tf.global_variables_initializer() # When init is run later (session.run(init)), # the loss variable will be initialized and ready to be computed with tf.Session() as session: # Create a session and print the output session.run(init) # Initializes the variables print(session.run(loss)) # Prints the loss ``` Writing and running programs in TensorFlow has the following steps: 1. 
Create Tensors (variables) that are not yet executed/evaluated. 2. Write operations between those Tensors. 3. Initialize your Tensors. 4. Create a Session. 5. Run the Session. This will run the operations you wrote above. Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run `init=tf.global_variables_initializer()`. That initialized the loss variable, and in the last line we were finally able to evaluate the value of `loss` and print its value. Now let us look at an easy example. Run the cell below: ``` a = tf.constant(2) b = tf.constant(10) c = tf.multiply(a,b) print(c) ``` As expected, you will not see 20! You get back a tensor of type "int32" with no value attached: all you did was build the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it. ``` sess = tf.Session() print(sess.run(c)) ``` Great! To summarize, **remember to initialize your variables, create a session and run the operations inside the session**. Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later. To specify values for a placeholder, you can pass in values by using a "feed dictionary" (`feed_dict` variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session. ``` # Change the value of x in the feed_dict x = tf.placeholder(tf.int64, name = 'x') print(sess.run(2 * x, feed_dict = {x: 3})) sess.close() ``` When you first defined `x` you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you **feed data** to these placeholders when running the session.
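The build-now-run-later pattern above can be mimicked in a few lines of plain Python. This is only a toy analogy (closures standing in for graph nodes and a plain dict standing in for `feed_dict`), not how TensorFlow is actually implemented:

```python
# Toy sketch of deferred execution: a "graph" node is just a function
# of a feed dictionary, evaluated only when we ask for it.

def placeholder(name):
    # Value is looked up in the feed dict only at "run" time
    return lambda feed: feed[name]

def constant(value):
    return lambda feed: value

def multiply(a, b):
    # Building the node performs no arithmetic yet
    return lambda feed: a(feed) * b(feed)

x = placeholder('x')
two = constant(2)
y = multiply(two, x)   # nothing computed yet

print(y({'x': 3}))     # "running the graph" with x fed as 3 prints 6
```

The point of the analogy: `y` is a recipe, not a number, until a feed dictionary supplies the placeholder's value.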
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph. ### 1.1 - Linear function Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector. **Exercise**: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1): ```python X = tf.constant(np.random.randn(3,1), name = "X") ``` You might find the following functions helpful: - tf.matmul(..., ...) to do a matrix multiplication - tf.add(..., ...) to do an addition - np.random.randn(...) to initialize randomly ``` # GRADED FUNCTION: linear_function def linear_function(): """ Implements a linear function: Initializes W to be a random tensor of shape (4,3) Initializes X to be a random tensor of shape (3,1) Initializes b to be a random tensor of shape (4,1) Returns: result -- runs the session for Y = WX + b """ np.random.seed(1) ### START CODE HERE ### (4 lines of code) X = tf.constant(np.random.randn(3, 1), name='X') W = tf.constant(np.random.randn(4, 3), name='W') b = tf.constant(np.random.randn(4, 1), name='b') Y = tf.matmul(W, X) + b ### END CODE HERE ### # Create the session using tf.Session() and run it with sess.run(...)
on the variable you want to calculate ### START CODE HERE ### sess = tf.Session() result = sess.run(Y) ### END CODE HERE ### # close the session sess.close() return result print( "result = " + str(linear_function())) ``` *** Expected Output ***: <table> <tr> <td> **result** </td> <td> [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]] </td> </tr> </table> ### 1.2 - Computing the sigmoid Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like `tf.sigmoid` and `tf.softmax`. For this exercise let's compute the sigmoid function of an input. You will do this exercise using a placeholder variable `x`. When running the session, you should use the feed dictionary to pass in the input `z`. In this exercise, you will have to (i) create a placeholder `x`, (ii) define the operations needed to compute the sigmoid using `tf.sigmoid`, and then (iii) run the session. **Exercise**: Implement the sigmoid function below. You should use the following: - `tf.placeholder(tf.float32, name = "...")` - `tf.sigmoid(...)` - `sess.run(..., feed_dict = {x: z})` Note that there are two typical ways to create and use sessions in tensorflow: **Method 1:** ```python sess = tf.Session() # Run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) sess.close() # Close the session ``` **Method 2:** ```python with tf.Session() as sess: # run the variables initialization (if needed), run the operations result = sess.run(..., feed_dict = {...}) # This takes care of closing the session for you :) ``` ``` # GRADED FUNCTION: sigmoid def sigmoid(z): """ Computes the sigmoid of z Arguments: z -- input value, scalar or vector Returns: result -- the sigmoid of z """ ### START CODE HERE ### (approx. 4 lines of code) # Create a placeholder for x. Name it 'x'. x = tf.placeholder(tf.float32, name='x') # compute sigmoid(x) sigmoid = tf.sigmoid(x) # Create a session, and run it.
Please use the method 2 explained above. # You should use a feed_dict to pass z's value to x. with tf.Session() as sess: # Run session and call the output "result" result = sess.run(sigmoid, feed_dict={x: z}) ### END CODE HERE ### return result print ("sigmoid(0) = " + str(sigmoid(0))) print ("sigmoid(12) = " + str(sigmoid(12))) ``` *** Expected Output ***: <table> <tr> <td> **sigmoid(0)** </td> <td> 0.5 </td> </tr> <tr> <td> **sigmoid(12)** </td> <td> 0.999994 </td> </tr> </table> <font color='blue'> **To summarize, you now know how to**: 1. Create placeholders 2. Specify the computation graph corresponding to operations you want to compute 3. Create the session 4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values. ### 1.3 - Computing the Cost You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m: $$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$ you can do it in one line of code in tensorflow! **Exercise**: Implement the cross entropy loss. The function you will use is: - `tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)` Your code should input `z`, compute the sigmoid (to get `a`) and then compute the cross entropy cost $J$.
All this can be done using one call to `tf.nn.sigmoid_cross_entropy_with_logits`, which computes $$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$ ``` # GRADED FUNCTION: cost def cost(logits, labels): """ Computes the cost using the sigmoid cross entropy Arguments: logits -- vector containing z, output of the last linear unit (before the final sigmoid activation) labels -- vector of labels y (1 or 0) Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" in the TensorFlow documentation. So logits will feed into z, and labels into y. Returns: cost -- runs the session to compute the cost (formula (2)) """ ### START CODE HERE ### # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines) z = tf.placeholder(tf.float32, name='logits') y = tf.placeholder(tf.float32, name='labels') # Use the loss function (approx. 1 line) cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y) # Create a session (approx. 1 line). See method 1 above. sess = tf.Session() # Run the session (approx. 1 line). cost = sess.run(cost, feed_dict={z: logits, y: labels}) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return cost logits = sigmoid(np.array([0.2,0.4,0.7,0.9])) cost = cost(logits, np.array([0,0,1,1])) print ("cost = " + str(cost)) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> [ 1.00538719 1.03664088 0.41385433 0.39956614] </td> </tr> </table> ### 1.4 - Using One Hot encodings Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes.
If C is for example 4, then you might have the following y vector which you will need to convert as follows: <img src="images/onehot.png" style="width:600px;height:150px;"> This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code: - tf.one_hot(labels, depth, axis) **Exercise:** Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use `tf.one_hot()` to do this. ``` # GRADED FUNCTION: one_hot_matrix def one_hot_matrix(labels, C): """ Creates a matrix where the i-th row corresponds to the ith class number and the jth column corresponds to the jth training example. So if example j has label i, then entry (i,j) will be 1. Arguments: labels -- vector containing the labels C -- number of classes, the depth of the one hot dimension Returns: one_hot -- one hot matrix """ ### START CODE HERE ### # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line) C = tf.constant(C, name='C') # Use tf.one_hot, be careful with the axis (approx. 1 line) one_hot_matrix = tf.one_hot(indices=labels, depth=C, axis=0) # Create the session (approx. 1 line) sess = tf.Session() # Run the session (approx. 1 line) one_hot = sess.run(one_hot_matrix) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return one_hot labels = np.array([1,2,3,0,2,1]) one_hot = one_hot_matrix(labels, C = 4) print ("one_hot = " + str(one_hot)) ``` **Expected Output**: <table> <tr> <td> **one_hot** </td> <td> [[ 0. 0. 0. 1. 0. 0.] [ 1. 0. 0. 0. 0. 1.] [ 0. 1. 0. 0. 1. 0.] [ 0. 0. 1. 0. 0. 0.]] </td> </tr> </table> ### 1.5 - Initialize with zeros and ones Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is `tf.ones()`.
To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of that shape filled with zeros or ones respectively. **Exercise:** Implement the function below to take in a shape and return an array of ones with that shape. - tf.ones(shape) ``` # GRADED FUNCTION: ones def ones(shape): """ Creates an array of ones of dimension shape Arguments: shape -- shape of the array you want to create Returns: ones -- array containing only ones """ ### START CODE HERE ### # Create "ones" tensor using tf.ones(...). (approx. 1 line) ones = tf.ones(shape) # Create the session (approx. 1 line) sess = tf.Session() # Run the session to compute 'ones' (approx. 1 line) ones = sess.run(ones) # Close the session (approx. 1 line). See method 1 above. sess.close() ### END CODE HERE ### return ones print ("ones = " + str(ones([3]))) ``` **Expected Output:** <table> <tr> <td> **ones** </td> <td> [ 1. 1. 1.] </td> </tr> </table> # 2 - Building your first neural network in tensorflow In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model: - Create the computation graph - Run the graph Let's delve into the problem you'd like to solve! ### 2.0 - Problem statement: SIGNS Dataset One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language. - **Training set**: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number). - **Test set**: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number). Note that this is a subset of the SIGNS dataset.
The complete dataset contains many more signs. Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels. <img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> **Figure 1**</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center> Run the following code to load the dataset. ``` # Loading the dataset X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset() ``` Change the index below and run the cell to visualize some examples in the dataset. ``` # Example of a picture index = 0 plt.imshow(X_train_orig[index]) print ("y = " + str(np.squeeze(Y_train_orig[:, index]))) ``` As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so. ``` # Flatten the training and test images X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T # Normalize image vectors X_train = X_train_flatten/255. X_test = X_test_flatten/255. # Convert training and test labels to one hot matrices Y_train = convert_to_one_hot(Y_train_orig, 6) Y_test = convert_to_one_hot(Y_test_orig, 6) print ("number of training examples = " + str(X_train.shape[1])) print ("number of test examples = " + str(X_test.shape[1])) print ("X_train shape: " + str(X_train.shape)) print ("Y_train shape: " + str(Y_train.shape)) print ("X_test shape: " + str(X_test.shape)) print ("Y_test shape: " + str(Y_test.shape)) ``` **Note** that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing. **Your goal** is to build an algorithm capable of recognizing a sign with high accuracy.
To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one. **The model** is *LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX*. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes. ### 2.1 - Create placeholders Your first task is to create placeholders for `X` and `Y`. This will allow you to later pass your training data in when you run your session. **Exercise:** Implement the function below to create the placeholders in tensorflow. ``` # GRADED FUNCTION: create_placeholders def create_placeholders(n_x, n_y): """ Creates the placeholders for the tensorflow session. Arguments: n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288) n_y -- scalar, number of classes (from 0 to 5, so -> 6) Returns: X -- placeholder for the data input, of shape [n_x, None] and dtype "float" Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float" Tips: - You will use None because it lets us be flexible on the number of examples for the placeholders. In fact, the number of examples during test/train is different. """ ### START CODE HERE ### (approx.
2 lines) X = tf.placeholder(tf.float32, shape=(n_x, None), name='X') Y = tf.placeholder(tf.float32, shape=(n_y, None), name='Y') ### END CODE HERE ### return X, Y X, Y = create_placeholders(12288, 6) print ("X = " + str(X)) print ("Y = " + str(Y)) ``` **Expected Output**: <table> <tr> <td> **X** </td> <td> Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1) </td> </tr> <tr> <td> **Y** </td> <td> Tensor("Placeholder_2:0", shape=(6, ?), dtype=float32) (not necessarily Placeholder_2) </td> </tr> </table> ### 2.2 - Initializing the parameters Your second task is to initialize the parameters in tensorflow. **Exercise:** Implement the function below to initialize the parameters in tensorflow. You are going to use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use: ```python W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1)) b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer()) ``` Please use `seed = 1` to make sure your results match ours. ``` # GRADED FUNCTION: initialize_parameters def initialize_parameters(): """ Initializes parameters to build a neural network with tensorflow. The shapes are: W1 : [25, 12288] b1 : [25, 1] W2 : [12, 25] b2 : [12, 1] W3 : [6, 12] b3 : [6, 1] Returns: parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3 """ tf.set_random_seed(1) # so that your "random" numbers match ours ### START CODE HERE ### (approx.
6 lines of code) W1 = tf.get_variable('W1', [25, 12288], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b1 = tf.get_variable('b1', [25, 1], initializer=tf.zeros_initializer()) W2 = tf.get_variable('W2', [12, 25], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b2 = tf.get_variable('b2', [12, 1], initializer=tf.zeros_initializer()) W3 = tf.get_variable('W3', [6, 12], initializer=tf.contrib.layers.xavier_initializer(seed=1)) b3 = tf.get_variable('b3', [6, 1], initializer=tf.zeros_initializer()) ### END CODE HERE ### parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2, "W3": W3, "b3": b3} return parameters tf.reset_default_graph() with tf.Session() as sess: parameters = initialize_parameters() print("W1 = " + str(parameters["W1"])) print("b1 = " + str(parameters["b1"])) print("W2 = " + str(parameters["W2"])) print("b2 = " + str(parameters["b2"])) ``` **Expected Output**: <table> <tr> <td> **W1** </td> <td> < tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref > </td> </tr> <tr> <td> **b1** </td> <td> < tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref > </td> </tr> <tr> <td> **W2** </td> <td> < tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref > </td> </tr> <tr> <td> **b2** </td> <td> < tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref > </td> </tr> </table> As expected, the parameters haven't been evaluated yet. ### 2.3 - Forward propagation in tensorflow You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are: - `tf.add(...,...)` to do an addition - `tf.matmul(...,...)` to do a matrix multiplication - `tf.nn.relu(...)` to apply the ReLU activation **Question:** Implement the forward pass of the neural network. We commented for you the numpy equivalents so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at `z3`. 
The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need `a3`! ``` # GRADED FUNCTION: forward_propagation def forward_propagation(X, parameters): """ Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX Arguments: X -- input dataset placeholder, of shape (input size, number of examples) parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3" the shapes are given in initialize_parameters Returns: Z3 -- the output of the last LINEAR unit """ # Retrieve the parameters from the dictionary "parameters" W1 = parameters['W1'] b1 = parameters['b1'] W2 = parameters['W2'] b2 = parameters['b2'] W3 = parameters['W3'] b3 = parameters['b3'] ### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents: Z1 = tf.matmul(W1, X) + b1 # Z1 = np.dot(W1, X) + b1 A1 = tf.nn.relu(Z1) # A1 = relu(Z1) Z2 = tf.matmul(W2, A1) + b2 # Z2 = np.dot(W2, A1) + b2 A2 = tf.nn.relu(Z2) # A2 = relu(Z2) Z3 = tf.matmul(W3, A2) + b3 # Z3 = np.dot(W3, A2) + b3 ### END CODE HERE ### return Z3 tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) print("Z3 = " + str(Z3)) ``` **Expected Output**: <table> <tr> <td> **Z3** </td> <td> Tensor("Add_2:0", shape=(6, ?), dtype=float32) </td> </tr> </table> You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation. ### 2.4 Compute cost As seen before, it is very easy to compute the cost using: ```python tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...)) ``` **Question**: Implement the cost function below.
- It is important to know that the "`logits`" and "`labels`" inputs of `tf.nn.softmax_cross_entropy_with_logits` are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you. - Besides, `tf.reduce_mean` basically does the averaging over the examples. ``` # GRADED FUNCTION: compute_cost def compute_cost(Z3, Y): """ Computes the cost Arguments: Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples) Y -- "true" labels vector placeholder, same shape as Z3 Returns: cost - Tensor of the cost function """ # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...) logits = tf.transpose(Z3) labels = tf.transpose(Y) ### START CODE HERE ### (1 line of code) cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)) ### END CODE HERE ### return cost tf.reset_default_graph() with tf.Session() as sess: X, Y = create_placeholders(12288, 6) parameters = initialize_parameters() Z3 = forward_propagation(X, parameters) cost = compute_cost(Z3, Y) print("cost = " + str(cost)) ``` **Expected Output**: <table> <tr> <td> **cost** </td> <td> Tensor("Mean:0", shape=(), dtype=float32) </td> </tr> </table> ### 2.5 - Backward propagation & parameter updates This is where you become grateful to programming frameworks. All of backpropagation and the parameter updates are taken care of in one line of code. It is very easy to incorporate this line in the model. After you compute the cost function, you will create an "`optimizer`" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be: ```python optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost) ``` To make the optimization you would do: ```python _ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ``` This computes the backpropagation by passing through the tensorflow graph in reverse order, from cost to inputs. **Note** When coding, we often use `_` as a "throwaway" variable to store values that we won't need to use later. Here, `_` takes on the evaluated value of `optimizer`, which we don't need (and `c` takes the value of the `cost` variable). ### 2.6 - Building the model Now, you will bring it all together! **Exercise:** Implement the model. You will be calling the functions you had previously implemented. ``` def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001, num_epochs = 1500, minibatch_size = 32, print_cost = True): """ Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX. Arguments: X_train -- training set, of shape (input size = 12288, number of training examples = 1080) Y_train -- training labels, of shape (output size = 6, number of training examples = 1080) X_test -- test set, of shape (input size = 12288, number of test examples = 120) Y_test -- test labels, of shape (output size = 6, number of test examples = 120) learning_rate -- learning rate of the optimization num_epochs -- number of epochs of the optimization loop minibatch_size -- size of a minibatch print_cost -- True to print the cost every 100 epochs Returns: parameters -- parameters learnt by the model. They can then be used to predict.
""" ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables tf.set_random_seed(1) # to keep consistent results seed = 3 # to keep consistent results (n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set) n_y = Y_train.shape[0] # n_y : output size costs = [] # To keep track of the cost # Create Placeholders of shape (n_x, n_y) ### START CODE HERE ### (1 line) X, Y = create_placeholders(n_x, n_y) ### END CODE HERE ### # Initialize parameters ### START CODE HERE ### (1 line) parameters = initialize_parameters() ### END CODE HERE ### # Forward propagation: Build the forward propagation in the tensorflow graph ### START CODE HERE ### (1 line) Z3 = forward_propagation(X, parameters) ### END CODE HERE ### # Cost function: Add cost function to tensorflow graph ### START CODE HERE ### (1 line) cost = compute_cost(Z3, Y) ### END CODE HERE ### # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer. ### START CODE HERE ### (1 line) optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) ### END CODE HERE ### # Initialize all the variables init = tf.global_variables_initializer() # Start the session to compute the tensorflow graph with tf.Session() as sess: # Run the initialization sess.run(init) # Do the training loop for epoch in range(num_epochs): epoch_cost = 0. # Defines a cost related to an epoch num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set seed = seed + 1 minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed) for minibatch in minibatches: # Select a minibatch (minibatch_X, minibatch_Y) = minibatch # IMPORTANT: The line that runs the graph on a minibatch. # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y). 
### START CODE HERE ### (1 line) _ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y}) ### END CODE HERE ### epoch_cost += minibatch_cost / num_minibatches # Print the cost every 100 epochs, and record it every 5 epochs if print_cost == True and epoch % 100 == 0: print ("Cost after epoch %i: %f" % (epoch, epoch_cost)) if print_cost == True and epoch % 5 == 0: costs.append(epoch_cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per fives)') plt.title("Learning rate =" + str(learning_rate)) plt.show() # let's save the parameters in a variable parameters = sess.run(parameters) print ("Parameters have been trained!") # Calculate the correct predictions correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y)) # Calculate accuracy on the test set accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float")) print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train})) print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test})) return parameters ``` Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes! ``` parameters = model(X_train, Y_train, X_test, Y_test) ``` **Expected Output**: <table> <tr> <td> **Train Accuracy** </td> <td> 0.999074 </td> </tr> <tr> <td> **Test Accuracy** </td> <td> 0.716667 </td> </tr> </table> Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy. **Insights**: - Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting. - Think about the session as a block of code to train the model.
Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters. ### 2.7 - Test with your own image (optional / ungraded exercise) Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Write your image's name in the following code 4. Run the code and check if the algorithm is right! ``` import scipy from PIL import Image from scipy import ndimage ## START CODE HERE ## (PUT YOUR IMAGE NAME) my_image = "thumbs_up.jpg" ## END CODE HERE ## # We preprocess your image to fit your algorithm. fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T my_image_prediction = predict(my_image, parameters) plt.imshow(image) print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction))) ``` You indeed deserved a "thumbs-up" although as you can see the algorithm seems to classify it incorrectly. The reason is that the training set doesn't contain any "thumbs-up", so the model doesn't know how to deal with it! We call that a "mismatched data distribution" and it is one of the topics covered in the next course on "Structuring Machine Learning Projects". <font color='blue'> **What you should remember**: - Tensorflow is a programming framework used in deep learning - The two main object classes in tensorflow are Tensors and Operators. - When you code in tensorflow you have to take the following steps: - Create a graph containing Tensors (Variables, Placeholders ...) and Operations (tf.matmul, tf.add, ...)
- Create a session - Initialize the session - Run the session to execute the graph - You can execute the graph multiple times as you've seen in model() - The backpropagation and optimization are automatically done when running the session on the "optimizer" object.
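As a closing sanity check, the numpy equivalents commented throughout `forward_propagation`, together with the softmax cross-entropy that `tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(...))` computes, can be strung together end-to-end. This is only an illustrative sketch with tiny made-up layer sizes (8 inputs, layers of 5/4/3 units), not the graded (12288, 25, 12, 6) model:

```python
import numpy as np

np.random.seed(1)

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    # Column-wise softmax, shifted for numerical stability
    e = np.exp(z - z.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

# Tiny stand-in shapes for (25,12288), (12,25), (6,12)
n_x, n1, n2, n3, m = 8, 5, 4, 3, 10
X = np.random.randn(n_x, m)
params = {
    'W1': np.random.randn(n1, n_x) * 0.01, 'b1': np.zeros((n1, 1)),
    'W2': np.random.randn(n2, n1) * 0.01,  'b2': np.zeros((n2, 1)),
    'W3': np.random.randn(n3, n2) * 0.01,  'b3': np.zeros((n3, 1)),
}

# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR, stopping at Z3 as in the tf graph
A1 = relu(params['W1'] @ X + params['b1'])
A2 = relu(params['W2'] @ A1 + params['b2'])
Z3 = params['W3'] @ A2 + params['b3']

# One-hot labels and the mean softmax cross-entropy over the m examples
labels = np.random.randint(0, n3, size=m)
Y = np.eye(n3)[labels].T                  # shape (n3, m), one-hot columns
A3 = softmax(Z3)
cost = -np.mean(np.sum(Y * np.log(A3), axis=0))
print(Z3.shape, cost)
```

With such small random weights the logits are near zero, so the softmax is near uniform and the cost sits close to log(3), which is a quick way to check the wiring.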
``` import tensorflow as tf from tensorflow import keras as keras import time import numpy as np import matplotlib.pyplot as plt import matplotlib import matplotlib.image as mpimg from tensorflow.keras.models import Sequential, load_model from tensorflow.keras.layers import Dense, Dropout, Lambda, LayerNormalization from tensorflow.keras.callbacks import ModelCheckpoint, LearningRateScheduler, History, EarlyStopping from sample.functionImplemented import get_model, custom_loss, get_threshold, schedule, custom_scaler, get_opt_action, update_replay_helper, populate_replay_memory, update_replay_memory from sample.car import Car from sample.track import Track img = mpimg.imread("tracks/track_pic9.jpg")[:,:,0] track1=(img<50).astype('int') print(track1.shape) track_rows, track_cols=track1.shape pos_pos=np.where(track1==1) spawning_positions=np.zeros((len(pos_pos[0]), 2)) spawning_positions[:, 0]=pos_pos[0] spawning_positions[:, 1]=pos_pos[1] spawning_positions=spawning_positions.astype('int') track=Track(track1, 5) l=spawning_positions[np.random.choice(range(len(spawning_positions)), size=(20, ))] for (i,j) in l: track.add_checkpoints(i,j) track.checkpoints=np.asarray(track.checkpoints) track.spawn_at=np.asarray(track.spawn_at) plt.imshow(track1) plt.show() throttle_quant=np.linspace(-1,1,9) steer_quant=np.linspace(-1,1,7) actions=np.asarray([(throttle, steer) for throttle in throttle_quant for steer in steer_quant]) data_scaler=np.asarray([ 100, 100, 100, 100, 100, 100, 100, 100, 50, 1, 1 ]) usescaler=True gamma=0.9 trainedModel=tf.keras.models.load_model("TrainedModels/trainedModelspa1.h5", custom_objects={'cl':custom_loss(gamma)}) new_car=Car(track, 80, 10.0) # new_car.sampling_frequency=10.0 throttle_trace=[] steer_trace=[] speed_trace=[] def get_plot(positions, superimposeon_this): x, y=positions for x_diff in range(-5, 7): for y_diff in range(-5, 7): if np.sqrt(x_diff**2+y_diff**2)<14: superimposeon_this[x+x_diff][y+y_diff]=1 f=plt.figure(figsize=(10, 20)) 
plt.imshow(superimposeon_this+new_car.track.track) plt.show() return base_fig=np.zeros((track_rows, track_cols)) for iteration in range(200): r, c=new_car.integer_position_ for x_diff in range(-3, 4): for y_diff in range(-3, 4): if np.sqrt(x_diff**2+y_diff**2)<4: if r+x_diff<new_car.track.track.shape[0] and c+y_diff<new_car.track.track.shape[1]: base_fig[r+x_diff][c+y_diff]=1 throttle, steer=get_opt_action(new_car, trainedModel, actions, data_scaler, usescaler) throttle_trace.append(throttle) steer_trace.append(steer) speed_trace.append(new_car.speed) theta=new_car.car_angle f1, f2=throttle*np.sin(theta)-steer*np.cos(theta), throttle*np.cos(theta)+steer*np.sin(theta) # print(steer, new_car.speed, new_car.car_angle, new_car.current_position) new_car.execute_forces(f1, f2, max_magnitudes=20) # new_car.speed=20.0 if new_car.collided_on_last: print("boom") break get_plot(new_car.integer_position_, base_fig) telemetry_plts=plt.figure(figsize=(10, 10)) ax1=telemetry_plts.add_subplot(3, 1, 1) ax1.plot(speed_trace) ax2=telemetry_plts.add_subplot(3, 1, 2) ax2.plot(throttle_trace) ax3=telemetry_plts.add_subplot(3, 1, 3) ax3.plot(steer_trace) ax1.set_title("Speed") ax2.set_title("Throttle") ax3.set_title("Steering") telemetry_plts.suptitle("Telemetry") telemetry_plts.show() ```
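`get_opt_action` is imported from `sample.functionImplemented` and is not shown in this notebook. As a rough sketch, greedy action selection over the quantized (throttle, steer) grid above reduces to an argmax over per-action value estimates; the `q_values` below are toy stand-ins for the trained model's predictions, not its actual output:

```python
import numpy as np

# Hedged sketch: the real get_opt_action queries the trained Q-model;
# here toy value estimates stand in for it.
throttle_quant = np.linspace(-1, 1, 9)
steer_quant = np.linspace(-1, 1, 7)
actions = np.asarray([(t, s) for t in throttle_quant for s in steer_quant])

def greedy_action(q_values, actions):
    """Pick the (throttle, steer) pair with the highest estimated value."""
    return actions[np.argmax(q_values)]

# Toy estimates that favour moderate forward throttle and no steering
q_values = -np.abs(actions[:, 0] - 0.5) - np.abs(actions[:, 1])
throttle, steer = greedy_action(q_values, actions)
```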
# Machine Learning Engineer Nanodegree ## Introduction and Foundations ## Project 0: Titanic Survival Exploration In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions. > **Tip:** Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook. # Getting Started To begin working with the RMS Titanic passenger data, we'll first need to `import` the functionality we need, and load our data into a `pandas` DataFrame. Run the code cell below to load our data and display the first few entries (passengers) for examination using the `.head()` function. > **Tip:** You can run a code cell by clicking on the cell and using the keyboard shortcut **Shift + Enter** or **Shift + Return**. Alternatively, a code cell can be executed using the **Play** button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. [Markdown](http://daringfireball.net/projects/markdown/syntax) allows you to write easy-to-read plain text that can be converted to HTML. 
``` import numpy as np import pandas as pd # RMS Titanic data visualization code from titanic_visualizations import survival_stats from IPython.display import display %matplotlib inline # Load the dataset in_file = 'titanic_data.csv' full_data = pd.read_csv(in_file) # Print the first few entries of the RMS Titanic data display(full_data.head()) ``` From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship: - **Survived**: Outcome of survival (0 = No; 1 = Yes) - **Pclass**: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class) - **Name**: Name of passenger - **Sex**: Sex of the passenger - **Age**: Age of the passenger (Some entries contain `NaN`) - **SibSp**: Number of siblings and spouses of the passenger aboard - **Parch**: Number of parents and children of the passenger aboard - **Ticket**: Ticket number of the passenger - **Fare**: Fare paid by the passenger - **Cabin**: Cabin number of the passenger (Some entries contain `NaN`) - **Embarked**: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton) Since we're interested in the outcome of survival for each passenger or crew member, we can remove the **Survived** feature from this dataset and store it as its own separate variable `outcomes`. We will use these outcomes as our prediction targets. Run the code cell below to remove **Survived** as a feature of the dataset and store it in `outcomes`. ``` # Store the 'Survived' feature in a new variable and remove it from the dataset outcomes = full_data['Survived'] data = full_data.drop('Survived', axis = 1) # Show the new dataset with 'Survived' removed display(data.head()) ``` The very same sample of the RMS Titanic data now shows the **Survived** feature removed from the DataFrame. Note that `data` (the passenger data) and `outcomes` (the outcomes of survival) are now *paired*.
That means for any passenger `data.loc[i]`, they have the survival outcome `outcomes[i]`. To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how *accurate* our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our `accuracy_score` function and test a prediction on the first five passengers. **Think:** *Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?* ``` def accuracy_score(truth, pred): """ Returns accuracy score for input truth and predictions. """ # Ensure that the number of predictions matches number of outcomes if len(truth) == len(pred): # Calculate and return the accuracy as a percent return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100) else: return "Number of predictions does not match number of outcomes!" # Test the 'accuracy_score' function predictions = pd.Series(np.ones(5, dtype = int)) print(accuracy_score(outcomes[:5], predictions)) ``` > **Tip:** If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off. # Making Predictions If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking. The `predictions_0` function below will always predict that a passenger did not survive. ``` def predictions_0(data): """ Model with no features.
Always predicts a passenger did not survive. """ predictions = [] for _, passenger in data.iterrows(): # Predict the survival of 'passenger' predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_0(data) ``` ### Question 1 *Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print(accuracy_score(outcomes, predictions)) ``` **Answer:** 61.62% *** Let's take a look at whether the feature **Sex** has any indication of survival rates among passengers using the `survival_stats` function. This function is defined in the `titanic_visualizations.py` Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across. Run the code cell below to plot the survival outcomes of passengers based on their sex. ``` survival_stats(data, outcomes, 'Sex') ``` Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females *did* survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive. Fill in the missing code below so that the function will make this prediction. **Hint:** You can access the values of each feature for a passenger like a dictionary. For example, `passenger['Sex']` is the sex of the passenger. ``` def predictions_1(data): """ Model with one feature: - Predict a passenger survived if they are female.
""" predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here if passenger.Sex == 'male': predictions.append(0) else: predictions.append(1) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_1(data) ``` ### Question 2 *How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print accuracy_score(outcomes, predictions) ``` **Answer**: 78.68% *** Using just the **Sex** feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the **Age** of each male, by again using the `survival_stats` function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the **Sex** 'male' will be included. Run the code cell below to plot the survival outcomes of male passengers based on their age. ``` survival_stats(data, outcomes, 'Age', ["Sex == 'male'"]) ``` Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older *did not survive* the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive. Fill in the missing code below so that the function will make this prediction. 
**Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_1`. ``` def predictions_2(data): """ Model with two features: - Predict a passenger survived if they are female. - Predict a passenger survived if they are male and younger than 10. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here if (passenger.Sex == 'female'): predictions.append(1) elif (passenger.Sex == 'male' and passenger.Age < 10): predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_2(data) ``` ### Question 3 *How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?* **Hint:** Run the code cell below to see the accuracy of this prediction. ``` print(accuracy_score(outcomes, predictions)) ``` **Answer**: 79.35% *** Adding the feature **Age** as a condition in conjunction with **Sex** improves the accuracy by a small margin over using the feature **Sex** alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions. **Pclass**, **Sex**, **Age**, **SibSp**, and **Parch** are some suggested features to try. Use the `survival_stats` function below to examine various survival statistics. **Hint:** To use multiple filter conditions, put each condition in the list passed as the last argument.
Example: `["Sex == 'male'", "Age < 18"]` ``` survival_stats(data, outcomes, 'Embarked', ["Pclass == 3", "Age < 30", "Sex == 'female'", "SibSp == 2"]) ``` After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction. Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model. **Hint:** You can start your implementation of this function using the prediction code you wrote earlier from `predictions_2`. ``` def predictions_3(data): """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """ predictions = [] for _, passenger in data.iterrows(): # Remove the 'pass' statement below # and write your prediction conditions here if (passenger.Sex == 'female' and passenger.Pclass != 3): predictions.append(1) elif (passenger.Sex == 'female' and passenger.Pclass == 3 and passenger.Age < 28 and passenger.SibSp == 0): predictions.append(1) elif (passenger.Sex == 'male' and passenger.Pclass != 3 and passenger.Age < 10): predictions.append(1) elif (passenger.Sex == 'male' and passenger.Pclass == 1 and passenger.Age > 31 and passenger.Age < 44 and passenger.Fare > 5.000): predictions.append(1) else: predictions.append(0) # Return our predictions return pd.Series(predictions) # Make the predictions predictions = predictions_3(data) ``` ### Question 4 *Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?* **Hint:** Run the code cell below to see the accuracy of your predictions.
``` print(accuracy_score(outcomes, predictions)) ``` **Answer**: 82.15% **My Steps**: * First, based on the comparison between males and females, I could see that just considering the females our accuracy was already very good. * Second, I tried to learn more about the females: comparing the female data across Pclass, I could see that classes 1 and 2 survived at high rates, but class 3 needed a closer look. So I added my **first rule**: **females in classes 1 and 2 - survived**. I then investigated the females in class 3, saw that those under 30 survived at a high rate, refined the cutoff a little, and arrived at my **second rule**: **females in class 3, under 28 and without siblings - survived**. * Third, I started to look at the males and saw that those under 10 also tended to survive, with class 3 again being the worst. So I added my **third rule**: **males under 10 in classes 1 and 2 - survived**. * Fourth, I tried to refine the profile of males older than 10. The majority of those who survived were in class 1, and I identified an age range of roughly 30 to 40 years with fares above 5.000. This became my **fourth rule**: **males between 31 and 44 years, in class 1, who paid more than 5.000 - survived**. # Conclusion After several iterations of exploring and conditioning on the data, you have built a useful algorithm for predicting the survival of each passenger aboard the RMS Titanic. The technique applied in this project is a manual implementation of a simple machine learning model, the *decision tree*. A decision tree splits a set of data into smaller and smaller groups (called *nodes*), by one feature at a time. Each time a subset of the data is split, our predictions become more accurate if each of the resulting subgroups is more homogeneous (contains similar labels) than before.
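The node-splitting idea can be made concrete with a small sketch: a candidate split is scored by how much it reduces the impurity (here, Gini impurity) of the resulting subgroups. The arrays below are toy data for illustration, not the actual Titanic set:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a 0/1 label array: 0 for a pure node, 0.5 at 50/50."""
    if len(labels) == 0:
        return 0.0
    p = np.mean(labels)
    return 2 * p * (1 - p)

def split_quality(labels, mask):
    """Impurity drop achieved by splitting `labels` with a boolean `mask`."""
    n = len(labels)
    left, right = labels[mask], labels[~mask]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
    return gini(labels) - weighted

# Toy data: survival outcomes and a 'female' indicator
survived = np.array([1, 1, 1, 0, 0, 0, 0, 1])
is_female = np.array([True, True, True, False, False, False, False, True])
# This split separates the labels perfectly, so the impurity drop is maximal
quality = split_quality(survived, is_female)
```

A decision-tree learner greedily picks, at each node, the feature and threshold with the largest such impurity drop.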
The advantage of having a computer do things for us is that it will be more exhaustive and more precise than our manual exploration above. [This link](http://www.r2d3.us/visual-intro-to-machine-learning-part-1/) provides another introduction to machine learning using a decision tree. A decision tree is just one of many models that come from *supervised learning*. In supervised learning, we attempt to use features of the data to predict or model things with objective outcome labels. That is to say, each of our data points has a known outcome value, such as a categorical, discrete label like `'Survived'`, or a numerical, continuous value like predicting the price of a house. ### Question 5 *Think of a real-world scenario where supervised learning could be applied. What would be the outcome variable that you are trying to predict? Name two features about the data used in this scenario that might be helpful for making the predictions.* **Answer**: I think supervised learning could be applied to support Human Resources by analysing all the employees in a company, using as data their job role, salary, age, sex, time in the current role, time in the company, employee satisfaction score, etc. The outcome would be whether an employee will leave or stay in the company, so the HR manager can use this algorithm to check whether good employees are "almost leaving" the company and offer them promotions so they stay in their jobs longer. **Sample**: The employee John Doe is a key contributor to the company, but he has been with the company for more than 6 years and his salary is below the average salary for his role. He is flagged as **LEAVING THE COMPANY** by our algorithm. Knowing this, the HR manager can check with John Doe's manager and take actions to change the **LEAVING THE COMPANY** status, such as a salary increase or a promotion.
> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.
##### Copyright 2019 The TensorFlow Authors. ``` #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ``` # Text classification with TensorFlow Lite Model Maker <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/lite/tutorials/model_maker_text_classification"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_text_classification.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> The TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications. 
This notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used text classification model to classify movie reviews on a mobile device. The text classification model classifies text into predefined categories. The inputs should be preprocessed text and the outputs are the probabilities of the categories. The dataset used in this tutorial consists of positive and negative movie reviews. ## Prerequisites ### Install the required packages To run this example, install the required packages, including the Model Maker package from the [GitHub repo](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). **If you run this notebook on Colab, you may see an error message about `tensorflowjs` and `tensorflow-hub` version incompatibility. It is safe to ignore this error as we do not use `tensorflowjs` in this workflow.** ``` !pip install -q tflite-model-maker ``` Import the required packages. ``` import numpy as np import os from tflite_model_maker import configs from tflite_model_maker import ExportFormat from tflite_model_maker import model_spec from tflite_model_maker import text_classifier from tflite_model_maker import TextClassifierDataLoader import tensorflow as tf assert tf.__version__.startswith('2') tf.get_logger().setLevel('ERROR') ``` ### Download the sample training data. In this tutorial, we will use the [SST-2](https://nlp.stanford.edu/sentiment/index.html) (Stanford Sentiment Treebank) which is one of the tasks in the [GLUE](https://gluebenchmark.com/) benchmark. It contains 67,349 movie reviews for training and 872 movie reviews for testing. The dataset has two classes: positive and negative movie reviews. ``` data_dir = tf.keras.utils.get_file( fname='SST-2.zip', origin='https://dl.fbaipublicfiles.com/glue/data/SST-2.zip', extract=True) data_dir = os.path.join(os.path.dirname(data_dir), 'SST-2') ``` The SST-2 dataset is stored in TSV format.
The only difference between TSV and CSV is that TSV uses a tab `\t` character as its delimiter instead of a comma `,` in the CSV format. Here are the first 5 lines of the training dataset. label=0 means negative, label=1 means positive.

| sentence | label |
|----------|-------|
| hide new secretions from the parental units | 0 |
| contains no wit , only labored gags | 0 |
| that loves its characters and communicates something rather beautiful about human nature | 1 |
| remains utterly satisfied to remain the same throughout | 0 |
| on the worst revenge-of-the-nerds clichés the filmmakers could dredge up | 0 |

Next, we will load the dataset into a Pandas dataframe and change the current label names (`0` and `1`) to more human-readable ones (`negative` and `positive`) and use them for model training. ``` import pandas as pd def replace_label(original_file, new_file): # Load the original file to pandas. We need to specify the separator as # '\t' as the training data is stored in TSV format df = pd.read_csv(original_file, sep='\t') # Define how we want to change the label name label_map = {0: 'negative', 1: 'positive'} # Execute the label change df.replace({'label': label_map}, inplace=True) # Write the updated dataset to a new file df.to_csv(new_file) # Replace the label name for both the training and test dataset. Then write the # updated CSV dataset to the current folder. replace_label(os.path.join(data_dir, 'train.tsv'), 'train.csv') replace_label(os.path.join(data_dir, 'dev.tsv'), 'dev.csv') ``` ## Quickstart There are five steps to train a text classification model: **Step 1. Choose a text classification model architecture.** Here we use the average word embedding model architecture, which will produce a small and fast model with decent accuracy.
``` spec = model_spec.get('average_word_vec') ``` Model Maker also supports other model architectures such as [BERT](https://arxiv.org/abs/1810.04805). If you are interested in learning about other architectures, see the [Choose a model architecture for Text Classifier](#scrollTo=kJ_B8fMDOhMR) section below. **Step 2. Load the training and test data, then preprocess them according to a specific `model_spec`.** Model Maker can take input data in the CSV format. We will load the training and test dataset with the human-readable label names that were created earlier. Each model architecture requires input data to be processed in a particular way. `TextClassifierDataLoader` reads the requirement from `model_spec` and automatically executes the necessary preprocessing. ``` train_data = TextClassifierDataLoader.from_csv( filename='train.csv', text_column='sentence', label_column='label', model_spec=spec, is_training=True) test_data = TextClassifierDataLoader.from_csv( filename='dev.csv', text_column='sentence', label_column='label', model_spec=spec, is_training=False) ``` **Step 3. Train the TensorFlow model with the training data.** The average word embedding model uses `batch_size = 32` by default. Therefore you will see that it takes 2104 steps to go through the 67,349 sentences in the training dataset. We will train the model for 10 epochs, which means going through the training dataset 10 times. ``` model = text_classifier.create(train_data, model_spec=spec, epochs=10) ``` **Step 4. Evaluate the model with the test data.** After training the text classification model using the sentences in the training dataset, we will use the remaining 872 sentences in the test dataset to evaluate how the model performs against new data it has never seen before. As the default batch size is 32, it will take 28 steps to go through the 872 sentences in the test dataset. ``` loss, acc = model.evaluate(test_data) ``` **Step 5.
Export as a TensorFlow Lite model.** Let's export the text classification model that we have trained in the TensorFlow Lite format. We will specify the folder to export the model to. You may see a warning that the `vocab.txt` file does not exist in the metadata, but it can be safely ignored. ``` model.export(export_dir='average_word_vec') ``` You can download the TensorFlow Lite model file using the left sidebar of Colab. Go into the `average_word_vec` folder as we specified in `export_dir` parameter above, right-click on the `model.tflite` file and choose `Download` to download it to your local computer. This model can be integrated into an Android or an iOS app using the [NLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/nl_classifier) of the [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview). See the [TFLite Text Classification sample app](https://github.com/tensorflow/examples/blob/master/lite/examples/text_classification/android/lib_task_api/src/main/java/org/tensorflow/lite/examples/textclassification/client/TextClassificationClient.java#L54) for more details on how the model is used in a working app. *Note 1: Android Studio Model Binding does not support text classification yet so please use the TensorFlow Lite Task Library.* *Note 2: There is a `model.json` file in the same folder with the TFLite model. It contains the JSON representation of the [metadata](https://www.tensorflow.org/lite/convert/metadata) bundled inside the TensorFlow Lite model. Model metadata helps the TFLite Task Library know what the model does and how to pre-process/post-process data for the model.
You don't need to download the `model.json` file as it is only for informational purposes and its content is already inside the TFLite file.* *Note 3: If you train a text classification model using MobileBERT or BERT-Base architecture, you will need to use [BertNLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_nl_classifier) instead to integrate the trained model into a mobile app.* The following sections walk through the example step by step to show more details. ## Choose a model architecture for Text Classifier Each `model_spec` object represents a specific model for the text classifier. TensorFlow Lite Model Maker currently supports [MobileBERT](https://arxiv.org/pdf/2004.02984.pdf), averaging word embeddings and [BERT-Base](https://arxiv.org/pdf/1810.04805.pdf) models.

| Supported Model | Name of model_spec | Model Description | Model size |
|-----------------|--------------------|-------------------|------------|
| Averaging Word Embedding | 'average_word_vec' | Averaging text word embeddings with RELU activation. | <1MB |
| MobileBERT | 'mobilebert_classifier' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device applications. | 25MB w/ quantization <br/> 100MB w/o quantization |
| BERT-Base | 'bert_classifier' | Standard BERT model that is widely used in NLP tasks. | 300MB |

In the quick start, we have used the average word embedding model. Let's switch to [MobileBERT](https://arxiv.org/pdf/2004.02984.pdf) to train a model with higher accuracy. ``` mb_spec = model_spec.get('mobilebert_classifier') ``` ## Load training data You can upload your own dataset to work through this tutorial. Upload your dataset by using the left sidebar in Colab.
<img src="https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_text_classification.png" alt="Upload File" width="800" hspace="100"> If you prefer not to upload your dataset to the cloud, you can also locally run the library by following the [guide](https://github.com/tensorflow/examples/tree/master/tensorflow_examples/lite/model_maker). To keep it simple, we will reuse the SST-2 dataset downloaded earlier. Let's use the `TextClassifierDataLoader.from_csv` method to load the data. Please note that as we have changed the model architecture, we will need to reload the training and test dataset to apply the new preprocessing logic. ``` train_data = TextClassifierDataLoader.from_csv( filename='train.csv', text_column='sentence', label_column='label', model_spec=mb_spec, is_training=True) test_data = TextClassifierDataLoader.from_csv( filename='dev.csv', text_column='sentence', label_column='label', model_spec=mb_spec, is_training=False) ``` The Model Maker library also supports the `from_folder()` method to load data. It assumes that the text data of the same class are in the same subdirectory and that the subfolder name is the class name. Each text file contains one movie review sample. The `class_labels` parameter is used to specify which subfolders to load. ## Train a TensorFlow Model Train a text classification model using the training data. *Note: As MobileBERT is a complex model, each training epoch takes about 10 minutes on a Colab GPU. Please make sure that you are using a GPU runtime.* ``` model = text_classifier.create(train_data, model_spec=mb_spec, epochs=3) ``` Examine the detailed model structure. ``` model.summary() ``` ## Evaluate the model Evaluate the model that we have just trained using the test data and measure the loss and accuracy value. ``` loss, acc = model.evaluate(test_data) ``` ## Quantize the model In many on-device ML applications, the model size is an important factor.
Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster. Model Maker automatically applies the recommended quantization scheme for each model architecture, but you can customize the quantization config as below. ``` config = configs.QuantizationConfig.create_dynamic_range_quantization(optimizations=[tf.lite.Optimize.OPTIMIZE_FOR_LATENCY]) config.experimental_new_quantizer = True ``` ## Export as a TensorFlow Lite model Convert the trained model to TensorFlow Lite model format with [metadata](https://www.tensorflow.org/lite/convert/metadata) so that you can later use it in an on-device ML application. The label file and the vocab file are embedded in metadata. The default TFLite filename is `model.tflite`. ``` model.export(export_dir='mobilebert/', quantization_config=config) ``` The TensorFlow Lite model file can be integrated in a mobile app using the [BertNLClassifier API](https://www.tensorflow.org/lite/inference_with_metadata/task_library/bert_nl_classifier) in [TensorFlow Lite Task Library](https://www.tensorflow.org/lite/inference_with_metadata/task_library/overview). Please note that this is **different** from the `NLClassifier` API used to integrate the text classification model trained with the average word vector model architecture. The export formats can be one or a list of the following: * `ExportFormat.TFLITE` * `ExportFormat.LABEL` * `ExportFormat.VOCAB` * `ExportFormat.SAVED_MODEL` By default, it exports only the TensorFlow Lite model file containing the model metadata. You can also choose to export other files related to the model for better examination. For instance, exporting only the label file and vocab file as follows: ``` model.export(export_dir='mobilebert/', export_format=[ExportFormat.LABEL, ExportFormat.VOCAB]) ``` You can evaluate the TFLite model with the `evaluate_tflite` method to measure its accuracy.
Converting the trained TensorFlow model to the TFLite format and applying quantization can affect its accuracy, so it is recommended to evaluate the TFLite model's accuracy before deployment.

```
accuracy = model.evaluate_tflite('mobilebert/model.tflite', test_data)
print('TFLite model accuracy: ', accuracy)
```

## Advanced Usage

The `create` function is the driver function that the Model Maker library uses to create models. The `model_spec` parameter defines the model specification. The `AverageWordVecModelSpec` and `BertClassifierModelSpec` classes are currently supported. The `create` function consists of the following steps:

1.  Creates the model for the text classifier according to `model_spec`.
2.  Trains the classifier model. The default number of epochs and the default batch size are set by the `default_training_epochs` and `default_batch_size` variables in the `model_spec` object.

This section covers advanced usage topics like adjusting the model and the training hyperparameters.

### Customize the MobileBERT model hyperparameters

The model parameters you can adjust are:

*   `seq_len`: Length of the sequence to feed into the model.
*   `initializer_range`: The standard deviation of the `truncated_normal_initializer` for initializing all weight matrices.
*   `trainable`: Boolean that specifies whether the pre-trained layer is trainable.

The training pipeline parameters you can adjust are:

*   `model_dir`: The location of the model checkpoint files. If not set, a temporary directory will be used.
*   `dropout_rate`: The dropout rate.
*   `learning_rate`: The initial learning rate for the Adam optimizer.
*   `tpu`: TPU address to connect to.

For instance, you can set `seq_len=256` (the default is 128). This allows the model to classify longer text.
```
new_model_spec = model_spec.get('mobilebert_classifier')
new_model_spec.seq_len = 256
```

### Customize the average word embedding model hyperparameters

You can adjust the model infrastructure, such as the `wordvec_dim` and `seq_len` variables in the `AverageWordVecModelSpec` class. For example, you can train the model with a larger value of `wordvec_dim`. Note that you must construct a new `model_spec` if you modify the model.

```
new_model_spec = model_spec.AverageWordVecModelSpec(wordvec_dim=32)
```

Get the preprocessed data.

```
new_train_data = TextClassifierDataLoader.from_csv(
      filename='train.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=True)
```

Train the new model.

```
model = text_classifier.create(new_train_data, model_spec=new_model_spec)
```

### Tune the training hyperparameters

You can also tune training hyperparameters like `epochs` and `batch_size`, which affect the model accuracy. For instance,

*   `epochs`: more epochs could achieve better accuracy, but may lead to overfitting.
*   `batch_size`: the number of samples to use in one training step.

For example, you can train with more epochs.

```
model = text_classifier.create(new_train_data, model_spec=new_model_spec, epochs=20)
```

Evaluate the newly retrained model with 20 training epochs.

```
new_test_data = TextClassifierDataLoader.from_csv(
      filename='dev.csv',
      text_column='sentence',
      label_column='label',
      model_spec=new_model_spec,
      is_training=False)

loss, accuracy = model.evaluate(new_test_data)
```

### Change the Model Architecture

You can change the model by changing the `model_spec`. The following shows how to change to the BERT-Base model.

Change the `model_spec` to the BERT-Base model for the text classifier.

```
spec = model_spec.get('bert_classifier')
```

The remaining steps are the same.
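As a library-free aside on the quantization step used when exporting the model: dynamic range quantization stores weights as int8 values plus a floating-point scale, which is roughly why the exported model is about 4x smaller than its float32 counterpart. Below is a minimal numpy sketch of that idea (an illustration only, not Model Maker or TFLite internals).

```python
import numpy as np

# Illustration only: per-tensor symmetric int8 quantization of a
# float32 weight matrix, as used conceptually by dynamic range
# quantization. Shapes and values here are arbitrary.
rng = np.random.default_rng(0)
weights = rng.normal(size=(128, 64)).astype(np.float32)

# Map the largest absolute weight to 127 with a single scale factor.
scale = float(np.abs(weights).max()) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# At inference time the weights are dequantized: w ~ q * scale.
deq = q.astype(np.float32) * scale

print('float32 bytes:', weights.nbytes)  # 128*64*4 = 32768
print('int8 bytes   :', q.nbytes)        # 128*64*1 = 8192
print('max abs error:', float(np.abs(weights - deq).max()))
```

The 4x payload reduction comes purely from the dtype change; the small reconstruction error (bounded by half the scale) is the accuracy cost that the `evaluate_tflite` step above is meant to measure.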
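As an aside on the `from_folder()` convention described earlier (one subdirectory per class, with the subfolder name as the class label and one sample per text file), here is a library-free sketch of that on-disk layout. The `pos`/`neg` folder names and review strings are hypothetical, and the label inference shown is only an illustration of the convention, not the Model Maker implementation.

```python
import pathlib
import tempfile

# Build the expected layout in a temporary directory:
# <root>/<class_name>/<sample>.txt
root = pathlib.Path(tempfile.mkdtemp())
samples = {
    'pos': ['great movie', 'loved it'],  # hypothetical review texts
    'neg': ['terrible plot'],
}
for label, texts in samples.items():
    class_dir = root / label
    class_dir.mkdir()
    for i, text in enumerate(texts):
        (class_dir / f'{i}.txt').write_text(text)

# A loader following this convention infers the class labels from
# the subfolder names alone:
class_labels = sorted(d.name for d in root.iterdir() if d.is_dir())
print(class_labels)  # ['neg', 'pos']
```

This is the layout that the `class_labels` parameter of `from_folder()` lets you restrict: passing a subset of these folder names loads only those classes.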